Description
Blog post with insights and URLs: https://everyonecancontribute.com/post/2020-11-04-cafe-7-docker-hub-rate-limit-monitoring/
- https://about.gitlab.com/blog/2020/10/30/mitigating-the-impact-of-docker-hub-pull-requests-limits/
- https://about.gitlab.com/blog/2020/10/30/minor-breaking-change-dependency-proxy/
A
Hopefully, okay, yeah: before we start today, and before I tell you that I've kind of hijacked the topic again today, I would like to extend a welcome.
A
We have a new face, a new person, in our lovely Everyone Can Contribute coffee chat round. Therefore, I want to start with a short introduction round, and I'm asking again: how is the weather in Lower...?
C
It's a little bit rainy outside, but it's fine. My name is Mike Lagner. I'm a product manager and software engineer at a company called ZKW in Austria. We make headlamps for the automotive industry and are part of the LG corporation. We build high-performance computing stuff in C, mainly in C++: 3D engines, ray-tracing engines, and so on.
B
Okay, yeah, so my name is Nikka Smith. I'm currently working as a senior DevOps engineer at Verity. What we are currently doing is providing digital identity cloud solutions for enterprises. I mostly do the platform stuff there: helping our internal teams to deploy the application, and also developing the platform for the next generation. So that would be my part, mostly.
A
Okay, I'm the crazy one who had the idea of doing a technical coffee chat. I'm working as a Developer Evangelist at GitLab. I'm just trying to discuss the hot topics, find ideas, try things out and everything around it, and I'm handing over to Christoph right now.
D
Yeah, hi, I'm Christoph. I'm a senior consultant, mostly a website guy, but I'm switching over to Kubernetes at the moment. I'm in Hessen near Frankfurt, but born in Bavaria, so I still have a strong accent; I think in English everybody will understand, though. I'm highly interested in GitLab and Kubernetes, but at work I'm mostly doing other stuff around deployments and Ansible and that kind of thing.
C
Yes, I'm myself working for Comedia in Hamburg, mainly working with Python, with a focus on automation and monitoring.
A
Great, thanks again. The other thing I was thinking about: last week we learned, although we knew it already, that Docker Hub would be adding rate limiting to the pulls from Docker Hub: when you execute a CI/CD job, when you deploy something into Kubernetes, when you do something else. And that's totally fine, and we probably should all be paying them money for their service.
A
But
the
thing
was,
we
were
not
sure
how
to
mitigate
that
problem
and
what
what
possible
ways
could
be
done
and,
as
things
go
along,
we
have
been
discussing
this
in
this
in.
I
think
it's
september,
in
our
german
coffee
chat
and
talking
about
like
proxiing
stuff,
caching
proxies
other
things
and
one
of
the
ideas
was
to
run
your
own
docker
registry.
Proxy
cache,
maybe
use
a
different
vendor
and
other
things,
and
we
at
gitlab
in
our
slack
channel.
A
there was kind of a discussion going on, and I think it was Thursday afternoon my time when somebody said: okay, we should do something about it, because we might be affected on our GitLab.com shared runner fleet. We might not be affected, because we're running in Google Cloud, in GCP, and could use that registry, but mainly our self-hosted users will be affected. And one of the things which resulted out of that discussion:
A
This is linked in the agenda as well: we found a blog post, and we also found documentation on how you can actually determine whether you're affected by the current rate limit. We learned that there are new response headers, RateLimit-Limit and RateLimit-Remaining, and with some curl magic
A
you could obtain an access token and later on use that for doing a real pull request, which decreases your limit. Which is bad, but this is the only way to find out whether you have reached the limit, or will be reaching it. And one of the things was: okay, now that we know how to measure that, we should probably also write down specific things which can be done, and this was kind of
A
the second approach we took: to write a blog post, which went live last Friday, to say: hey, what's possible, what can you do about it. Especially because at the time we discussed it, it was not clear that Docker would be soft-enforcing the limits, which is right now, I think, 5,000 pulls, and later on it will be just 100. And so we said:
A
Well, maybe you want to use a registry mirror, which is available; what situation is going on, what can you do, and some usable examples. I figured that it's a long blog post; my team members have invested a lot. Basically it's a huge brain dump, and we are looking into ways of updating it and sharing even more knowledge. Another point we discussed was:
A
We have something already which is called the dependency proxy. And then I was like: yeah, but it's in the Premium tier right now. So if you're using GitLab self-hosted in the Core or Community Edition, or even in the Starter version, you couldn't have access to that. And so the quick decision was made to say: okay, we're moving this to open source.
A
The thing was, when everything happened on Monday, and I think it was 9 a.m. PT, which is 6 p.m. European time, or thereabouts, I was like: yeah, but this whole curl command thing.
A
You don't want to use that in your monitoring or somewhere else. Maybe I can write something else. Okay, maybe write it in Rust; yeah, but I don't know Rust that well.
A
We were just learning Rust and Golang. I kind of had a brain block, and it was like: oh, I know Python, and I can write Python in my sleep. So it was kind of: okay, I'm writing a Python script now, and then: okay, maybe let's just make it a monitoring plugin. It was fun to code, actually, and you can add it into a monitoring system. The only problem is that it decreases the remaining count by one each time.
A
So it's still not perfect yet, and I think it's worth a discussion: what else can we do? There was a challenge discussed on Twitter yesterday, I think, where I said: I'm not rewriting that in Rust, and Nicolas said: yes, challenge accepted. So I'm not sure if he will be writing Rust today, or someone else, or maybe we don't even write Rust today, I don't know. The thing is, I want to hear from you.
A
Question: are you affected by the Docker Hub rate limit, and what are your thoughts on solving the problem?
D
I'm not directly affected, because we mostly have our local registries running here for Kubernetes in the enterprise networks. I'm doing mostly stuff for financial institutes, so we host mainly everything on premise. But with my private stuff I'm a little bit affected: I do some Docker stuff on my local machines, on my private ones, and so on.
D
But I haven't noticed any errors at the moment. I think in the middle term Docker will just disappear, because I think it started when they sold their enterprise business and lost the battle with Kubernetes. And that's the next step; it's hard.
C
In your enterprise sector, do you mirror the base images?
C
About the rate limiting: privately I'm definitely affected, because no, I don't host my own registry at home. But there is GitLab.com, or GitHub's registry, which may also work. And in the enterprise we have used Artifactory for some years now to import our images, and what we've just done is use the virtual repository mechanism of Artifactory to mirror the Docker Hub images in and cache them.
C
So it's a proxy system, and Artifactory has a nice feature: you can go into the GUI and turn on the offline switch. So if anything goes wrong, or not in the way we like, we have some kind of emergency button, and we can guarantee that our system keeps running. But yeah, we also now use the mirror-everything-in approach and host it ourselves, and I think it's generally the best idea to host your important stuff on your own infrastructure and not rely on another infrastructure part.
B
Right now I didn't have to solve the problem, because we aren't currently affected: we don't have so many CI builds currently that use external images. I see it a little bit differently, from different perspectives. Typically, as a normal user, I won't be affected at all, because there are two phases of limiting. It means at first, if I'm doing stuff from my local machine and I'm authenticated, I have at least 200 pulls in six hours; mostly that's so much that I can't use it up manually on my own.
B
So mostly these changes will affect all the CI systems. If you are running CI systems and you're using external images, there could be some implications. For that, luckily, in the company we are also using only our own images, mostly, for all the stuff. We have a simple base image repository where we pull in the images from Docker Hub that we need, to get everything into our own registry.
B
But
the
hard
part
about
all
this
stuff
is
mostly
because
we're
doing
a
way
of
syncing
the
stuff
manually
is
to
updating
all
the
child
images
of
the
base
images.
So
we
have
appearance
that
has
updated
and
when
do
you
notify
them
to
update
them
all
at
once?
So
probably
you
don't
want
to
do
that
when
you
need
to
handle
at
least
50
or
100
repositories
at
one
at
once,
because
currently
everyone
doing
a
really
distributed
way
in
our
company.
So
that
means
we
have
a
lot
of
repositories
that
are
independent
from
on.
B
But
when
we
are
saying
we
are
updating
our
base
images
from
the
operations
side,
because
there
come
some
security
issue,
it
was
in
the
past.
It
was
really
hard
to
sending
a
pull
request
to
the
developers
and
open
it
to
repositories
and
notice
them
hey.
Please
update
your
base
image
because
a
new
version
comes
out,
so
this
is
another
problem.
When
you're
doing
the
stuff,
you
have
your
own
base
images
and
you
need
to
update
them
for
all
your
repositories
or
all
your
customers.
B
So
your
internal
customers,
the
different
approach
that
can
be
used
for
that
is
mostly
that
a
lot
of
people
now
recommend
to
use
is
to
using
a
proxy
so
like
you
can
use
the
dependency
proxy
in
gitlab.
You
can
use
the
if
you're,
using
only
docker
hub,
you
can
using
directly
with
the
distribution
as
pull
through
cache.
But
if
you
have
a
problem,
you
want
to
use
multiple
repositories
like
docker
hub,
google
container
registry,
and
how
does
the
code
also
try?
B
But
I
think
this
isn't
the
end
mostly
because
then
you
save
your
base
images
mostly,
but
what
you
also
need
is
to
cache
the
dependencies
in
video
privilege,
mostly
so,
for
example,
if
you're
using
a
debian
based
image,
you
should
also
test
your
pages
that
you're
currently
using
because
sometimes
it
could
be
also
happen.
That
changes
are
happen
on
the
debian
base,
if
you're
using
a
debian
based
image.
B
So
this
is
also
so
you
are
shifting
a
little
bit
of
a
problem
mostly,
and
for
that
case
you
need
also
be
ensure
that,
if
you're
using
airplane,
it's
quite
straight
away,
because
alpine
has
two
options
in
his
operating
system
by
itself,
because
you
can
package
all
your
artifacts
into
one
tar
and
can
using,
for
example,
a
simple
stretch,
image
and
then
cropping
only
the
dependencies
into
your
next
image
that
you
want
to
use
and
can
be
ensure
that
you're,
using
only
the
correct
package
versions
on
that.
Yes,
this
is
a
short
outlook.
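The pack-the-artifacts-and-copy-them-over idea is essentially what a multi-stage Dockerfile does; a generic sketch with illustrative image names and build steps:

```dockerfile
# Build stage: compile in a full Debian image
FROM debian:stretch AS build
RUN apt-get update && apt-get install -y --no-install-recommends gcc make
COPY . /src
RUN make -C /src && make -C /src install DESTDIR=/out

# Final stage: only the built artifacts land in the small Alpine image
FROM alpine:3.12
COPY --from=build /out /usr/local
```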
B
What
we
are
currently
doing
currently,
because
we
haven't
got
so
many
images
we
sticking
to
the
base
image
approach,
because
we
have
around
50
between
100
base
images,
so
this
is
quite
comfortable
to
maintain
it
in
a
senior
repository
having
a
little
gitlab
pipeline
to
update
them,
putting
them
from
locker
up
and
shifting
it
into
our
own
registry.
B
So
for
that,
it's
quite
straightforward
mostly
so
this
would
be
my
it's
a
simple
approach,
but
it
says
also:
some
traffic
can
have
some
caveats.
Yeah,
probably
we'll
see
in
future
that
a
lot
of
more
companies
will
have
offer
us
a
public
registry.
So,
for
example,
ecr
will
probably
have
an
option
to
mirror
all
the
stuff.
Google
container
registry
has
also
the
option
to
mirror
all
the
images
from
all
the
sources,
but
we
will
see
probably
that
more
options
come
to
this.
B
A
different
problem
is
mostly,
then
that
we
have
a
lot
of
distributed
ways
where
we
can
get
the
images
from,
and
I'm
really
like
the
idea
that
some
people
also
propose
that
we
have
a
single
hub
where
we
can
get
all
the
images
from
this.
The
main
idea
between
weathercraft,
mostly
because
this
was
sent
for
a
place
where
you
know.
Okay,
I
don't
have
the
image
on
my
local
machine.
I
go
to
drop
looking
for
it
and
see:
okay,
here's
the
image
I
can
use
it
mostly.
B
Hopefully
we
have
some
supporters
for
that,
so
that
the
cncf
responds
like
that
or
someone
else
who
has
a
public
registry
that
can
help
us
to
reduce
the
mitigation
of
the
high
distribute
of
high
distribution
that
you
need
to
know
at
least
okay
for
this
vendor.
I
need
to
go
to
this
docker,
this
repository
url,
for
this
one
or
to
a
different
wire,
or
you
need
to
set
up
a
proxy
to
cover
the
problem
yeah,
so
that
was
a
short
artwork.
B
Hopefully
I
didn't
talk
too
much
yeah,
so
I
wouldn't
hand
over
to
muscle
or
see
how
they
are
currently
handling
it.
A
Just to catch up on the public registry: I've shared some URLs with you, and I will link them later on in the blog post. AWS announced something about a public registry, and my colleagues also shared that DigitalOcean has a container registry.
A
So
I
think
everyone
is
like
trading
their
or
publishing
their
enterprise
offering
or
their
offerings
now
given.
Given
they
announced
the
card
rate
limits.
A
And
internally
we
also
use
harbor
for
the
images
we
use
for
our
ci
and
yeah.
We
have
different
teams
working
with
containers,
so
they
or
the
team
for
the
internal
ci
they
use,
as
other
just
said,
and
we
are
managing
our
cloud
offerings
and
we
actually
use
of
rendered
images
we
download
from
docker
hub-
and
we
use
the
ecr
at
amazon
for
these
images.
A
So
we
yeah,
we
don't
don't
know
the
images
from
docker
hub
directly
that
often
because
for
us
that,
as
part
of
our
release
process
like
we
released
our
infrastructure
code
independently
from
the
product
code,
and
yet
there
are
several
release
processes
internally
and
so
for
now
we
are
not
really
affected
because
we
do
have
the
base
images
already.
We
update
them
yeah,
not
that
often
that
200
downloads
in
six
hours,
I
think,
is
the
limit
that
this
would
affect
us
and
yep.
For
now,
we
surveyed
with
the
vmware
harbor
and
amazon.
B
I think it won't disappear, mostly, because currently the main function of Docker Hub is that you can easily fetch images from it. And the other point, or not problem, is: right now the Docker daemon is configured to pull images from Docker Hub automatically. You can't change that default, at least not without recompiling your Docker daemon.
B
For
that
part,
I
think
it
will
still
exist
fall
on
our
face,
and
mostly
a
lot
of
people
should
also
pay
for
the
stuff,
because
we
use
it
in
a
lot
of
times,
mostly
in
the
in
the
past,
for
all
getting
all
the
images.
B
I
think
no
one
estimated
that
it
was
so
frequently
used
so
in
terms
of
because
we
have
no
container
orchestrators.
Every
container
orchestrator
now
tries
to
fetch
all
the
images,
and
this
will
have
a
huge
impact
of
in
terms
of
traffic
and
also
probably
storing
images.
So,
as
company
view,
I
wouldn't
say
upload
your
images
to
the
doctoral
public.
B
To
probably
you
just
use
your
own
private
registry
that
you
change
over
access,
that
you
can
audit
it
who
use
it
or
who
uses
it
not,
but
the
dot
crop
is
a
great
place
easily
to
share
images
with
different
users,
because
it's
directly
integrated
in
the
dry
daemon.
So
there's
a
different
question
to
that.
B
If
you
start,
if
you're
using
docker
in
the
next
feature
for
as
a
local
developer
and
if
we
are
coming
back
from
the
idea
when
doctor
was
born,
doctor
was
mostly
proposed
for
developers,
because
it's
really
quite
easy
to
use.
You
can
easily
spin
up
containers,
so
containers
exist
before
docker.
Also,
so
google
used
antenna
since
2008
and
docker
was
400
hours
goes
into
ga
mode.
I
think
in
2015
when
when
the
1.0
reached,
so
I
think
it
won't
be
going
away
directly.
B
Probably,
but
it's
quite
more,
it's
an
interesting
point
of
view
from
the
vendor
side.
So
if
you're
selling
your
software,
how
can
I
let
my
customers
quite
easily
benefited
to
use
my
images?
So
currently
we
saw
in
the
past
already
that
because,
for
example,
elasticsearch
have
their
own
repositories,
they
don't
use
docker
hub
at
all.
B
Mostly
if
you
want
to
get
the
latest
elastic
search
image,
you
need
to
go
to
their
own
registry
to
download
it
currently
that
it's
not
so
com,
not
so
comfortable
right
now
for
the
developer,
because
you
need
to
use
a
different
url,
it's
not
so
hard
to
use
that.
But
yeah
you
lose
a
little
bit
of
confidence
and
not
not
confidence
a
little
bit
of
convincing
convenience,
mostly
so
that
it's
quite
easy
to
use
and
we
will
see
how
it
will
evolve
mostly
so.
C
The
interesting
part
is,
will
docker
ever
release
environment
variables,
something
like
that
that
you
can
wipe
out
the
the
docker
hub.
So
it
would
be
much
easier
to
integrate
that
but,
like
I
would
guess,
that's
a
thing
they
would
love
to
hear
from
the
enterprise
side.
B
You
know,
I
know
a
lot
of
people
that
are
using
container
d
directly
instead
of
the
docker
daemon
to
use
that,
and
also
we
have
a
different
optimized
operating
system.
So,
for
example,
like
flacca
or
container
denotes
a
translucent
stat
right
now,
yeah,
instead
right
now
by
drawers,
so
there's
a
fedora
container.
Linux
is
for
same
mostly
where
you're
running
only
the
parts
that
you're
really
needing
and
currently
they
don't
use
that
redeemer.
B
Probably you can have a lot more convenience if the developer doesn't need to care about which URL will be used. That means you probably need to have more rules in your Kubernetes cluster, or your container runtime. For example, in development we have something like linting options for what you can do and what you can't do; it would be the same here for containers, and there are already solutions for that.
B
For
example,
you
can
use
open
policy
agent
to
restrict
your
whole
cluster
to
say,
okay,
you
aren't
allowed
to
use
any
images
from
docker
hub
so
that
doesn't
need
to
be
technical
implementation.
So
that
means
it
doesn't
need
to
be
something
like
hey,
okay,
we're
switching
the
dns.
Are
we
shifting?
We
don't
allow
external
traffic
to
all
of
this
stuff.
We
can
use
this
in
a
more
higher
level.
So,
for
example,
we
could
use
oppa
for
that,
and
it
will
tell
us
it
would
tell
it's
for
user.
Okay,
you
shouldn't
use
this
image.
B
Please
use
the
correct
image
url
so
that
the
user
feedback
of
the
developer
experience
will
be
a
little
bit
better
and
you
have
it
in
a
really
conformant
way,
because
you
can
test
your
policies
and
also
writing
them
in
a
really
easy
declarative
way,
mostly
yeah.
C
So
what
are
the
options,
if
you
don't
have
a
kubernetes
cluster,
like
github
sierra
runners,
or
something
like
that,
if
they're
not
running
on
kubernetes.
B
Oppar
can
work
also
on
every
it
works
also
only
with
dr
mostly
move
to
radium,
so
you
can
enforce
the
same
positive
routes.
Can
you
paste
the
link.
B
Yeah
I
can
face
it.
I
need
to
look
I'll.
Probably
I
can
probably
the
korean
is,
but
it,
but
it
works
also
with
doctor,
because
I
checked
it
earlier,
so
the
car
so
this
for
the
authorization
part,
but
this
will
also
work
for
the
policy
part
yeah.
A
Do you think that Podman, now being endorsed by Red Hat and Fedora and OpenShift, do you see that on the rise for other vendors and other systems?
B
From
which
freezer
stakeholder
for
that,
so
typically,
I
would
say
as
a
developer,
so
for
I'm
in
the
role
as
a
developer,
I
want
to
develop
my
application
mostly
and
want
to
ship
it
into
production.
B
So
I
would
say
I
don't
care
at
all,
mostly,
so
I
don't
literally
decision
if
it's
docker
or
if
it's
spot
man,
I
want
to
have
a
simple
streamlined
way.
So
this
should
be
something
like
a
simple
tooling.
So
if
you
are
looking
into
bigger
enterprises,
so,
for
example,
like
we
discussed
in
the
in
the
trophy
chat
with
with
waypoint
and
so
on,
probably
there
are
some
platforms
that
support
this
and
you
don't
need
to
care
about
that
at
the
developer
at
all.
B
So
probably
you
want
to
do
only
a
git
commit
and
everything
happens
automatically.
So
this
was
where
open
shift
begins
like
in
this
way
and
also
right.
We
have
no
cloud
foundry.
What
is
a
really
bigger
platform
or
hiroku
who
doing
all
the
developer
experience
mostly.
This
comes
in
terms
of
what
is
the
feature
set
of
your
platform.
So,
as
the
operator
I
could
say,
okay,
I'm
could
be
impediment.
Okay,
should
I
stick
to
pop
man,
or
should
I
use
docker
or
should
I
use
contain
id?
B
This
comes
a
little
bit
about
stability,
and
how
is
my
experience
with
these
two
is
mostly,
but
when
you
are
looking
from
a
platform
perspective
like
like
operators,
so
currently
I
would
say:
okay,
my
platform
is
to
need
this.
I'm
currently
down
here
right
now.
What
is
the
real
container
runtime
in
terms
of
first
point
view,
but
you
need
to
care
about
that.
Probably
when
you
have
problems
in
production,
you
need
to
understand
how
potman
is
currently
working
and
how
is
how
is
docker
working
and
when
you're
going
down
the
rabbit
hole?
A
Theoretically it should work, but practically I think there are some commands in Podman which are not implemented right now. There is a feature request for the GitLab CI runners to support Podman as an executor, but I think something with pull, or something else, is missing. Still, as a developer, I don't care how the container is started and how it's executed.
A
The
thing
I
was
wondering
is
like:
if
I
want
to
create
my
own
docker
images,
do
you
think
it's
easy,
or
do
you
think
it's
hard
or
what
is
your
opinion
on
like
I
don't
want
to
use
docker
hub.
I
want
to
create
my
own
images.
How?
How
would
you
start
with
that.
B
Yeah, right now, because in my past job I had a lot of discussions with developers: for example, my persona would be a Java developer. As a Java developer, I want to develop my code, and probably I don't want to switch out of my ecosystem.
B
So,
for
example,
I
want
to
put
images
only
with
maven
or
with
how
do
they
call
gradle,
it's
mostly
the
other
famous
tool
that
you
use
as
built
system
and
probably
what
we
did
in
the
past.
So
we
shifted
all
the
docker
related
stuff
because
the
developer
don't
use
it
mostly
only
the
operators
were
using
it
and
we're
using,
for
example,
tooling,
that
integrates
directly
in
maven
and
brett,
and
for
that
something
like
a
jib
so
java
from
google
and
how
to
draw
it.
B
It
builds
container
images
without
darker,
so
mostly
like
every
two,
because
the
cameras
are
only
archives
to
archives.
So
I
think
this
will
be
more
of
a
race
in
terms
of
hey.
You
don't
need
to
use
docker
to
build
your
images,
mostly
because
it
has
also
some
security
changes,
because.
B
How
does
it
work?
It
works
like
because
every
image
layer
is
only
a
car
archive
mostly
and
every
image
there
has
no
own
hash
id
for
the
content,
validation.
So
every
time
you
make
a
change,
it
can
check.
Okay,
do
I
need
to
replace
the
star
archive
or
not?
So
this
is
the
idea
between
the
base
station.
B
It's
only
two
archives
and
valerian
have
been
happening
because
you
have
an
hierarchy
of
which
the
archive
needs
to
be
extracted
on
which
part,
and
for
that
this
is
the
reason
why
you
don't
need
to
try
to
build
this.
Probably
this
is
the
reason
why
people
shifting
from
dot
right
to
use
different
two
lane,
because
you
need
to
have
run
at
least
a
docker
daemon
and
the
dot
redeem
needs
to
have
root
privileges
on
your
system
to
build
a
docker
image.
B
Definitely,
but
for
that
you
can
use
altitudes
that
you,
because
we
have
now,
but
it's
important,
that
this
shift
can
only
happens
because
we
have
a
standard.
We
have
the
standard,
oci
open,
container
image,
and
this
allows
us
to
do
this
using
different
tools,
because
for
outcomes,
every
time
is
an
open
container
image.
How
does
it
draws?
I
seem
to
destroy
it
open
to
damage.
A
But
I
I
just
want
to
like
highlight
an
emotion
when
I'm
not
so
deep
into
all
the
things
and-
and
I
I
hit
the
rate
limit
now
like
coming
back
to
the
the
429
problem.
How
would
I
start
with
like
creating
my
own
docker
images?
A
Should
I
use
like
if,
if
I
use,
gitlab
or
or
github
or
whatever,
what's
what's
the
best
way
to
say
hey,
I
want
to
maintain
my
own
docker
images.
What
would
you
say
in
that
regard
without
going
into
like
left
tool
right
to?
I
think
the
tooling
shouldn't
really
matter.
C
I
think
any
registry
is
fine.
The
main
problem
you
always
get
is
the
the
url
part.
So
if
you
have
a
lot
of
repositories
with
existing
dock
up
short
links,
then
it's
really
painful.
C
So
that
was
nicholas
is
mentioning,
is
if
you
can
switch
it.
It
depends
on
what
on
your
environment,
if
you
can
do
it
in
your
kubernetes
cluster,
then
I
would
go
this
way,
but
and
then
restrict
it.
But
at
the
end
I
would
say
if
you
have
gitlab
and
you
have
access
to
the
package
registry,
make
a
ci
chop,
build
your
image
or
or
pull
it
down
and
push
it
to.
Your
registry
is
definitely
a
good
thing
and
find
a
good
way
to
to
update
your
url
for
existing
images.
A
So let me just share my screen. I haven't fully checked out what one of my team members, Greg, has built, but:
A
I
will
link
that
so
this
is
something,
and
I
was
kind
of
asking
in
the
direction,
because
I
know
that
we
discussed
it
also
like
how
to
update
all
the
ci
templates
and
one
of
the
ideas
was
that
we
have
kind
of
an
a
variable
which,
which
does
that
and
some
updating
script.
A
Let's
just
see
this,
this
was
a
proof
of
concept
to
see
whether
we
can
like
update
our
own
images,
which
we
needed
to
migrate.
A
It's
basically
pulling
everything
from
from
docker
hub
and
then
recreating
it
tagging
it.
This
is
the
important
part
and
and
also
then
pushing
it,
oh
and
the
local
delete,
because
otherwise
your
hard
disk
will
explode.
Let's
just
see
what
this
does.
Okay,
oh.
A
Just
just
cuz,
okay
yeah,
it
was
more
or
less
a
proof
of
concept.
I
I
wouldn't
recommend
trying
it
out
in
production
right
now,
but
I
know
you've
said
you
also
want
to
share
something.
B
Yeah,
okay,
I
want
to
show
you
hijacking
the
part
of
what
let
me
check,
which
is
the
correct
window,
because
I
don't
want
to
share
my
4k
screen
so
that
we
need
to
dig
into
everything.
But
I
think
this
is
the
correct
one.
Okay,
mostly,
you
should
see
something
like
renovate
under
the
chord
right
yeah,
so
you
should
see
on
your
browser
with
four
tabs:
don't
okay,
yeah
literally,
like
probably
the
first
step
that
you
need
to
do
to
use
your
own
base
image
is
that
you
have.
B
The
simplest
approach
is
that
you
have
a
simple
docker
file
it
containing
the
upstream
image
that
could
be,
for
example,
here
right
now
it
could
be
alpine,
and
what
you
need
to
do
then,
is
to
probably
bring
it
with
gitlab
ci
into
your
own
registry.
So
you
have
a
simple
pipeline
job
currently
bringing
into
this
right
now
in
and
when
we
are
checking
the
operations
part,
how
was
it
yeah?
Okay?
Luckily,
we
have
now
a
lot
of
tapes
in.
B
As Michael also said, it's quite hard to update them. Luckily we have here also the different base image versions, and we have, for example, an application that is currently using this image. So that means: how can I consume all my own base images? What you need to do is, whoops, for one part, this is a different image; I will tell you later why it's a bit different.
B
We
have
here
now
the
simple,
and
this
could
be
quite
tedious
to
update
this
every
time.
So
you
don't
want
to
do
this
on
your
own,
doing
it
going
out
to
your
repositories-
and
here
comes
the
cool
tool
in
that
I
found
out,
and
this
tool
called
is
called
renovate
and
renovate
can
do
a
true
job
for
you,
mostly
because
what
it
does
it
can
scan
multiple
repositories
and
checking,
for
example,
for
docker
files
and
looking
if
a
base
image
has
been
updated
or
not
and
will
create
automatically
a
pull
request.
B
So
you
can
have
also
more
configurations
in
terms
of
hey.
When
should
it
be
applied,
should
it
be
automation
to
be
rebased
when
a
new
comes
in
and
then
we
can
automatically
easily
when
we
have
a
full
automated
pipeline,
we
could
also
say,
for
example,
it
should
be
auto
merge.
So
when
our
checks
will
happen,
I
trip
merge
this
in
and
for
that
we
have
this
tool.
It
called
run
all
I
think,
and
literally
what's
really
cool
about
that.
Let
me
check.
B
It's
currently
it's
right
now,
it's
open
source,
so
you
can
use
it
on
your
own.
It
helps
you
with
automating
overdepends,
it's
not
only
for
doctor.
You
can
use
it
for
all
types
of
dependencies.
When
I
checked
into
this,
mostly,
I
found
out
that
you
can
use
a
lot
of
more
about
that.
So
in
terms
of
looking,
where
is
it
for
documentation?
B
It's
not
directly
on
the
side,
true,
I
hope
so
yeah.
So
currently
they
have
a
lot
of
not
linear
support,
so
you
can
have
different
students
which
platform
you
want
to
use
which
managers
and
managers
comes
now
in
so
which
tools
or
which
can
be
updated
automatically.
So
it
means
you
can
update
your
lhca
dependencies.
You
can
update
your
your
mpm
dependencies,
alter
also
docker
different
compose
files.
Docker
files
also
determines
what's
really
great.
B
So
then
you
would
also
take
over
problems
that,
michael
for
him
before
said,
you
can
also
use
it
to
update
your
home
requirements
and
what
you
need
to
do
to
update
this
is
really
just
straightforward,
so
I
have
a
small
image
that
it's
called
a
renovate.image.
So
this
is
currently
the
pipeline
that
I
am
currently
using
to
bit
the
image.
Okay,
I
don't
have
the
job
right
now
for
updating
the
image,
but
currently
render
weight
needs
to
be
installed,
as
is
an
mpm
package.
B
So
you
need
to
write
your
own
mpm
config.js,
and
for
that
you
have
at
least
you
can
independent
on
your
platform.
We
use
gitlab
for
that
can
saying:
okay,
which
user
should
accessing
the
api.
B
So for that we're using the Renovate bot. We can say which repositories should be watched, in our case the renovate-docker app, and then we can say (this is not quite out of the box right now, but you can): okay, I want to do all the updates automatically. And we're saying, for example, that only Dockerfiles should be checked. This can really take the pain out of updating all the images on your own, and the missing piece is only running this job.
B
So
you
have
a
simple
ci
job
and
running
is
via
a
scheduled
pipeline,
and
it
should
be,
for
example,
running
every
one
hour
to
check
if
dependencies
need
to
be
updated
or
not,
and
then
no
new
requests
come
into
your
arm
repositories
that
were
watched
and
then
you
can
updating
them
all
at
once,
mostly
without
any
heavy
usage.
B
How do you handle that currently? What I did is: by default all these managers are enabled, so, like you said, do I need to care about all the stuff that currently can be updated or not? Right now I wrote in my config JSON that it only needs to check the Dockerfiles that are currently in the project. You can also write your own regex expressions to build managers that are not currently built into Renovate, so you have a lot of options to tailor that.
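Such a custom regex manager might look like this; the file name and the version variable are invented for illustration, while fileMatch, matchStrings and the datasource/depName templates are the Renovate regex-manager fields:

```json
{
  "regexManagers": [
    {
      "fileMatch": ["^versions\\.env$"],
      "matchStrings": ["ALPINE_VERSION=(?<currentValue>.*)"],
      "depNameTemplate": "alpine",
      "datasourceTemplate": "docker"
    }
  ]
}
```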
B
If you are better than me — because I can't write TypeScript right now — then you can also write your own manager directly in the project itself.
B
Yeah, it's the config file for the Renovate bot. So you simply start Renovate with npm run renovate, because it's an npm package, and then it will check for the config.js, okay.
B
I don't want to have more than five pull requests open at a time from Renovate updating all my dependencies, so I overwrite the defaults right here, yeah. And this would be the way to go when you maintain all your base images on your own, because then, in my images or with my example app, I only need to update my…
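That open-PR limit is a single option in the Renovate configuration; a minimal sketch:

```json
{
  "prConcurrentLimit": 5
}
```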
B
Let me check something, okay. So, for example, I want to get the latest version because the server contains 3.1; I would say, okay, I need to update this, and then we simply run the pipeline and so on. It will be updated then, and this can help you with the maintenance, with all the toil of updating all these images in these different repositories. And this is one point of getting more into the part of…
B
Yeah, so that would be my part on maintaining all the base images, mostly, and then combine it with the setup Michael showed, putting all the images into one repository. And then you would probably have the Renovate bot in that same repository, running on a schedule or when a new push happens, and then sending out the PRs to all your repositories.
B
So when I started in the industry, I always had in mind: okay, I will automate everything and I won't need to do any work anymore. But this didn't happen, it doesn't happen right now, and I don't see it happening in the future, because we have ever more parts to maintain, so, probably…
A
But I think it's really great; I need to try that out. Especially because, if we find a way to propose an example which works for everyone, it should make it easier to say: hey, you follow these 10 steps and then you have your own container registry with updates, whatever is going on. Because I can imagine that this could be a real problem: first you're starting with your ubuntu:latest, and then you're deciding, okay…
A
I need to stay with, like, 20.04, and then someone else says, well, but I want to use Debian, and then someone else says, oh, we need to use CentOS, and so it's growing over time. You need to find a way to update them and also to find the references in your CI configuration, because some people might just write in latest, others might pin everything — and pinning everything then also needs security scans and security integration, like scanning the Docker images for vulnerabilities.
A
This is the second part of that additional maintenance: it's not done by Docker Hub for us, we need to take care of security ourselves. But yeah, I think everything which helps make this easier is more than welcome.
B
Yeah, and also, because we are all lazy people, sometimes I would say: okay, I would use some embedded crap. But then you can use, for example, hadolint, the Haskell Dockerfile linter, to check your Dockerfiles.
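A Dockerfile lint step along those lines could run as a CI job; the job name is an assumption, the hadolint image and CLI are from the hadolint project:

```yaml
# Lint every Dockerfile in CI before building (sketch).
lint-dockerfile:
  image: hadolint/hadolint:latest-debian
  script:
    - hadolint Dockerfile
```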
B
Quite true. And probably you can also run a check that allows only pipelines using images from a trusted registry — so, for example, your CI will only continue to work if the image comes from your own registry. That way you have a fast feedback loop for the developers or for yourself: okay, I'm using a public image from Docker Hub right now and not one of our own. In terms of compliance and security this will also help you a little bit.
C
B
You can do something like — so, for example, if you want to use environment variables, you can use something like this.
B
You mean something like this? This works right now, okay. And then you can say, for example, if you want to build a new image, you can set that as a variable — and then, I don't know, say we want to downgrade something; it doesn't make sense at all, but you want to overwrite it, and you can do something like this. Yeah, but I mostly don't like the idea of doing it that way, because it's not written into the CI configuration then, so you have no source of truth for it.
B
So there's no declarative way. I prefer the approach of replacing the value in the file itself, so it could be something like this, and then you change it right here. But fortunately I found Renovate two weeks ago, and it will do most of that job from my side now, so I can focus on different tasks instead of updating images every week, or at least once a month.
B
When it tries to pull an image, the Docker daemon, if you have configured the mirror, will go to the mirror first every time. If that call was successful, then it will stick to the mirror and won't try to go to Docker Hub, mostly.
A
In fact, this issue is about Docker-in-Docker, actually. But the thing is, you can configure your Docker daemon.json and tell it a registry mirror, which means, if you use it in your self-hosted environment, that's not an issue; the issue is about our own gitlab.com fleet. So yeah, this is one of the ways, and also, okay, in your runner you might need to update the lookup path for this one. So…
A
Mirrors — I think I've seen it, here it is. Oh yeah, it's actually in the blog post about mitigating this: you can pass it as a CLI flag to dockerd, or you edit the configuration file and just add it and say, hey, this is the registry mirror, gitlab.com, whatever.
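The configuration-file variant he mentions is the registry-mirrors key in /etc/docker/daemon.json (the CLI equivalent is dockerd --registry-mirror=…); the mirror URL below is a placeholder:

```json
{
  "registry-mirrors": ["https://registry-mirror.example.com"]
}
```

After editing the file, the Docker daemon needs to be restarted for the mirror to take effect.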
A
Yeah, and just to remind you about this blog post: it is mainly written for self-hosted environments. When you're using gitlab.com with your own runners in your own virtual machines, this applies as well, but for gitlab.com itself we're using different methods and we're running on the GCP cloud, so that takes away some of it. There are still challenges, but yeah, I would totally love to see you try this out and report back how things go.
A
Well, it's actually on gitlab.com in the blog already, so we won't need to blog about it, but I will definitely link it, just mention it somehow in the agenda, which I'm writing in one of my thousand tabs, but the…
C
Exactly, because I do this already for making our internal DNS available in a Docker image for some runners, to have access to the internal network — breaking the sandbox by definition, but only for a very specific setup around us.
A
So I think the discussion was really good. We also talked as long as possible to avoid coding anything.
A
Yeah, so I will be pushing out the blog post, maybe this week, about the monitoring plugin I did, right. I will try some other mitigations as well, just to make it easier, because the curl token-passing thing is, in my opinion, not so good. And the other thing to keep in mind is how to mitigate it in a way that, whenever the CI runner or the Docker CLI does something, it maybe gets detected in there.
A
So we will be seeing how we can improve it in our own software, make it more visible to the user, and how everyone else can deal with it. One of the problems I've seen is that currently everything runs unauthenticated.
A
So even if you say, hey, I'm getting a Pro tier from Docker Hub now, because I really want to pay them — and I think they deserve the money, to be honest — I still need to edit all my CI configurations, because I need to add the docker login. You can do that, but it's still work you need to keep in mind. So it's either sitting there and doing nothing, or saying: hey, we need a super easy solution.
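The docker login step he refers to would look roughly like this in a .gitlab-ci.yml; the variable names are placeholders you would define as masked CI/CD variables:

```yaml
# Authenticate against Docker Hub in jobs that pull or push images (sketch).
# DOCKERHUB_USER / DOCKERHUB_TOKEN are placeholder CI variable names.
before_script:
  - echo "$DOCKERHUB_TOKEN" | docker login -u "$DOCKERHUB_USER" --password-stdin
```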
B
There are a lot of solutions, so many ways to go. So literally there are a lot of options that you can use, and mostly you need to see whether it's not too over-engineered for your use case, or whether you at least understand it. There are also some big impediments, but I think the ecosystem is prepared for that: we have a lot of tooling, you don't need to rebuild, you don't need to reinvent the wheel all at once. Everything is out there right now.
A
I would add to that point: use what's there already. So it doesn't make sense that you, for example, install GitLab now just because of the registry — you could, probably, but if you already have a running system, if you have a local registry for something already, try to go that route. Same if it's easier for you to say, hey, I'm buying a subscription and it costs me like 200 a year or something like that.
A
Okay, good last sentence — famous last words, and now production is burning. No, with that, I hope that everyone found some hints, some new ideas. I will be publishing the blog post, I think, tomorrow, on Friday; the recording is already on YouTube either way, and then everyone can look it up: the blog post, the tweets and so on.