A: So, you know, we've heard these requirements a lot from open source users as well as people internally in the company. If you pull a lot from Docker Hub, Docker Hub will throttle you or even ban you. Sometimes, even for private deployments, or, you know, a fairly locked-down environment, or air-gapped, or something like that, you can use the proxy cache to gain access to those images, but without violating the security policies put in place, right?
A: You know, setting up multiple target repositories, whether they're on the same instance or on different target registries: so I can have a proxy cache for Harbor, for another Harbor; I can have a proxy cache for GCR and one for ACR, et cetera. So, multiple target repositories per Harbor instance. This goal here, number four, is RBAC. So one of the distinctions between this proxy project and a regular project is the RBAC on the proxy project.
A: So, if you're trying to hit gcr.io/library/busybox, you know, through use of a mutating webhook or something else, we should automatically intercept that request and then reroute it to the proxy cache that's been set up for it. And if you specify the proxy project in the pull spec to begin with, then it'll pull directly from the proxy.
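As a rough illustration of that rerouting, the core of such a mutating webhook is just a string rewrite on the image reference. This is a sketch only; the Harbor hostname and proxy project names below are hypothetical examples, not anything defined by Harbor:

```python
# Sketch of the reference rewrite a mutating webhook could perform.
# The Harbor host and proxy project names are hypothetical examples.

# Maps upstream registry hosts to proxy cache projects on a local Harbor.
PROXY_PROJECTS = {
    "gcr.io": "harbor.example.com/gcr-proxy",
    "docker.io": "harbor.example.com/hub-proxy",
}

def rewrite_image(image: str) -> str:
    """Reroute an image reference to its configured proxy cache project."""
    registry, _, repository = image.partition("/")
    proxy = PROXY_PROJECTS.get(registry)
    if proxy is None:
        return image  # no proxy cache configured for this registry
    return f"{proxy}/{repository}"

print(rewrite_image("gcr.io/library/busybox:latest"))
# harbor.example.com/gcr-proxy/library/busybox:latest
```

Pull specs that already name the proxy project would bypass this rewrite entirely, matching the second case described above.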
A: So this is an image pull flowchart detailing some of the scenarios around pulling from the proxy cache. The first one is fairly easy, right? It's when you're pulling an image through the proxy for the first time: the proxy is unpopulated to begin with, and so it will hit the upstream, pull it down, store it in the cache, and then serve it to the client.
A: And, you know, every time before it serves the image to the client, it'll always check the upstream to see if there's a newer version. You're pulling by tag, right? So it's possible that on the upstream that tag now belongs to a different digest, so the proxy is responsible for always pulling the latest copy before serving it to the client.
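That tag-to-digest check can be sketched as a comparison between the cached digest and whatever digest the upstream currently reports for the tag. The function and names here are illustrative, not Harbor internals:

```python
# Sketch of the tag-freshness check: before serving a cached tag, compare
# the cached digest against the digest the upstream reports for that tag.
from typing import Optional

def needs_refresh(cached_digest: Optional[str], upstream_digest: str) -> bool:
    """Return True when the cache is empty or the tag moved to a new digest."""
    return cached_digest is None or cached_digest != upstream_digest

# First pull: the cache is unpopulated, so fetch from the upstream.
assert needs_refresh(None, "sha256:aaa")
# The tag was re-pointed upstream to a new digest: refetch before serving.
assert needs_refresh("sha256:aaa", "sha256:bbb")
# Digest unchanged: serve straight from the cache.
assert not needs_refresh("sha256:aaa", "sha256:aaa")
```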
A: Because, you know, the assumption is that this proxy has been set up with the approval of the admin on the upstream. So, you know, everyone is aware of a proxy cache being set up for that upstream repository for the clients. So if the image has been removed from the upstream registry, then from the point of view of the Harbor proxy cache, it means that it's not intended to be served to Docker clients, so the proxy cache will respect the wishes of the upstream and basically abort that pull request.
A: So we also have the ability to set some retention policies on the proxy project. This is very similar to what we have right now, tag retention policies for a Harbor project, but, you know, the difference is that the retention policies on the proxy cache are constantly enforced. So it's not something that you run on a scheduled-task basis, right? It's not execution based; it's long-running, it's always enforced.
A
So
during
the
creation
of
the
proxy
project,
you
can
specify
retention
time,
which
is
how
long
the
artifacts
will
live
on
the
cache
before
it's
removed,
or
you
can
specify
policy
like
only
retain
the
most
recently
pulled
x,
artifacts
I'll
support,
negative
one
frame
for
retention,
etc.
So
this
is,
you
know,
the
format,
and
the
goal
of
this
is
very
similar
to
what
we
have
attack
intention
right
now,.
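A minimal sketch of that "retain the most recently pulled X artifacts" policy, with -1 treated as unlimited retention as described. This is illustrative only, not Harbor's implementation:

```python
# Sketch of a "retain the most recently pulled N artifacts" policy.
# keep == -1 means unlimited retention. Illustrative only.
from typing import Dict, List

def retained(last_pulled: Dict[str, float], keep: int) -> List[str]:
    """Return the digests kept under the policy, most recently pulled first.

    last_pulled maps artifact digest -> last-pull timestamp.
    """
    ordered = sorted(last_pulled, key=last_pulled.get, reverse=True)
    return ordered if keep == -1 else ordered[:keep]

pulls = {"sha256:a": 100.0, "sha256:b": 300.0, "sha256:c": 200.0}
assert retained(pulls, 2) == ["sha256:b", "sha256:c"]  # keep the newest two
assert retained(pulls, -1) == ["sha256:b", "sha256:c", "sha256:a"]
```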
A: And finally, the stretch goal is, you know, because, as we talked about, you're always going to be checking the upstream to make sure you serve the latest version of that tag, this is just allowing it to do that in the background, right, every time.
A: Every time there's an update in the upstream, the proxy cache will silently update itself, so it doesn't have to do that when there's an actual pull request. Complete logging of all pushes and pulls, you know, is very important, because the proxy cache is now functioning as an intermediary to, you know, broker all the transactions between the client and the upstream. So it's important to log all the requests going into the proxy cache for bookkeeping purposes.
C: I'm very excited about this one. This is actually something that I talked to Daniel Yang about last year, when we were running an event over at KubeCon in Barcelona, and we wanted to have kind of a proxy cache or a registry mirror of some sort for an event over at KubeCon.
C: We had a bunch of people pulling down images and we wanted them to immediately point towards a local Harbor instance, instead of going out and pulling down images from Docker Hub. So what I found was an old, old document, and it's still in the main repo, and this is not being used anymore; I was told this was functionality that was pulled out a long time ago.
A: I think the document you're referring to is leveraging Docker Distribution. Docker Distribution itself has the ability to configure, like, a remote URL, and because, you know, we consume Docker Distribution, you can do that in Harbor, essentially, and this feature builds on that implementation.
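For reference, Docker Distribution's pull-through cache mode is enabled with a `proxy.remoteurl` setting in its configuration file. A minimal sketch; the storage path and address are the usual defaults, and the upstream shown is Docker Hub:

```yaml
# Minimal Docker Distribution config sketch with pull-through caching on.
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  # Upstream registry to mirror; in practice this worked well only with
  # Docker Hub, which is the limitation discussed next.
  remoteurl: https://registry-1.docker.io
```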
A
I
think
we're
still
looking
to
leverage
that,
but
with
some
minor
changes
to
it,
but
the
problem
with
native
that
native
ability
of
doctor
distribution
tactics
as
of
pull
through
caches,
it's
limited
to
docker
hub.
It
didn't
open
itself
up
to
other
third
party
registries.
There
was
a
pr
to
get
emerged,
support
for
you,
know
other
registries,
and
then
I
think
it's
been
two
years
and
still
hasn't
been
merged
and
then
container
d
just
went
off
and
did
it
so
container
d?
A: Proxying from multiple target registries per Harbor instance is a key requirement here. So I think in most cases those users are just proxying for one; most people are just trying to get this feature so they can talk to Docker Hub. But in the future we definitely see people trying to proxy for different instances, or not having to stand up a new Harbor to proxy for something else.
B: Very cool. So that would work for an intermittent connection too. So, say you've got some instance of Harbor that doesn't have a very good internet connection: it could act as a proxy to keep a local copy of files and update them when possible, failing more gracefully, right? Yup, very cool.
A
So
here's
a
2.0
release,
you
know,
try
it
out.
It
was
a
big
release
for
us
for
four
months
4.5
months
in
the
making,
and
then
we
are
looking
at
some
of
the
things
that
we're
trying
to
do
for
2.1.
So
if
you
go
to
the
harvard
project
board.