A: So, hey everyone, welcome to the Harbor community meeting. You know we released Harbor 2.1; if you go into the releases...
A: You can see we GA'd last week. This has all the features that we've been talking about: the non-blocking garbage collection, the proxy cache, the P2P integration with Kraken and Dragonfly, the Sysdig image scanner, and then support for Kubernetes machine learning data models.
A: So please give it a try. I think the website and docs have all been updated to reflect the latest version. The only thing left is the blog, which is under review and coming out very soon, and then I believe the Helm version comes out at the end of this week.
A: So, you know, the proxy cache goes a long way to help mitigate against upstream rate limits, because you're no longer pulling so much from Docker Hub or another upstream registry; you're pulling from the cache when possible.
A: I did notice, when I was writing up the proxy cache feature for the blog, that there's something we can actually improve upon.
A: If you go through the Docker Hub blog on the new rate limiting, which goes into effect on November 1st, you have 100 pulls per six hours for anonymous users and 200 pulls for authenticated users, and the limit is actually based on the number of GET manifest requests, whereas it was previously based on image blobs. So now, even if the image is completely intact, even if none of the image blobs were updated, you're still triggering the rate limiter.
A: So, you know, we sort of designed this feature prior to this announcement, but in anticipation of this development, because users had been coming to us and talking about how they were getting rate limited by Docker Hub already. But essentially, right now, every time you hit a proxy project, the proxy is hitting the upstream to see if the image has been updated.
A: It either serves you what's in the cache, or reaches out to get the updated copy from Docker Hub and then syncs it down to you. But it's actually doing a GET manifest request, so we are also triggering the rate limiter every time we compare the digests to see if we have to re-pull.
A: I kind of realized that as I was writing up the blog, so I've already filed a ticket for 2.1.1 to change from a GET request to a HEAD request. So if the image hasn't been updated, you're not triggering that rate limiter; you're only triggering it if you're getting a newer copy of that image. So it should be much better.
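[Editor's note: to make the ticket concrete, here is a minimal Go sketch of the digest check described above. It is not Harbor's actual code; the package, function names, and parameters are illustrative. It relies on the documented Registry V2 behavior that a HEAD of /v2/&lt;name&gt;/manifests/&lt;tag&gt; returns the manifest digest in the Docker-Content-Digest header, without counting as a pull the way a GET does.]

```go
// Package proxycache sketches the stale-check the ticket describes: HEAD the
// upstream manifest, compare the returned digest with the cached one, and
// only issue a GET (which counts against Docker Hub's pull limit) when the
// image actually changed. Illustrative only, not Harbor's implementation.
package proxycache

import (
	"fmt"
	"net/http"
)

const manifestV2 = "application/vnd.docker.distribution.manifest.v2+json"

// upstreamDigest HEADs /v2/<repo>/manifests/<tag> and returns the
// Docker-Content-Digest header. HEAD requests are not counted as pulls.
func upstreamDigest(registry, repo, tag, token string) (string, error) {
	url := fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, repo, tag)
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Accept", manifestV2)
	if token != "" {
		req.Header.Set("Authorization", "Bearer "+token)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("HEAD manifest: %s", resp.Status)
	}
	return resp.Header.Get("Docker-Content-Digest"), nil
}

// needsRefresh reports whether the cached copy is stale and a GET (a real,
// rate-limited pull) is actually required.
func needsRefresh(cachedDigest, registry, repo, tag, token string) (bool, error) {
	d, err := upstreamDigest(registry, repo, tag, token)
	if err != nil {
		return false, err
	}
	return d != cachedDigest, nil
}
```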
A: Yeah, I wish we had caught this earlier, but I guess we just didn't read between the lines that they were charging based on GET requests instead of image blobs. But it will be part of the 2.1.1, which is coming out.
A: There's a fallback, right, but with this improvement it'll be a lot better.
C: So what happens with the proxy cache if we are already at or above the limit and we try to pull down another image?
C: Will we get an error somewhere in the proxy cache logs, or what happens?
A: I think they would just deny the request right now, not throttle it, yeah.
C: Similar to how GitHub does it: if you've done too many requests, it just cuts you off and you get...
A: Well, we can try it; it's easy enough to test the theory. But we should have some logs that reflect that it's not, you know, being throttled by the client.
A: You're right, that's a separate case, because the only cases we cover right now are: if the image is non-existent in the upstream, in which case we interpret it as intentional (they no longer want to serve that image), so we wouldn't even serve what's in the cache; or if the upstream is not reachable (sorry, not the cache, the upstream), then we'll just serve what's in the cache. In the case where the upstream blocks you, it's a little ambiguous, but I think it would just serve what's in the cache.
B: Seems like something that might make a reasonable feature flag, because I can see some users wanting it to just serve what's in the cache (after all, that's why they've deployed a cache), but some users would want: no, if the upstream is down or blocking it, then we want to pass through that authorization failure.
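[Editor's note: the cases discussed in this exchange amount to a small decision table; the sketch below summarizes them in Go. It is illustrative only, and ServeFromCacheOnUpstreamError is B's hypothetical feature flag, not an existing Harbor option.]

```go
// Package proxycache: a hypothetical decision table for the fallback
// behavior discussed above. Not Harbor's actual logic.
package proxycache

import "net/http"

// ServeFromCacheOnUpstreamError is the hypothetical feature flag: when true,
// a blocked or rate-limited upstream falls back to the cached copy; when
// false, the upstream failure is passed through to the client.
var ServeFromCacheOnUpstreamError = true

// Decide maps the upstream's response (or lack of one) to an action.
func Decide(upstreamStatus int, upstreamReachable bool) string {
	switch {
	case !upstreamReachable:
		return "serve-cache" // upstream down: serve what we have
	case upstreamStatus == http.StatusNotFound:
		return "deny" // image removed upstream: treated as intentional
	case upstreamStatus == http.StatusTooManyRequests,
		upstreamStatus == http.StatusUnauthorized:
		if ServeFromCacheOnUpstreamError {
			return "serve-cache"
		}
		return "pass-through-error" // surface the upstream failure
	default:
		return "serve-upstream"
	}
}
```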
C: So do we have a way to schedule the cached images?
C: Yeah, scheduled updates, essentially.
A: No, the behavior right now is it'll always reach out to the upstream to see if it's stale, and if it is, it'll always update it.
C: I mean, if I set up a replication, that's a different target, right? With replication, the registry that we're pulling from is then the...
C: Yeah, so you wouldn't serve replicated images from the proxy cache; that's what I'm saying.
A: All right, yeah, there is no scheduled proxy ability right now. If it's on, it's just triggered by an actual pull, whether that's through your pod YAML or through a docker pull command. There's no scheduled hitting of the proxy cache and then the upstream.
A: Not yet; we kept the MVP experience pretty basic. We thought about adding more stuff into it, but that's ultimately why we decided to just leverage the existing project creation flow, like a regular Harbor project. It's the exact same process, except you enable it as a proxy cache, and pretty much all the other features within the project have been disabled, except for the retention policy. There's a default retention policy of seven days, and so we do anticipate...
A: We have the 2.2 feature hopper pretty much nailed down. You can get to the project page from within the main Harbor page: just go into Projects and then into the Harbor project board to look at the 2.2 product suggestion swimlane. So there's going to be a couple of improvements to robot accounts.
A: The one I want to talk about today is really exposing metrics. This is something we've talked about a lot: exposing metrics on an endpoint where Prometheus can scrape the data. And there's actually a proposal that's been put out; if you go into the community repo under pull requests, there's an "add metrics for Harbor" pull request.
A: It covers the relevant statistics in Harbor, whether that's the infrastructure it's running on, the Kubernetes cluster and the orchestration itself, or some of the business logic and registry operations. Everything that's currently exposed through the API is potentially a candidate for exposing to Prometheus. We haven't really nailed down the final bucket list of what we want to expose; we're going to try to keep 2.2 pretty simple, just to make sure the framework works.
A: There are two possible solutions for exposing it to Prometheus, and as for the contents to expose, these are just some of the basic things listed here: requests in flight, plus anything from the API metrics provided by Docker Distribution itself.
A: Things like health status. Then there's the orchestration piece, whether it runs as a docker-compose deployment or as a Kubernetes cluster. Related to that, there's not the registry operations but the Harbor registry infrastructure itself: the deployed services and any kind of cron jobs. And then finally there's the business piece, which is all the registry operations, anything the Harbor API provides.
A: You can see here, under how to expose, we're just focusing on exposing core, but eventually we're going to be covering all the different pods that are part of the Harbor deployment: core, registry, jobservice, database, nginx, ChartMuseum, Redis. All of these can be exposed. And then there's the exporter here; the exporter basically runs to collect and forward these service stats from all the various Harbor components to the Prometheus endpoint. We're looking at OpenTelemetry as a collector, as in the second solution.
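[Editor's note: for readers unfamiliar with the mechanics, here is a minimal Go sketch of what "an endpoint Prometheus can scrape" looks like with the official client library (github.com/prometheus/client_golang). The metric name, labels, and port are made up for illustration; the actual metric set is exactly what the proposal above is still deciding.]

```go
// Minimal Prometheus scrape endpoint in Go. Illustrative only; this is not
// the Harbor exporter, and harbor_core_requests_in_flight is a hypothetical
// metric name.
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var inFlight = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "harbor_core_requests_in_flight", // hypothetical metric name
	Help: "Number of HTTP requests currently being served.",
})

func main() {
	prometheus.MustRegister(inFlight)

	// Wrap a handler so the gauge tracks requests in flight, one of the
	// basic metrics mentioned above.
	api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		inFlight.Inc()
		defer inFlight.Dec()
		w.Write([]byte("ok"))
	})

	http.Handle("/api/ping", api)
	// Prometheus scrapes this endpoint on its own schedule.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9090", nil))
}
```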
A: The last thing is how we run Prometheus: whether it's part of the Harbor cluster operator as a whole, or just something the Harbor cluster operator can call through a CRD or something like that. And I think, particularly when it's deployed through our Harbor operator as a cluster, there's a lot more information and related metrics we want to look at, possibly dead processes.
A: So this is a basic proposal. Just glancing through it, I would add that everything should be exported per project, so report generation can be tied to the tenants, and maybe also be able to export...
A: This is something we've talked about for a long time. Unfortunately we kept putting it off, but we're finally getting around to it, and if you want your registry to be part of your Kubernetes strategy, then the metrics are definitely important. So, in closing: if you have any comments on the design here, or anything specific you want exposed to Prometheus, please go through it and make some comments.
A: That's what I wanted to share today, specifically on the Prometheus integration. I'm going to talk about the robot accounts one at the next meeting, when we have more, but essentially we're doing a couple of things for robot accounts. We're scoping robot accounts to the instance level, instead of creating a robot at the project level.
A: So instead of going into a project to create a robot, you're going to be creating a robot account (or I guess they should be called service accounts, since that's how they function) at the system level, and you can add permissions for that robot account to different projects at creation, and be able to dynamically change those permissions.
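[Editor's note: as an illustration of instance-level scoping, here is a hedged Go sketch of what creating such a robot against the Harbor v2 API could look like. The endpoint and every field name here are assumptions about a design that was still in planning at the time of this meeting; check the API spec of your Harbor version, and note that authentication is omitted for brevity.]

```go
// Hypothetical sketch: create one system-level robot with pull permission
// on two different projects. Payload shape is an assumption, not a spec.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type Access struct {
	Resource string `json:"resource"`
	Action   string `json:"action"`
}

type Permission struct {
	Kind      string   `json:"kind"`      // "project"
	Namespace string   `json:"namespace"` // project name
	Access    []Access `json:"access"`
}

type Robot struct {
	Name        string       `json:"name"`
	Level       string       `json:"level"`    // "system" instead of "project"
	Duration    int          `json:"duration"` // days the token stays valid
	Permissions []Permission `json:"permissions"`
}

func main() {
	robot := Robot{
		Name:     "ci-puller",
		Level:    "system",
		Duration: 30,
		Permissions: []Permission{
			// Same robot, different projects: the instance-level scoping
			// discussed above.
			{Kind: "project", Namespace: "team-a",
				Access: []Access{{Resource: "repository", Action: "pull"}}},
			{Kind: "project", Namespace: "team-b",
				Access: []Access{{Resource: "repository", Action: "pull"}}},
		},
	}
	body, _ := json.Marshal(robot)
	// Real deployments need admin credentials on this request.
	resp, err := http.Post("https://harbor.example.com/api/v2.0/robots",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```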
A: The other thing is we're probably going to open up API access for robot accounts, which is currently not available when deployed with OIDC, but that's something users have also been asking for. I think there's a lot of value in that.
A: So yeah, please go through the 2.2 product suggestions swimlane, because we're in early planning right now and we don't really have a date locked down for the release. It's going to depend on the particular payload that we'll deliver, but definitely go through and add any questions and comments you have. I don't know if Michael or Jonas have anything else to add, on the release or the community or in general.
A: Sure, I can just talk about it a little bit. Essentially, what we did in the 2.1 release for P2P integration is allow users to deploy a P2P solution, like Alibaba's Dragonfly or Uber's Kraken, and have it paired against Harbor as an endpoint. Think of it like a registry endpoint that you can use for replication, and there's an action within Harbor called preheating.
A: The name is still in flux, we might change it, but essentially what preheating does is migrate images from Harbor to the P2P network, where the content is seeded through a series of nodes that is highly available and highly distributed, to distribute those images to thousands or possibly tens of thousands of nodes at a much more efficient, much faster rate than we could ever do through Harbor itself.
A: We sort of offloaded that work to the P2P side, and they all have their own way of doing it, so we're sort of opening up that ecosystem by doing it the way we did, allowing you to pair against any number of external P2P solutions. And so preheating an image is very similar to deploying or pulling an image.
A: It signifies a deployment of that image into the wild, and that's why we have security policies controlling the preheating action. Much like you can configure security policies for pulling an image off Harbor, such as disallowing pulling an image if it's unsigned, you can disallow preheating an image if it's unsigned, or disallow preheating an image if it has a scanned vulnerability of a certain severity or higher. We're sort of borrowing from that workflow in the project configurations for image pulling.
A: But the one thing we added is you can specify the specific repository names and tag names to match against, and then have those policies overlay on top to determine how the preheating is triggered. So it's all doable from within the Harbor UI: you add the endpoint...
A: You turn it on, you configure the policies that you want, and then, yep, it's very easy. That's really it.
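[Editor's note: a rough Go sketch of the overlay logic just described, with hypothetical types and thresholds. Harbor's real preheat implementation will differ; this only shows how repository/tag filters plus security gates could compose.]

```go
// Package preheat: hypothetical sketch of the preheat decision described
// above. Repository/tag filters select candidate images, and security
// policies overlay on top to block unsigned or vulnerable images.
package preheat

import "path"

type Image struct {
	Repository string
	Tag        string
	Signed     bool
	MaxCVSS    float64 // highest severity found by the scanner
}

type Policy struct {
	RepoPattern   string  // glob, e.g. "library/*"
	TagPattern    string  // glob, e.g. "v*"
	RequireSigned bool
	SeverityCeil  float64 // block if MaxCVSS is at or above this value
}

// ShouldPreheat reports whether the image matches the policy's filters and
// passes its security gates.
func ShouldPreheat(p Policy, img Image) bool {
	repoOK, _ := path.Match(p.RepoPattern, img.Repository)
	tagOK, _ := path.Match(p.TagPattern, img.Tag)
	if !repoOK || !tagOK {
		return false // not selected by the repository/tag filters
	}
	if p.RequireSigned && !img.Signed {
		return false // disallow preheating an unsigned image
	}
	if img.MaxCVSS >= p.SeverityCeil {
		return false // scanned vulnerability too severe to release
	}
	return true
}
```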
A: I also want to say real quick that we released 2.0.3 as well, and there is a certain bug that crops up; I've seen it with two customers already. Let me go to the releases tab real quick to open up the issue, the full list of issues in 2.0.3.
A: Yeah, so we usually do n minus two, for features... for bugs, sorry, Michael.
D: And it talks about, you know, how we deal with this. Maybe we should update it now, Alex; now I have to find that one. So there you go, okay, yeah. We talk about the policy there, but maybe we should update that minor release support matrix, just update it really quickly now.
A: Makes sense. Otherwise it's n minus two for minor releases, and for things like vulnerabilities we always adhere to that. For some of the smaller things, like a UI issue or a typo or something like that, we still adhere to this pretty strictly, but the actual release dates really depend on our bandwidth.
A: All right, I don't know if there's anything else anybody wants to share, but that's it for me.
B: Yeah, I think it's great. Frankly, I think it's great that you're doing it at all, given that there are the other meetings for the actual developers. I think it's great that you're doing a specific community meeting, so I'm happy with whatever cadence you think makes sense. Monthly is totally fine for me.
A: Great, thanks. All right, if nothing else, I think we can end here. Thanks, guys.