A
Yeah, I think you should record yourself. Okay, yes, please go ahead, sorry.
B
Yeah, so as Alex said, this is a work in progress. We are pretty far along, and actually today I'm putting together a proposal compliant with the template. If everyone is interested, please go ahead; you will have a chance to get a sneak peek at what we are working on.
B
So that's it from my side. Maybe one last comment: as Alex mentioned, the first adapter we provide will probably be for MicroScanner, which is an open source image scanner from Aqua, but we will probably also wrap Clair, and we'll see how far we can go with Anchore.
C
Great. Can you talk about the different value propositions of Aqua versus Clair? As a user, would I use Aqua in addition to Clair, or why would someone use Aqua as a replacement for Clair?
B
Yeah, this is a good product question; I'm more on the technical side. But in short, as you already mentioned, there's a difference in the operating systems supported, and a difference in application dependencies. For example, Clair can scan npm packages for the Node.js community but does not support the Oracle Linux distribution. So there is a difference.
B
There is a kind of, I wouldn't call it competition, but you know, emerging scanners, and I think this will be an opportunity for users to choose the best one, and even use the unified adapter API to compare the results and have a choice. And obviously there are different pricing models. MicroScanner is open source; it's somewhat constrained, but on the other hand it gives you quite accurate results.
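The unified adapter API mentioned above could, in principle, let a client fan out one scan to several scanners and diff the findings. Here is a minimal sketch in Python (Harbor itself is written in Go); the `ScannerAdapter` interface, the adapter class names, and the report shape are all illustrative assumptions, not Harbor's actual API:

```python
from __future__ import annotations
from abc import ABC, abstractmethod

class ScannerAdapter(ABC):
    """Hypothetical unified interface each scanner wrapper implements."""
    @abstractmethod
    def scan(self, image_ref: str) -> set[str]:
        """Return the set of vulnerability IDs found in the image."""

class ClairAdapter(ScannerAdapter):
    def scan(self, image_ref: str) -> set[str]:
        # Stand-in for a real Clair API call.
        return {"CVE-2019-0001", "CVE-2019-0002"}

class MicroScannerAdapter(ScannerAdapter):
    def scan(self, image_ref: str) -> set[str]:
        # Stand-in for a real MicroScanner invocation.
        return {"CVE-2019-0002", "CVE-2019-0003"}

def compare_scanners(image_ref: str, adapters: dict) -> dict:
    """Run every adapter against the same image and diff the findings."""
    reports = {name: a.scan(image_ref) for name, a in adapters.items()}
    common = set.intersection(*reports.values())
    return {
        "common": common,
        "unique": {name: found - common for name, found in reports.items()},
    }
```

With a uniform report format, "compare the results and have a choice" reduces to set operations over vulnerability IDs, which is exactly the value of standardizing the adapter boundary rather than each scanner's native output.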
B
I don't want to compare those two during the community meeting, but obviously we'll give a kind of benchmark, and having a bunch of adapters implemented will give you real value in comparing and choosing. Right now you are constrained to Clair; you don't really know how often the Clair database is updated, and you are very limited by the capacity of the Clair maintainers to help you out with supporting it. It doesn't really scale very well.
C
So
this
is
a
feature
we're
working
on
for
1.10.
So
this
is
not
a
sleeper
for
a
1.0
release,
so
this
is
something
that
we
could
potentially
deliver
before
the
end
of
the
year.
But
it's
it's
very
much
in
a
road
map
and
it's
we're
already
working
on
engineering
side
of
things
as
you
can
see
yep.
So
that's
it
for
me
on
the
community
side.
E
Okay, so let me give you some background on the project quotas feature. Harbor enforces quotas on the resource usage of a project, setting a hard limit on how much artifact count and storage a particular project can use. This is the anchor feature in 1.9. At the last community meeting I just briefly demoed the idea behind quotas; in this meeting I will walk you through some end-to-end scenarios about the size and count of artifacts.
E
So,
as
you
can
see,
we
had
a
new
summary
page
for
a
project.
You
can
get
the
project
summary
data
in
this
page
and
let's
try
to.
E
Library,
yes,
so
let's
see
yes,
you
can,
you
can
see
the
corner
change,
the
column
change
to
one,
and
we
have
some
megabytes
calculated
of
all
my
story
and
so
that's
the
size
and
the
con
are
we
have.
We
have
intercepted
the
request
to
about
the
docker
purge,
so
we
analyzed
the
request
to
capture
the
blob
site
for
each
per
docker
image.
So
a
lot
that
size
is
for
the
docker
image
and
the
con
is
for
the
artifact,
so
the
card
are
also
enabled
for
the
hem
chart.
E
Yes,
as
you
can
see,
the
count
is
down
to
one:
let's
try
to
push
another
image
into
my
library
project
and
try
to
see
what
happened.
E
Yes,
you
can
see
the
count
has
been
after
two,
but
the
size
doesn't
change.
So
why?
Because
I
I
just
put
the
same
image
with
different
type
in
harbor
of
the
shared
blob
in
project
will
not
be
double-counted,
so
this
two
image
are
shared
the
same
blobs,
so
you
can
see
that
we
have.
E
Yes, you can see the storage consumption went up in the other project, while the same push into the library changes nothing. So a blob shared across projects is counted in each project, that is, double-counted.
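The counting behavior in the demo, blobs deduplicated by digest within one project but counted again in every project that holds them, can be sketched as follows. This is a toy illustration with hypothetical data structures; Harbor's real implementation intercepts docker push requests and is written in Go:

```python
class QuotaAccounting:
    """Toy per-project storage accounting keyed by blob digest.

    A blob pushed twice to one project counts once; the same blob
    pushed to a second project counts toward that project too.
    """
    def __init__(self):
        self.projects = {}  # project name -> {digest: size in bytes}

    def push_blob(self, project: str, digest: str, size: int):
        # Storing by digest makes re-pushes of the same blob idempotent.
        self.projects.setdefault(project, {})[digest] = size

    def storage_usage(self, project: str) -> int:
        return sum(self.projects.get(project, {}).values())

acct = QuotaAccounting()
# Two tags of the same image in "library" share one blob: counted once.
acct.push_blob("library", "sha256:aaa", 100)
acct.push_blob("library", "sha256:aaa", 100)
# The same blob pushed to another project is counted there as well.
acct.push_blob("other", "sha256:aaa", 100)
```

Keying usage by digest rather than by tag is what makes the within-project deduplication fall out for free.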
E
So the retag returned successfully; let's go into the target project. Yeah, the count has changed: we have one repo, and the size is the same as in the source project. And let me show you an interesting case: what will happen when I reach the threshold of my project quota? Let's try it.
E
Happened:
yeah
the
the
java
image
is
about
maybe
four
or
five
hundred.
E
Yes,
look
you
can
see
the
last
blob
cannot
be
pushed
successfully
because
the
size
is
already
reached
through
the
threshold.
That
was,
let's
try
to
see
how
many
we
got
it
yeah.
We,
let's
try
to
see
this.
The
color
of
size
project
we
already
got
100
megabyte
vlog
has
been
uploaded
successfully
to
my
project,
but
last
one
we
300
megabyte
size,
so
that
request
cannot
be
pushed
successfully
to
my
my
project
because
the
the
the
calls
has
already
been
reached
to
the
limitation.
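The rejection in the demo amounts to a check against the project's hard limit before accepting each blob. A rough sketch with hypothetical names (Harbor performs this check inside its registry proxy, in Go):

```python
class QuotaExceeded(Exception):
    pass

class ProjectQuota:
    """Toy hard limit on a project's total blob storage, in bytes."""
    def __init__(self, limit: int):
        self.limit = limit
        self.used = 0

    def reserve(self, blob_size: int):
        """Accept the blob only if it fits under the hard limit."""
        if self.used + blob_size > self.limit:
            raise QuotaExceeded(
                f"adding {blob_size} bytes exceeds limit {self.limit}")
        self.used += blob_size

quota = ProjectQuota(limit=200 * 1024 * 1024)   # 200 MB hard limit
quota.reserve(100 * 1024 * 1024)                # 100 MB blob: accepted
try:
    quota.reserve(300 * 1024 * 1024)            # 300 MB blob: rejected
    rejected = False
except QuotaExceeded:
    rejected = True
```

Note the check happens before usage is incremented, which mirrors the demo: the failed 300 MB blob leaves the project's recorded usage at the 100 MB that actually landed.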
A
Yeah, my first comment: I think this is super cool. Thank you.
A
So after the push fails, will the storage be rolled back, or do we just do the calculation based on blobs and need to wait for the GC?
A
Oh okay, yeah, that's really cool. By the way, I think we need to do a little more investigation in Docker's code to see if we can fail early, because of the current experience. Hopefully we can improve the experience before we release, so that the user will not need to wait through the retries to see the error message. We will try to fail early to give the user quick feedback saying the quota is not enough.
E
Yeah,
I
will
figure
out
why
the
dockland
is
keeping
retry
to
push
the
field
abroad.
F
Okay, I think the quota demo is very, very fantastic. Okay, what I want to talk about is tag retention, which is also an anchor feature for 1.9. Before watching the demo, I want to give you some introduction to this feature, because I think we need to understand its core, its main idea.
F
Okay,
I
think
the
motivation
to
to
develop
this
feature,
because
you
know
as
a
artifact
of
registry
hover
wheel
surveying
will
serve
most
of
the
cd
pipeline
and
you
know
as
a
csct
pepperland.
They
always
create
a
large
scale.
Artifacts,
and
you
know
they
will.
You
know,
create
a
lot
of
you
know,
out-of-date
artifacts
and
the
cleaning.
Those
auto,
ditch
artifacts
is
a
strong
requirement,
because
you
know
we
needed
to
clean
the
repository
to
store
some
new
ones.
F
The
background
I
want
to
mention
here
is
this
idea
is
originated
from
nixon
system,
which
is
one
of
our
us
meetinger
from
highland
software.
Last
year
he
developed
a
harbor
delta
taggarty
project,
which
is
a
script-based
project
to
deliver
the
similar
idea.
Yeah,
you
can
run
some
scripts
to
clean
your
repository
and
and
then
we
ask
nissan
to
you
know,
to
work
out
a
native
code
with
the
native
solution
and
he
submits
the
original
proposal.
F
Yeah,
the
the
original
proposal
is
the
submissive
medicine
last
year
and
this
year,
when
we
want
to
deliver
this
feature,
we
have
some
new
changes
and
some
new
requirements
coming
in,
so
we
modify
the
proposal
so
for
this
tag
retention
we
have
two
proposals:
okay,
about
the
media,
main
idea
of
attack
irritation.
F
It
will
be
a
policy-based
and
the
policy
will
be
created
under
the
project
level.
Each
project
will
have
one
only
one
retention
policy,
and
in
this
retention
policy
you
can
create
a
multiple
rows.
F
The
rule
is
a
standard
pattern
containing
you
know
like
action
like
some
condition,
and
you
can
also
specify
some
selector
to
narrow
down
the
scope.
Okay,
I
at
least
at
least
the
example.
For
example,
you
can
retain
the
most
recently
pushed
10
image
and
the
match
is
some.
You
know
their
tags
may
match
some
special
pattern
in
repository,
a
orb
or
or
some
regular
expression,
and
for
the
rules
in
the
same
policies.
F
The
relation
for
those
rules
in
the
same
policy
will
be
all
a
relation
I'll
give
an
example
here,
for
example,
if
we
have
one
or
draw
zero,
if
the
after
a
value
h,
the
zero
may
also
put
you
know,
keep
wrapper
x
tag.
One
wrapper
x
takes
three
wrapper
y
tank
two
and
we
have
another
rule
one.
After
a
value
into
this
rule,
the
output
may
be
a
return.
F
That
means
we
keep
reaper
x,
take
three
a
repository
so
after
the
whole
execution
of
the
these
two
rules,
the
tag
is
reproductive
security,
provide
attack
to
repo
everybody
exhibits
three
and
the
repository
fail.
Fair
will
be
retained,
all
other
ones,
you
know
all
others
are
not
in
this.
Retaining
list
will
be
removed.
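The OR semantics just described, each rule independently selecting tags to keep and the policy retaining the union, can be sketched like this (illustrative names; Harbor's retention engine is written in Go):

```python
def apply_policy(tags, rules):
    """Retain the union of every rule's selection; delete the rest.

    `tags` is a set of "repo:tag" strings; each rule is a function
    mapping the full tag set to the subset it wants to keep.
    """
    retained = set()
    for rule in rules:
        retained |= rule(tags)          # OR relation between rules
    deleted = tags - retained
    return retained, deleted

tags = {"repoX:tag1", "repoX:tag2", "repoX:tag3", "repoY:tag2"}
rule0 = lambda ts: {"repoX:tag1", "repoX:tag3", "repoY:tag2"} & ts
rule1 = lambda ts: {"repoX:tag3"} & ts
retained, deleted = apply_policy(tags, [rule0, rule1])
```

Because the combination is a union, adding a rule can only ever keep more tags, never fewer; that is the property the later discussion about rule combinations relies on.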
F
Okay,
in
one
thousand
now
we
will
support
the
following
five
arrows:
the
first
one
is
retirement
scene.
That
means
you
you
just
you
do
not
need
to
spend
any
parameter,
but
you
can
use
some
selector
to
narrow
down
the
the
candidate
list
and
we
also
support
the
return.
The
mostly
pushed
this
rule
is
a
value
evaluated
based
on
the
push
the
time
of
the
image
or
chart
retain
image
from
the
last
last
number
of
days.
This
is
a
rule,
is
evaluated
based
on
the
lifetime
of
the
artifact
and
return
the
most
recent
port.
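One of the rules listed above, retain the N most recently pushed artifacts, can be illustrated with a short sketch. The artifact dictionary fields here are hypothetical, chosen only to show the ranking by push time:

```python
from datetime import datetime

def most_recently_pushed(artifacts, n):
    """Keep the n artifacts with the latest push time."""
    ranked = sorted(artifacts, key=lambda a: a["push_time"], reverse=True)
    return {a["tag"] for a in ranked[:n]}

artifacts = [
    {"tag": "v1", "push_time": datetime(2019, 7, 1)},
    {"tag": "v2", "push_time": datetime(2019, 8, 1)},
    {"tag": "v3", "push_time": datetime(2019, 9, 1)},
]
kept = most_recently_pushed(artifacts, n=2)
```

The "pushed within the last N days" variant would differ only in the predicate: filter on `push_time >= now - timedelta(days=n)` instead of taking a fixed-size prefix.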
F
Okay, here I want to thank the contributors: our maintainer from Hyland Software and colleagues from VMware. They did a lot of work to deliver this feature.
F
So first, because tag retention works at the project level, here is the project. Let's see what we have now: we have five repositories. For golang we have two tags, one is ttl and one is 1.25.5. Back in the repository, let's see what we have. Oh sorry, we have no chart.
F
We will provide some documentation or tips to describe the rules; actually you can understand what a rule does from its text. For example, let's set "retain always", matching something like golang, matching everything for golang, and we do not set any labels. So now we have a rule created: retain always, with repository matching golang and tags matching double star. That means all the tags and the chart, all the tags on golang, will be retained.
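The repository and tag selectors in the demo are glob-style patterns, with the double star matching everything. A rough illustration using Python's `fnmatch` (Harbor's actual matcher may use different pattern semantics; the function and patterns here are assumptions):

```python
from fnmatch import fnmatch

def selector_matches(repo, tag, repo_pattern, tag_pattern):
    """True if the artifact falls inside the rule's scope."""
    # Treat "**" as match-everything, like the tag selector in the demo.
    repo_ok = repo_pattern == "**" or fnmatch(repo, repo_pattern)
    tag_ok = tag_pattern == "**" or fnmatch(tag, tag_pattern)
    return repo_ok and tag_ok

matched = selector_matches("golang", "1.25.5", "go*", "**")  # in scope
skipped = selector_matches("photon", "latest", "go*", "**")  # out of scope
```

This is why, later in the demo, photon is deleted while golang survives: the repository selector decides which artifacts a rule even considers.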
F
So you can see that because the api-github repository is not covered by the rules, it will be deleted after we run the tag retention. Let's see another one, golang: because the rule says keep golang, everything will be there. Let's check another one, for example photon.
F
Okay,
photon
will
be
delayed,
so
this
is
a
dry
run.
So
actually
you
can,
you
know,
give
you
a
preview
or
report
for
what
will
happen
after
you
launch
the
retention
rules
before
you
real,
really
trigger
the
action.
I
think
that
so
you
know
our
retention
will
be
destroyed.
Some
artifacts
so
check
what
will
happen.
Okay
now,
let's
say
launch
a
real
action.
F
Okay,
I
think
everything
is
okay.
Let's
see
go
along
what
happened
go
long
either
return
some
other
will
dedicate.
So
this
is
a
report.
Let's
back
to
the
repository,
you
can
say
only
golang.
F
Only the tags on golang are retained. Let's navigate to the Helm charts. Okay, you can see only the agent chart is here. Okay, that's the real action. Actually, we'll provide a schedule here: you can set the schedule to periodically trigger the retention process, and you can also add more rules here.
C
Yeah, I have a question: how many rules can be configured right now at most? Is there an upper limit?
A
The only concern is that we are supporting a combination of rules. This is a really powerful feature, no disagreement, but when we support a combination of rules, it can easily, well, I won't say easily, but possibly, confuse the end user about what will happen.
A
Image number, or count, or number of days, and there's, I don't know, "always", and we have the filters and the different rules. If you consider the combination of all of that, yeah.
F
Yeah, I know. Because it's policy-based, at one point there are multiple rules, so maybe it's not very easy at first: you need to understand the behavior of those rules. But actually for most of the rules you can understand their behavior from their text. For example, "the most recently pushed" means the latest, right?
F
For example, if you set the number to three, it will only keep the three most recently pushed images and the others will be cleared; the order is calculated based on the push time, right? So I think those ones are not so difficult to understand.
H
Stephen, I think if you look at ECR from AWS and their retention policy, their documentation page lists all the rules and all the compositions and how to explain them. Maybe we should add a similar page for Harbor as well, to explain.
H
Yeah, I have a question about: what's the difference between recently active and recently pulled?
F
That means, yeah, it's a combined behavior. For example, a push or a pull will make the image active, right? Because it's an action applied to the image, the image becomes active again. So we select the latest of the push or pull time.
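In other words, "recently active" ranks by the later of the push time and the pull time, while "recently pulled" looks at the pull time alone. A one-function sketch of the combined behavior (field names hypothetical):

```python
from datetime import datetime

def last_active(artifact):
    """Activity time = the later of the push and pull timestamps."""
    return max(artifact["push_time"], artifact["pull_time"])

art = {
    "push_time": datetime(2019, 6, 1),
    "pull_time": datetime(2019, 9, 1),   # pulled well after it was pushed
}
active_at = last_active(art)
```

So an old image that is still being pulled regularly stays "active" and survives an activity-based rule, even though a push-time rule would discard it.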
E
So I have a question: do we need to set Harbor into read-only mode during the execution? I mean the retention rules' execution.
A
Question: I can give you an example in the chat window, like docker pull ubuntu@sha256:blah-blah-blah, like this. Because currently the filter is based on tags, if a user pulls using the sha256 digest, that means the filter will be bypassed, right? Or do we plan to do any magic?
F
Because, yeah, I believe so, because the job is running in the job service. The retention process runs in the job service, and the deletion is launched via the delete API, so I think the quota will update, because it is triggered via the v2 proxy.
A
We may continue the discussion offline, but personally I think a better way is to update the usage when the user deletes the image. Imagine I'm a project admin, not a system admin, and I pushed some image by mistake; then my quota is full. Now I want to push a very important image, but...
A
So I think maybe we can take that offline, but I don't think it's a very good choice here, because that user cannot trigger the GC. He has to wait for the admin to trigger the GC before he is able to push.
G
When will the deletion happen? I mean daily or weekly, or what's the timeline?
G
With the GC, or the other garbage collection cycle? No.
G
After
we
restore
from
the
gc,
would
it
would
it
would
he
resume
the
actual
tech
deletion.
F
Yeah, I think, because we run the retention process in the job service, if the job fails it will retry three times with a reasonable interval. That's the only back-off solution so far.
G
In that case it might skip one cycle for the tag retention action, right? Yes.
E
I just have one more request for tag retention: I want to clone my rules from project A to project B, because it is so complicated for me to set a new rule with the same settings. So it would be better to provide a way to clone my rules to another project.
C
Well, I think you don't have to understand what's going on when you add them together, because they're all treated as separate rules. Potentially, the 15 rules could just mean 15 different end users who set up these rules. It's an OR relationship, not an AND, so basically it will always retain whatever you're trying to retain, okay?
G
One point we need to emphasize: is it a global retention policy, or only for this project, or is it a global one? It's 15 per project right now. Okay, so per project you have 15 rules to retain the tags you want. Yeah, so will the deletion actually trigger the replication?
C
Okay, by the way, just for the people who will be listening to this on a recording: we also have another feature that's a little further out, not for 1.9, I think possibly for 1.10, where we have this concept of immutability, which means that certain tags cannot be deleted. So in the context of tag retention, I think if you had configured immutability at a repository level, those tags would be excluded from the tag retention policies, which wouldn't take effect on them.
C
Basically, right now the default behavior is: if you push the same tag over and over again, the pointer will change to the new digest. By enabling immutability, you can't re-push the same tag, because those image builds are probably in use right now, and we want to have traceability between the image build and the actual image version that's being hosted.
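Combining the two ideas, an immutable tag would both reject re-pushes and sit outside retention deletion. Since this feature was described as roadmap, everything here is speculative illustration, not Harbor behavior; the class and method names are invented:

```python
class Repository:
    """Toy repo where tags map to digests; some tags are immutable."""
    def __init__(self):
        self.tags = {}          # tag -> digest
        self.immutable = set()  # tags that may never be re-pushed or deleted

    def push(self, tag, digest):
        # Re-pushing an existing immutable tag must not move its pointer.
        if tag in self.immutable and tag in self.tags:
            raise PermissionError(f"tag {tag!r} is immutable")
        self.tags[tag] = digest

    def retention_delete(self, candidates):
        """Delete retention candidates, skipping immutable tags."""
        for tag in candidates - self.immutable:
            self.tags.pop(tag, None)

repo = Repository()
repo.push("v1.0", "sha256:aaa")
repo.immutable.add("v1.0")
repo.push("nightly", "sha256:bbb")
try:
    repo.push("v1.0", "sha256:ccc")   # re-push of immutable tag: rejected
    repush_ok = True
except PermissionError:
    repush_ok = False
repo.retention_delete({"v1.0", "nightly"})  # v1.0 survives, nightly goes
```

The traceability argument from the discussion maps to the invariant the sketch enforces: once a tag is immutable, its tag-to-digest pointer can never silently change.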
D
Okay, if there are no other questions, let's end today's meeting. Thank you, thank you everyone, thanks.