Description
CNCF Harbor's Community Zoom Meeting
SIG Docs kick off!
A
Okay, we're on. Hello! Everyone, welcome to the Harbor community meeting. Today is February 23rd, and this is the official community meeting for the project Harbor, which means we follow the CNCF code of conduct. So, in simple words: just behave, be nice to the others, and let's have a fun meeting today. With that, I'm going to share the agenda for today. Give me a second.
B
Yes, hello, everyone. So today we are launching our SIG Docs group, which is going to be a group focused on helping with the Harbor documentation and anything related to docs. I'm really excited about this group, and hopefully everyone else on the call is as well. I think it's going to be a really great opportunity for folks who are either new to the project and want to learn more about Harbor, or for folks who want to learn more about technical writing.
B
It's a really great opportunity just to get out there and try things out. So, to go through things about the group today: we're going to do a brief introduction of ourselves for the new folks, and then Orlin is going to do a brief overview of Harbor.
B
So, if you are new to the project, you'll get a little more information about Harbor itself. Then I'm going to do a demo of some of our tools and website stuff and how to contribute, and then we'll just take a quick look at good first issues and some of the first projects that we have going on. So I guess, intros — I should probably start off by introducing myself. I am Abby. I am going to be the SIG Docs lead person.
B
So I'll be the point person; if you have questions, feel free to reach out. I've been working with Harbor for about two years, I think — which is a lot longer than I realized. I was thinking about that this morning, and it's much more time than I had thought.
B
So I've been with the project for about two years, and I've been technical writing for probably about seven years, and then, you know, just sort of got into open source. I'm a really huge fan of being able to do things in open source and of the community-building opportunities that there are. So I'm looking forward to this group taking off. I guess for intros, do we want to go around? I'm not sure if that would work. Orlin, if you have any thoughts.
A
Let's do that? Okay, let's do that — I'll take the next one. Maybe some of you know me: I'm the community manager for Harbor. I'm currently employed — and I hope for a long time I'll be employed — by VMware. So I took over the community management for Harbor about a year ago. Prior to that I was a deeply technical, hands-on engineer, so I switched gears a bit towards this one. I really enjoy doing it.
A
So I hope everyone on this call, especially the new folks, will get the vibe of the team and will enjoy contributing. I'll hand over to Vadim.
D
So hello, everybody, my name is Vadim. I'm one of the more recent maintainers of project Harbor, and I contribute mostly in the form of community work — helping people out with Harbor installation, setup and problems — but I also contribute, you know, bug fixes, and I open issues for problems that we see when we work with Harbor.
C
You go next? Yeah. I have quite extensive experience, kind of, you know, with documentation in general. I'm an innovation manager and a consultant, so again, yeah, I wrote quite a bit of technical and business documents. I don't code, and I've never worked with this coding-focused kind of technical writing, despite the fact that I spent two years at Mozilla, which is an open source project, and obviously I spoke with and worked with developers there.
C
What I'm trying to figure out is whether it's the right time for me to join this project, because I obviously will need a lot of help. If this group has the resources, kind of, you know, to teach me — and I'm very willing to learn whatever is needed to do this job — then I'll be happy to contribute.
C
If there are no resources, and people actually only have enough time to do what is currently needed, then maybe not. I'm sorry for introducing this piece of uncertainty, but that's the truth.
A
Yeah, thank you. I think that's the whole idea behind this SIG: to introduce people to the project and to the open source world, and also to help them ramp up and get them educated on how we do stuff in the project — with GitHub, for example, your first PR, Hugo, you name it, everything around it.
A
Actually, it's on me that I wasn't ready with a short video of your first steps in the PR process, but I really hope I'll be ready by the end of the week or the beginning of next week, so you have something to step on and educate yourselves with. Then, of course, you have to do your homework and read on your own, but I'll be around — and Abby as well — to help out anyone who wants to join us. With that, I'm going through the list that I can see on my screen — I'm sorry for that — Dora, you're next.
F
Hello, everyone. I actually joined this call because I saw Abby's post on Reddit. I am a documentation manager and the lead technical writer for an engineering firm here in Canada. So I have experience with coding, but I am not a developer myself.
F
Basically, I picked up Scala to help with their graph analytics tool, and I was also their technical writer, contributing for a few months. So I'm very happy to be here, and hopefully I can help out with the project.
G
Okay, hey folks, sorry, I was talking on mute. Yeah, good evening. So I'm working with VMware, in the cloud management business unit, and I have been contributing to Harbor for quite some time, although this is the first community meeting that I've joined.
G
One of the first features that I put into Harbor was, like, you know, normalizing the vulnerability scan data into relational schemas, so that it allows for faster reporting and a more ad hoc reporting structure, as opposed to the JSON blob processing. That was my first PR. Right now I'll be presenting a subsequent proposal.
G
That is next in line once this agenda item completes. For me, Harbor is like my side project outside my day job. I'm a lead engineer for a multi-cloud networking analytics and observability product called vRealize Network Insight, which deals with VMware Cloud, AWS, etc. — so pure tech networking, basically. And I enjoy contributing to Harbor; one of the primary reasons is that it gives me experience in open source.
G
I would honestly admit that it was where I learned to write Go code — production-level Go code — and it all began with that. So yeah, that's about me.
H
Hi folks, my name is Jonas Rosland. I used to be the community manager for Harbor for a couple of years, until Orlin took over that responsibility. I am now the head of community here at VMware, focused on community engagement for a bunch of open source projects, including Harbor. Orlin and Abby are on my team, and I'm super, super happy to see this new project — the new SIG Docs — blooming. I'm very excited to see what's going to happen here.
J
Yeah, no problem, that's right! So hi, I'm Manogna. I'm working with the Mercedes-Benz research and development organization; I'm the product owner of the Harbor product in our organization. I haven't started contributing to the Harbor community yet, but I've been consistently joining the community meetings to align our product with what's happening in the community and with the Harbor products. So that's about it.
K
Hi, I'm Yan from the Harbor engineering team. Welcome, all our guests, to the Harbor community. I have been with Harbor for more than five years, from the very beginning phase, so I can probably answer all your questions about the Harbor details. If you guys have any questions about that, just feel free to ask.
L
Okay, I'm [unclear]. I've been working on the Harbor project for four years, mainly focused on LDAP authentication and the proxy cache features, and I've also provided some support for older versions of Harbor.
A
Thank you. And we have the next one — it's [unclear].
N
A
Thank you. I think with that — did I miss anyone? Because, obviously, the list is changing. I hope not. So with that, as we discussed, shall we do a brief Harbor overview?
A
Yeah, okay. Also, I've pasted in the Zoom chat a link to a recent talk that I gave at FOSDEM in Europe. It was a Harbor 101, so you can check out the whole talk there, with some demos. But before that, I'm going to share my slides from that same talk, and I'm just going to go over them very briefly.
A
Okay, I'm going to skip all that — yeah, I said it already, who am I. So, Harbor is an open source project, and right now the only graduated one at CNCF in the category of container registry.
A
With that said, it's like the de facto first choice of the cloud native world for an on-premise or cloud installation — but as a self-managed registry. The project was started back in 2014 at VMware, in the China office, open sourced in 2016, and by 2020 it had graduated from CNCF.
A
So that's the very brief Harbor timeline. By the way — oh, I missed Pooja, right. Okay, can I finish this one and then we can jump back to you? Sorry. So that's the brief timeline. By the way, many of the maintainers are on this call, so please interrupt me if I'm saying something stupid or incorrect. So, why Harbor? Yeah, those are slides from that event, but the community — which I think is the prize, and why you're joining this call, right?
A
Now, it's huge and very colorful and very nice to work with. It's open source, of course, and self-hosted, as I said. The whole idea behind Harbor is not to use it as a service from some provider.
A
But if you are a big company and you want to host your own images, that's one of the choices. The architecture is too much to cover here, but everything is layered, so you have different integrations with different identity providers, and, in simple words, you can use Harbor to store your Docker images or, for example, your Helm charts.
A
We have vulnerability scanning, which is one of the main parts of Harbor and why people are using it, and you have that integrated with content trust, which means you can sign your images with Notary — or, right now, we are integrating with Cosign — and your Helm charts. Correct me if I'm wrong, but I think we're going to remove ChartMuseum soon, right?
A
We have the web portal, which allows you to interact with Harbor in a visual way, but we also have the RESTful API, which is the programmable way for your systems or your scripts to talk to Harbor. And we have multiple types of Harbor deployments: you can install it through Docker Compose, a Helm chart, you name it. So that's a very, very brief touch on what Harbor is, what you can expect from the project, and which areas the work of a technical writer can go into.
A
So everything will be around containers and artifacts in the Harbor registry. I'm going to stop sharing. I hope that wasn't too quick or too slow. Give me a second to share the agenda again.
A
Okay — first of all, Pooja, I'm sorry if I'm again mispronouncing your name, but can you please tell us briefly who you are and why you joined us?
A
Great, thanks. Did I miss anyone else? I'm sorry if I missed you — please raise your hand or just unmute and speak. Sorry for that! All right, Abby, I think the next part is yours — the docs tooling, contributing, and GitHub. Do you want to take over from here?
B
I am going to talk through some of our tooling and hopefully get through a demo, if the demo gods are with me. So why don't I start by sharing my screen.
B
So hopefully you all can see my screen — is it showing? Okay, cool. So this is our main Harbor repo. We have all of our content in a website repo on GitHub, and the main tools that we use are GitHub and Markdown, which is a markup language for plain text. So it's pretty straightforward to adopt if you're not familiar with it — it's just simple, simple markup.
B
Just so you can see — it's just very simple markup for links and so on. So if that's something new to you, hopefully it's easy to pick up, and there are a lot of tutorials available online as well if you're not familiar with it. Our website is built with Hugo, which is a static site generator.
B
What that means is that it takes our Markdown files, puts them through our templating, and renders all of the content as static HTML files, so we can then host the files and build our website. It's pretty neat — if you know anything about static site generators, I think Hugo is a really great example of one, but you also don't need to know a lot about it to actually contribute, if that's not something you want to dive into.
B
Yeah, so this is where all the things are. The first step, if you would like to contribute, would be to fork our — I don't know, I have all these things open — fork our repo. So you can go right into GitHub and create the fork. I already have one that exists, but this will pop up and you'll be able to create the fork. You can tell when you are looking at the fork — this happens to me all the time, and sometimes I think I'm on the wrong one.
B
I think I'm on the actual main website repo, but I'm actually looking at the wrong thing. So you can tell you're looking at your fork when it shows your name and "website", and that it is forked from the GitHub organization — the Harbor organization.
B
So this is your local fork, which is basically just a copy of the repo, and you will want to clone that to your computer. You should be able to do that through this website — like, I already have one downloaded, but you can use this modal to copy the URL and then just run clone.
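For folks who prefer the command line, the fork-and-clone step might look roughly like this — note that "your-username" is a placeholder for your own GitHub account, and the repo name assumes the Harbor website repo:

```shell
# Clone your fork of the website repo to your machine and move into it.
# "your-username" is a placeholder; substitute your GitHub account.
git clone https://github.com/your-username/website.git
cd website
```

GitHub Desktop, mentioned below, wraps the same operation in a point-and-click interface.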
B
You can also use GitHub Desktop, which I use a lot — that's just, you know, an interface that GitHub offers, and it's pretty easy to use if you're not familiar with Git or the command line and you just want a more straightforward way to do it without having to learn Git right away. You could just go through everything with GitHub Desktop.
B
So, when you have everything cloned and you're in your website directory — which is, you know, your local version of it — you can see all the files are the same as what is here, because I have it all cloned down here. If you wanted to make a change, we also offer a way to run the website locally: we have a script, so once you have it cloned, you can run this script from your directory.
B
Wherever you have it cloned — let's see — and that will, you know, prepare the whole website so that you can view it locally. It takes a little while. And then you run this to install any of the dependencies — you wouldn't necessarily need to do that every time. And then you can go — I have too many windows, like I said — and this will start Hugo running in the background.
B
That creates a local copy of the website. So this is exactly like the Harbor website — it's just running locally. And the cool thing about this setup is that you can see it's still running in your terminal.
B
So if you go and make a change, you can actually see those changes happening in near real time as you make changes and updates within the content files. Over here I have just another example — another terminal window — and it's the same thing, the same files. So the docs folder — sorry, the docs folder is where we actually keep all of our main content.
B
This is where all the files are that you would most likely want to be changing as you go through and make an update. So, for example, just to see how this all works, you can go into your text editor — I tend to use Atom, but it could be any text editor you want that is able to open the files.
B
So this is the index file that you're seeing here under docs. And if you wanted to see these changes as they were made in real time — let me just show you what I'm talking about first — you can make the change, and now you see you've made those changes live in your local environment, and that's all because you have the local setup running. So, if you saw before, I switched over to edge — this is where I'm being a little bit all over the place.
B
So the docs changes are made against the main branch, but in order to see those changes locally, you need to be looking under /edge.
B
If you were making a change against a different version, it's just wherever the changes are being made — so here, in edge.
B
You can see these changes with git status, which will show you the changes you've made — that we changed the index file. So actually, I kind of jumped ahead: before doing this, you'd also probably want to make these changes in a branch.
To do that, you would create a new branch called, you know, whatever you want your branch name to be, and then you'd want to jump from the main branch into your new branch.
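As a rough sketch in the shell, the branch step described above looks like this — the branch name "demo" is just an example:

```shell
# Create a new work branch and switch to it.
git checkout -b demo
# Confirm which branch you are on.
git branch --show-current
```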
B
In a bit — sorry, thank you. So now you're in the demo branch, and you can see — let's see.
B
So you still have that change there. If we wanted to go forward with making this change and make a PR to submit it, the next thing you'd want to do is add your commit to the branch.
B
And the -m is for the message, which is, you know, whatever you want it to be when you're going through this.
B
It's really helpful to have commit messages that make sense for what your changes are, so that it's easy to identify what you're doing — "update docs header for demo". And then, each commit that you want to make to the Harbor repo needs to be signed, which is just a way to signify that you have signed off on and verified these changes. It is something that we require.
B
We
cannot
merge
commits
unless
they're
assigned,
but
if
you
do
forget
to
do
this
step,
while
you're
doing
it
while
you're
actually
making
a
commit
and
making
a
pr,
it's
okay,
it's
very
easy
to
fix
and
adjust
it's
just
something
that
we
we
require
of
each
commit.
So
you
might,
you
might
need
to
to
to
to
do
that,
so
you
could
do
not
sing
off.
That
would
be
hilarious
if
we
had
it
as
thing
off
sign
off.
So
that
will
add
your
changes
and
you
can
see.
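The commit step she walks through, sketched as shell commands — the file path and message here are illustrative, and -s is what adds the required sign-off:

```shell
# Stage the edited file, then commit with a sign-off.
# -s appends a "Signed-off-by:" trailer (the DCO sign-off the Harbor repos
# require); -m sets the commit message.
git add docs/_index.md
git commit -s -m "Update docs header for demo"
```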
B
So the next thing you want to do is actually push these changes. Right now you've only made the changes on your machine, and you need to push them up to the fork that you made — up on, you know, actual GitHub. So: git push. And this is very small.
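Publishing the branch to your fork is one more command; "origin" assumes the default remote name from the clone, and "demo" is the example branch from before:

```shell
# Push the local branch up to your fork on GitHub.
# -u makes later pushes from this branch default to the same remote branch.
git push -u origin demo
```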
B
So, now that you've done this, you should be able to see your changes in your fork — this is now up on GitHub. GitHub will pop up and actually show this button here for you to make a PR. This will open a PR — a pull request — from your fork, with the changes that you just made, into the Harbor upstream repo, which is where the live website repo is.
B
It's all there. You can see that it has whatever you had as your commit message as the title, already populated. If you want to add any more content describing what your PR is doing, that's really helpful for us, just to give us more context on what you're doing.
B
Whatever it is — and then you can actually create the pull request. So now you have a pull request created, and someone is able to review the changes that you want to make. For all pull requests we require two reviews, which is mostly to get a copy-editing review and a technical review.
B
Oh — sorry, I made a mistake: only one review is required to get things merged. Just to correct that. So if you do create a PR — and I'm not sure, actually, I forget the permissions — I'm not sure if you're able to add a reviewer yourself.
B
But if you aren't, tag me or Orlin and we can help you out with that. And one of the cool things — our web hosting tool is called Netlify, and one of the cool things that Netlify offers is a way to preview your changes live. So whenever you make a pull request, you're able to see — once this line, whenever it finishes loading — you're actually able to go in and...
B
See the changes live, so that when you're reviewing, you can see the changes again, and you can also make sure that everything loads correctly. So if you don't have the local setup working, or you just want to make a quick change, you can also create a PR and then see the changes there. Man, this is being slow. Oh — thank you, Orlin: left a comment, thank you, and approved. So with Orlin's review...
B
We're able to merge this as soon as all the checks pass — again, still being slow.
B
My memory is not that slow, but anyway. So the point is that it will provide a preview for you, so that you'll be able to see your changes, and it makes reviewing a little bit easier. While we're waiting for this — do people have questions? I kind of jumped around a bit, so hopefully this is making sense, especially if you're familiar with the GitHub flow. But if you have any questions, let me know.
B
Yeah, so you'd want to make it from your fork. You'd probably want to make a branch of your own for whatever changes you want to make.
B
Most often, when you're making a change, you'll probably have an issue associated with it. So say you're fixing this issue right here and you were creating a PR — something you could do is name the branch, you know, "fixes-290", so that when you go make the pull request, it's kind of connected back to the issue.
B
You're also able to connect issues in GitHub: when you create the PR, you can add in, you know, "fixes #290", and it'll automatically connect those two things and help with auto-closing. So I would say: name everything after the issue number, if you have an issue; if not, just use a short description of what the PR is for. Cool, thank you.
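A minimal sketch of that naming convention — 290 is the example issue number from the call:

```shell
# Name the work branch after the issue it addresses...
git checkout -b fixes-290
# ...and put the closing keyword in the pull request description:
#   Fixes #290
# GitHub then links the PR to the issue and auto-closes it on merge.
```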
B
Yeah, so you can see that you now have a preview of your changes — you know, a preview of the website. And this is on the internet: if people wanted to go, it's just available to folks, so everyone can see it.
B
It's not just on my local machine, if you're interested in poking around. Were there any other questions?
B
Okay, if not, I can kind of go through — sorry, Zoom is blocking all of my stuff — I can go through our repo issues. We have a few labeled as good first issues, and those would hopefully be things that new contributors are able to pick up and work through.
B
Let me see what this one is. So yeah — you'd just go and make an update based on these issues, the good first issues. So if you are looking for something new, this would be a great place to start. If you want to get into something that is maybe more website-related, we also have a few that are web improvements — you know, the search results are a little wonky.
B
We don't have them separated out by version, so it kind of just puts everything together, which doesn't make much sense — it's not ordered correctly. So when you search for something, it might pop up an older version instead; we obviously want that to be the most recent version. And another one — I don't have this open...
B
Let me see if I can get this open. One of the other issues that we as a group would like to focus on is the localization of the Harbor interface. Harbor offers, you know, a UI for folks, and we've had a lot of community contributions to translate that interface over the years, but on the current maintainer team we only have the bandwidth to verify the English-language and Chinese-language translations.
B
So not all of these translations have been kept up to date in terms of the actual UI pieces. The way this file works is that you have this file with all these little strings, and then, when the UI gets built, it pulls this content in and populates things like tooltips and buttons, etc., in the interface.
B
So, to do the localization, you'd only have to work out of this file, but you can see that some of it — this is the French file — some of it is in French, and some of it is not. As things have been updated, we've populated these files with the English versions of the configuration options, but we haven't had community members actually translate them.
B
So we're looking for folks: if you do know languages other than English or Chinese and you are willing to help with that, that is a big area of contribution we're looking to get folks to help out with. The great thing with Harbor is that we are a really big community and, you know, very globalized, so hopefully there are some folks here who can help out with that. I have this issue here — it's linked in the community repo — and I can also link it here in the chat, if folks are interested.
F
Hi Abby, I was just wondering: if we have questions, what is the best way for us to contact you?

B
Excellent question.
Maybe there isn't one — I don't know. There was a link in the blog post to that Slack channel, if you wanted to join.
B
Yeah, thank you. Thank you, yeah. So this has all the contact information. The Slack channel I mentioned is probably the best way to just, you know, ask questions and work with us. If you also notice something and you want to open an issue, that's also a really great way to get involved. If there isn't anything you see here that you want to work on, that's okay.
B
Hopefully there is, but I'd also recommend just starting to read through the docs: if you notice anything — you know, maybe some better wording, or broken links, or typos, or anything — and you want to get your feet wet creating a PR, that's another great way to do it. Even if there isn't a specific issue you want to work on, you just get more familiar with the Harbor docs and the PR process.
B
I'll stop sharing — hello, everyone, you're back, I can see you all now. I think that was all I had. So if there aren't any more questions: hope to see you around, and hopefully the Harbor project is a good fit for you all. I am looking forward to some docs contributions. So, Orlin, I will hand it back to you.
A
Okay. By the way, we have a team who can help out with the German verification of the translation, and I can do the Bulgarian one if needed — not sure how many Bulgarians are using it anyway. Okay, with that, let's see — I want to share the agenda, but it's kind of lost. Okay, okay. So next on the agenda we have [unclear] — go ahead.
G
Okay, I hope you're all able to see my desktop — and I want to put this down, okay. So, first of all, thank you for giving me the opportunity to present this proposal in the Harbor community meeting. To give a quick background on what this proposal is all about: right now, we want to share data artifacts between subsystems in Harbor.
G
By subsystems I mean core Harbor services, like Harbor core and the Harbor job service. And we would want to do it in a scalable way that is also cross-platform, in the sense that it doesn't have to rely on the assumption that, you know, we are going to have access to a particular persistent volume. We also don't want to do things in a way that is going to create a lot of problems in terms of implementation complexity or performance bottlenecks.
G
So the whole feature started — sorry, [unclear] — yeah, the whole feature started when I was implementing a core feature for exporting Harbor vulnerability records to a CSV file.
G
It all works, but the problem is that, right now, I'm running Harbor on my laptop, and it's pretty easy to persist that CSV file on a volume that is shared between the Harbor job service and the Harbor core — which means that the Harbor core will be able to serve that file.
G
But the way Harbor mounts persistent volumes, a persistent volume cannot be shared — it cannot be mounted simultaneously by two services. Which means that if there is a persistent volume mounted by the job service, there is no way the core will be able to mount that persistent volume and read the data from it.
So, with this, what happens is: we need to understand how we can share the data between the two. And that's the reason we ended up building a framework — a core backend framework — that will allow sharing of arbitrary data blobs between two Harbor components.
G
There were multiple approaches that we evaluated before we went ahead with developing this proposal. One was that, you know, we could have a stateful HTTP session between the Harbor job service and the Harbor core, but that also means there is some level of synchronous API execution that needs to be maintained.
G
Some state needs to be maintained either in the core service or in the job service, and this will definitely limit scale, because whenever we have state persistence, it typically impacts scale. Then we have the second problem, in Kubernetes environments, where multiple replicas of the job service may exist: the Harbor core service may need to remember which job service replica it submitted a particular request to — the request to generate that ad hoc data blob — and then make sure that the next time it communicates with that same job service replica to pull the data. Again, that means complex state management when we have to share the data — complex state management logic within the Harbor job service and Harbor core.
G
While it can be implemented for one single feature, we would not want to repeat it again and again for multiple features. Like, let's say tomorrow we go ahead and start providing complete reports on bills of materials: we would not want to create one more, you know, HTTP-session-based sharing mechanism to share bill-of-materials data. So this was approach one, and that's the reason it was rejected. Then the second one was using the Harbor database.
G
We could just simply dump the data file, or data blob, into the Harbor database, just like we used to dump the vulnerability results as a JSON blob.
G
This is better than the complex stateful HTTP interfaces, and definitely scalable, because Postgres has built-in support for binary data — large objects and bytea. But the problem is again with respect to the storage size. A bytea column stores the binary data directly in the database, which simplifies retrieval and storage.
G
But Postgres says that the client is responsible for escaping and encoding the data prior to storage on the server, and similarly, reading would require custom encoding and decoding logic — which means that, again, we're introducing complexity. And again, we are not able to handle the different kinds of reports — like PDF files, etc. — that you might want to share. Like, let's say a job generates a PDF report and we want to share it with the core.
G
The same type of encoding that works for plain text may not be useful there. So bytea is a no-go for large data files; it also means that we have to load the whole data file into memory and perform the encoding and decoding, so fairly complex logic comes in again. Then there is the lob field, that is, a large object.
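The bytea encoding overhead mentioned above can be illustrated with a small sketch. This is not Harbor code; it only shows, under the assumption of Postgres' hex output format for bytea, why the client must hold the whole blob in memory and why the encoded copy is roughly twice the size of the raw data:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// byteaHexEncode mimics Postgres' bytea hex format: a leading \x
// followed by two hex characters per input byte. The client has to
// materialize this encoded copy in memory, roughly doubling the
// footprint of a large report before it ever reaches the database.
func byteaHexEncode(blob []byte) string {
	return `\x` + hex.EncodeToString(blob)
}

func main() {
	report := make([]byte, 1<<20) // a hypothetical 1 MiB binary report
	encoded := byteaHexEncode(report)
	fmt.Printf("raw: %d bytes, encoded: %d bytes\n", len(report), len(encoded))
	// raw: 1048576 bytes, encoded: 2097154 bytes
}
```

For a multi-megabyte PDF report this doubling, plus the decode on the read path, is the complexity the proposal wants to avoid.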
G
With a large object, the PostgreSQL database stores the file outside of the main data storage area and keeps a pointer to that file. The problem is that, unlike Java, which has well-defined interfaces to pull these data files from those locations, the Go SDK that we use today does not. I researched a lot of Go PostgreSQL drivers, and they do not support the large object (LOB) APIs.
G
So only C and Java have support for pulling large object data files today. This approach was again shot down. We could use persistent volumes, but I mentioned the challenges earlier and won't repeat them in the interest of time; basically, you will sometimes not be allowed to remount a persistent volume that is mounted in one container into another service, and this results in persistent volumes also being a no-go.
G
So what we came up with was the concept of a system data artifact, which is nothing but an arbitrary blob of data. It is not necessarily an OCI artifact; it is not compatible with the OCI standard specification. It is created by a Harbor service to be consumed by other Harbor services, and it has a definite lifespan. Normally it is not good to create system artifacts that have infinite lives.
G
There is a reason why system artifacts were created: they are mainly data sharing mechanisms, and they should be cleaned up once their utility is done. So we introduce this concept of a system data artifact, and we also talk about ownership. Here are some terminologies we need to understand with respect to how these system artifacts are created, who owns them, and what the layout is.
G
We have the concept of a vendor, which is nothing but the Harbor subsystem that introduced the system artifact. For example, for the CSV export data we have implemented a job called the scan export job that is responsible for generating these files. Then we have the repository, which is nothing but the name of the Harbor repository, just like we have an ubuntu repo.
G
Similarly, we will have some name for the system data that is generated, so the repository is what contains the data blob. Then there is the type of the blob, which is mainly for metadata and tracking purposes: we want to understand what type of data is present in the file, for example a type of CSV export detail. It is left to the vendor to populate this type.
G
Again, this is mainly for metadata purposes. Then we have the digest, which is nothing but the digest of the data blob that is present within the repository. The following rules apply: a vendor can create multiple repositories, so there could be multiple repositories created by a vendor; each repository is, however, associated with a single vendor.
G
You cannot have a repository created by multiple vendors. A repository can have multiple types of data blobs; for example, a CSV export job could be generating two blobs, one a summary report and one a detailed report, and it could be done by the same job, which makes sense because the summary report is nothing but an aggregation of the details. So a repository can have multiple types of data blobs, and a digest is associated with a single data blob.
G
So there is a one-to-one mapping between the digest and the corresponding data. Now, one of the important things here is that we want to prevent namespace clashes between user-created artifacts and system artifacts, because all projects reside under one namespace. So we have decided to have a built-in namespace, just like traditional SQL databases have built-in tables.
G
Similarly, we will have a built-in namespace with the name sys_harbor_ns. The name has been chosen because such a namespace is pretty difficult to collide with; it is not intuitive for any end user to define such a name. Of course we cannot rule out the possibility, but this seems to be the better option.
G
It is also compliant with the repository naming conventions. Additionally, we will need to make sure that if anybody is creating, or attempting to create, a namespace named sys_harbor_ns, the project API returns an error saying that this cannot be done, because it is a reserved name. Then I will come to the storage artifacts hierarchy.
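The reserved-name check described above could look something like this minimal sketch. The function name and the exact spelling of the reserved namespace are taken from the discussion, not from Harbor's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// reservedNamespace is the built-in namespace for system artifacts
// as described in the proposal; the final spelling in the
// implementation may differ.
const reservedNamespace = "sys_harbor_ns"

// validateProjectName sketches the check the project API would
// perform: creating a project with the reserved name (in any letter
// case) is rejected with an error.
func validateProjectName(name string) error {
	if strings.EqualFold(name, reservedNamespace) {
		return fmt.Errorf("%q is a reserved name and cannot be used", name)
	}
	return nil
}

func main() {
	fmt.Println(validateProjectName("library"))       // <nil>
	fmt.Println(validateProjectName("sys_harbor_ns")) // rejection error
}
```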
G
I will just jump directly to the diagram. Okay, sorry. So for the storage artifacts hierarchy, I am just presenting this; I don't know what's wrong.
G
Yeah, that's cool. So you can see here how the storage hierarchy will look at the disk level. You will have the registry blobs, and under the blobs you will have a digest and then the data corresponding to each of these blobs, or each of these reports. So this is the data storage hierarchy that is present.
G
Effectively it means that what we are creating is a single artifact: there is just one repo that holds the data blob, or one or more data blobs, as I mentioned.
G
So we just saw the data storage hierarchy. If we go to the PR, there is also a system artifacts repository folder hierarchy, which describes how these repositories are going to be created. It is on similar lines to what I displayed just now, and hence the complete system artifact name would actually be sys_harbor_ns/<vendor>, where the vendor is whoever is generating that artifact.
G
Then comes whatever name you want to give to that repository, which could be the job name, some unique system-defined name, an auto-generated name, and so on, and then the type. That is the complete hierarchy, and under it you will ultimately find the data blob if you browse through the file system. One of the important parts here is tracking system data artifact creation, because that is the whole reason we built this framework.
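Assembling the full repository name from the terms above (reserved namespace, vendor, repository name) could be sketched as follows. The exact layout is an assumption based on the hierarchy shown in the meeting; the vendor and repository names used here are hypothetical:

```go
package main

import (
	"fmt"
	"path"
)

// systemArtifactRepo assembles the full repository name for a system
// artifact: the reserved built-in namespace, then the vendor (the
// subsystem that generated the blob), then the repository name (for
// example a job name or an auto-generated name).
func systemArtifactRepo(vendor, repository string) string {
	return path.Join("sys_harbor_ns", vendor, repository)
}

func main() {
	fmt.Println(systemArtifactRepo("scan_data_export", "job-42"))
	// sys_harbor_ns/scan_data_export/job-42
}
```

Under that repository, the blob's digest then identifies the actual data file on disk.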
G
Today it is already possible to create arbitrary blobs in the registry, and nothing stops us from doing that with the registry API. So why put this framework in place? Because it does a lot more than just creating the vendor or the blobs: it actually tracks the system artifact. By tracking, it means that it manages the entire life cycle of the created system artifacts.
G
Every system artifact is tracked in a table called the system artifact table, which contains as its primary attribute an id.
G
Then there is the repository, the digest, the size, the vendor, the type, the create time, and any extra attributes we may need, for example metadata or tags associated with those artifacts. Every blob that is written to disk has a corresponding record created in the system artifacts table, and the reason is that we will have a top-level CRUD (create, read, update, delete) API.
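The tracking record described above could be modeled roughly like this. The field names are illustrative, mirroring the attributes listed in the discussion, and are not the actual Harbor schema:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"time"
)

// SystemArtifact mirrors the tracking-table attributes mentioned in
// the discussion: id, repository, digest, size, vendor, type, create
// time, and extra attributes.
type SystemArtifact struct {
	ID         int64
	Repository string
	Digest     string
	Size       int64
	Vendor     string
	Type       string
	CreateTime time.Time
	ExtraAttrs map[string]string
}

// newRecord builds the tracking record for a blob that was just
// written, computing the digest from the blob contents so the
// record and the on-disk data stay in one-to-one correspondence.
func newRecord(vendor, repo, typ string, blob []byte) SystemArtifact {
	return SystemArtifact{
		Repository: repo,
		Digest:     fmt.Sprintf("sha256:%x", sha256.Sum256(blob)),
		Size:       int64(len(blob)),
		Vendor:     vendor,
		Type:       typ,
		CreateTime: time.Now(),
	}
}

func main() {
	rec := newRecord("scan_data_export", "job-42", "csv_export",
		[]byte("id,cve\n1,CVE-2022-0001\n"))
	fmt.Println(rec.Vendor, rec.Type, rec.Size, rec.Digest)
}
```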
G
If we go through the system artifact manager, it takes care of actually creating the blob as well as the corresponding tracking record, and this tracking record is what will be used by other pieces of this framework, like the one that is going to ensure that we do not have dangling system artifacts, or very old system artifacts, remaining there. Every system artifact has a lifespan, and we need to clean it up.
G
There is no way for the cleanup logic to loop through the entire directory hierarchy and find out what needs to be deleted. Instead, it will go to the system artifact table through this interface, ask for system artifacts that are candidates for deletion, and then go and delete those. One of the most important parts here is artifact cleanup.
G
Normally it is required that whoever owns the artifact cleans it up. Take the CSV export, for example: it has been decided that we generate the CSV export, but when the user downloads it, we delete it from the backend store so that the space is reclaimed. But oftentimes in the real world there will be errors.
G
Sometimes something goes wrong and you are not able to delete that file from the disk for some OS reason, or the cleanup logic is simply not correct and we leave too many files behind on disk. So there is an artifact cleanup manager that is part of this framework.
G
This cleanup manager wakes up every day at 12 a.m.; of course it can be scheduled, but it will run every 24 hours and will delete all artifact data that is older than 24 hours. So this ensures that the longest lifetime for a system artifact is 24 hours by default. Why did I say "by default"? Because there may be some reports that need to live longer.
G
Every vendor can register a cleanup criteria, which returns a list of artifact ids that can be deleted, and it is completely up to that vendor to decide what ids to return. So, let's say a vendor has generated 100 artifacts and the vendor says: make sure I preserve reports for at least seven days, which is longer than the 24-hour timeline.
G
Then the implementation that the vendor provides for this list method makes sure it returns only the ids of those system artifacts that are older than seven days, and the artifact cleanup manager will make sure that artifacts that are seven days old or newer will not be deleted from the system. So this provides complete flexibility to the consumer of this framework to decide how long they want to retain the artifact data they have generated.
G
If a vendor does not specify a cleanup criteria, there is a default cleanup criteria that deletes artifacts that are older than the default lifetime. We also want to handle the failure and race conditions that happen, where the data goes away from the disk store but remains in the system artifact table, or the table entry is deleted while the file is still not deleted from the disk. Such intermittent failures can be auto-corrected because we have this periodic polling logic that wakes up and tries to bring the state back to an eventually consistent one; that is the design. As for storage quota management: there are no storage quotas right now for system artifacts, at least for this release.
G
This is targeted for 2.6, and we are not planning any quota management right now, because quota management requires a lot more thought. There are many options for how we could handle quotas: should it be at the per-vendor level, or should it just be a percentage, like what many databases do, or what Linux does.
G
Linux systems used to say: give me two times the physical memory as swap space; that was the older way for Linux systems to work. So do we go that way, or do we say that each vendor can be given a quota? This creates a lot of problems, because these are internal aspects, and if we start exposing quota management, the end user starts getting knowledge about them.
G
The end user would learn about the types of artifacts that are generated by the system, which may be overwhelming in the long run. So we need to think about storage quota management, but we are not implementing it right now. As for conformance to the OCI artifact standard: we are not conforming to it, because there are no manifests here, and there is no UI visibility of the system artifacts.
G
The simple reason is that these are internal artifacts; they may exist now and not exist a minute later, and they are purely for data sharing purposes, so we do not expose them through the UI or APIs. That is why the system artifact manager has a completely separate table; it is not a table that is referred to by any of the existing Harbor middleware logic for pulling artifact-related data. Even the garbage collector is separate: we have our own garbage collection through the cleanup mechanism.
G
We have not modified the existing garbage collector for that. The one thing we do need, however, is that when we are showing the available disk space, or the storage space used by Harbor, we need to account for the space that is used by the system artifacts. That is pretty simple to do, since we are the artifact manager and we have a tracking record.
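Because every blob has a tracking record, the space used by system artifacts can be computed from the table rather than by walking the on-disk hierarchy. In the real implementation this would presumably be a SUM over the size column of the tracking table; the in-memory slice below is just an illustrative sketch:

```go
package main

import "fmt"

// record is a minimal stand-in for a tracking-table row; only the
// size matters for storage accounting.
type record struct {
	ID   int64
	Size int64
}

// systemArtifactUsage sums the recorded blob sizes, giving the total
// space consumed by system artifacts without any filesystem walk.
func systemArtifactUsage(records []record) int64 {
	var total int64
	for _, r := range records {
		total += r.Size
	}
	return total
}

func main() {
	records := []record{{1, 1024}, {2, 4096}}
	fmt.Println(systemArtifactUsage(records)) // 5120
}
```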
G
We do not really need to iterate through a directory hierarchy to get it; we can get it from that tracking table, or metadata table. So this is the framework. There was a comment by Vadim on the CSV export feature, asking whether that should be closed.
G
Well, the CSV export feature is stalled for a bit until we implement the system artifact manager, and then the CSV export functionality will leverage the system artifact manager to create those CSV artifacts. So basically, anything that wants to share data between two services is recommended to use this new framework that is coming up.
A
Thank you. We are seven minutes after the hour, so I am not sure how many of the folks on the call have time to discuss that, but we can do a quick round of questions if anyone has anything about that proposal.
D
From my side it's good; I mean, it's a valuable addition to Harbor, you know, storing artifacts, especially if we consider the fact that people would like to export CSVs and make reports about registries and about the artifacts.
D
It makes sense to have some sort of functionality that is able to store artifacts or reports, if this is the idea here, and it makes sense to extend Harbor with this functionality.
D
No, it's okay; I mean, we're still missing a bit, I guess.
A
Okay, can you share the link in the harbor-dev channel, so folks can see it, and maybe tag the harbor all-maintainers group on GitHub? Bye, Dora, thanks for joining. So everyone can take a look and review.
A
Yeah, I'll send it to you; give me a second. With this one done, I think we have another point on our agenda for today, about releases. Have we added something?
B
Yeah, sorry, I know we're over time, and I can take this to the dev channel, or whatever channel we want to talk about it in, if we want to discuss more.
A
Yeah, my two cents: releasing on Friday is not the best idea. The visibility of a Friday release will be about half of what we can get on a Monday, for example, so I'm in favor of releasing on Monday in general. All right, let's discuss that in the channel. Thank you everyone, all the folks who joined for our SIG Docs kickoff, thank you very much once again. I hope we can work together on this one; everyone who is interested, please reach out to me.
A
My
name
is
irene
vasilev
again
and
to
abby
in
the
harvard
dev
channel
or
directly
in
slack,
so
we
can
set
up
the
next
meeting
when
we're
gonna
meet,
because
that
was
part
of
the
agenda
for
today,
but
there's
no
time
to
discuss
this
one.
I
don't
want
to
hold
everyone
else
more
on
this
one,
so
we
can
touch
on
slack
on
this
one
and
decide
a
date
and
make
it
reoccurring
every
single
month.