From YouTube: CNCF Harbor's Community Zoom Meeting - Dec 15, 2021
A
Hello everyone, my name is William Vasilev and I'm the community manager for Harbor. Today is December 15th, and this is an official CNCF meeting, so please follow the code of conduct and just be a nice person. With that said, I can see we have one proposal to discuss. Can you please share it, or do you want me to share? Yeah.
B
Okay, can you see my screen? Yeah? Okay. Hi everyone, I'm Tony, and today I want to share a proposal for asynchronously updating an artifact's pull time and pull count.
B
The background: we found some DB connection bottlenecks in our environment. We did some performance testing in our environment and found this issue. About the pull time and pull count: the pull time is the artifact pull time shown on the artifact page in the Harbor UI, and the pull count is shown on the repository page; the Pulls column there represents the pull count.
B
Currently, if a user pulls an artifact from Harbor, the Harbor core component will raise two goroutines in the background to update the artifact's pull time and pull count, for every pull request.
B
So my proposal is to asynchronously update, and merge, the update operations for artifact pull time and pull count, to reduce duplicated update work against the database. We also provide a configuration option to keep the original way of updating, which means the synchronous mode.
B
Okay, our goals: I list three points. The first is to reduce the number of database locks; the second is fewer update operations on the database; and the third is to improve the throughput of Harbor in the case of highly concurrent pulls. Our non-goals: we cannot ensure a fully accurate pull time and pull count for artifacts, and another is:
B
We cannot ensure an immediately synchronized pull time and pull count for an artifact after pulling it, which means if you pull an artifact from Harbor and then go to the Harbor UI page to see the pull time or pull count, it may not be synchronized yet.
B
So, the implementation is code-related. The code path is under src/controller/event/handler/internal/artifact.go, and the basic code implementation is very easy. I separated it into three points. The first is to cache the pull operation into the cache storage, instead of writing to the database directly as in the previous implementation.
B
For example, previously, if a user pulled an artifact 100 times, we would update the database 100 times. After the improvement, we only need to update the database once and just set the pull count to plus 100. Previously we needed to update the pull count by plus one every time, for all 100 attempts.
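The batching described above, where 100 pulls collapse into a single database update of +100, can be sketched like this. The type and function names here are hypothetical, not Harbor's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// pullCountCache accumulates pull counts per repository in memory,
// so that N pulls become one database update of +N instead of N updates.
// (Illustrative sketch, not Harbor's real code.)
type pullCountCache struct {
	mu     sync.Mutex
	counts map[string]int64 // repository name -> pending pull count
}

func newPullCountCache() *pullCountCache {
	return &pullCountCache{counts: make(map[string]int64)}
}

// Inc records one pull; it only touches memory, not the database.
func (c *pullCountCache) Inc(repo string) {
	c.mu.Lock()
	c.counts[repo]++
	c.mu.Unlock()
}

// Flush drains the pending counts and returns them; the caller would
// issue one UPDATE per repository with the aggregated delta.
func (c *pullCountCache) Flush() map[string]int64 {
	c.mu.Lock()
	defer c.mu.Unlock()
	pending := c.counts
	c.counts = make(map[string]int64)
	return pending
}

func main() {
	cache := newPullCountCache()
	for i := 0; i < 100; i++ {
		cache.Inc("library/nginx")
	}
	// One flush, one aggregated database update of +100.
	for repo, n := range cache.Flush() {
		fmt.Printf("UPDATE repository SET pull_count = pull_count + %d WHERE name = '%s'\n", n, repo)
	}
}
```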
B
But now we only need to do it one time. And the last point is that we will periodically flush the data from the cache to the database.
B
So we have a configurable time interval for flushing the data from the cache to the database. About the cache selection: we considered Redis and memory, and we did some comparison of these two options. The advantage of Redis is that the Redis service can be deployed in highly available mode, and the pull time and pull count data will not be lost if core crashes or restarts.
B
The disadvantages: there are network I/O operations between Redis and core, which may also impact Harbor's performance, and the business side needs to implement additional distributed locks to ensure data consistency when multiple core instances exist. Because if you have multiple core instances for high availability, these instances may update the Redis key at the same time, so you need to ensure the consistency of the key and the value.
B
As for memory: the advantage of memory is that memory operations are very fast. Redis is also fast, but memory is faster, and no network I/O is needed. And if we use memory, we also don't need much code to maintain heavy business logic: each core instance handles its own update operations itself.
B
So no additional logic is needed to keep the operation sequence. But the disadvantage is that the data is stored in memory, so if core crashes or restarts, the data will be lost; and there is similar duplicated update work when multiple core instances exist, which means if you have multiple core instances, these instances all do similar update work.
B
One is the pull time store and another is the pull count store, with two locks to keep them in sync. If a user pulls an artifact, we only update the data in the in-memory cache, and we have two goroutines as consumers, or workers, to consume the data of the two stores.
B
Our scenario: Harbor is deployed by Docker Compose on a host with [unclear] CPU cores and 16 GB of memory, and we use k6 to simulate 5000 concurrent requests pulling the same image manifest. I have linked the test tool; it's under the goharbor organization.
B
Before the fix, the success rate was not 100%. After the fix, the test only took about eight minutes and all requests were successful; the average request rate was about 500 requests before, and about 900 after.
B
First, let's check pg_stat_activity. Before the fix, the PG connection count was very high; it reached the limit core set. We set max open connections to 900, and the active PG connections reached that 900 limit until the test was done. After the fix, you can see the yellow line, the active connections, is very low, and the green line, the idle connections, can be ignored, because they will be released after the test is done and will not influence normal use.
B
The goroutine count before was between twelve thousand and fourteen thousand, and it hung at a fixed number. But after the fix, the goroutine count shows dynamic change, and the max goroutine number reached 20,000, which means the Golang runtime can schedule more goroutines to handle the business logic, which improves the throughput of Harbor.
C
I have a question regarding the inconsistency of the pull time and pull count. You mentioned that it's not consistent; that's clear, because the two systems are not synchronized. But you could turn it into an eventually consistent state: if you update only the latest timestamp and always keep the highest count, you will end up in an eventually consistent state, meaning that after the pull rates settle down, you will have a correct state of the pull time.
C
Right, you could have this eventually consistent state, and it would be even better if you just update the latest timestamp (the newer timestamp) and also add the count on top. So you always count up, but only overwrite the timestamp if it's newer. You will then end up in an eventually consistent state, which would be, I think, fine for everyone.
C
And then another question I have is: did you have a chance to look at pooling mechanisms, not of Postgres but of the connection-pool libraries? Because I think it would also help if Harbor used a proper pooling library, rather than the built-in one, which is, yeah, kind of low-level, not feature-rich functionality.
C
So I think if we used a proper pooling library like pgx, the numbers shown here would also be different.
D
Yeah, yeah. Thank you. And would you please comment on that PR regarding the eventually consistent approach?
D
Yeah, thank you. Anything else you have? I also have a question: you mentioned the data is flushed to the DB periodically, right? The default is 10 seconds. I can also understand if the...
A
Thanks a lot for this one. Please, everyone, use your vote and comment on this proposal so we can drive it through.
A
There's a PR to update the governance doc. It's kind of a work in progress, but please, everyone, take a look. I can also assign you as reviewers, because we need a supermajority to accept the new governance doc.
A
We want to introduce a few new SIGs to the community. For example, in the upcoming January, me and Abigail are going to kick off a new effort to start the docs working group, which will be hosted under a different SIG, just for the purpose of managing it properly.
A
So my ask is: if you can, go over this pull request in the community repository (108.80), read through the document, and mark anything you're not happy with or add new stuff.
A
If you're interested in any of these conferences: I've already submitted two CFPs for FOSDEM, one for Harbor and one for another project that I'm working on. My idea is to do a Harbor 101 kind of talk: a very brief introduction, a status update on the project, deployment, and a few demos around things I want to show the audience. Primarily it's to try to get some folks joining us from that community, which is a huge community of developers. And my last topic for today is, as I said: me and Abigail are gonna kick off a docs working group, maybe in the second half of January, so any recommendations or inputs from your side would be great. We're gonna use the already existing process to start it up and announce it properly in the proper channels.
C
I had a question about the CFP: I'm planning to make a CFP submission for Harbor for KubeCon, and I'm currently writing it down. I was looking at the CFP process, and it says... I've posted it to you on Slack as well.
A
Are
you
trying
to
submit
for
the
maintainers
track
or
you
as
a
maintainer,
to
submit
a
talk.
C
More likely the second one, because it will be about Harbor, but it will also be about other tools like Open Policy Agent. It's more about integrating Harbor into Kubernetes and automating workflows with the tools there: you know, presenting the tools, how they work, and bringing it all together for the Kubernetes operator. So it's not about showing new features and functionality of Harbor; it's more about how to work with Harbor in a Kubernetes environment.
A
I
think
not
sure
if
we
already
submitted
the
maintenance
track
talk.
Do
you
know
that.
D
C
Okay, so it's not. Then it's not the maintainers track.
D
In the past, in the maintainer track talk, we would introduce the new features and the roadmap and...
D
Yeah, so only two days left.
A
Yeah, and I think last time Steven did fill in the form for the maintainers track. I hope we don't miss this one this time, yep, as we did for China.
A
Sorry,
okay,
yeah,
but
I
can
get
into
three
with
cncf
if
you
want,
if
you
you're,
not
able
you
to
find
the
link
to
submit
a
talk
as
a
maintainer,
because
I
think
there
are
two
processes,
as
you
said,
like
a
normal
speaker
and
one
from
maintainer
point
of
view,
so
I'll
try
to
help
you
out
with
this
one.
A
Yep, all right. In that case, I think we can coordinate that a bit: if any of us is accepted, I think we can join, if you want, and do something together. That would be cool, I think. Right, okay. Another topic: since, starting next week, the Christmas holidays are approaching in the vast part of the world...
A
I think we can skip all meetings until the beginning of January, but I just want to have everyone's agreement on this call, if everyone is happy with this one.
A
I can see only one of you nodding, so I'll take that as an okay. And yeah, all the other quiet people: I'll take your silence as a yes as well. Okay! So what I'm gonna do: I'm gonna write a mail to the mailing list and pin a message in the Slack channel saying that we're gonna skip until January, whatever the first meeting turns out to be.
D
I
have
just
noticed
that
av
and
put
a
question
in
the
chat
and
the
regarding
update
on
2.5,
so
I
can
provide
a
briefing
update
for
that
and,
like
we
talked
in
previous
community
meeting,
when
we
plan
in
the
2.05,
so
in
2.5
we
majorly
have
our
focus
on
support
the
cosen.
D
Yeah, I think we have mentioned it previously: currently, the rough target date for 2.5 is by the end of February next year.
A
So if it's the end of February, that means 2.6 would be around the end of May or June.
A
I'm trying to figure out what the update for KubeCon will be, whether it's going to be 2.6, but we won't be there yet. Thank you for that. And we have another question, about the Log4j 2 vulnerability.
C
There is no search capability built into Harbor that is based on Elasticsearch or Solr or anything like that. So, I mean, Harbor is not affected by this vulnerability, because log4j is not used anywhere in the whole stack.
A
Can
you
write
a
mail,
a
very
short
one,
to
the
to
the
main
list,
so
we
can
answer
that
yeah
for
the
whole
community,
not
to
like
I
mean,
and
we
can
point
to
that
mail
in
tweet
and
everything
so,
okay,
we
can.
We
can
inform
the
whole
community
that
we're
clear
of
this
one
yeah
thanks
for
that.
I
I'm
I'm
I
I
was
gonna
write
it,
but
I'm
not
sure
if
I
can
wear
this
properly
in
the
right
terms.
As
you
already.
A
Nope, okay. Then thank you for your attendance today; very useful discussions, I think. See you after the New Year. Was that the 5th? No, that should be the 6th of January, I suppose; if not, the first meeting right after the Christmas break. Happy holidays, everyone, and talk to you. Bye, bye.