CNCF Harbor's Community Zoom Meeting
A: Okay, hello everyone. My name is Orlin and I'm the community manager for Harbor. Today is Wednesday the 21st of September and this is the official community meeting. Please follow the CNCF code of conduct; in practice, just be nice to each other. With that, thank you. Sorry everyone for being a bit late, technical problems over here. I'm going to share the plan and what we have in the agenda for today, just give me a second. There you go, I hope you can see that.
A: I pasted it into this issue. If you want, add your name to the attendance list and add your topics for today. I'm a bit rusty, I just came back from vacation, so please give me time and I'll speed up in the next days. I can see we have one topic: do you want to take over and discuss it?
C: Yeah, thank you. Okay, hello everyone. My name is Chenyu and I'm from VMware, and today my topic is a proposal for copy by chunk for Harbor replication.
C: In previous Harbor versions, replication can only replicate or copy an image blob as one whole blob; copying by chunk is not supported when copying image blobs. Why do we need to support this case? With the development of edge computing, image registries such as Harbor are deployed to edge nodes to achieve better performance and independence. So some users want to replicate images from one central Harbor to edge Harbors, or pull images from the central Harbor at the edge, but usually the network at the edge is restricted and even unstable. A low-bandwidth, high-latency environment is a big challenge for replication. That's why we want to support this for replication. Our goal is to improve the reliability of layer copying for replication.
C: Meanwhile, we can reduce the cost of retrying replication when issues occur, such as network jitter and service restarts.
C: Next, let's go through the proposal details. We will add an option in the replication policy to identify whether copy by chunk needs to be enabled. The default behavior is the same as before, that is, not copy by chunk but copy by blob. The per-chunk size has a default value, and we don't actually recommend that users configure this value, because a value that is too big or too small is not suitable for replication, but users can override it by environment variable if they really want to change it.
C: Also, we need to implement the copy-by-chunk logic in the replication job. We define two phases based on different scopes to implement this feature. The first phase provides the basic capability and supports some basic functions, and in the second phase we will enhance it and bring in a breakpoint-and-resume mechanism backed by Redis.
C: Currently, the chunk size is not exposed via the API, because we don't want users to change it. The default chunk size is 10 MB, but users can override it by setting the replication chunk size environment variable.
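A minimal sketch of that default-with-override behavior (Harbor itself is written in Go; the Python below and the variable name `REPLICATION_CHUNK_SIZE` are only illustrative of what the talk describes):

```python
import os

DEFAULT_CHUNK_SIZE = 10 * 1024 * 1024  # 10 MB default, as stated in the talk

def resolve_chunk_size(env=None, var="REPLICATION_CHUNK_SIZE"):
    """Return the chunk size in bytes, honoring an env-var override."""
    env = os.environ if env is None else env
    raw = env.get(var)
    if raw is None:
        return DEFAULT_CHUNK_SIZE
    try:
        size = int(raw)
    except ValueError:
        return DEFAULT_CHUNK_SIZE  # ignore malformed overrides
    return size if size > 0 else DEFAULT_CHUNK_SIZE
```

Keeping the knob out of the API and behind an environment variable matches the intent that most users should never touch it.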
C: Next, let's look at the scope of the two phases. The scope of phase one is to support replication copying by chunk, and to retry copy by chunk as before if an error happens during replication. This behavior is similar to the previous implementation: in the earlier version we replicate the blob, and whenever an error happens, we retry copying it.
C: The phase two scope is chunk resuming across jobs, a caching policy, and even multiple attempts of execution, for example job retry. That means, if your job errored and next time you trigger it manually, in phase two you can resume from the breakpoint left by the previous job. To implement this, we need to cache the chunk location and the last end range. You can think of the last end range as the breakpoint stored in Redis.
C: Okay, next let's take a look at these two pictures. They show the difference between copy by blob and copy by chunk. First, let's check the first picture, which is copy by blob, that is, the current implementation.
C: If you want to copy one image from source to target, and your image has three blobs, blob one, two, and three, the replication needs to pull each whole blob. For example, the replication pulls blob one and then pushes the whole blob to the target, and once that is done it can pull the next one, blob two, and then the next blob, blob three.
C: But if we use copy by chunk, the difference is that we split the blob into different chunks.
C: Here I split the blob into three chunks, chunk one, two, and three. The replication pulls the first chunk, chunk one, and pushes it to the target, then pulls chunk two and pushes it to the target, and finally pulls chunk three and pushes it to the target. Blob two and blob three are handled similarly to blob one: they all need to be split into multiple chunks, and then the chunks are pulled and pushed one by one.
C: For example, say you want to copy a 100 MB blob. If you copy by blob, you need to pull the whole 100 MB of data and then push it to the target. Suppose 99 MB of the data has been copied to the target and only 1 MB remains, but at that moment the network breaks.
C: If you copy by blob, you need to retry the whole thing: pull the whole 100 MB of data again and then push it. But if we use copy by chunk, assuming the chunk size is 1 MB, we split the blob into 100 chunks, and if we successfully pushed 99 chunks and only one chunk failed, we just need to retry pulling and pushing that last chunk. That's only 1 MB. If a 1 MB transfer takes about one second, copy by blob will cost more than 100 seconds to retry, but with copy by chunk we just need one second to copy the last chunk.
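The arithmetic in the example can be checked with a small sketch (the 100 MB blob, 1 MB chunk size, and failure after 99 complete chunks are the figures from the talk):

```python
def retry_cost_mb(blob_mb, chunk_mb, completed_chunks):
    """Return (copy-by-blob, copy-by-chunk) re-transfer cost in MB
    after a failure with `completed_chunks` chunks already pushed.

    Copy-by-blob restarts the whole blob; copy-by-chunk only repeats
    the part that had not completed yet.
    """
    by_blob = blob_mb                                  # whole blob again
    by_chunk = blob_mb - completed_chunks * chunk_mb   # only the tail
    return by_blob, by_chunk
```

At roughly 1 MB per second, the (100, 1) result is the "more than 100 seconds versus about one second" comparison above.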
C: If users want to enable copy by chunk, they need to check this checkbox, and the tooltip text is "whether to copy the blob by chunk; transferring by chunk may increase the number of API requests", because we split one request into multiple requests. Regarding the DB schema, we need to add a new field, copy by chunk, in the policy model, and a migration SQL script is also required for upgrade.
C: This API is actually implemented by Distribution, so for Harbor we just need to follow the description of the chunk API and implement the caller for it.
C: Okay, let's look at the picture to understand the process of the chunk APIs. Pulling a blob by chunk is simple, with no big difference from pulling a whole blob: the API path and method are the same as before.
C
The
only
thing
you
need
to
is
is
the
header
range,
the
the
key
is
range
and
the
value
is
bytes.
Equal
start
location
to
end
location,
for
example,
from
zero
to
one
thousand
Push
by
push
block
by
trunk
is
different,
is
is
has
a
big
difference
than
before.
C: Firstly, you need to POST this API to create the upload session in the Distribution. This step is the same as before, and then the Distribution will return location one for your next upload. For the chunk upload, you need to use the PATCH method to push your first chunk to this location, and in the header you also need to specify the Content-Range, start to end. Then, when you send this PATCH, you get a location returned in the response header.
C
You
need
to
use
the
location
2
as
the
API
path
and
for
upload
your
next
chunk,
and
then
you
got
the
locations
ring
here.
If
you
have
more
trunks
like
such
as
chunks
ring
trunk
ball
trunk
5,
the
logic
is
same
as
before.
You
need
to
use
the
location
returned
from
last
trunk
for
next
trunk
upload
here,
I
assume
the
trunk
string
is
the
last
chunk.
The
last
trunk
API
is
different.
You
need
to
change
the
patch
method.
To
put,
and
also
you
need
to
add
the
query:
parameter
digit
equal
digits.
C: The digest is the blob digest, and then you get the 201 Created response. That means your blob was pushed successfully.
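The push sequence just described can be sketched as a plan of HTTP calls, following the chunked-upload flow of the OCI distribution spec (POST to open the session, PATCH per chunk with a Content-Range header, final PUT with ?digest=...). The helper below only builds the plan as data; it performs no network I/O, and the step notes are illustrative:

```python
def chunked_push_plan(blob_size, chunk_size, digest):
    """Return (method, content_range, note) steps for a chunked blob push."""
    steps = [("POST", None, "open upload session; response carries Location 1")]
    offsets = list(range(0, blob_size, chunk_size))
    for i, start in enumerate(offsets):
        end = min(start + chunk_size, blob_size) - 1  # Content-Range is inclusive
        last = i == len(offsets) - 1
        # Intermediate chunks go via PATCH; the final chunk rides on the
        # closing PUT, which carries the blob digest as a query parameter.
        method = "PUT" if last else "PATCH"
        note = f"?digest={digest}" if last else f"response carries Location {i + 2}"
        steps.append((method, f"{start}-{end}", note))
    return steps
```

Each PATCH must target the Location returned by the previous response, which is exactly why the proposal needs to persist that location for resume.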
C: Okay, this is the process of pulling a blob by chunk and pushing a blob by chunk. In the previous implementation, all the adapters implement the same underlying registry client interface, where we define the common operations of the Distribution V2 API, but there is no chunk API. So we need to extend it with two methods for chunk replication, pull blob chunk and push blob chunk, and we need to specify the start and end parameters in the functions for building the URL.
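A sketch of that interface extension (Harbor's real interface is in Go; the method and parameter names here are assumptions for illustration). The two new methods carry explicit start/end offsets so the Range or Content-Range headers can be built:

```python
from abc import ABC, abstractmethod

class RegistryClient(ABC):
    """Common operations against the distribution V2 API (existing part)."""

    @abstractmethod
    def pull_blob(self, repo: str, digest: str) -> bytes: ...

    @abstractmethod
    def push_blob(self, repo: str, digest: str, data: bytes) -> None: ...

    # New methods for chunked replication (phase 1 of the proposal):

    @abstractmethod
    def pull_blob_chunk(self, repo: str, digest: str,
                        start: int, end: int) -> bytes:
        """GET with header 'Range: bytes=<start>-<end>'."""

    @abstractmethod
    def push_blob_chunk(self, repo: str, location: str, data: bytes,
                        start: int, end: int) -> str:
        """PATCH (or final PUT) with 'Content-Range: <start>-<end>';
        returns the Location to use for the next chunk."""
```

Because every adapter implements the same client, adding the two methods in one place makes them available to any adapter that is later verified to support chunked transfer.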
C: Okay, regarding the implementation: the phase one implementation is simple. We just need to implement the pull-blob-chunk and push-blob-chunk methods and support the retry for copy by chunk. As before, retrying the pull or push of a chunk adopts the same strategy as for blobs: by default five times with backoff, and it can also be configured by environment variable.
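That retry policy (default five attempts with backoff, overridable by environment variable) could look roughly like this; the variable name and backoff curve are illustrative assumptions:

```python
import os
import time

def with_retry(fn, attempts=None, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying with exponential backoff on failure."""
    if attempts is None:
        # Default of five attempts, overridable by env var (name assumed).
        attempts = int(os.environ.get("REPLICATION_CHUNK_RETRY", "5"))
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts, surface the error
            sleep(base_delay * (2 ** i))  # back off before the next try
```

The `sleep` parameter is injected only so the behavior can be exercised without real delays.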
For the phase two implementation, here I want to point out that this phase has not been fully discussed and detailed; more details need to be added.
C: The following is just initial thinking about this part, so feel free to leave your comments if you have any ideas. The key point of phase two is breakpoint and resume. From the process of the chunk API, we know we need to store the location for the next chunk push, and the last pushed chunk and its range, which is the breakpoint.
C: So we need to define a common interface for easy integration with adapters in the future. We define the chunk recorder interface, and in the initial phase it has five methods: get upload location, get breakpoint, set upload location, set breakpoint, and the last one is clear, which means cleaning the cache. We implement the Redis chunk recorder, which caches the location and breakpoint in Redis and can be shared by multiple, scheduled, or concurrent jobs.
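An illustrative rendering of the recorder interface, with an in-memory stand-in where the proposal would use Redis (method names are assumptions based on the five operations listed):

```python
class ChunkRecorder:
    """Interface: get/set upload location, get/set breakpoint, clear."""
    def get_upload_location(self, key): raise NotImplementedError
    def set_upload_location(self, key, location): raise NotImplementedError
    def get_breakpoint(self, key): raise NotImplementedError
    def set_breakpoint(self, key, end_offset): raise NotImplementedError
    def clear(self, key): raise NotImplementedError

class InMemoryChunkRecorder(ChunkRecorder):
    """Stand-in for the Redis-backed recorder shared by concurrent jobs."""
    def __init__(self):
        self._locations, self._breakpoints = {}, {}
    def get_upload_location(self, key):
        return self._locations.get(key)
    def set_upload_location(self, key, location):
        self._locations[key] = location
    def get_breakpoint(self, key):
        return self._breakpoints.get(key)
    def set_breakpoint(self, key, end_offset):
        self._breakpoints[key] = end_offset
    def clear(self, key):
        # Called once the blob completes, so stale state is not resumed.
        self._locations.pop(key, None)
        self._breakpoints.pop(key, None)
```

A Redis implementation would map these directly onto GET/SET/DEL with an expiration, as discussed later in the talk.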
C: These values can still be persisted when the job errors and be used for the next retry; the data is only lost when the Redis data is gone. Since we put the data in Redis, we need a key and a value. Here we have two types of data, the location and the breakpoint. The key format of the location is the fixed value "replication", then the source, the destination, the type "blob", the blob digest, and the UUID. The UUID is the session ID returned by the Distribution, and the final part is the type, a fixed value "location".
C
The
key
format
of
breakpoint
is
familiar
with
location
just
to
change
the
last
fixed
value.
Type.
Three
points
in
the
normal
case,
with
the
same
SRP
and
DFT,
should
only
have
one
record
for
location
and
viewpoint,
but
here
we
inject
the
uuid
as
key
to
handle
the
special
case
which
can
lead
to
conflict.
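The key layout just described might be rendered like this; the separator and exact field order are assumptions, since the talk only lists the components:

```python
def chunk_key(src, dst, digest, uuid, kind):
    """Build the Redis key from: fixed prefix, source, destination, the
    fixed type 'blob', the blob digest, the upload-session UUID, and a
    trailing record type of 'location' or 'breakpoint'."""
    assert kind in ("location", "breakpoint")
    return ":".join(["replication", src, dst, "blob", digest, uuid, kind])
```

Keeping the UUID inside the key is what allows two concurrent jobs for the same source/destination pair to record separate breakpoints instead of overwriting each other.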
C
There
are
following
citations,
which
can
cause
multiple
locations
and
break
points
for
the
same
source
and
the
destination
by
different
uid,
for
example,
if
you
have
one
event
based
replication
and
in
this
time
you
push
the
image
concurrently,
so
that
will
trigger
two
replication
jaw
in
one
time
in
one
time
and
the
next
citation
is
the
the
replication
is
the
it
is
configured
as
measurely
and
the
user
triggered
them
by
concrete
API
call
that
will
that
will
create
two
jobs
concurrently
to
handle
the
replication
so
to
resolve
the
conflict.
C
We
need
a
resume
magnetism.
Our
our
resume
magnetism
is
two
steps.
For
example,
here
we
have
two
breakpoints
for
one
image:
uid1
and
the
uid2.
They
have
their
location
and
breakpoint.
C: Firstly, we filter the data by the image name, source, and destination, and then we get the four keys and values. Then we choose the bigger one, because the bigger breakpoint means more content has been pushed, which saves time on resume.
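The two-step resume, filter the candidate records for the same source and destination, then pick the record whose breakpoint has advanced furthest, can be sketched as follows (the record shape is an assumption for illustration):

```python
def pick_resume_point(records, src, dst):
    """records: {uuid: {"src": ..., "dst": ..., "location": ...,
                        "breakpoint": int}}
    Returns (uuid, location, breakpoint) of the furthest-advanced match,
    or None when nothing matches."""
    # Step 1: keep only records for this source/destination pair.
    candidates = {u: r for u, r in records.items()
                  if r["src"] == src and r["dst"] == dst}
    if not candidates:
        return None
    # Step 2: the largest breakpoint has the most content already pushed.
    uuid = max(candidates, key=lambda u: candidates[u]["breakpoint"])
    r = candidates[uuid]
    return uuid, r["location"], r["breakpoint"]
```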
C: But what happens if multiple jobs resume from the same point, just like this picture shows? For example, I have two jobs, job one and job two, and they resume from the same breakpoint. They may push via the chunk API at the same time; for example, job one pushes range 100 to 200 and job two also pushes range 100 to 200. But the Distribution validates the data size, so only one API call can succeed. If job two succeeds, job one will fail.
C
So
finally,
only
one
job
can
set
breakpoints
and
sense
location.
So
here
does
not
have
any
concrete
issue,
because
only
one
can
access
the
success.
One
will
handle
the
stats
and
the
everyone
will
be
built
here.
We
need
to
notice
that
the
key
of
breakpoint
and
location
ring
ready
should
have
the
expel
time
to
avoid
30
data
when
the
job
status
is
abnormal.
The
default
expert
time
is
2
hours.
C: But if you want to cache the location and the breakpoint for longer, adjusting the expiration time should also be supported through user configuration.
C: The last thing I want to point out is that currently copy by chunk is only enabled when the source and the target registry are both Harbor. Other adapters, such as the GitHub registry, the Azure registry and so on, are not verified for whether they support chunked transfer; in theory they should support it if they follow the OCI spec. In the Harbor UI we do the validation: you can only enable copy by chunk when the source and the target are both Harbor.
A: Anyone else having any topics? Or I can... I want to ask something about KubeCon. KubeCon North America is coming, and I saw the video, so I'm going to provide some feedback on it, but I think it's great so far for the maintainer track. As a second thing, I think one of the maintainers requested to have a kiosk for the project, but he must confirm that. And the other thing: I sent a mail to the maintainers mailing list like two days ago.
A: Maybe, I don't know. We have an end date, until the 23rd of September, if you want to be featured on stage in the keynotes, and I really want us to be there this time; we missed the last one.
A: So if anyone thinks something from the new release, for example 2.6, is the best thing that we should mention there, I can fill in the form. I just need your input on what to put into the form, what kind of information we want to provide for the keynote.
A: If you watched last time, there is a project updates segment, and someone from the CNCF goes on stage and reads the updates, and we missed that opportunity to show the CNCF and the community that we implemented cosign in 2.5. I think we can use that opportunity now to mention 2.5 for cosign and get the new things in 2.6 on that list. It should be short, not very extensive, because we have like 30 seconds per project or something.
A: So if you have any ideas, please shoot them now, or you can contact me afterwards and we can discuss it. Geraldine, do you have something in mind, maybe?
B: Oh yeah, you mentioned some... I mean, that is interesting stuff, yes. Yeah, okay, I think...
B: I mean, previously we did not have experience with the keynotes up there, right? We do? Oh, we do, okay, we did have that previously; we just had reduced information.
B: The major features, yeah, okay. So I think, as you just mentioned, we can talk a little about cosign from 2.5 and some new features from 2.6. I think it's a very good idea.
A: Yeah, we should be there, and I think it should be something from this list over here.
A: So we just have to figure out which one we think will be the most eye-catching for the rest of the crowd. That's the reason why I think we should mention the 2.5 release for cosign.
B: For 2.6, maybe, I think we have done the cache layer feature, which is very useful for enterprise users to improve performance. I think that is...
B: I mean, the first one is the cache layer, right? I think it is really useful for enterprise users, because that is how we improved a lot of the performance for pulling artifacts. And the second one in 2.6 is the CVE export; I think this feature has been asked for by the community for quite a while.
A: Okay, yeah, I think those are the highlights. That's good, because we need highlights, not something very big, as I said, so that's great. Okay, do we have the same thing written up for the CVE export? No?
B: There is something in the Google Doc for 2.6 that's shared by you.
A: Yeah, the one that wasn't posted on the release, okay.
B: I mean, previously we planned to write a blog for the 2.6 release, right? So I think the folks working on the CVE export feature have provided some material for this, to get some information from, yeah.
A: And also, now that I'm back, I saw that that information wasn't posted during the release cycle, so I'm going to work on it and post it.
A: But yeah, it's my thing to do. I'm just asking you for feedback on this one. So practically we have to fill in that form.
A: So, what's new with the project this month? Because the past six months practically include the 2.5 release as well.
A: Like on the edge of 2.5. But if you watch the keynotes from the last KubeCon, they featured a huge amount of announcements from different projects, and there were like logos on stage and 7,000 people watching live.
A: So I'll fill in this one based on that.
B: Okay, great, thank you. So if you need anything else from us, let us know in the next few days.
A: Yeah, tomorrow and Friday are days off in Bulgaria, but I'll try to make it happen now, so as not to forget and not to postpone it to the last minute.
B: And so, to submit this, you have the permission to submit, right?
A: Yeah. And then the other thing: I'm going to check offline with him about doing the kiosk. Is there anyone from the team planning to go to KubeCon?
B: No, no. Nobody from Beijing.
A: I'm also not going, yeah. So we'll have him there; okay, he said he's going. Maybe things for me will change at the last minute, I don't know, but yeah.
A: New management down the line, I'm not sure how that will play out. But whatever, we have a team there, and if he's doing the kiosk that will be great. Okay, so that's everything from my side for today. Anyone else? Okay.
B: Just a notification from my side: two weeks later, China will have the National Day holiday, from October 1st to October 7th, so we may not be able to join the community meeting in that week.
A: Interesting, do we have a community meeting then? Give me a sec... So that's the week in question, and that's fine, yeah. We do. We can cancel it if you want, or put a little vote into the maintainers channel. We'll cancel it, so you don't have to worry about whether you're going to miss something or not.
A: Hey, great holidays if we don't see each other till then. Okay, anyone else, any issues that I can help with? I'm a bit rusty after a month's holiday, but yeah, I'll be okay. Sure, I'll stop the recording. Sorry.