From YouTube: Harbor Community Meeting - June 16, 2021
Description
CNCF Harbor's Community Meeting
A
Okay, we're set. Hello everyone, today is June 16th and this is the official Harbor community meeting. I'm Orlin, the community manager for Harbor, and I'm going to facilitate this meeting today. As this is an official CNCF meeting, please follow the CNCF code of conduct and be respectful to others.
In short, I'm going to share the agenda. We don't have much on today's agenda. Can you see my screen? (Yep.) All right, since we don't have anything put in for the agenda today, does anyone have topics we can kick off with?
B
I know we have an upcoming release happening next week. Do we have someone on the call who can talk about the new release?
D
Hi everybody, I will share this proposal with you. Thank you. I'm a member of the TCR team.
D
As most of you know, the time spent pulling images is too long in the entire life cycle of container startup. At the moment there are some image acceleration projects, but they belong to particular vendors and are not vendor-neutral. Harbor is well known as a very active community, and we hope to create a vendor-neutral sub-project in the Harbor community that is maintained by the community.
D
For example, assuming the nodes can access Harbor over the network, a normal developer would install a build tool to build the container application in the accelerated image format and push the artifact to the Harbor instance, and the ops engineer would create a Kubernetes workload from those artifacts.
G
Do you need to share your screen?
D
I don't think I need to share my screen; maybe he should share his screen. I can stop sharing if needed.
F
Orlin, I'm here. Wait a minute.
F
Nydus uses RAFS, and we also have our own image format, which is content-addressable; ours is based on SquashFS.
F
Sorry about that. The second key point is that we need FUSE or another kernel file system to provide the rootfs for the container.
F
Because we use SquashFS, we need a FUSE daemon; that is the component in our project that provides the container rootfs.
F
The overhead there is essentially zero, so we can reorder data to optimize file lookups and some IOPS-heavy operations.
F
Our workflow is: the image builder builds a SquashFS blob from Docker storage, then merges every blob's metadata into a new metadata blob.
F
On a kubelet node, we can use a containerd snapshotter plugin, our own snapshotter, to create the FUSE mount that provides the image rootfs, plus an overlay read-write layer backed by a local file system such as XFS or ext4. Together these form the complete rootfs.
F
Our image acceleration project has some key features. We can support multiple data sources, like a registry (Harbor or Distribution, via the API) and POSIX-compatible file systems like NFS, among others.
F
When we download a data block on demand, we put it into the local file system.
F
Feature four is high reliability. This is the key feature that sets us apart from other FUSE-based projects like stargz or Nydus.
F
The FUSE process can be recovered by a master process. Because the FUSE process in user space provides the rootfs, if it crashes or hits some other problem, the container will immediately have problems, so we have a feature to fix this.
F
The fifth feature is prefetch. Because some files, like glibc or others, are critical, we record a prefetch list in the image; those data blocks are downloaded from the registry or other source ahead of time instead of on demand. The sixth is prepared metadata.
F
We prepare the metadata early; this is what we call the merge method.
F
Seven: we have a per-data-block checksum. Because we download every data block from the registry over the network, we must verify checksums; currently we rely on the compression format's checksum for this, like LZ4's.
F
Our workflow: containerd calls our snapshotter when executing a pod-creation or image-pull command. The snapshotter uses our FUSE daemon to construct the container rootfs. Third, the FUSE daemon reads the merged metadata layer, where the superblock is stored, so it knows the mapping between files and the different layers and blocks.
F
Sixth, when the container starts, our FUSE file system receives the actual read requests, checks whether the corresponding data block is cached locally, and returns data from the local cache first. If a data block has not been cached, it reads the block from the source. As for our design, you can consider it an extended SquashFS: the superblock is the SquashFS superblock.
F
We add a blob table; the inode table, directory table, fragment table, UID/GID table, and extended attribute table are all from SquashFS. We just added the blob table.
F
It can address a file's data in a different blob. The inode types also support directory, regular file, symlink, block device, character device, FIFO, and socket.
F
We also have the directory table, which is the original SquashFS directory table; again we just add the blob reference.
F
Okay, that is my introduction. Any questions?
F
For background: initially we only supported SquashFS with NFS as the data source, which is what we use in production, but we will extend the sources to include the registry; in all cases we use SquashFS over FUSE.
I
Okay, I'm not very sure about these manifest snippets you pasted. For example, based on the OCI convention, the media type of the config blob should probably be changed to some project-specific media type.
I
Because we use the media type to identify different kinds of artifacts.
I
I think it is this, but yeah, I think we need to double-check offline with the other maintainers.
F
Okay, but containerd's CRI and the other snapshotters only support the standard media types.
F
Okay, so this is our reason why we want a subproject that is coupled with Harbor.
I
I don't quite understand why it has to be. To be honest, maybe I missed some previous discussion, but personally I'm not very sure why it should be put under Harbor, because Harbor is, you know, a registry.
F
For my part, I know Harbor will build an image acceleration working group. Is that right, Steven?
G
Yeah, okay, actually I want to share some information after the discussion of this proposal. As you mentioned, we have set up a working group on image acceleration. I think the core mission of that working group is to integrate different image acceleration tools, or image accelerators, to convert a normal or regular image to an accelerated format, so that the user can pull the image in a different, accelerated way. The mission of that working group right now is focused on integration.
G
It's not working on some new accelerator; that's a different mission. I want to clarify that the proposal we are discussing here is totally different from the mission we will pursue in the working group.
G
The goal of the working group is to provide the integration. Of course, after this new accelerator is ready, it can also be integrated into Harbor with the general integration framework.
F
I think this workflow, our workflow, also needs the Docker local storage, yeah.
I
We need to think about which is the better workflow: do we suggest users build it locally, or do we tell them they need to do the conversion? I personally prefer the build-it-locally way, because if you need to do the conversion, you need to push the Docker image to Harbor first and then do the conversion, and we need to think about how we associate these two images; they have the same content, and we need to tell the user that fact.
G
It's like content replication: you push one copy of the content to a central node, like a registry or an HTTP server, and then synchronize or replicate it to a local cache. Then the container runtime can read from there. What I mean is you do not need to repeat the building process.
G
I think for the most part it's similar to Nydus, except that in Nydus they have agents on the container runtime side; here they use the local file system.
F
Yeah, we also have the local file system, like the other image acceleration projects, stargz or Nydus. A local cache is necessary.
F
What is the biggest difference? stargz and Nydus just use libfuse, which is a thin FUSE connection with the kernel, but when the FUSE process goes down or crashes, the pod or container is done.
F
Okay, I will continue. Even in Nydus, if the FUSE process providing the rootfs crashes, the container cannot continue to read files; but our FUSE daemon has a recovery mechanism.
F
The mechanism works like this: if the FUSE process crashes, the container I/O would hang, but when we remount the FUSE file system the container rootfs can be read again, because FUSE and the kernel communicate through a file descriptor. If the FUSE process goes down, that fd would normally be released, so we have a babysitter process that holds the fd, together with the snapshotter.
F
It holds the open fds and replays the in-flight requests to FUSE.
F
Recovering a read-only file system is much easier than recovering a read-write one.
F
I think this is the biggest difference from stargz and Nydus. We want a project that is production-ready.
A
All right, sorry, anyone else? I think there is a PR that needs attention. Do you want to discuss it right now?
J
Yes, I have had this PR open for some time now, and for us it's a bit of a burden. There are a lot of users requesting it on the issue, and I mentioned it already last time. I honestly don't want to be too pushy, but it's kind of a small issue, and I would like to help you make a decision on how this can be approved, or help you get to the approval.
J
We could jump on a review session together with someone from the Harbor team, and I can demonstrate how it is working and what problems it solves. I also mentioned this in the PR, but I would like to see if we can resolve it together.
J
Yeah, we went over it once, and it's basically a simple feature, but it allows using robot accounts for replication, and this is helpful for us and for our customers.
J
It's not a feature per se, it's more of a bug fix, because it's really a tiny function change, but it makes multiple use cases possible: basically using robot accounts for API access, or for replication. I can also write a blog entry, or an entry in the documentation, on how to use robot accounts for API access, because this is basically what happens here.
E
You know, it's not really a matter of how we do it, it's a matter of whether we should do it. This was always the issue: whether we would grant open API access to robot accounts.
J
But it's already there, you know. It's not new functionality; it's already there, and this is basically just fixing a bug, because there is a function called get projects and also get project, but it calls get projects where, in my opinion, that is not correct. If you look at the code changes, it's really just three lines of code, but in general, yeah, you're right.
E
I think the concern, at least previously, was that replication has an endpoint, right? You're replicating from the current Harbor instance to another Harbor instance if you're configuring push-based replication, and similarly you need credentials for the target instance, or for the source if you're doing pull-based replication, so there are credentials involved either way.
E
For
you
know,
the
target
registry
instance
doesn't
even
have
to
be
harbor
right.
It's
like
one
of
the
many
instances
that
we
support,
like
the
cloud
registries
or
koi,
or
something
like
that
that
that
you
know
responsibility
really
rests
with
the
system
admin,
so
it
used
to
seem
like
it
wasn't
appropriate
for
robot
accounts
to
be
able
to
configure
application.
We
also
just
sort
of
you
know
lock
down
the
surface
area
of
what
a
robot
account
can
do.
G
Okay, now some quick news. We released the Harbor operator 1.0.1 yesterday; it's a patch release. In this patch release we introduced some enhancements to the operator, and I think a few items can be highlighted. The first one is that we support Contour as the ingress controller. We also let one operator version support multiple Harbor versions, though only multiple Harbor patch versions, so one operator can deploy
G
You know, multiple Harbor patch versions, not minor versions. We also fixed some bugs: when the HarborCluster CR is deleted, some resources were not deleted, so we have fixed that.
G
Exactly, I think it is 2.2.0.
E
And you said it supports Contour in the operator?
E
Yeah, so we're close to releasing 2.3, and we're going through the list of feature requests for 2.4.
E
So
if
you,
if
you
guys,
have
any
featured
requests
anything
you'd
like
to
see
you
know
accomplished
in
2.4,
this
would
be
the
time
to
to
go
into
the
harbor
issues
on
github
and
creating
an
issue
or
adding
comments
to
an
existing
issue
and
yeah
I'll,
be
adding
some
feedback
to
a
list
of
issues
that
we're
gonna
be
working
on,
but
you
know
feel
free
to
to
ping
me
or
to
ping.
Anyone
in
the
channel
yeah,
so
you're
gonna
have
one
more
release.
One
more
minor
release
this
year.
E
Right, so we're looking at 2.4 right now, and we'll probably do a release right before the next KubeCon in October, with GA probably after KubeCon. That's what we usually do: we have a demo-able RC build, an RC1 or RC2, right before KubeCon, and then we have the actual GA.
E
A few weeks after that, right. So this is going to be the last one for the year. We don't know exactly how long, but that's roughly the timeline. So if you have anything you want to get done, if it's important, definitely put it on the GitHub issues.
E
It is being discussed, and it is our intention to release some version of that in 2.4. I don't know if we'll actually release it, but we're going to be working on it at the very least.
E
We'll pull you in to those discussions. We've been thinking about it a little bit in the context of what we need in our downstream products, but it would be good to move this upstream as early as we can. So thanks for bringing this up; let's move this to a more
E
You know, a more public forum for discussion and design. But essentially we're going to need a very lightweight instance of Harbor, because the current Harbor is too large, and the satellite Harbor does not need the full suite of features that we have today in Harbor, including a lot of the gating mechanisms for pushing images to that satellite instance.
E
We're
not
really,
you
know
envision
any
kind
of
pushing
images
back
from
the
satellite
to
the
central.
It's
really
just
for
running
pure
application
workloads
and
so
also
the
things
like
you
know,
scanning
images
that
should
all
be
done
on
the
on
the
central
harbor.
It's
going
to
scan
its
you
know,
signatures
are
verified
and
then
it
gets
pushed
out
to
satellite
and
so
trying
to
make
the
satellite
as
small
as
possible.
E
Because,
let
me
know
the
footprint
for
these
edge
nodes
are
very
small
at
times
in
most
edge
solutions,
you
know,
there's
a
1u
or
2u
single
server.
You
know
with
limited
limited
ram
in
storage,
so
that's
kind
of
the
plan
here
but
yeah.
Let
me
let
me
let
me
get
you
more
involved
in
those
discussions
with
dean.