Description
Harbor Community Meeting - China/Europe Time Zone - March 25, 2020
B: Okay, hello everyone, and welcome. I'm the host of today's meeting. This is the agenda: first, I will share the progress of the Harbor versions; then the operator overview and demo; and the last part is the time for open discussion.
C: Okay, so before Pierre does his demo, I'd like to use several minutes to quickly share an overview of our Harbor operator. Here is a simple diagram to describe what we will deliver with the Harbor operator.
C: So far we have one operator; we call it the Harbor operator. This operator was donated by Pierre and the OVH team, and it focuses on managing the lifecycle of only the Harbor service components. It will not cover any of the dependent services Harbor relies on, like the database, Redis, and the storage, so this is a pure operator for pure Harbor. As I think that's not enough, based on this operator we are designing and developing a harbor-cluster operator.
C: This operator will provide a HarborCluster CR. This CR will cover the all-in-one solution. That means, besides the Harbor service components, we also manage its dependent services, like the PostgreSQL cluster, and possibly some in-cluster storage, where we will use MinIO to take that responsibility.
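For illustration only, here is a minimal Go sketch of how such a HarborCluster CR might be shaped: Harbor plus its in-cluster dependencies. All field and type names are assumptions, since the CRD schema was still being designed at the time of this meeting.

```go
// Hypothetical shape of a HarborCluster custom resource: Harbor's own
// services plus the dependent services (PostgreSQL, Redis, MinIO) that
// the harbor-cluster operator would manage on its behalf.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

type HarborCluster struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec HarborClusterSpec `json:"spec"`
}

type HarborClusterSpec struct {
	PublicURL string `json:"publicURL"`

	// In-cluster dependencies, each reconciled via a community operator.
	Database PostgresSpec `json:"database"` // PostgreSQL cluster
	Cache    RedisSpec    `json:"cache"`    // Redis
	Storage  MinIOSpec    `json:"storage"`  // MinIO object storage
}

type PostgresSpec struct {
	Replicas int32 `json:"replicas"`
}

type RedisSpec struct {
	Replicas int32 `json:"replicas"`
}

type MinIOSpec struct {
	Capacity string `json:"capacity"` // e.g. "100Gi"
}
```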
C: So the HarborCluster will be an all-in-one solution for delivering a highly available and stable Harbor service. For PostgreSQL and MinIO, we'll leverage some community operators; they are already there, so we'll build on those operators to deliver ours. The last part we also cover is the backup operator: in the future, after we complete and release the harbor-cluster and Harbor operators, we will come back to see if we can deliver the backup and recovery scenario.
C: Okay. So far the Harbor operator and the harbor-cluster operator are developed in parallel; they are maintained in different code repositories. In the future, and not too long term, we will merge those two operators into one. That means we'll have a very flexible operator covering both cases: you can use only external dependent services, or you can deliver an all-in-one Harbor. Okay, so that's the overview of the operators.
C: Here is a diagram to show the relationship between the different operators. So far, based on the current design, when you use a HarborCluster to deploy the all-in-one Harbor service, it will rely on maybe five different operators. They have different controllers to manage the different CRs, and the top one is the HarborCluster.
C: The HarborCluster CR here owns the Harbor CR, and also owns the PostgreSQL, Redis, and MinIO CRs. As for the Harbor CR, based on the current design it will rely on the Kubernetes built-in resource types, like Deployment, Secret, ConfigMap, or Service, to construct the overall Harbor service. And in the future, the Harbor operator maintainers will do some improvements for each service.
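As a sketch of that design (not the operator's actual code; the Harbor CR type, its import path, and the portalDeploymentSpec helper are assumed), a controller-runtime reconcile loop turning the CR into built-in resources could look like this:

```go
package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	goharborv1 "github.com/goharbor/harbor-operator/api/v1alpha1" // assumed path
)

// HarborReconciler is the usual controller-runtime scaffold.
type HarborReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

func (r *HarborReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the Harbor CR that triggered this reconcile.
	var harbor goharborv1.Harbor
	if err := r.Get(ctx, req.NamespacedName, &harbor); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Materialize one built-in resource (here, the portal Deployment);
	// the real operator does the same for Secrets, ConfigMaps, Services.
	deploy := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{
		Name:      harbor.Name + "-portal",
		Namespace: harbor.Namespace,
	}}
	_, err := controllerutil.CreateOrUpdate(ctx, r.Client, deploy, func() error {
		deploy.Spec = portalDeploymentSpec(&harbor) // hypothetical helper
		// Owner reference: deleting the Harbor CR garbage-collects the Deployment.
		return controllerutil.SetControllerReference(&harbor, deploy, r.Scheme)
	})
	return ctrl.Result{}, err
}
```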
C: For each service component of Harbor, we will define a new CR for that specific component, so in the future the Harbor CR will own different service component CRs. That will make the reconcile process highly efficient, and maybe simpler and more powerful.
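A sketch of what one such per-component CR might look like (all names hypothetical): each component gets its own type and its own controller, so a change to the job service only re-runs that component's reconcile instead of one monolithic loop.

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Hypothetical per-component CR. The parent Harbor controller would
// create one JobService object per Harbor instance; a dedicated
// JobService controller then reconciles only this component.
type JobService struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   JobServiceSpec   `json:"spec"`
	Status JobServiceStatus `json:"status,omitempty"`
}

type JobServiceSpec struct {
	Image    string `json:"image"`
	Replicas int32  `json:"replicas"`
	// Connection material injected by the parent controller.
	RedisSecretRef string `json:"redisSecretRef"`
}

type JobServiceStatus struct {
	// Standard condition list so parents can gate on readiness.
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}
```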
C: Yeah, correct. Okay, so for the operator release plan: I think the Harbor operator will have its first release, 0.5, before the end of April. For 1.0, so far we still don't have any concrete plan; once we finalize the plan, we'll share it with the community. And for the harbor-cluster operator, we have three community contributors, from NetEase and from QingCloud.
C: These three contributors will work on the harbor-cluster operator, of course based on the Harbor operator and the other community database and Redis operators, to deliver the cluster operator. So far this work is only in the design phase; no code has been introduced yet, so there is no concrete release plan either. Same as before: once we have a confirmed plan, we'll share it with the community. So that's the overall view of the Harbor operator vision and roadmap.
D: Yeah, but if I'm not wrong, it requires root to install the Zoom client.
D: So I'm here to present you the operator. I'm Pierre from OVH, which is a cloud provider. I will go through the story behind this operator, why we needed an operator, and I will show you how we built it and how it works. So, Steven, let's go. Thank you.
D: Then the other simple use case is to update the Harbor resource, which simply changes the Go object in memory and applies it to Kubernetes. By the way, it also deletes unwanted resources: imagine a specification with ChartMuseum which is updated to remove the ChartMuseum; the operator takes care of this case and simply deletes the pod.
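Continuing the reconciler sketch from earlier (assumed shapes, not OVH's actual code), that deletion path could look like this: after applying the desired state, the controller prunes a component whose spec entry was removed.

```go
// Illustrative pruning step: if ChartMuseum has been removed from the
// Harbor spec, delete the Deployment a previous reconcile created for it.
// The Harbor type and its ChartMuseum field are assumptions.
func (r *HarborReconciler) pruneChartMuseum(ctx context.Context, harbor *goharborv1.Harbor) error {
	if harbor.Spec.ChartMuseum != nil {
		return nil // still desired; nothing to prune
	}
	deploy := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{
		Name:      harbor.Name + "-chartmuseum",
		Namespace: harbor.Namespace,
	}}
	// Ignore "not found": the component may never have been deployed.
	return client.IgnoreNotFound(r.Delete(ctx, deploy))
}
```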
D: So I wanted to show a few commands in my terminal but, like I said, I'm not at the office, so I was afraid of a live bug, and I set up a few screenshots here. To deploy the operator, it's simply one single pod; you can see it in red, the harbor-core-operator-controller-manager-xxx, and this is the main and only pod, which contains the operator.
D: Thanks to the domain, I will show you where we can find it and where we can change or update it.
D: I forgot the exact registry URL at OVH, but we have the public URL, and it owns all the pods you can see over there: the Harbor ones, the ChartMuseum, the Clair, core and job service, notary, and so on, all managed by the operator you just saw.
D: I wanted to do some of this live, but okay, thank you, Steven. So yeah, please keep this slide.
D: All this permission management is handled by the Kubernetes API, and it is exposed through a specific resource, which is a Secret. So if a user can access this Secret, in the right namespace in the right Kubernetes cluster, they can display the password, go to the Harbor UI, and connect as the admin user.
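For illustration, reading such an admin-password Secret with client-go would look roughly like this. The namespace, Secret name, and key are assumptions; RBAC on Secrets is what actually decides who may do this.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the local kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Hypothetical names: whoever can Get this Secret can read the password.
	sec, err := cs.CoreV1().Secrets("harbor").Get(context.TODO(),
		"harbor-admin-password", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Secret.Data values arrive base64-decoded already.
	fmt.Println(string(sec.Data["password"]))
}
```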
D: We have the same kind of architecture with the database secrets. In the middle of the slide we have the core database secret, the job service Redis secret, and the notary database secret. We have database secrets pretty much everywhere, because every component requires the database.
D: But thanks to that kind of list, with that kind of link, we can manage all the databases in Secret resources and link them to the Harbor resource, and that's pretty nice for automating all of this stuff. You can see at the end of this specification the public URL, which is the sample registry address at OVH. The simple demo that I wanted to show you is to update this public URL, and you would see the change live.
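The demo boils down to a one-field update on the CR. A hedged sketch with the controller-runtime client (CR type and field name assumed, as above):

```go
// Illustrative spec update: change the public URL on the Harbor CR and
// let the operator's watch reconcile ingress/config to match.
func updatePublicURL(ctx context.Context, c client.Client, ns, name, url string) error {
	var harbor goharborv1.Harbor // assumed CR type
	if err := c.Get(ctx, client.ObjectKey{Namespace: ns, Name: name}, &harbor); err != nil {
		return err
	}
	harbor.Spec.PublicURL = url // e.g. "https://sample-registry.example.com"
	return c.Update(ctx, &harbor)
}
```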
D: But just please trust me: it works. Let's go ahead. About the title of this slide: we have the Harbor specification and the Harbor config.
D: So in this slide we have two things. The first one is that, at this moment, we have configuration inside the specification; it's an issue from the first development, but we can improve it in the next ones. In the blue dashes we have the notary sources and worker, which are probably configuration.
D: Configuration itself should live in an external resource, which is the configuration. But the way we will handle it, as Steven explained earlier, is to have a split specification with split resources: we will have the portal resource, the job service resource, etc.
D: One other point I would like to touch on is statuses. In this slide I only showed you the specifications, but Kubernetes also has statuses on every resource, and I would like to, I don't know, set up the job service only if the Redis is really ready; and knowing whether it is ready should be based on the Kubernetes status.
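One way that gating could be written, continuing the earlier reconciler sketch (the Redis CR type is hypothetical; a standard `Ready` condition is assumed; `meta.IsStatusConditionTrue` is a real apimachinery helper):

```go
import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/api/meta"
	ctrl "sigs.k8s.io/controller-runtime"
)

// Illustrative readiness gate: only roll out the job service once the
// Redis dependency reports Ready through its status conditions.
func (r *HarborReconciler) ensureJobService(ctx context.Context, harbor *goharborv1.Harbor, redis *redisv1.Redis) (ctrl.Result, error) {
	if !meta.IsStatusConditionTrue(redis.Status.Conditions, "Ready") {
		// Not ready yet: requeue rather than deploy a job service
		// that would crash-loop against a missing Redis.
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	}
	// ...create/update the job service Deployment here...
	return ctrl.Result{}, nil
}
```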
A: Thank you. Thank you so much, Pierre. I also wanted to mention, with all the work that the OVH cloud team did on Harbor, we're very fortunate to have them as huge contributors to the community and to have them donate the operator to Harbor. We've released a blog post announcing that, and it's very important now to be part of that. So yeah, absolutely. And then Jérémie and Pierre are also now maintainers of Harbor, and they will continue contributing to our community and enhancing the operator, like both Pierre and Steven Zou mentioned earlier. We welcome other future contributions, and we look forward to seeing the operator become the method for lifecycle management of Harbor moving forward, for everyone.
E: Is it something that should be contributed by the vendors, or are you going to implement the CRDs for the most popular scanners? If you could comment on that, that would be great.
D: Yeah, the true answer to your question is that, at this moment, we have to copy-paste many files about Clair and rename them for Trivy.
C: I'd like to say a few more words about Daniel's question. To my understanding, our operator only covers the default scanner; other scanners we will not cover in the operator. That means after 2.0 we will cover both Clair and Trivy, but other third-party scanners need to be installed in their own way and configured through the Harbor dashboard, not the operator. The operator will keep consistency with the other installation approaches we have provided so far, so we only cover the default scanner.
D: Yeah, can we scale specific services? We provide the replica count on every component, so you can deploy two registries and two workers independently, but we do not handle every use case. For example, if you deploy two Harbor cores you may have issues sharing sessions, and with two registries you may have issues sharing secrets, the HTTP secret or whatever, I don't remember. But yeah, you can scale it; just be cautious about the shared secrets between your replicated components.
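A self-contained sketch of those per-component replica knobs (shapes assumed, not the actual CRD fields):

```go
package main

import "fmt"

// Hypothetical shapes only; the real CRD fields may differ.
type ComponentSpec struct{ Replicas int32 }

type HarborSpec struct {
	Core, Registry, JobService ComponentSpec
}

func main() {
	// Scale registry and job-service workers independently; keep a
	// single core to sidestep the shared-session caveat noted above.
	spec := HarborSpec{
		Registry:   ComponentSpec{Replicas: 2},
		JobService: ComponentSpec{Replicas: 2},
		Core:       ComponentSpec{Replicas: 1},
	}
	fmt.Printf("%+v\n", spec)
}
```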
A: All right folks, we're out of time as well, so we should probably end this meeting. If there are any additional questions or concerns, please post them on Slack in the Harbor channel or on the mailing list, and we can try to answer them.