A: I'm not sure, okay. I don't think so.
A: Hey everyone, welcome to the Harbor community meeting. The agenda for today: one of our maintainers, Steven, will take you through the progress on the Harbor operator, and he has a demo of the end-to-end flow.
B: Okay, cool, let's start. Thank you, everyone, for joining this session. I'd like to introduce some milestones of the Harbor cluster operator development work. Here is today's agenda.
B: First, I will quickly give you an introduction to the background of the Harbor operator and the Harbor cluster operator. Then I'd like to share the roadmap of the cluster operator and the overall Harbor operators, and introduce the work group that is working on the Harbor cluster operator development. The last part is a live demo.
B: Okay, here's some background on the Harbor operator. Currently we have a Harbor operator under the goharbor/harbor-operator repository, contributed by the OVHcloud team. The current version is 0.5.1. This operator focuses on deploying the Harbor components; it does not include the dependent services like the database, cache, and storage. So here is a very simple diagram of the Harbor operator.
B: There is a Harbor custom resource (CR), and this CR manages the core, portal, JobService, Notary, and the other Harbor components. It includes not only the required Harbor components but also the optional ones: you can decide whether you want to install Notary, Clair, or ChartMuseum. It keeps a similar experience to the offline installer.
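[Editor's note] To make the shape of this concrete, here is a hedged sketch of a Harbor CR with optional components selected. The field names are illustrative assumptions, not the exact goharbor/harbor-operator v0.5 CRD schema:

```yaml
# Illustrative only: simplified field names, not the actual v0.5 CRD schema.
apiVersion: goharbor.io/v1alpha1
kind: Harbor
metadata:
  name: my-harbor
spec:
  publicURL: https://harbor.example.com   # hypothetical value
  components:
    core: {}          # required components are always deployed
    portal: {}
    jobService: {}
    clair: {}         # optional: omit any of these to skip deployment
    notary: {}
    chartMuseum: {}
```

Selecting optional components by including or omitting sections mirrors the optional-component choices in the offline installer.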
B: Based on this, we heard more requirements from the community: they want the operator to deploy Harbor with high availability and stability. So, on top of the Harbor operator, we started to develop the cluster operator.
B: This operator is built on top of the Harbor operator, and it covers not only the Harbor components but also all their dependent services: the PostgreSQL database, the Redis cache, and the storage. Here is the overall architecture of the cluster operator. From this diagram we can see the overall structure: at the top level we define a HarborCluster custom resource, which is handled by the HarborCluster controller.
B: The controller manages all of the services for running the Harbor registry, including the Harbor components as well as the dependent services. The HarborCluster custom resource owns the Harbor custom resource, as well as the PostgreSQL, Redis, and storage resources; the storage is covered by MinIO. For handling these dependent services we leverage existing operators; for example, for PostgreSQL...
B: ...we use the PostgreSQL operator. So here we have a PostgreSQL controller, a Redis controller, and so on, to handle all the dependent services, and we use the Harbor operator to handle the Harbor custom resource. The Harbor custom resource covers the ingress, the services, and the various pods that run the Harbor registry.
B: The overall workflow is that the HarborCluster controller calls the related dependent-service controllers to create the dependent services, then injects the related information into the Harbor custom resource to start the Harbor registry. Simply put: we start PostgreSQL, we start Redis, we start the storage, then we inject the related information into the Harbor CR, and then we start Harbor, so the overall Harbor starts to work. This is the overall architecture of the cluster operator.
B: We have defined a custom resource definition (CRD) to describe the HarborCluster CR. We can quickly go through the spec.
B: The spec can be found in the repository. It's very simple: we cut off most of the unnecessary properties and only expose the necessary and required fields for the user to configure the cluster. Most of the properties stay the same as the Harbor CRD provided by the Harbor operator.
B: For example, you can define a version; this version determines which images we will use to start the Harbor services. The public URL defines the access point of Harbor as a registry. You can create an admin password for your admin account, and define a certificate issuer to generate the related certificates.
B: So far we use cert-manager to cover the certificate management. You can also define how many replicas, that is, how many copies of each service, you want to run. This is a global replica count. We also leave flexibility for the job service: you can define a separate replica count for it, because the job service may need the most resources to run jobs, so you may want a count larger than the global one.
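[Editor's note] A hedged sketch of the spec fields just described — version, public URL, admin password, issuer, and the global vs. job-service replica counts. Exact field names in the HarborCluster CRD may differ:

```yaml
# Illustrative sketch; field names are approximations of the CRD schema.
spec:
  version: 1.10.4                       # selects the Harbor images to deploy
  publicURL: https://harbor.example.com # access point of Harbor as a registry
  adminPasswordSecret: admin-password   # Secret holding the admin account password
  certificateIssuerRef:
    name: selfsigned-issuer             # cert-manager issuer for certificates
  replicas: 2                           # global replica count per component
  jobService:
    replicas: 4                         # job service may need more resources,
                                        # so it can override the global count
```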
B: Here is the common part for the optional components: you can define Clair (so far we don't support Trivy yet), ChartMuseum, and Notary. After that, you need to define the related dependent services. The first one is the cache: we use Redis as the cache.
B: You can define an external Redis; for example, if you already have a Redis service or some cloud Redis service, you can use it directly by defining the kind as external and creating a secret that points to your external Redis. If you want an in-cluster Redis, you can define it as in-cluster.
B: For in-cluster, you just need to define some resource requirements, the storage class, and the storage size, and then we'll use the Redis operator to create the Redis service for this Harbor.
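[Editor's note] The two cache styles above might look roughly like this; the field names are assumptions, not the exact CRD schema:

```yaml
# Illustrative: external Redis — point Harbor at an existing service.
spec:
  redis:
    kind: external
    secretName: my-redis-conn   # Secret with the address and credentials
---
# Illustrative: in-cluster Redis — created for you via the Redis operator.
spec:
  redis:
    kind: inCluster
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
    storageClass: standard
    storageSize: 10Gi
```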
B: It's a similar case for the database and the storage. You can use an external database if you want, and of course you can use an in-cluster PostgreSQL, which will create a high-availability PostgreSQL cluster for your Harbor. The storage is nearly the same: you can use external storage such as S3, Swift, or any of the storage drivers supported by the registry; of course you can also use in-cluster storage, which is backed by MinIO. For that you just need to define some resource requirements.
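[Editor's note] And a matching hedged sketch for the storage choices (field names again illustrative):

```yaml
# Illustrative: external storage via a registry-supported driver (s3, swift, ...).
spec:
  storage:
    kind: s3
    secretName: my-s3-creds     # Secret with bucket, region, and credentials
---
# Illustrative: in-cluster storage, backed by MinIO and created by the operator.
spec:
  storage:
    kind: inCluster
    resources:
      requests:
        memory: 2Gi
    storageSize: 100Gi
```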
B: Okay, after the quick introduction of the current operators, I'd like to share the roadmap. So far, the Harbor operator only covers Harbor version 1.10.
B: Actually, we have already published Harbor 2.0, and 2.1 will probably be published soon. So the OVHcloud team has already started to work on operator version 1.0. This version will target Harbor version 2.x; they are also refactoring the CRD architecture and making lots of improvements.
B: So far we only have one Harbor CR, and under it everything is driven by Kubernetes native resources. In 1.0, each component will have its own CR; those CRs have their own reconcile process and are owned by the Harbor CR. After that refactoring is done, the cluster operator will also be refactored to rebase on the new Harbor operator version.
B: We have been working on this for several months, and many people have contributed a lot. They are community contributors, and this is actually their part-time work, so I'd like to say thank you very much to everyone for the outstanding contributions.
B: They include contributors from NetEase, from QingCloud, and from VMware.
A: Right, give a shout-out to Pierre as well.
B: Yeah, here I just shared the contributions to the cluster operator. Okay, before the live demo I want to clarify some known issues. Because the operator is targeting Harbor version 1...
B: ...in this Harbor version, Harbor does not support the Redis sentinel mode, so we need to set the Redis replica count to one to avoid that problem. Also, the status of the Harbor CR is not set correctly, so the service status of the Harbor CR will always show as unknown; we'll check that later. Those are the two known issues. Okay, on to the demo.
B
It's
only
on
the
kubernetes
card,
so
I
use
according
to
class,
is
created
by
a
manual
by
the
vmware
tesla
mission
controller.
This
is
our
production.
You
know
kubernetes
management
and
the
platform
subspace
the
platform
I
have
created
to
man
according
to
the
class
on
this
platform.
It's
this
car.
They
have
a
32
core,
and
you
know
126
gb
memory
and
one
controller
node.
We
plus
three
worker
nodes
for
the
overall
demo
flow.
There
has
some,
you
know
a
preparation.
B: First, we need to deploy cert-manager, because we use cert-manager to manage the certificates. We also use an ingress controller to expose the accessible services, so we need to deploy the NGINX ingress controller to the cluster, and we enable the load-balancer mode for the ingress controller.
B: Then we can start to deploy Harbor. I have drafted the deployment manifest YAML here. We set the domain to harbor.goharbor.io and the Notary access point to notary-harbor.goharbor.io. I'll set the replicas to three, include all the Harbor components, including the optional ones like Clair, Notary, and ChartMuseum, and enable all the dependent services in in-cluster mode.
B: And I also have an NGINX ingress controller running here. Everything is healthy. Actually, the related operators have been deployed before, so I'll just re-apply the operator manifests.
B: The manifest creates the namespace, installs the CRDs, creates the ClusterRole and ClusterRoleBinding, and creates the operator deployment. Everything is in this all-in-one YAML, so installing the operator is very easy: you just use kubectl...
B: ...apply with this all-in-one manifest to deploy everything required. Because I have deployed the operator before, this just checks whether anything needs to change. Okay, let's go back to the management portal. We can see that in the harbor-cluster-operator-system namespace the HarborCluster operator is running.
B: I'll create a harbor namespace, create the image pull secret from a Docker Hub account, and use a self-signed issuer to issue my certificate.
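[Editor's note] A self-signed issuer like the one used in the demo is a standard cert-manager resource; this matches the cert-manager Issuer API (the apiVersion depends on the cert-manager release installed, e.g. v1alpha2 on older releases):

```yaml
# A namespace-scoped self-signed issuer for the demo certificates.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: harbor
spec:
  selfSigned: {}
```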
B: For the HarborCluster, I'll use the in-cluster Redis; because of the known issue, its replica count should be set to one. I also set the public URL to harbor.goharbor.io and the Notary URL to notary-harbor.goharbor.io, set the replicas to three, and enable ChartMuseum and Clair.
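[Editor's note] A hedged recap of the demo settings as a manifest; the field names are approximations of the HarborCluster CRD, not the exact schema:

```yaml
# Illustrative recap of the demo configuration.
apiVersion: goharbor.io/v1
kind: HarborCluster
metadata:
  name: harbor
  namespace: harbor
spec:
  publicURL: https://harbor.goharbor.io
  notaryPublicURL: https://notary-harbor.goharbor.io
  replicas: 3
  chartMuseum: {}       # enabled
  clair: {}             # enabled
  redis:
    kind: inCluster
    replicas: 1         # kept at 1: Redis sentinel mode is not supported
                        # by this Harbor version (known issue above)
```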
B: In the harbor namespace you can see there is a HarborCluster CR created, with its version and public URL shown. If you append the -o wide option you can see more information: whether the cache service is ready, whether the database is ready, whether the storage is ready, and whether the Harbor CR is ready. So far none of them is ready.
B: I think there is some network issue, so it's very slow to load; it will come back later. Let me see... I think it's okay, but the application log is loading very slowly. We'll check it in a little while; I think the log will come back.
B: So I think all the functions work as expected. You can use one command to deploy your Harbor cluster in HA mode, so it's very simple and usable.
B: You can get a working Harbor registry service whenever you want. Actually, in this cluster I have deployed two Harbors. Let's look through the portal: the first one is the Harbor I deployed today, a new HarborCluster in the harbor namespace, and here is a sample one.
B: This is also a Harbor. In my cluster there are two Harbors running, and this one also works.
B: Okay, you can see in my cluster I have two Harbors. Depending on your resources, you can of course deploy any number of Harbors in your Kubernetes cluster, and it's very easy: just draft the HarborCluster spec and use kubectl apply. Everything is up, so that's all for the demo scenario.
A: [question inaudible]

B: Yeah, you can deploy it on Kind. Actually, I have tried deploying in a Kind cluster, and I drafted a document; it's in the docs folder under the cluster operator repository. There is an installation guide.
B: That guide targets a Kind cluster; you can follow it to deploy Harbor to your Kind cluster. But the doc is a little out of date, because it does not use the kustomize templates to deploy the operators. It uses the original approach: the Harbor cluster operator and the Harbor operator are deployed from source code, and for the others it refers to different GitHub resources to deploy the operators. But yeah, that's okay.
E: [question inaudible]

B: Yeah, actually we are considering making this more flexible in the future, but I think cert-manager can use your own root CA to sign its certificates, so you do not need to...
E: [question inaudible]

B: Yeah, actually I have to do some research on that part. I think this is a limitation of cert-manager: the cert-manager controller will identify the ingress and generate the certificate, so it seems a little hard to generate a CA certificate that covers multiple domains. That's the current limitation.
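[Editor's note] For reference, cert-manager's Certificate resource can itself request a certificate covering multiple DNS names, so if the limitation is specific to the ingress-annotation flow, an explicitly defined Certificate might be an alternative (untested here; names below are illustrative):

```yaml
# A single certificate covering both demo domains, per the cert-manager
# Certificate API.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: harbor-tls
  namespace: harbor
spec:
  secretName: harbor-tls        # where the signed cert/key pair is stored
  issuerRef:
    name: selfsigned-issuer
  dnsNames:
    - harbor.goharbor.io
    - notary-harbor.goharbor.io
```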
B: Yeah, possibly. Maybe I need to learn more about cert-manager. Currently we use the self-signed issuer, so maybe that's the cause of the limitation; maybe using another issuer would help.
B: I think so far it's active, because there were some commits coming into their repository maybe 10 or 20 days ago. So I think it's still maintained.
B: Yeah, we hide some configuration. As I just mentioned in the spec, I think eventually we need to write a proper document; so far there are just comments in the spec .md file. For example, for the database it says you need to create a secret with such-and-such contents. Let's see.
E: Okay, and as for the replica setting: there is a global replica setting and a separate one for the job service. Is that due to a limitation of the OVH operator, or is it just your view of what you want to expose?
B: Yeah, it's our design consideration for this spec: we want to keep the spec as simple as possible. I know different users may have different requirements; that's our starting point so far. We want to expose only the required subset of the configuration. The job service may need more CPU or memory, so users can separately increase its replica count to a larger value.
A: Yeah, so do I have to use MinIO? Can I use something else?
A: I said, do I have to use MinIO? How do I set something else?
B: So far, for the in-cluster storage, MinIO is a very good choice, because it's S3-compatible, and that is supported by the registry storage driver. Without MinIO, I'm not sure whether there are other choices for the in-cluster storage, but anyway...
B: ...at least you can use external storage. I think the worst case is that you deploy MinIO on your own and point to it in the HarborCluster spec as an external storage service, but that's not the original design purpose of the cluster operator: we want to provide an all-in-one experience for deploying your Harbor.
E: [question inaudible]

B: One thing I want to clarify: I think it's a preview version. Even though we released 0.5, I consider it a preview, because we don't want to do more work on this version; we have newer Harbor versions like 2.0 and 2.1, so we want to put the effort into the new version. But of course the new version will be based on the current work.
F: Steven, I think this is really great. This is one feature we have been looking for, for more than a year.
D: Oh sorry, I have a quick question regarding garbage collection. We are actually running Harbor 1.10.4 and planning to migrate to 2.1 directly, because the garbage collection we need to run requires the registry to be in read-only mode. So, from 1.10.4, can we jump directly to 2.1?
D: Yes, okay.
G: So what is the release plan? Is there any release date yet? We saw that the tech demo is out; is there any date?
F: Yeah. Internally we have been testing: we created a very large store of images to run GC against, and so far we haven't found any issues. But since you are interested, if you have any staging or development environment where you want to try the preview, we can share a build with you, and you can probably do some tests before release time.
G: Of course, before promoting it to production. We were just wondering about the general time frame, because having garbage collection as a blocking factor in production is not great.
F: If you can take the new build, push some of your images into it, and run GC as in your daily work, that would be really helpful for us, because the scenario we created might not have as large a number of images as you have. So yeah, doing that would be really helpful.
D: Yeah, sorry, but I have one more question, regarding the image tag listing. Right now we are using GCS as the backend storage, and whenever we try to view the tags, if there are too many, it takes quite a lot of time to load. Is it something we are doing wrong on the configuration side, or is it already a known issue on the Harbor side?
E: It's fixed in v2, but if you have many, many images the migration may take a while. In v2 we store all this information in Harbor's database; in v1 some of the data is still in the storage, so we need to call the registry API to get the info. That's why the listing is slow: it needs to make a lot of...
E: ...storage API calls to get this info; v2 is much better in this particular case. But during the migration, when Harbor starts up, it will walk through all your images, extract the data, and insert the records into the database. That may take a relatively long time, so just be prepared. And especially if you are deploying Harbor on Kubernetes, you need to adjust the probe settings of the Harbor core pod.
A: Okay, I think we'll end it here if there are no more questions. Thanks, everyone.