B
Okay, great, thank you. So hello everyone, thank you for joining the Harbor community conference call. I'm the host of this meeting; I'm Steven from the Harbor team. So here is the agenda for today's meeting. We have three topics. For the first one, I'd like to invite our product manager, Alex Xu, to talk more about the next release, version 2.0. After that, I'd like to invite Min Zhang from the VMware Harbor team to talk about GitHub Actions to drive the Harbor CI process. Last, we'd like to invite Henry Zhang to talk about a Harbor use case: the Harbor plus FATE use-case sharing. Okay.
B
So for the first topic, I'd like to invite Alex Xu to talk more about the next release. Alex, are you there?
A
Okay, so I'm just going to go down the list of the things that we're working on for Harbor 2.0. So the latest release is Harbor 1.10, and essentially we are going to do quite a bit of refactoring to support OCI, so we're calling the next release Harbor 2.0.
A
So the first thing is OCI support, and that's something we've talked about in the last couple of meetings. We're finally fully committed to completing the OCI support, or at least a big chunk of it. The idea is for Harbor to host, or to extend support for, new cloud native artifact types, such as operators, bundles, RPMs, etc., beyond just Helm charts and container images, which is what we have right now. And this is done by following a common set of industry-favored APIs called OCI, the Open Container Initiative.
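To make the "new artifact types" point concrete: an OCI manifest is plain JSON, and a registry that implements the OCI APIs can host any artifact, with clients telling them apart by media type. The sketch below builds a manifest of that shape; the media types shown are the ones the Helm project registered for charts in OCI registries, and the digests are placeholders.

```python
import json

# Sketch: an OCI manifest is plain JSON; the registry stores any artifact
# type and clients distinguish them by mediaType. The Helm media types
# below are the ones registered for charts; digests here are placeholders.
def oci_manifest(config_type, config_digest, config_size,
                 layer_type, layer_digest, layer_size):
    return {
        "schemaVersion": 2,
        "config": {
            "mediaType": config_type,
            "digest": config_digest,
            "size": config_size,
        },
        "layers": [{
            "mediaType": layer_type,
            "digest": layer_digest,
            "size": layer_size,
        }],
    }

chart = oci_manifest(
    "application/vnd.cncf.helm.config.v1+json", "sha256:aaa...", 138,
    "application/vnd.cncf.helm.chart.content.v1.tar+gzip", "sha256:bbb...", 4096,
)
print(json.dumps(chart, indent=2))
```

Swapping in a different pair of media types would describe an operator bundle or an RPM the same way, which is why one set of APIs covers them all.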
A
So currently, there's no way to delete a single image tag without deleting all the other image tags sharing that same digest. It's a limitation of the Docker registry that we're consuming, because it only implements the deletion of a digest.
A
So the way we're resolving this is we're going to add a new data structure called a Harbor tag, that's basically one-to-one with the actual Docker image tag, and the user on the front end, through the Harbor UI or, you know, curling into the Harbor registry, only operates on the Harbor tag. So you're not actually operating on the Docker image tag. To the user, from the user's POV, it's indistinguishable from working on the existing image tags, but it's much safer.
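A toy model of the tag indirection described above (the class and method names are my own, not Harbor's actual schema): tags become first-class records pointing at a digest, so deleting one tag no longer removes a digest that other tags still reference.

```python
# Toy model of tag indirection (names are illustrative, not Harbor's schema):
# each tag is its own record pointing at a digest, so deleting one tag
# leaves other tags that share the same digest untouched.
class Repository:
    def __init__(self):
        self.tags = {}      # tag name -> digest
        self.blobs = set()  # digests actually stored

    def push(self, tag, digest):
        self.blobs.add(digest)
        self.tags[tag] = digest

    def delete_tag(self, tag):
        digest = self.tags.pop(tag)
        # The digest is only removed once no tag references it any more.
        if digest not in self.tags.values():
            self.blobs.discard(digest)

repo = Repository()
repo.push("v1.0", "sha256:abc")
repo.push("latest", "sha256:abc")  # same image digest, second tag
repo.delete_tag("latest")
print(repo.tags)   # {'v1.0': 'sha256:abc'} -- v1.0 survives
print(repo.blobs)  # {'sha256:abc'}
```

With the raw Docker registry API, the equivalent of `delete_tag("latest")` would have deleted the digest and taken `v1.0` down with it.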
A
So when you run garbage collection, it doesn't take the instance down, so it's still online, but it does put Harbor into read-only mode, which means that you can't push images to it, and, depending on the size of your image set, that can take quite a while, if you have terabytes of data or more. So the goal is to deliver a non-blocking GC, which means that users can push and delete images from the Harbor registry while garbage collection is ongoing. So it runs silently in the background, and there's no possibility of image corruption.
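One common way to make GC safe without a read-only mode is snapshot-then-recheck: collect the unreferenced blobs, then verify each one again right before deleting it, so a concurrent push that re-references a blob rescues it. The sketch below shows that idea only; it is simplified and not Harbor's actual algorithm.

```python
# Sketch of a non-blocking GC idea (simplified, not Harbor's algorithm):
# snapshot the unreferenced blobs, then re-check each one at deletion
# time so a concurrent push that re-references a blob rescues it.
class Registry:
    def __init__(self):
        self.refs = {}  # blob digest -> set of tags referencing it

    def push(self, tag, digest):
        self.refs.setdefault(digest, set()).add(tag)

    def delete_tag(self, tag, digest):
        self.refs[digest].discard(tag)

    def gc(self):
        # Phase 1: snapshot candidates (blobs with no referencing tags).
        candidates = [d for d, tags in self.refs.items() if not tags]
        removed = []
        for d in candidates:
            # Phase 2: re-check right before deleting -- a push that landed
            # while GC was running may have re-referenced this blob.
            if not self.refs[d]:
                del self.refs[d]
                removed.append(d)
        return removed

reg = Registry()
reg.push("v1", "sha256:aaa")
reg.delete_tag("v1", "sha256:aaa")  # aaa is now unreferenced...
reg.push("v2", "sha256:aaa")        # ...but a concurrent push re-tags it
print(reg.gc())  # [] -- the re-check keeps the blob alive
```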
A
Next, P2P. Dragonfly, you know, is a popular P2P distribution tool, as well as Uber's Kraken. So these are tools that we're looking to leverage to better distribute images from within the Harbor registry to lots of other Docker hosts. So you can sort of picture this hub-and-spoke model where the registry sits in the center, and you want to use something like Dragonfly or Kraken to concurrently push images outward. So they have a much more efficient, more elegant peer-to-peer solution, instead of just adding load balancers everywhere, and they also have mechanisms to pre-heat an image that they know is going to get pulled, and different policies that can trigger promotion to their supervisor node.
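A back-of-the-envelope sketch of why the hub-and-spoke P2P model scales: with direct pulls, N hosts mean N registry downloads; with tree-style fan-out (the Dragonfly/Kraken style), the registry seeds one peer and coverage roughly doubles each round. The doubling model below is purely illustrative, not either tool's actual protocol.

```python
# Illustrative fan-out model (not Dragonfly's or Kraken's real protocol):
# the registry seeds one peer; each round, every peer that already holds
# the image serves one more peer, so coverage doubles -- O(log2 N) rounds
# instead of N separate pulls hammering the central registry.
def rounds_to_distribute(num_hosts):
    have = 1      # registry seeds the first peer
    rounds = 0
    while have < num_hosts:
        have *= 2  # everyone who has the image serves one new peer
        rounds += 1
    return rounds

print(rounds_to_distribute(1000))  # 10 rounds to cover 1000 hosts
```

Pre-heating fits the same picture: pushing the image to the seed peers before the pull storm starts means round zero is already done when the hosts ask for it.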
A
So this is something we're working with the community on. You know, we have external maintainers and other people in the community that are very interested, and we have a working group on this that meets every Thursday. If someone is interested in this, please let me know; we have one tomorrow night, actually, 9:00 pm Beijing time. Next, webhooks. This is just extending the current webhooks, adding additional webhooks for some of the features that we added in 1.9 and 1.10. So webhooks are associated with tag retention, tag immutability, things like that.
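As a sketch of what consuming these notifications looks like: Harbor POSTs a JSON event to your endpoint, and you branch on the event type. The exact field names and type strings below are illustrative assumptions, not a guaranteed schema; check the Harbor docs for the payload your version actually sends.

```python
import json

# Sketch of a receiver for Harbor webhook notifications. The payload shape
# (type / occur_at / event_data) and the type strings are illustrative --
# verify them against the webhook docs for your Harbor version.
def handle_webhook(body: str) -> str:
    event = json.loads(body)
    kind = event.get("type", "UNKNOWN")
    if kind == "PUSH_ARTIFACT":
        return "artifact pushed: trigger deployment"
    if kind == "DELETE_ARTIFACT":
        return "artifact deleted: trigger cleanup"
    return f"unhandled event: {kind}"

payload = json.dumps({
    "type": "PUSH_ARTIFACT",
    "occur_at": 1576000000,
    "event_data": {"repository": {"name": "library/nginx"}},
})
print(handle_webhook(payload))  # artifact pushed: trigger deployment
```

Tag-retention or tag-immutability events would arrive the same way, just with their own type strings.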
A
And finally, you know, Docker Notary is tackling image signing for a multi-registry world. So this means persisting the image signature across images in different registries, and so we're following the progress on the Docker Notary side to sort of drive our requirements on that end. So hopefully, you know, we'll work with the Docker guys to get something done in the next release or two.
A
So that's just sort of a high-level overview of some of the larger features that we're working on for Harbor 2.0. There's a project board on the GitHub repo, which is a great way to keep track of the things that we're working on. It has links to PRDs, links to the actual epics, and then the breakdowns of the stories.
B
Okay, thank you, Alex, for the great sharing. I think v2.0 is another major milestone for the Harbor project, and if you have questions you can keep them, and you can ask in the Q&A session. So, let's move on to the next topic.
C
Sorry, perfect. Okay, as you can see, this is the Travis CI we are currently using. When you submit a PR, it will be checked by the Travis service, and we will do some checking, like UT tests, API tests and also UI tests. Travis is good, but there are some challenges for us, in that we are using the free version of Travis. So the main challenge is we can only run two concurrent jobs in Travis.
C
So
when
there
are
many
developers
submit
code
and
they
have
to
wait
quite
a
long
time
to
wait,
the
the
travis
checking
them,
then
his
pr
can
be
merged.
So
recently,
github
provide
the
new
the
new
actions,
which
is
also
target
for
the
ci
ci
check.
C
Use
I
have
to
find
us
some
pr
that
carry
into
the
master
sorry
at
the
moment.
C
And this PR is what I have done to introduce GitHub Actions. As you can see, we just need to create a YAML file in the .github/workflows folder, like this, and the first field will define the environment, the host image on which the CI scripts will run, and the jobs here are five jobs: UT test, API tests and others. And also, in each job...
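As a hedged sketch of the file being described, a workflow of that shape might look like the following. The file name, job names and script paths are illustrative, not Harbor's actual CI configuration; the key point is that each job runs concurrently on its own hosted VM.

```yaml
# .github/workflows/ci.yml -- illustrative, not Harbor's actual workflow.
# Each job below runs concurrently on its own hosted VM.
name: CI
on:
  pull_request:
  push:
    branches: [master]

jobs:
  ut-test:
    runs-on: ubuntu-latest        # the host image the CI scripts run on
    steps:
      - uses: actions/checkout@v1
      - run: ./tests/run-ut.sh    # illustrative script path
  api-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - run: ./tests/run-api.sh   # illustrative script path
```

Adding a third, fourth or fifth job is just another entry under `jobs:`, which is what lifts the two-concurrent-job ceiling described above.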
C
As you can see, the time GitHub Actions takes is similar to Travis, which is about 20 minutes, but the biggest benefit is that we can run more concurrent jobs. And another interesting thing is that GitHub Actions also allows the use of self-hosted VMs. That means we can set up a VM in our company intranet, register the VM with GitHub, and it will listen for GitHub's job requests and run the jobs locally.
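Once such a VM is registered as a self-hosted runner (roughly, under the repository's Actions runner settings), routing a job to it is a one-line change in the workflow. The job name and script path below are illustrative:

```yaml
# Illustrative: route a job to a registered self-hosted runner by label.
jobs:
  nightly-e2e:
    runs-on: [self-hosted, linux]  # runs on the VM inside the intranet
    steps:
      - uses: actions/checkout@v1
      - run: ./tests/run-e2e.sh    # illustrative script path
```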
B
Okay, the time is a bit tight, so I think you can close your topic. I think the biggest advantage of this is that we can increase the concurrent jobs and, at the same time, improve productivity when there are many PRs submitted to the Harbor repo, right?
D
Yes, I'm here. Okay.
B
I think the time is too tight; we only have nine minutes.
D
Sure, yeah, cool. So I'm going to quickly talk about some of our integration of Harbor with another open source project called FATE. It's in the AI area, called federated learning.
D
So let me give a quick intro about some of the background of this project. In the AI industry right now, the success of AI lies in two important things: one is the computing power, the other is the big data that can be trained into a useful model. So the computing power can be relatively easy to achieve, because you can pay some money and you can buy good AI chips like GPUs or TPUs, or you can run some in the cloud. But the other factor...
D
The other element, big data, is hard to get. So nowadays, many people think the most valuable resource is no longer oil, but data. The big companies like Google, Facebook and Microsoft have a large amount of data that can be used for training into a good model and be used to create many useful or interesting AI applications.
D
So it's hard for the users to get the necessary data to train a useful data model. There's a statistic saying that about 80% of the enterprises...
And furthermore, there are regulations everywhere that prevent the use of consumer data, like the GDPR that was implemented last year in Europe; and also in California, in the U.S., there's the CCPA, a consumer privacy act, that will become effective next year, just two weeks away; and also in China there's a cybersecurity law that is protecting the consumer data.
D
So all these kinds of regulations, and the reality that enterprises have isolated data, prevent the application of machine learning to build a powerful or interesting data model. So we have a dilemma here: we'd like to build a good application based on a large amount of data, but we cannot do it, because we have the regulations and law enforcement here, and also we have other business reasons that prevent us from doing so. So what do we do?
D
Then we have a technology in the AI, or machine learning, domain called federated learning, to help us utilize data from different organizations while still being able to preserve the privacy of the data. Google released two papers in 2017 talking about some techniques of federated learning.
The idea is like, for example, in the left-hand example here: the Android phone devices have an agent collecting some of the user data and performing local training, and when they have a local model, they aggregate the model and send it to a central place in the cloud to build a globally optimized model. And then this model is pushed back to the device side for better performance, better accuracy of the prediction.
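The aggregation step described above is commonly implemented as federated averaging (FedAvg, from the Google papers): the central place combines the locally trained models as a weighted average by each client's sample count, so raw data never leaves the device. A minimal sketch:

```python
# Minimal federated-averaging (FedAvg) sketch: the server never sees raw
# data, only each client's locally trained weights and sample count.
def fed_avg(client_updates):
    """client_updates: list of (weights, num_samples) tuples."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    return [
        sum(w[i] * n for w, n in client_updates) / total
        for i in range(dim)
    ]

# Three phones train locally on different amounts of data:
updates = [([1.0, 2.0], 10), ([3.0, 4.0], 30), ([5.0, 6.0], 60)]
global_model = fed_avg(updates)
print(global_model)  # [4.0, 5.0]
```

The weighting by sample count is what makes the globally optimized model reflect clients with more data, while the "push back to the device" step is simply distributing `global_model` again.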
D
During this process, the users' data are encrypted and sent to the central place in the cloud, and they can cancel out with each other when they are added together. So in this way, the privacy of the user data is protected, and Google can still build a useful model for the Android phone users.
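The "cancel out when added together" remark refers to secure aggregation: each pair of clients agrees on a random mask that one adds and the other subtracts, so every individual update looks random to the server, yet the masks vanish in the sum. The toy sketch below shows only that cancellation; real protocols also use key agreement and handle client dropouts.

```python
import random

# Toy secure-aggregation sketch: clients i < j share a pairwise mask m;
# client i adds it and client j subtracts it. Each masked update looks
# random to the server, but the masks cancel exactly in the sum.
def masked_updates(values, seed=0):
    rng = random.Random(seed)
    n = len(values)
    masked = list(values)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.uniform(-100, 100)
            masked[i] += m
            masked[j] -= m
    return masked

true_values = [0.5, 1.5, 2.0]   # each client's private update
masked = masked_updates(true_values)
print(masked)                    # individually meaningless to the server
print(sum(true_values))          # 4.0 -- and sum(masked) matches it
```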
So right now, there have been quite a few years of development in the academic world as well as in the industry, and there are some use cases that are quite interesting. Here, I mean in China, we have a few interesting use cases in finance and other areas. For example, in this case, the central bank and a commercial bank are working together to find out the anti-money-laundering activity in the transactions.
D
Normally, the central bank and the commercial bank cannot share data with each other, but they can use the federated learning technology to protect the privacy of the data and still be able to build an interesting data model that can be used with machine learning algorithms to achieve a prediction model that can detect the anti-money-laundering activity.
So if you want to build some federated machine learning applications, you need some powerful tools for the training. So there's an open source project called FATE, Federated AI Technology Enabler, that can help us do so. FATE was initiated by WeBank, the first internet bank in China, and was donated to the Linux Foundation.
D
Early
this
june
this
year
they
have
been
the
first
industry
level,
federated
learning
framework
to
work
on
the
to
create
to
provide
out-of-box
usability
for
the
federated
learning,
and
our
team
here
in
china
has
been
working
with
them
for
some
of
the
old
some
of
the
deployment
using
container
and
cognitive
technologies
and
as
well
as
other
community
members,
are
working
on
this
in
this
area
too.
D
The basic idea here is very simple: each of the parties, that is, each participating party, can have its own set of FATE deployments, and they work with the exchange component together to exchange the data and build a collectively built model.
D
And then, inside each of the parties, they will have many components, as shown here, about 10 components, and some of them can be scaled out. So we see that it's very complicated for them to build or run FATE. So we containerized all the components, and then they no longer need to compile from the source code, and we can use Docker Compose to deploy the FATE platform.
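To illustrate the shape of such a Compose deployment for a single party: the service names and images below are placeholders I chose for illustration, not KubeFATE's actual compose file; the point is that the ~10 components become declared services instead of hand-compiled binaries.

```yaml
# Illustrative shape of a Compose deployment for one FATE party --
# service names and images are placeholders, not KubeFATE's real file.
version: "3"
services:
  proxy:                       # cross-party communication endpoint (placeholder)
    image: example/fate-proxy:1.x
    ports: ["9370:9370"]
  flow:                        # job scheduling / pipeline service (placeholder)
    image: example/fate-flow:1.x
    depends_on: [mysql]
  mysql:                       # metadata store backing the party
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
```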
D
And then we also have the Helm chart for the Kubernetes deployment of these FATE components. So by using this cloud native technology, they save a lot of time in deployment: normally, they need one day or two to deploy the FATE system; by using the container technology, they only use about one to two hours. That's a lot of saving in time.
D
Also, we use Harbor here to manage the container images and Helm charts. So the container images can be replicated from Docker Hub on a regular basis, so that when there's a new version going to Docker Hub, it can be automatically replicated to the local repository. And also, by setting up a Harbor registry locally, they can have offline images, and also have the Helm charts imported into Harbor, so that they can have offline access for the deployment and also for the operation of their local clusters.
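As a sketch of the replication setup described, a pull-based replication policy in Harbor boils down to a JSON body like the one built below. The field names follow my recollection of Harbor's replication API and the repository filter is illustrative; verify both against the API docs for your Harbor version.

```python
import json

# Builds the JSON body for a pull-based replication policy: pull images
# from a registered Docker Hub endpoint into the local Harbor on a cron
# schedule. Field names are from memory of Harbor's replication API --
# verify against your version's API reference before use.
def replication_policy(name, registry_id, cron):
    return {
        "name": name,
        "src_registry": {"id": registry_id},  # the Docker Hub endpoint
        "filters": [{"type": "name", "value": "federatedai/**"}],
        "trigger": {
            "type": "scheduled",
            "trigger_settings": {"cron": cron},
        },
        "enabled": True,
    }

policy = replication_policy("pull-fate-images", 1, "0 0 3 * * *")
print(json.dumps(policy, indent=2))
```

POSTing a body of this shape to the replication-policies endpoint is what makes new upstream versions land in the local repository automatically.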
D
So that reduces the complexity of the operations. This part of the code is right now in a project called KubeFATE, alongside the open source project called FATE. Both of them are open source under the FederatedAI namespace, and if you're interested, you can go take a look and see how we use Harbor there for the image management and Helm chart management.
B
Okay, great, thank you. Thank you, Henry, for the great sharing. I think this is the first use case we have seen in a machine learning or AI scenario. And we don't have enough time for you guys to ask questions, so if you have any question, you can post it to the WeChat channel or Slack channel, about the Harbor 2.0 version, about the GitHub Actions, or about Harbor and FATE. So thank you, everyone, for joining. Due to the time limitation, we'll close this meeting today. Thank you guys.