A
Can you hear me? Oh, okay, okay, let's get started. Hi everyone, I'm [inaudible] from VMware, and I'm the host of this meeting. [Inaudible] will share the feature progress later. The CVE whitelist is under development, the endpoint for log shipping was done in the last sprint and will be demoed later, and performance test stage one is done. Okay. Next, let's welcome Henry to share the recap of KubeCon Shanghai. Welcome.
B
Hi, yeah, thanks for the introduction. So hello, everyone, I'm Henry, one of the maintainers of Project Harbor. Could you go back a little bit? One page. Wait, yeah. So we had a very wonderful event, KubeCon Shanghai, last week in China, during June 24 to 26.
B
So during this event, we had a few activities for the Harbor project. Most of the Harbor maintainers are in China, and all the Chinese maintainers gathered in Shanghai to discuss the roadmap and all the project-related issues. We had a wonderful Harbor community event together with other CNCF projects, Dragonfly and TiKV, and also at this KubeCon we had a Harbor session to introduce our project.
B
So from the picture you can see Dan Kohn, the CNCF executive director, and also Alan, the general manager of VMware R&D China, and Xiang Li, the CNCF TOC member. They were all there for this event's kickoff. Next.
B
And here are some more pictures from the event. You can see there were about 100 people at this event, and maintainers like Steven Zou, [inaudible], and also Frank Kong shared the Harbor roadmap as well as their use cases at their companies.
B
That's the team we're currently working with. There were about 40 people in the room, and because the room was relatively small, we couldn't take more people; many people were actually blocked at the door outside the room. Okay, next. And here you can see a picture of the VMware booth.
B
We set up some flyers and souvenirs for the Harbor users. Many users came to our booth, talked to us about Project Harbor, and shared a lot of feedback, issues, and suggestions for the project. According to our estimation, more than 100 users came to the booth to talk to us about the project.
B
So, with KubeCon Shanghai just finished, we would like to call for participation in KubeCon San Diego; the CFP ends July 12th. We welcome any Harbor users to share their use cases at that KubeCon.
B
So if you want to submit to the CFP, or if you have any topic ideas, please feel free to talk to us. Maybe we can help you summarize them, and we can help you present or co-present some of the topics at KubeCon San Diego, based on feedback from the community users.
B
We are also thinking about and planning some offline meetups in a few cities in China. Some users of Harbor have actually offered venues in different cities, so we have sponsored venues, and potentially we can have speakers or users in different cities so that we can host some offline meetups.
B
So please let us know which city you are in and whether you want us to hold offline events there; then we'll see whether we can organize some events in your local cities and plan for it. So please do let us know what you are thinking, and hopefully we can have a meetup or some offline event in your city if we receive strong demand.
B
So that's all for this event update. Back to you.
D
Hey Henry, one question. This is Michael. On the first part, around KubeCon San Diego: I know that some of you guys may or may not be able to travel to the United States, but if you have any ideas for good discussions that our Harbor end users would love, please share them. We can have a discussion.
A
Oh sorry, sorry. Next is feature demo time. First, let's welcome Daniel to demo the system-level CVE whitelist.
E
Can you see my screen? Yes? Yeah, okay. So the feature I'm demoing is the CVE whitelist. Currently this is work in progress; we have the basic workflow and the system-level whitelist done, and I'll demo this part. First, let's look at my environment and this project, "library". If I go to the configuration tab, we can set the deployment security options here: I can prevent images with a vulnerability severity of high and above from being deployed.
E
So if I do a docker pull, yep, now I cannot pull it, because the severity of this Redis image is above high. Let's go back to Harbor.
E
As you can see, there are a few high-severity vulnerabilities in this image, and when I try to pull, I cannot, because of the setting in the project. But now we have introduced a whitelist: the system admin, after reviewing these vulnerabilities, can add them to the whitelist so that those vulnerabilities are ignored during docker pull. Let's see what I can do. If I go to the system settings, the UI components are still under development, so we're going to refine them later.
D
There are seven of them, so let me add them one by one. And while you're doing that, Daniel, let me add a little bit more color. Like Daniel said, we're finalizing the user interface here to make it a little bit more usable. The biggest reason why we are building the CVE whitelist is this: imagine you have an application, and a new zero-day CVE comes out, but you depend on a component or a library that doesn't have an update for that CVE. So what do you do if you want to protect yourself? Like Daniel enabled in his project, you're preventing the pull of that image based on that CVE. But that means nobody can deploy your image in production. So you have a Kubernetes cluster, you're using that image, and you can't scale your application. That's also bad.
E
Yeah, thanks Michael for the explanation. Yeah, that's the most valuable use case for this whitelist. Now that I've added all the high-severity vulnerabilities to the whitelist, I save it. If I pull again, it should, yeah, I can do it, because all the high-severity vulnerabilities are ignored.
E
While I'm pulling this image, the vulnerabilities are ignored by Harbor's interceptor because of the existence of the whitelist. And, for example, if I remove one of the vulnerabilities and save again: now it only filters these six vulnerabilities, but there is still one vulnerability that is taken into account, and it has high severity. So if I pull again, it will fail, because the overall vulnerability level is considered high. So yeah, that's the basic workflow of the CVE whitelist. Next we're going to refine the UI components and add the project-level whitelist: we're going to allow the project admin to override the system whitelist or reuse it, so each project can have a different CVE whitelist. Yeah, that's all I want to demo today. Any questions or comments?
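[Editor's note: the pull-time decision demonstrated above, ignore whitelisted CVEs, then compare the worst remaining severity against the project threshold, can be sketched roughly as follows. This is a simplified illustration of the described logic, not Harbor's actual code; the function names, data shapes, and severity ordering are assumptions.]

```python
# Simplified sketch of the CVE-whitelist pull interception described above.
# Not Harbor's real implementation; names and structures are illustrative.
SEVERITY_ORDER = ["negligible", "low", "medium", "high", "critical"]

def pull_allowed(scan_findings, whitelist, threshold):
    """Allow the pull unless any non-whitelisted CVE meets the threshold.

    scan_findings: dict mapping CVE id -> severity string
    whitelist:     set of CVE ids to ignore
    threshold:     block pulls at this severity or above
    """
    limit = SEVERITY_ORDER.index(threshold)
    for cve, severity in scan_findings.items():
        if cve in whitelist:
            continue  # reviewed and explicitly ignored by the system admin
        if SEVERITY_ORDER.index(severity) >= limit:
            return False  # one remaining CVE at/above the threshold blocks the pull
    return True

# Seven high-severity findings, as in the demo (hypothetical CVE ids).
findings = {f"CVE-2019-000{i}": "high" for i in range(7)}

# All seven whitelisted: the pull goes through.
print(pull_allowed(findings, set(findings), "high"))   # True

# Remove one CVE from the whitelist: the pull is blocked again.
partial = set(list(findings)[:6])
print(pull_allowed(findings, partial, "high"))         # False
```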
A
Okay, thank you. Next is the external syslog driver.
F
Oh, okay. In the previous version of Harbor, we could only store the logs on the local machine. Currently we support exporting our logs to an external syslog endpoint. You just enable the external endpoint and put your protocol here, along with the host and the port. Now, if you enable the external endpoint, the logs will be exported to the endpoint that you configure here, and they are no longer stored on your local machine. Let me check the result.
F
Okay, let's see. Oh, this is Log Insight; it's a VMware product, a log storage and analysis system, and it is also syslog compatible. You can see the address is the one I configured.
F
Okay, you can see Log Insight is receiving the logs from Harbor, and you can find all the logs here. You can use Log Insight to configure the components. For example, if you want to check the...
D
Thank you so much. I guess the biggest thing that I want to mention here is that, you know, Log Insight is just one product that we could use for integration with Harbor. Essentially any log analytics or log consumption product that can persist syslog, like ELK, Splunk, or any of the Google, Azure, or AWS services, will work; it will support all of them.
F
Yes, I also tested it with Fluentd, using Elasticsearch as the backend. It also works. I think any syslog-compatible log system can be supported.
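[Editor's note: as an illustration of the host/port/protocol style of syslog shipping described above, the standard Python logging module can emit to an external syslog endpoint. This is a generic sketch, not Harbor's configuration; the address and tag are placeholders.]

```python
# Generic example of shipping logs to an external syslog endpoint over UDP.
# The endpoint address is a placeholder; Harbor configures its endpoint via
# its own settings rather than Python logging.
import logging
import logging.handlers
import socket

handler = logging.handlers.SysLogHandler(
    address=("127.0.0.1", 514),       # external syslog host and port
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0,
    socktype=socket.SOCK_DGRAM,       # UDP; use SOCK_STREAM for TCP syslog
)
handler.setFormatter(logging.Formatter("harbor-demo: %(levelname)s %(message)s"))

logger = logging.getLogger("registry")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# Emitted as a syslog datagram to the configured endpoint.
logger.info("pull request received for library/redis")
```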
A
Okay, let's do it.
H
[Speaking in Chinese.]
D
Hey Henry, or Steven, or Frank: maybe after we're done, could you translate a little bit for everybody else as well? Just the high-level stuff.
E
Yeah, I think Henry can help translate afterwards. Currently Frank is introducing his solution for setting up a Harbor HA deployment at his company.
G
So, Frank, you know, complete one statement and then hold on, pause for a minute.
B
Okay, so here's a quick translation of this setup. Basically, this is an architecture built by Qihoo 360, an internet company in China. They run Harbor in a multi-data-center, high-availability architecture.
B
So basically they have multiple Harbor instances, with S3 as the backend storage for sharing the data. Each registry has a standalone domain name, and they can use an intelligent DNS resolution service to automatically route each user to the right data center. Harbor runs in a master-slave mode.
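[Editor's note: sharing one S3 bucket as the registry backend is what lets multiple Harbor instances serve the same image data. In Harbor this kind of setting is passed through to the underlying Docker Distribution S3 storage driver; a rough harbor.yml-style sketch, with every value a placeholder, might look like this.]

```yaml
# Illustrative fragment: point the registry at shared S3 storage so that
# Harbor instances in different data centers serve the same blobs.
# All values below are placeholders.
storage_service:
  s3:
    accesskey: <ACCESS_KEY>
    secretkey: <SECRET_KEY>
    region: us-west-1
    bucket: harbor-registry
    secure: true
```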
B
They have a Harbor database set up on MySQL, also in master-slave mode, and MySQL replicates the data to the slave instances so that Harbor can be used in the different data centers. Okay.
B
So in the deployment at Qihoo 360, they use their own open-source project, Wayne, to deploy Harbor on Kubernetes. They have internal S3, Redis, and MySQL services, and they use NodePort and a load balancer for the Harbor instances. For their master instance of Harbor, they run the Harbor components, the UI, admin server, job service, and the registry components, in the master data center. For Clair, they did not run multiple instances.
B
So
they
only
have
one
instance
of
clear
for
the
for
the
image
scanning
so
one
when
this
master
node
is
set
up,
they
have
replicated
to
other
slave
instances.
So
in
the
slave
instance
instances
they
have
harbor
ui
admin,
server,
job
service
and
registry
components
for
the
multi
data
center
copy
of
the
instances.
B
Yeah,
this
is
because
they're
using
1.5,
so
that's
why
they
have
my
sql.
H
H
B
Okay,
so
this
is
the
explanation
of
the
harbor
multi-data
center
replication,
so
they,
this
replication
happening,
is
happening
in
a
few
layers.
So
the
first
layer
is
the
image
layer
in
the
image
layer.
They
use
hardware
replication
for
the
synchronization
between
different
instances
and
also
for
the
the
image
backup
and
they
also
have
the
storage
level
backup.
B
So basically, they use S3 mirroring tools to replicate the blob data across sites. Normally an image consists of blob data plus metadata, so they use S3 mirroring at the storage layer to speed up the replication process, combined with the image-layer replication from Harbor, and they also have the Harbor slave instances for read-only use.
B
So if anyone has any topics or questions, feel free to ask, and if you'd like to speak in Chinese, feel free to do so.
H
[Question asked in Chinese.]
B
So the question was: what's the scale of this setup at Qihoo 360? The answer was that there are about 1,000 Kubernetes worker nodes in the cluster, and the maximum bandwidth is about 100 megabytes per second, if I remember correctly, for image downloads. The number of concurrent users is...
D
By the way, I wanted to mention one thing: for all the recordings from our Harbor community meetings, instead of going to the meeting schedule, we created a channel on YouTube. There's a link from the same page, so there's a YouTube channel that has all the Harbor-related presentations.
F
Excuse me, this is Austin, and I have a question. Can you hear me?
F
Okay. So actually I posted a question in the Slack channel. It's not about version 1.9: with the 1.8 release, Harbor is already supposed to support OpenID Connect authentication. But I would like to know, do you have any plan to enhance the group support? Because currently group-based access control is only available with LDAP authentication. So I would like to know whether, in the future, you will support OpenID Connect together with group-based access control.
E
Yeah, yeah, I saw you opened an issue on GitHub. You work for Samsung? Okay, okay, nice to meet you. I think we are doing some refactoring in that part, but for group support we probably don't have bandwidth in 1.9; we'll see for 1.10, probably. Okay.