From YouTube: Harbor Community Meeting 20190925 - Americas Time zone
A
Okay, hello everybody, and welcome to the CNCF Harbor community meetup; it's the 25th of September. This is a recorded conversation — I record these meetings — so please make sure you adhere to the CNCF code of conduct. We have a few things for you today, and Josh has joined us to talk a little bit about ChartMuseum and Harbor, so I look forward to having his expertise as he tells us about some of the work he's doing there.
A
Let me know if you can see this. Yep, looks good. All right, cool. There are a few things I want to put on the agenda, and then we'll open it up for everyone to bring any topics or concerns they have. The first one is that we released version 1.9 on September 19th, so that was last Thursday.
A
Again, congratulations to the team and the folks that worked really hard on this. I know Nathan is really excited that we have tag retention policies now, for the first time ever in Harbor, so thank you for spearheading that and lighting a fire in the community for that work; I'm really glad to see it.
A
If you think about our release and some of the features we have, it's a really comprehensive release. For the first time ever, enterprises can deploy Harbor within their environment and really set guardrails around how developers or business units within the organization consume Harbor: everything from webhook support for CI/CD integration, to tag retention, to quota policies, to syslog endpoint log forwarding.
A
All of those are key features that let enterprises put the guardrails in and let developers use Harbor as a self-service, so it's a really amazing release by the community. We will probably have a webinar that guides everyone through some of the features of 1.9, so stay tuned for that; I think our tentative date is October 15th.
A
The blog that's linked here is available, so you can read about these features at a high level, and if you have any questions, feel free to chat with us on the CNCF list or Slack. Any questions on 1.9 from anybody?
A
All right, cool. The next thing, coincidentally, happened to be on the same day: we had to issue our first CVE advisory for Harbor. For all releases between 1.7.0 and 1.7.5, and between 1.8.0 and 1.8.2, Harbor had a critical vulnerability that basically allowed any malicious user to create a brand-new account in Harbor and make it an administrator. So this is a significant elevation of privilege.
A
The workaround is to disable self-registration in Harbor, and we talk about that in the advisory as well, but we also patched all the releases of Harbor: there's a patch in 1.7.6, 1.8.3 and 1.9.0, and all three of them fix this vulnerability. This was the first time we went through that process, and it was a significant step forward for Harbor as a project, because it allowed us not only to figure out how to deal with the CVE, but also to create and finalize our Harbor security policy.
A
So here we have our security release process. We created this process by mimicking what other CNCF projects have done; if you look at projects like Envoy and Kubernetes, they have a very similar release process. It dictates what should happen if someone internally or externally finds a vulnerability in Harbor: how should they report it?
A
What is the process under which we're going to patch, release and disclose? And, very importantly, what is the embargo policy? With security fixes, we don't want the entire world to know what kind of issues we have in Harbor until we've had the chance to remediate them and guard our users.
A
The set of maintainers from Harbor will be aware of the security fixes ahead of time, because they're going to be members of the CNCF security list for Harbor. Everybody else that's a distributor of Harbor will be under an embargo policy and will be part of the CNCF Harbor distributors list: they will be made aware of the CVE, understand what the embargo policy is, and we'll communicate with them to identify the right date for disclosure. That's the date on which we'll disclose the CVE, create the advisory, and let everyone know how to remediate it, whether by patching or by applying a workaround. So feel free to read through this security policy.
A
If you have any questions or concerns, Slack the maintainer list or send us an email; we'd love to hear from you. The advisory that we have today lives in the goharbor organization, under the Harbor repo, under advisories. There is one published advisory here; you can click on it, and it describes the affected versions and the vulnerability.
A
It has a link to disabling self-registration, so you can protect your environment if you cannot upgrade immediately, and then it includes the fixed versions for this issue. Like I mentioned, 1.7.6, 1.8.3 and 1.9 do not have this problem.
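For anyone who cannot upgrade right away, the workaround amounts to turning off open self-registration. In the 1.7-era installer this was a line in `harbor.cfg`; the exact key name and location vary by version (newer releases expose the same toggle in the admin UI under Configuration > Authentication), so treat this as an illustrative sketch rather than the definitive setting:

```ini
; harbor.cfg (pre-1.8-style installer config) -- illustrative sketch only
; Disable open self-registration so anonymous visitors cannot create accounts.
self_registration = off
```

After changing the file, re-run the installer's prepare/install step so the setting takes effect; on newer versions just flip the checkbox in the web console instead.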
A
I don't want to say that having a CVE was a good thing, because it really wasn't — it has significant impact on our customers. But the fact that we had to go through this for the first time allowed us to build a robust process around it that mimics what other CNCF projects have done, and that's a good thing. Now we know how to deal with future CVEs.
A
All right, cool, continuing.
B
One question was: was that security issue actually found by a user, or just by someone on the team?
A
Very interesting. So, here's how the actual CVE was found — here's how we approached it at Harbor.
A
Back in the June timeframe, we decided we wanted to do two security penetration tests of Harbor. The first one is paid for by CNCF, and that's going to be done in the second week of October by an external vendor, Cure53. In the meantime, because that was getting too close to the 1.10 release timeframe for Harbor — which we're planning to release at or near KubeCon US — we decided to have a second penetration test that we paid for as VMware, VMware being a huge contributor to Harbor.
A
We got an internal team and basically budgeted them to do security and penetration testing of Harbor. They identified the bug. Then, when we made a check-in for the bug — to backport the fix and build the patched releases of Harbor — there was a security researcher with a crawler that spotted the check-in, contacted us, created a blog post about it, and there was a little bit of a media storm around it.
A
But at the end of the day, our team identified the bug, we fixed it, and we reacted very quickly. Obviously, someone external to the Harbor team blogging about it did hurt us a little bit. In the future, we've talked about doing what Kubernetes does: obfuscate our check-ins a little bit so they don't indicate that this is a security fix, and then, after we've made the disclosure and the advisories, go back and edit the details in where they belong.
A
Because history has shown us that people are crawling our fixes to find CVEs. Yeah, it gets clicks. Thanks. So, right — on to 1.10, where we have a few minor features that we're tracking. If you're involved in the project, you know we're trying to create immutable repositories, we're trying to create a limited guest account, and we're obviously fixing a lot of the security vulnerabilities that may be found as part of the penetration testing.
A
That will happen in the second week of October. But the biggest and most major feature is a pluggable scanner that will allow both Aqua and Anchore to bring their own scanners into Harbor, and allow a customer who is the project owner to dictate what scanner they want to use. They can use the built-in Clair scanner, or they can use the Aqua or Anchore scanners as a means to validate and run compliance checks against their images. There's a demo of that.
A
If you go to our YouTube playlist, the recording from this morning's meeting should be there within the day, and it has the demo of the pluggable scanner. The CNCF webinar that we have on October 15th will also include that demo. It's a joint effort between the Harbor team, Aqua and Anchore — really, really good stuff there.
A
Next, we have Josh, who's also on the line, who did a demo and basically talked a little bit about ChartMuseum and Harbor. I want to mention a couple of things first. One of our goals moving forward is to have better integration with Bitnami and Kubeapps.
A
It is already possible to host some of your Helm charts in Harbor, because Harbor does use ChartMuseum, but we want to make that better and more seamless for end users that are using Kubeapps. Josh got a big head start on this, and he has some work here with Helm, so I'll turn the mic over to Josh so he can tell us about what he's done so far — and as a core maintainer of Harbor, I'm very interested in this direction.
C
Can you see my screen? Okay, yes, absolutely. Thank you. So yeah, like he was saying, I did this presentation earlier today, and I think I tried to stuff too much into it, so I've revised it a little bit. If you want more of the history on this, check the recording of the earlier call; there's also a full-length version of this talk.
C
That's one that I gave at Helm Summit, so it should be coming out too, and I'll try to share it in the Harbor Slack room when I get it. Basically, I want to talk about the push that has been happening on the Helm side toward starting to put things into registries.
C
Just a little bit about me: I'm involved with the Helm project via creating the ChartMuseum project. Shortly after I released it, it was donated to Helm as an official sub-project, and it's now used as a back end for a lot of different services — Harbor being one of them, using ChartMuseum as a back end to serve Helm repositories.
C
So where I think we're going in the cloud native ecosystem is that, sometime next year —
C
I think even now, we'll see that the OCI distribution API will become the standard for storing things that are used by cloud native tools. What kind of things am I talking about? Things like Helm charts, OPA bundles, and CNAB, a project that defines its own artifact type. Things like these will start to be shared using OCI.
C
I didn't put a lot of detail into OCI — there's kind of a lot to unpack, and I'll share my full slides, which have more history — but basically, in 2013 Docker came out, and they had their own specification for running containers in their own way. The next year, rkt came out from CoreOS, and they had a competing, more open standard for doing these things.
C
Then, finally, the next year, they agreed: let's create the Open Container Initiative, which will be the open specification for running containers. Several years after that, in 2018, they released another specification called the distribution spec. The distribution spec is essentially the API from the Docker registry project (docker/distribution): whenever you're doing a docker push or a docker pull, you're using this API spec, which has now been donated to OCI as their distribution specification.
C
So now, when we're talking about registries, we're not calling them Docker registries; we're calling them OCI registries. Here's a visual of what that is exactly: the red lines represent API calls to these different registries, such as Amazon ECR, Docker Hub, and so on. Every time you do a docker push or a docker pull, you're using this API that's known as OCI distribution.
C
There's been a timeline over the last year that has encouraged people to start using that API to store, specifically, Helm charts, but there are talks about all sorts of different artifact types. It's really become a question of: this is a generalized API for storage, not just something for container images — even though, technically, OCI has "container" in the name, so it's a little bit misleading. And that has actually spawned a new project. I didn't mention this this morning.
C
Oh, okay — I just heard someone; my mic might have come on, sorry. So basically, that's still kind of under discussion. There's a library called ORAS, which stands for "OCI Registry As Storage", and it's simply an opinionated way to share arbitrary artifacts over OCI distribution.
C
This hasn't been fully agreed on by the OCI maintainers group, but I think it's headed in the right direction, and this project is closely following the opencontainers artifacts repo to learn the right way to do these things. And the Helm project — the Helm 3 client — has taken ORAS as a dependency.
C
So now you can start to do things like logging into different registries using Docker-style logins. You can download charts from registries in much the same way that you would pull a Docker image. You can save charts to a local cache, which I'll show in a demo in a second, and you can also upload charts using helm chart push. Keep in mind, again, this is all at a very, very early stage.
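As a rough sketch of that Helm 3 workflow at the time — the OCI support was experimental and gated behind an environment variable, and the registry host and chart names below are placeholders, so check the Helm docs for your version before relying on these exact subcommands:

```shell
# Enable Helm 3's experimental OCI support
export HELM_EXPERIMENTAL_OCI=1

# Docker-style login against an OCI registry (placeholder host)
helm registry login harbor.example.com

# Save a chart directory into the local cache under a registry reference
helm chart save ./mychart harbor.example.com/library/mychart:0.1.0

# Push the cached chart, or pull it back on another machine
helm chart push harbor.example.com/library/mychart:0.1.0
helm chart pull harbor.example.com/library/mychart:0.1.0
```

These commands require a running Helm 3 client and a reachable registry, so they are shown here as a walkthrough rather than something to copy verbatim.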
C
So the question, then, is: Helm 3 supports this type of functionality, but where are the registries that support it? Right now, because it's still under discussion with OCI, that's a bit of a tricky question. The public hubs like Docker Hub and Amazon ECR are concerned about things like security and user experience, and they're not necessarily interested in running some experimental scheme for hosting Helm charts until OCI gives the thumbs-up on the way to go about it.
C
The one exception is Azure Container Registry; they're sort of behind the ORAS effort and all these types of things, so they have this support early on.
C
That'll run a registry on localhost:5000.
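The command being referenced here (shown on a slide, not captured in the transcript) is presumably the stock docker/distribution image; a minimal way to get a local registry for experimenting would be something like:

```shell
# Run the reference docker/distribution registry locally on port 5000
docker run -d --name registry -p 5000:5000 registry:2
```

This needs a running Docker daemon, so it's shown as an illustration rather than a tested command.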
And then — which is really the point of this talk — the other option is Harbor, which is focusing all these efforts on creating a robust registry for the cloud native ecosystem. I really see that the direction all of this should go is getting this type of support as a first-class citizen in Harbor, to push along this initiative of using a standard for saving different cloud native artifacts in a standard way. This will then push on the cloud vendors to say:
C
Okay, you know, maybe this is a good idea — Harbor supports it. Right now there are kind of a lot of moving parts, and I want to get more time to work with the maintainers of the Harbor project, but I have a fork here — it's under blood-orange-io/harbor — and it basically modifies a validation method to allow OCI images alongside Docker images.
C
But if you do this, it shows some issues in the UI — I opened a ticket earlier, and I'll go over what that looks like. You basically just need to replace the core image, which is this one, built on top of master at around version 1.9. Then, for some reason, with the Helm chart I needed to disable Notary because of a certificate issue — that's probably just an issue with the Helm chart — and then you have to provide a TLS certificate in some way.
C
This is an example of using cert-manager and Let's Encrypt to host a Harbor registry at harbor.mysite.io. I put this into the CNCF Harbor Slack room, but here is a link with some commands that you can use to install Helm 3, deploy Harbor with this support, and then start to push charts.
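Roughly, the chart values involved in such a deployment look like the following. The key names follow my recollection of the goharbor/harbor-helm chart, so double-check them against the chart's own values.yaml; the hostname and issuer name are placeholders:

```yaml
# values.yaml sketch for deploying Harbor behind cert-manager / Let's Encrypt
expose:
  type: ingress
  ingress:
    hosts:
      core: harbor.mysite.io
    annotations:
      # Ask cert-manager to issue the TLS certificate (issuer name is a placeholder)
      cert-manager.io/cluster-issuer: letsencrypt-prod
externalURL: https://harbor.mysite.io

# Notary was disabled in this experiment because of a certificate issue
notary:
  enabled: false
```

You would pass this with `helm install -f values.yaml` against the Harbor chart; the point is only to show which knobs (ingress host, external URL, Notary, TLS issuance) the speaker is referring to.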
C
I have here several different Helm charts from the stable repository. What I can do with Helm 3 — let's take an example here, ambassador — is save it like a normal Helm chart.
C
So now what I've done is save this chart to the local cache. We're trying to do some things with Helm where we're actually more strict about the types of tags, so we can enforce semantic versioning and things like that; that would be a rule set on top of OCI.
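As an illustration of what "a rule set on top of OCI" could mean — this is my own sketch, not Helm's actual implementation — a client could simply refuse chart tags that don't parse as SemVer before pushing:

```shell
# Sketch: accept only SemVer-shaped tags; Helm's real rules may differ.
semver_ok() {
  echo "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+(-[0-9A-Za-z.-]+)?(\+[0-9A-Za-z.-]+)?$'
}

semver_ok "0.1.0"  && echo "0.1.0: ok"        # prints "0.1.0: ok"
semver_ok "latest" || echo "latest: rejected"  # prints "latest: rejected"
```

A check like this would sit in front of the push call, so that arbitrary tags such as `latest` never reach the registry.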
C
— like figuring out whether you have the correct scope to do things, and the different OCI API requests that you're making to upload these things. There does seem to be some issue with the Harbor UI where, after the first chart that I upload, others don't appear.
A
Very, very cool. Thank you, Josh — this is great work, by the way. One of the things I wanted to reiterate today is that we at Harbor are very interested in OCI and in the artifact management that's happening as part of the OCI discussions. Our goal is eventually to be the cloud native store for all these assets, and right now that could include CNAB bundles, which is one of the things that Microsoft cares about.
A
Operators, OPA — there's a whole bunch of items in the cloud native world that developers may or may not need, and having a repository for all of them that follows RBAC, that follows compliance rules, and everything else that Harbor brings to the table, is going to be very important. So our goal is to follow that direction, and we're probably going to have Alex, one of the guys from the team, work with you moving forward to figure out how Harbor can be a better citizen in this ecosystem.
A
Unfortunately, 1.10 is completely locked and full right now, but as we start the planning for 1.11, I think that's the right time for us to have the discussion with you and the team and figure out how to be a better citizen in this space. — Yeah, excellent. That sounds great, that sounds great. — Cool. I don't have anything else on the agenda at the minute. Does anybody have a question or concern, or anything else?
D
Just to clarify: given that the Docker registry supports OCI artifacts, does Harbor also support OCI already, and we're just adding support in the UI? Or do we need —
C
Yeah, sort of. Basically, there's a service called harbor-core, and harbor-core proxies the requests — someone can correct me if I'm wrong — down to distribution. It adds extra layers of validation and of extracting data, to enable some of the cool things that Harbor does. Distribution itself, if you run it right now with no extra configuration, does support all of this, yeah.
A
Underneath Harbor is a Docker distribution, right? So if the version that includes this support is there, it will support it, but obviously, to fully leverage this functionality, we need to do work in Harbor to enable it. So today you can just go and push whatever Docker distribution supports into Harbor — Harbor just doesn't know about it yet.
A
Cool, excellent. Well, everybody have a great day; see you in a couple of weeks. And Nathan, you and I will probably be in the maintainers meeting that's happening within the next few days. — Yep, I marked my time slots for that already. — I saw that, thank you. All right, everybody have a good day. Bye, see ya.