From YouTube: Harbor Community Meeting 20200115 - Americas Time zone
A
All right, hello everybody, and welcome to the CNCF Harbor meeting. Be aware that this is a recorded meeting, so please adhere to the CNCF code of conduct. All right, so it's been a while since the last time we met, and I wanted to update everybody on a few of the different features and things that Harbor is working on, make sure we give you an opportunity to ask questions and address any of your concerns, and do some follow-ups on what has happened in the last month.
A
Well, Harbor 1.10 shipped. That was a huge release from our end; it added the pluggable scanner support. So now we have DoSec, we have Trivy from Aqua, as well as Anchore Enterprise and Engine, which are able to supplement the scanning that's built in with Clair and give you more options to scan your artifacts in Harbor.
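A pluggable scanner plugs into Harbor through a small adapter service. As a rough illustration only, the endpoints below follow the general shape of Harbor's scanner adapter spec (metadata, scan, report) but are not checked against any particular version, and run_scan is a hypothetical placeholder:

    # Minimal sketch of a Harbor pluggable-scanner adapter service.
    # Endpoint paths and payload fields are illustrative, not authoritative.
    from uuid import uuid4
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    reports = {}  # scan id -> finished report, in memory for the sketch

    @app.get("/api/v1/metadata")
    def metadata():
        # Tells Harbor who the scanner is and which mime types it handles.
        return jsonify({
            "scanner": {"name": "example-scanner", "vendor": "example", "version": "0.1.0"},
            "capabilities": [{
                "consumes_mime_types": ["application/vnd.oci.image.manifest.v1+json"],
                "produces_mime_types": ["application/vnd.security.vulnerability.report; version=1.1"],
            }],
        })

    @app.post("/api/v1/scan")
    def scan():
        req = request.get_json()              # registry endpoint + artifact to scan
        scan_id = str(uuid4())
        reports[scan_id] = run_scan(req)      # hypothetical helper doing the real scan
        return jsonify({"id": scan_id}), 202  # Harbor polls for the report by id

    @app.get("/api/v1/scan/<scan_id>/report")
    def report(scan_id):
        return jsonify(reports[scan_id])

    def run_scan(req):
        # Placeholder: a real adapter pulls the artifact and scans its layers.
        return {"vulnerabilities": []}

    if __name__ == "__main__":
        app.run(port=8080)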
A
We also published the Harbor 1.10 blog post. If you go to goharbor.io, and I'll paste a link to our chat right now as well, you can see the blog post we issued to document the 1.10 release and talk about some of the major features: everything from the pluggable scanners to the immutable tag capabilities that we added, some of the OIDC group membership work, as well as the limited guest role. So if you want to learn more, please look at the blog post.
A
Our docs are also getting a complete revamp, so we're going to have brand new documentation on our website very soon as well. It should be easier for everyone to search and find content on Harbor. Any questions on 1.10? All right. Our plan is to ship a 1.10 patch that will include some bug fixes for things found by the community, as well as by some of our internal testing, that didn't make it into 1.10.
A
That's supposed to go out in the first half of February, so the 1.10 patch will come out then, and whenever we ship a patch we also have an update to the Helm chart, so all the things you're expecting to see from a Harbor release will always be there. All right, in terms of what's coming up next: I'm not sure how many of you are super engaged in our community, but we had created a document called the feature hopper for 1.11, which was later renamed to Harbor 2.0.
A
So the next release of Harbor will be a major release. I pasted the link to the document that outlines some of the big features we're adding into 2.0. Not everything in this list is going to make it in; just like any other project in the open source community, we started with a prioritized backlog of items we want to build, we costed them, we identified the architecture and the high-level features of each of them, and then we started executing.
A
Very likely we will implement the OCI registry support as well as the garbage collection feature; those are part of the candidate features, and anything else that can slip through the cracks and get in, we'll look at as well. But let me talk a little bit at a high level about what those features are. The OCI registry support essentially makes Harbor OCI compliant. That means Harbor will be able to support the push and pull of any OCI artifact.
A
What are some of the OCI artifacts? Helm charts, container images, OPA bundles, operators, and potentially CNAB bundles as well. So you'll be able to push and pull all of these, but, more importantly, the entire ecosystem of capabilities that Harbor provides would also work: you'll be able to apply quotas, retention policies, replication, RBAC and authorization, robot accounts, all of the Harbor capabilities that you're used to being able to use.
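To make "push and pull of any OCI artifact" concrete, here is a rough sketch of what a client does at the wire level against an OCI-compliant registry: upload blobs, then put a manifest whose media types describe the artifact. The host, repository, and media types are invented for illustration, and authentication is omitted:

    # Sketch of pushing a one-layer OCI artifact via the distribution API.
    # The artifact type is carried entirely by media types, which is why the
    # same flow works for charts, OPA bundles, CNABs, and container images.
    import hashlib, json
    from urllib.parse import urljoin
    import requests

    REGISTRY = "https://harbor.example.com"  # illustrative host
    REPO = "library/demo"                    # illustrative repository

    def push_blob(data: bytes) -> str:
        digest = "sha256:" + hashlib.sha256(data).hexdigest()
        # Open an upload session, then complete it with a single PUT.
        start = requests.post(f"{REGISTRY}/v2/{REPO}/blobs/uploads/")
        loc = urljoin(REGISTRY, start.headers["Location"])
        sep = "&" if "?" in loc else "?"
        requests.put(f"{loc}{sep}digest={digest}", data=data,
                     headers={"Content-Type": "application/octet-stream"}).raise_for_status()
        return digest

    payload = b'{"hello": "oci"}'
    config = b"{}"
    manifest = {
        "schemaVersion": 2,
        "config": {"mediaType": "application/vnd.example.demo.config.v1+json",
                   "digest": push_blob(config), "size": len(config)},
        "layers": [{"mediaType": "application/vnd.example.demo.layer.v1+json",
                    "digest": push_blob(payload), "size": len(payload)}],
    }
    requests.put(f"{REGISTRY}/v2/{REPO}/manifests/v1",
                 data=json.dumps(manifest),
                 headers={"Content-Type": "application/vnd.oci.image.manifest.v1+json"}
                 ).raise_for_status()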
A
We're also working with the bigger OCI team, and Harbor has a seat at the table in terms of attending meetings and being able to consume the contributions coming out of that community. That work is right now on track to be delivered with a tentative release date at or near KubeCon Europe, which is basically the last week of March or the first week of April. So that's what we're looking at. The second big feature of this release is garbage collection.
A
So today, when you run garbage collection, especially for our customers that have terabytes and terabytes of storage on Harbor, what you're seeing is that it basically blocks all the push and pull operations for a significant amount of time. We have a customer that has 29 terabytes on Harbor, and GC could take pretty much a day, so that's not an acceptable option. Part of the work we're doing right now is trying to identify a better way, as part of this reorganization for OCI as well.
A
It gave us the opportunity to figure out how to make garbage collection better, so it would be non-blocking. What that means is that any time you run garbage collection on Harbor against your registry, you'll still be able to push, pull, and delete images. Those operations will be unaffected, so your registry will no longer be in read-only mode.
A
I think that's probably one of the biggest blockers and biggest issues folks are having as they're scaling up their Harbor instances; right now I think we have more than 10 customers that we know of with terabytes of storage on Harbor. Those are the two biggest items, but feel free to peruse the feature hopper, and if there's a specific feature you feel really passionate about because it affects you as an organization, go and add a comment to the actual GitHub issue.
A
If you feel like you have the necessary skills and want to come and help us develop the feature, please ping me privately on Slack or post a message in the GitHub issue; we would love to see more and more contributions from everyone.
A
The last two things I wanted to mention: we do have a GitHub issue that we've been advertising to the community, encouraging everybody to come in and tell us their use case for Harbor, what size of environment they have, and how they're using it. We want to encourage all of you to come and do the same, and I'm pasting a link to that issue here in the chat window.
A
It's just a simple GitHub issue, and you'll see that many customers have come in and put in their case and said: hey, I'm using Harbor, I have X amount of storage and this many images across this many projects.
A
This is the release I'm running, and so on. More than a few folks have also found it an opportunity to come and tell us: yes, I'm using Harbor at high scale, but I'm also encountering some problems. That's a good way to get our attention, because it's a highly visible thread that everybody from product managers to leaders in engineering is looking at. So where folks have had issues, whether UI-based or otherwise, it's a good way to get them attention and put them at the top of the stack.
A
Those are the updates I had for today. Now I want to open it up to everyone, see if anybody has any issues, and basically give you all an opportunity to ask questions.
B
I've got one question, Michael, regarding the garbage collection.
C
Hi, my name is Chris Davis; you'll find me on GitHub as cdchris12. I work for amazee.io.
C
We are a web hosting platform. We're currently tied to OpenShift as our back end, but we're looking to make the shift to native Kubernetes, and this all started with: we need a container registry for that. Then we realized Harbor does security scanning, and that got us into Clair, which then begot Anchore, and it looks like we'll be looking into Anchore Enterprise soon too. So we dove off the deep end on that one.
A
Very nice, that's awesome, Chris. Since you're going to be looking into Anchore and Anchore Enterprise, we did a webinar with Anchore not too long ago. Let me see if I can find it really quickly; we basically structured the webinar around some of the key capabilities that Anchore has with Harbor. I'll find it soon enough and paste it here.
A
So as you're looking into this, we can basically put you together with the right folks, and feel free to ask us questions.
C
I do have one question about the need for our use of Anchore, if you don't mind me asking now. One of our issues with Harbor is that it's really difficult to see a bird's-eye view of all of your containers. When you're looking for security scan information, you have to view it per project, per repository, per container, and it's so hard to do that. That's the reason we're looking at going with Anchore: their enterprise product provides a UI for all of the scans that it's doing.
A
Yeah, I mean, that's a very valid point. I think it does make sense for us to create some high-level bird's-eye view, more around how many vulnerabilities are found across your projects, so when you're looking at the high-level projects view you don't have to dig into each project. Would you mind creating a feature request issue on our GitHub page to document that? That way the request is attributed to you as well, but it will also give other users an opportunity to come and chime in and say: hey, Chris is right, I need this too. And then, you know, the more folks that come in and talk about it, the better.
A
And it looks like I couldn't hear anybody earlier while I was speaking, so if someone had any questions, I wasn't ignoring you; I just couldn't hear you. I realized I couldn't hear anybody when Jonas pinged me and asked: can you not hear me? And I was like, no. So Jonas, if I missed what you said earlier, please go ahead now.
B
Yay, the wonders of the internet. Yeah, so I just had a question regarding garbage collection. I'm looking at the doc here; we're talking about allowing manual execution as well as scheduled tasking. Have we looked into having something like: if it has been garbage for more than X amount of days, then delete it, otherwise leave it?
A
Yeah, so the garbage collection that's happening here is more at the layer level, the Docker layer. What happens is, as images are getting pushed and pulled, several layers that may have been in use at some point are no longer in use. It's very hard for us to know when something stopped being in use; think of this as a referential table, and it's huge, especially if you have a terabyte-plus ecosystem of containers. So what happens with garbage collection?
A
You can dictate how often you want to run it, daily, every week, or on demand, and when that GC happens, we'll go and find the references that no longer have a pointer to them, and that's basically what we delete. It's very hard for us to know how long something has not been referenced, because that's not information Docker provides us; we just know whether anybody is referencing it or not, and we clean it up accordingly.

B
Gotcha.
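Conceptually, this is a mark-and-sweep over the blob store: manifests are the roots, and any blob that no manifest points at is garbage. A toy sketch of the idea, with invented data shapes rather than Harbor's actual implementation:

    # Toy mark-and-sweep over registry blobs, illustrating the "referential
    # table" idea: manifests reference config/layer digests, and any stored
    # blob with no remaining reference is eligible for deletion.

    def collect_garbage(manifests: dict, blobs: set) -> set:
        """manifests: tag -> parsed manifest dict; blobs: all stored digests."""
        referenced = set()
        for manifest in manifests.values():
            referenced.add(manifest["config"]["digest"])
            for layer in manifest["layers"]:
                referenced.add(layer["digest"])
        return blobs - referenced  # the sweep set

    manifests = {
        "library/demo:v1": {
            "config": {"digest": "sha256:aaa"},
            "layers": [{"digest": "sha256:bbb"}, {"digest": "sha256:ccc"}],
        },
    }
    stored = {"sha256:aaa", "sha256:bbb", "sha256:ccc", "sha256:old"}
    print(collect_garbage(manifests, stored))  # {'sha256:old'}

One reason this traditionally requires read-only mode is that a push racing with the sweep can re-reference a blob that is about to be deleted, which is the kind of case the non-blocking work has to handle.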
A
Perfect, that's awesome. I'm sure Zak will be ecstatic about that. So we have a question from Tiana; I realize he has an issue with his microphone, a real one, not on my end in this case. He's asking: does the garbage collection take into account users doing pull by digest? So the GC in this case is not user-based; it's more on the back end. It's looking at all the images and the references they have across all the layers.
A
Whether users are pulling images or not doesn't affect that. However, we have a different feature called retention, and retention of images does take dates into account, like what Jonas mentioned earlier. If you're looking at some of the retention policies Harbor has, those provide capabilities to limit the number of images you have in a project based on either pulls or the date an image was pushed, and I'll send you guys a link to that in a second as well.
A
Retention policies are all about: hey, I want a compliance policy that says don't keep images that are super old, so that users don't install them, because those images are likely to have vulnerabilities and really old code. So I want to clean up, especially if you have a CI/CD system that keeps pushing an image every day, every hour, or at whatever interval.
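As a concrete illustration of the kind of rule being described, a "keep the newest N tags, delete anything older than D days" policy can be expressed as pure logic like the sketch below. The field names are invented; Harbor's real retention engine is configured through its UI and API rather than code like this:

    # Sketch of a retention rule: always keep the newest `keep_latest` tags,
    # and among the rest delete anything pushed more than `max_age_days` ago.
    from datetime import datetime, timedelta, timezone

    def tags_to_delete(tags, keep_latest=5, max_age_days=30):
        """tags: list of dicts like {"name": "v1", "pushed": datetime}."""
        ordered = sorted(tags, key=lambda t: t["pushed"], reverse=True)
        cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
        # Everything past the first `keep_latest` entries is a candidate;
        # of those, only the sufficiently old tags are actually deleted.
        return [t["name"] for t in ordered[keep_latest:] if t["pushed"] < cutoff]

    now = datetime.now(timezone.utc)
    nightlies = [{"name": f"build-{i}", "pushed": now - timedelta(days=i)}
                 for i in range(60)]
    print(len(tags_to_delete(nightlies)))  # 29: builds 31..59 exceed both limits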
A
Interesting. So that's something we are discussing today. The batteries-included, out-of-the-box scanner is Clair. Obviously we as a community feel that Trivy, the one Aqua has been publishing, is probably a better scanner, and we're working on a plan to see if it is possible to incorporate Trivy as a secondary built-in scanner in Harbor.
A
Clair will not go away, at least not in the immediate future, because we will need to support backwards compatibility and upgrades. But if, in the future, Trivy becomes more and more prevalent in the community, then it might become the default scanner. We're a ways away from that, but it's something we're looking into. I believe there should be an item for that in the feature hopper, but if there isn't, I'll make sure we add it.

C
Great, thanks.
A
Very cool. All right, so that covers Chris. John, do you want to go next? Tell us a little bit about where you work and how you're using Harbor. And you're muted; you really are muted.
D
Hey, it wasn't just me that messed up today. Double, double mute; first time I've done that on a call. I work for VMware, newly part of VMware from the Pivotal acquisition.
D
I work with a group of people who are developing the customer-facing registry for distributing all of the containerized products that we're working on. An interesting, more recent motivation: anyone who's familiar with PAS, the Pivotal Application Service, there's a new version of it that's going to be built on top of Kubernetes. So prior to the last couple of weeks, we had about four or five internal teams that were going to be distributing some containerized software.
D
Then we move it to a different project within Harbor that's public facing. So things like the granular access control features, and a lot of the manipulations we're doing at the project level, are because of the way we have to manage access control, and we actually haven't found anything too significant in the way of limitations there. We're using the retag feature; it's super fast to be able to move things from one project to the other, so we just use the API for that. As we've started interviewing some customers about consuming this software, though, we've started to find some interesting things, one of which is that they're having difficulty understanding how to set up their own registry.
D
In a way that's going to meet their broader enterprise requirements, that is; they don't just want a version of it running on, you know, minikube on a developer's laptop or something like that. They want a real production version. And the other part of this is they want to quickly know when there's a new version of the software available.
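For reference, the retag-between-projects flow John describes is a single call against Harbor's API. The sketch below assumes a Harbor 1.x-style retag endpoint; the host, credentials, and repository names are invented, so check the exact path against the API docs for your Harbor version:

    # Sketch: promote a vetted image from an internal project to the
    # public-facing one via Harbor's retag API. All names are made up.
    import requests

    HARBOR = "https://harbor.example.com"
    AUTH = ("admin", "secret")  # placeholder credentials

    def retag(src_image: str, dest_repo: str, dest_tag: str):
        # Harbor 1.x exposes retag as a POST on the destination repository.
        resp = requests.post(
            f"{HARBOR}/api/repositories/{dest_repo}/tags",
            json={"tag": dest_tag, "src_image": src_image, "override": False},
            auth=AUTH,
        )
        resp.raise_for_status()

    retag("staging/app:1.2.3", "public/app", "1.2.3")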
D
So right now, if you're using a Concourse pipeline, the Pivotal CI/CD system, it's pinging against Harbor every minute for every container image that you could possibly pull down, which in and of itself isn't really a super massive demand. But when every single one of the customers is pinging for every single one of the container images that are possibly hosted, it adds up. So we're starting to investigate the possibility of what we're calling an edge registry deployment to actually manage that for the customer.
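That per-minute "ping" is typically just a manifest HEAD request to see whether a tag's digest changed, which is cheap individually but adds up across customers and images, exactly as described. A sketch of the check (host and repository invented; a real registry would also require a bearer token):

    # Sketch: detect a new image version by comparing the manifest digest
    # reported by the registry with the last digest we acted on.
    import requests

    def current_digest(registry: str, repo: str, tag: str) -> str:
        resp = requests.head(
            f"https://{registry}/v2/{repo}/manifests/{tag}",
            headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
        )
        resp.raise_for_status()
        return resp.headers["Docker-Content-Digest"]

    last_seen = None  # would be persisted between polls in a real CI resource
    digest = current_digest("harbor.example.com", "public/app", "latest")
    if digest != last_seen:
        print("new version available:", digest)  # trigger the pipeline
        last_seen = digest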
D
It would be something that we could install for them, or help them install; more of a productized version, a little bit more advanced than the Harbor tile that some people may be familiar with. And the main point of that is not so much the Harbor piece, but to manage maybe some of the replication as well, to configure it so that they can pull all the images down more efficiently rather than having some other external mechanism.
A
Yeah, this is awesome, John, and I think we have a few things down the pipeline that might help you. We'd also love for you guys to come and contribute. Are you part of Andrew Stringer's team or extended team?

D
Yes, yep.

A
Okay, cool; just making sure. I've been talking to him, so that's awesome. Love the fact that you're wearing a Harbor t-shirt, or jersey, or hoodie; that's good. So there's a couple of features that Harbor will introduce in the near future.
A
They won't be part of the next release, but we've already started thinking about them. The first one is a proxy cache capability. As you're talking about edge deployments, being able to preheat those deployments, or have them live locally on the edge but only bring in the images they need, the way the Docker registry proxy cache works, might be very advantageous. We actually tested proxy cache as part of our OCI work, and it seems that most of it is working. The only things we don't have are, obviously, the test pipelines around it and the ability to configure it on the local Harbor, so very likely 2.1 will have proxy cache capabilities; at the fundamental level it's working.
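The pull-through idea itself is simple: serve a blob locally when it is already cached, and fetch it from the upstream registry and store it on a miss. A toy sketch, with the upstream URL and cache directory invented:

    # Toy pull-through cache: only the first request for a blob touches the
    # upstream registry; every later request is served from local storage.
    from pathlib import Path
    import requests

    UPSTREAM = "https://harbor.example.com"    # illustrative upstream registry
    CACHE = Path("/var/cache/registry-blobs")  # illustrative cache location

    def get_blob(repo: str, digest: str) -> bytes:
        local = CACHE / digest.replace(":", "_")
        if local.exists():          # cache hit: no upstream traffic at all
            return local.read_bytes()
        resp = requests.get(f"{UPSTREAM}/v2/{repo}/blobs/{digest}")
        resp.raise_for_status()
        CACHE.mkdir(parents=True, exist_ok=True)
        local.write_bytes(resp.content)  # miss: fetch once, cache for next time
        return resp.content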
A
The second thing is, you mentioned being able to install a Harbor instance locally at the edge or anywhere else, similar to how the Harbor tile works in PKS. We're actually working on an operator pattern for Harbor that will basically take care of everything.
A
Installation, configuration, upgrades, and very likely installing some of the dependent components like Postgres and Redis. It will be a fully fledged solution that enables you to deploy Harbor using a simple YAML file. We're looking for folks to come and contribute on that, so if it seems interesting and aligns with your scenarios, you guys can even take the lead and we can help out on the back end.
D
Yeah, I think those both sound super interesting, and we're in the process of negotiating. Our team would love to contribute as much as possible to Harbor, but we just have to figure out the resource constraints and staffing and things like that. So hopefully it's a little easier now.
A
Yep, absolutely. Cool, all right. And Tiana, I don't know if you want to type out any of the stuff about where you work and how you're using Harbor; it's up to you, we'll give you a minute if you want to.
A
Oh, Tiana, you're from InfoSiftr. I know you guys; you're based in Las Vegas. InfoSiftr basically has a small appliance that is a full cloud native appliance box.
A
I met some of the folks from your organization, so nice to see you here, Tiana. They're looking into what kind of registries they can distribute as part of their appliance, and Harbor is one of the options, so hopefully we can make that happen. All right, we have about two minutes left. If someone has a pressing question or anything, I'll give them the mic for two minutes, but otherwise we'll call it a day.
A
All right, everybody, have a great rest of your day. See you in two weeks; actually, I will cancel that meeting because it's the Chinese New Year, the Spring Festival, so most folks from our team are out. We'll reconvene and see you guys in a month, and hopefully we'll have a lot more updates about Harbor then. Bye, everybody.