A: The first update I want to give: we're progressing on our donation effort, slowly but well. We opened up two individual PRs, and I'm going to open them up here so you can see them.
The first PR is against SIG Storage. Basically, this is a checklist of all the things that SIG Storage has been validating to make sure that Harbor is conformant and ready to graduate in the CNCF.
A: If you go to GitHub, under cncf/sig-storage, it's PR 52. You'll notice some of the details here, including links to all the documentation and, more importantly, the answers that we as Harbor maintainers and the Harbor community are validating and committing to, namely that Harbor has met the graduation criteria. We actually have one of the most important documents here: the tech due diligence document. If you open it up, it has a wealth of information on Harbor; it's something like 30 pages covering everything, starting from our community.
A: So if you wanted to check whether Harbor is ready to graduate within the CNCF, this is basically what we're using to drive that. We also have a slide deck that goes through some of the most important details in four or five slides, in a much more compressed format. But if you want to know a little bit more about Harbor, even to identify who the technical lead for an area is, this document has it all, and we actually push some of that information to our pages as well.
A: So if you go here, one of the things it talks about is who our maintainers are. One of the things we've added in Harbor lately is the ability to identify the maintainers and, more importantly, what each one's area of responsibility is.
A: So we have these PRs for both SIG Runtime and SIG Storage. Our goal has always been to see if we can graduate by the time of KubeCon Europe, but given that the CNCF, the TOC and others are super busy, we're having a hard time getting on their schedule. And getting on their schedule is the easy part; actually getting them to do the work is harder. So I'm hopeful and optimistic that we might get it within the next six weeks, but it's a goal, not a commitment.
A: The main features of that release are going to be three distinct things. Number one: we are updating Harbor to be OCI compliant, so you'll be able to push any OCI-compliant artifact into Harbor. That includes CNAB bundles, OPA bundles, Helm charts, container images and anything else that might end up being OCI. You'll be able to push them to and pull them from Harbor, and they'll adhere to policies like quota policy and retention policy; webhooks will work; everything in Harbor will work.
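For reference, this is a minimal sketch of what pushing a generic OCI artifact looks like at the registry-API level, following the OCI distribution spec. The host, project, repository, credentials and media types are all illustrative, and a real Harbor deployment typically fronts these endpoints with token auth rather than plain basic auth.

```python
import hashlib
import json
import requests

REGISTRY = "https://harbor.example.com"   # illustrative host
REPO = "library/my-artifact"              # hypothetical project/repo
AUTH = ("admin", "password")              # illustrative credentials

def push_blob(data: bytes) -> dict:
    """Upload one blob, return the descriptor fields for the manifest."""
    digest = "sha256:" + hashlib.sha256(data).hexdigest()
    # Open an upload session, then complete it in a single PUT.
    r = requests.post(f"{REGISTRY}/v2/{REPO}/blobs/uploads/", auth=AUTH)
    r.raise_for_status()
    location = r.headers["Location"]
    if location.startswith("/"):          # registries may return a relative URL
        location = REGISTRY + location
    sep = "&" if "?" in location else "?"
    r = requests.put(f"{location}{sep}digest={digest}", data=data, auth=AUTH,
                     headers={"Content-Type": "application/octet-stream"})
    r.raise_for_status()
    return {"digest": digest, "size": len(data)}

config = push_blob(b"{}")                 # empty JSON config blob
layer = push_blob(b"hello, harbor")       # the artifact payload

# A custom config media type is what marks this as a non-image artifact.
manifest = {
    "schemaVersion": 2,
    "config": {"mediaType": "application/vnd.example.config.v1+json", **config},
    "layers": [{"mediaType": "application/vnd.example.data.v1+txt", **layer}],
}
r = requests.put(f"{REGISTRY}/v2/{REPO}/manifests/v1",
                 data=json.dumps(manifest), auth=AUTH,
                 headers={"Content-Type": "application/vnd.oci.image.manifest.v1+json"})
r.raise_for_status()
print("pushed", REPO + ":v1")
```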
A: The only thing that would be tricky is scanning. If that OCI artifact has the capability to be scanned by one of our scanners, it will work; otherwise it won't. In conjunction with the OCI compliance, we're also revamping key components of the Harbor user interface to make it a little bit more modern. We're actually even introducing dark mode, which is really cool.
A: I'm personally excited about that. The UI is going to have a better look and feel, a little bit more modern, and more adjacent to the new work we're doing on OCI. As a side note, we're also revamping the Harbor website: goharbor.io is going to be updated before KubeCon, hopefully within the next couple of weeks, with two major enhancements. First, the look and feel is going to be tremendously changed.
A: It's going to be a lot more modern, easy to find things in, easy to navigate. But the second one, and this is the most important: our documentation will be built into the website. So if you want to find out something about Harbor, you'll be able to find it on the website; you'll be able to search, filter and identify things. That's going to improve the experience for folks who are looking to find something, and for new folks getting onboarded into Harbor.
A: That's all the first item. The second item is improving garbage collection. Garbage collection today in Harbor requires downtime, so if you have users with multi-terabyte stores in Harbor, garbage collection could take hours, which means hours of downtime for your operations.
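For context, this is roughly how a garbage-collection run is kicked off through Harbor's REST API today. The endpoint shapes are my reading of the Harbor v2 API, and the host and credentials are illustrative; check your version's Swagger docs before relying on them.

```python
import requests

HARBOR = "https://harbor.example.com/api/v2.0"  # illustrative host
AUTH = ("admin", "password")                    # illustrative credentials

# Trigger a one-off GC run ("Manual" schedule type runs immediately).
resp = requests.post(f"{HARBOR}/system/gc/schedule",
                     json={"schedule": {"type": "Manual"}}, auth=AUTH)
resp.raise_for_status()

# Poll GC history to watch progress; while a run is in flight, pre-2.x
# Harbor needs the registry offline or read-only, which is the downtime
# being described above.
history = requests.get(f"{HARBOR}/system/gc", auth=AUTH).json()
print(history[0]["job_status"] if history else "no GC runs yet")
```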
A: What some customers have resorted to doing is keeping two Harbor instances identical with either mirroring or replication. They do garbage collection on one while they switch the load balancer to the other, and when they're done, they do it vice versa. Obviously that works, but it's so expensive, hence the value of being able to do it live.
A: We need to keep it there for upgrades, as well as other situations where customers want to keep their existing configuration, but Trivy will be available as the built-in scanner for vulnerability static analysis. It's a much better scanner and much better maintained, and we have a great relationship with Aqua in the security space; they helped us deliver the pluggable scanning.
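As a hedged sketch of how that pluggable scanning is driven, here is what asking Harbor to scan one artifact and reading the overview back looks like through the v2 API as I understand it; the project, repo, reference and credentials are illustrative.

```python
import requests

HARBOR = "https://harbor.example.com/api/v2.0"  # illustrative host
AUTH = ("admin", "password")                    # illustrative credentials
PROJECT, REPO = "library", "nginx"              # hypothetical names
REF = "v1"                                      # artifact tag or digest

base = f"{HARBOR}/projects/{PROJECT}/repositories/{REPO}/artifacts/{REF}"

# Kick off the scan; Harbor queues it as an asynchronous job for the
# configured scanner (Trivy as the built-in one in 2.0).
requests.post(f"{base}/scan", auth=AUTH).raise_for_status()

# Later, fetch the artifact with its scan overview attached.
art = requests.get(base, params={"with_scan_overview": "true"},
                   auth=AUTH).json()
print(art.get("scan_overview", "scan still running"))
```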
A: From a planning perspective, before I deep-dive into the UI: one of the things we're working on is the themes of the next release of Harbor. This is going to be important because we have to generate some collateral for our KubeCon presentations. The next release of Harbor will be focused almost entirely, for lack of a better word, on distribution and image locality.
A: Think of the capabilities that will enable that: proxy cache, which we actually have almost working. We are able to get Harbor to support proxy caching today; we just can't ship it, because it requires quite a bit of testing that we don't have the bandwidth to do in 2.0, but proxy cache will be there. We will also finally finish the work on P2P distribution, either with Dragonfly or Kraken.
A: I don't know yet which of the two we'll end up supporting first, but very likely Dragonfly. And then the third thing we're going to do is get Harbor to be a better citizen in edge deployments, so we're going to add some rate limiting and bandwidth/throughput limitations on three operations.
A: Any questions on what's coming next? Tiana, I know that you and your team... that last feature might be something attractive for you, since you guys work with quite a bit of edge deployments.

C: [response not captured]
A: As with every feature of Harbor, we welcome more and more contributions, so if anybody wants to jump on Harbor and help us deliver some of these features, we welcome new contributors. All right, so next I want to show you guys a little bit of the UI demo. This is a recorded demo that we did earlier today as part of the Asia-timezone presentation for the Harbor community.
A: I don't have access to this environment because it's on a local developer's machine, and when I say early, this is super early; nobody has it except the developers. But I thought it would be important to show you guys a few of the UI screens so you can get an idea of the look and feel of the new Harbor. If you have any questions, please feel free to ask me. At the high level, you start with the same structural interface that we have today in Harbor.
A: You start from a global Harbor, then you have the logical abstraction of projects, and that's the abstraction at which you can enforce everything from policy to static analysis scanning to RBAC. All of that still happens at the project level; that does not change. Under a project, though, you have repositories, and that's where things start changing.
A: So you have the repositories, which are generally what you think of today, but then underneath each repository (for example, a line-of-business application) you get to see, and he'll click here in a second, artifacts instead of tags. In today's Harbor, the logical ladder is project, repo, tag.
A: Now the logical ladder is project, repo, artifact, tag. The reason this is important, as you see in this UI, is that with OCI artifacts the artifact is basically the key entity in this discussion, and the same artifact can be tagged with one or more tags. We're going to improve this UI a little bit, and if you view the presentation later on you'll see we had a lot of questions on this, but the most important thing here is the tagging model.
A: You might tag an artifact both as version 2.3 and also as latest; you might tag it twice. Then, as a new release comes up, the latest tag can keep moving forward, but the version 2.3 tag will stay, and that will be the immutable tag there. Oh, by the way, immutable tags will also work here; they're not going away.
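To make the project, repo, artifact, tag ladder concrete, here is a hedged sketch of listing a repository's artifacts with their tags via the Harbor v2 API (one digest, possibly many tags). The endpoint shape is my reading of the API; names and credentials are illustrative.

```python
import requests

HARBOR = "https://harbor.example.com/api/v2.0"  # illustrative host
AUTH = ("admin", "password")                    # illustrative credentials
PROJECT, REPO = "library", "my-app"             # hypothetical names

resp = requests.get(
    f"{HARBOR}/projects/{PROJECT}/repositories/{REPO}/artifacts",
    params={"with_tag": "true"}, auth=AUTH)
resp.raise_for_status()

for artifact in resp.json():
    tags = [t["name"] for t in (artifact.get("tags") or [])]
    # The digest is the stable identity; "2.3" and "latest" may both
    # point at it, and scans attach to the digest, not the tag.
    print(artifact["digest"][:19], "->", tags or "(untagged)")
```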
A: So in this case you can have the same artifact tagged multiple times, and you scan the artifact ID, not the tag. He'll drill into one of these artifacts here; let me actually push it forward so I can show you. Within that artifact he's showing you the operations like delete, copy and all of these things. And actually, before I go into that...
A: Somewhere here he drilled in, and there it is. When we go into one of the digests for an artifact, you'll be able to see all the details like build history, summary, dependencies and values; scanning will also be here, similar to how you see it in Harbor today. The additional thing is that you're able to add or remove tags. For compliance reasons (we talked about this today on the call) we're going to restrict who can add or remove tags on an image.
A: So that's, at a high level, what the new UI will look like; there are no more screens to be shown here. But there was a lively discussion this morning around everything from whether there is a better way to represent this data, rather than going with the full digest-centric view around artifact IDs. If I'm an end user, and let's take for example the third artifact here, 707f52 means nothing to me.
A: Is there a better way for us to represent that? We're going to look into it. Other competitive products, like Google's registry, follow a similar model. I'm not saying it's the right way.
A: I wouldn't call them wrong, but I'll question their usage a little bit. So we're going to look into a better user experience here.
D: [question not captured]

A: We are getting permission from the different vendors and consortiums that own a lot of the artifact formats, so we can actually use their legally acceptable trademark images; we're working on that. Actually, the CNCF is going to battle alongside us with Docker to be able to use the copyrighted image of this little whale here. So we're basically going through that discussion now.
A: The second thing is that some of these, as you correctly indicated, are files, whereas some of them are folders. So one of the things we might want to do is put a little parenthesis after the name to indicate whether this is a singular artifact or multiple ones. And I believe he actually drilled into one of these at some point in here; finding it will not be easy.
A: That is correct, and there you go, he did click on it eventually. So rather than having different artifacts that correspond to the different architectures, you'll be able to bundle them all into one, and you can see here the differences that exist across the different OS/architecture combinations of those images.
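For reference, the structure that backs this "bundle them all into one" view is an OCI image index: one top-level artifact whose manifest lists per-OS/architecture child images. This is a sketch of that document as a Python dict; the digests and sizes are placeholders.

```python
# Sketch of an OCI image index (multi-arch artifact); values illustrative.
index = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.index.v1+json",
    "manifests": [
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "digest": "sha256:aaaa...",   # placeholder digest
            "size": 1234,
            "platform": {"os": "linux", "architecture": "amd64"},
        },
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "digest": "sha256:bbbb...",   # placeholder digest
            "size": 1234,
            "platform": {"os": "linux", "architecture": "arm64"},
        },
    ],
}
# A client pulls the index by tag; the registry (Harbor) serves the
# child manifest matching the client's platform.
```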
A: And one other thing we're going to work on: within a single repository, the tags need to be unique, so we're going to try to enforce that in Harbor. We don't want you to tag two separate artifacts with the latest tag and then not be able to pull them. So we're going to try to enforce that... not try, we will enforce it.
A: Is the video shared yet? I'll give you the shared link right now. We upload them to YouTube, which usually takes a week or so, but I'll give you guys a link right now; there's no problem with that. Actually, it could take a few days to upload them.
A: All right, cool. I don't want to put you guys on the spot, but who's committed to going to Amsterdam? I know that me and Kenny Coleman definitely are going. Anybody else that's for sure going, beyond the folks that want to escape the cold?
A: Okay, well, hopefully see you there. We have at least two presentations on Harbor, and then we're also signed up for quite a few hours at both the VMware booth and the CNCF booth, where some of the graduated, incubating and sandbox projects get booth time, so we'll be at both. Cool. Well, everybody have a good... oh, go ahead, John.
B: Just a quick question; I don't know if you have the answer. On the Kubernetes operator for Harbor: there were folks from, I think it was OVH, who were looking at trying to open-source what they had done there, but it looks like that was about a month ago, and I haven't seen any updates since then.
A: So actually, we just got through that hurdle, and they have updated it to Apache 2. It happened late last week, on Friday actually, when we got the update from them. So now the Harbor team is doing due diligence to understand the operator and whether there are any changes we need to make to it before we bring it into the mix.
A: It's not going to happen in time for the Harbor 2.0 release, but hopefully we might have a minimal viable product sometime after that, which will do some basic things with Harbor. The second thing is that the OVH operator did make some assumptions about the OVH environment, in terms of how Redis or Postgres was installed. We're trying to figure out how to swap those out, for example for the Postgres HA operator from the company that built it.
A: So there are a couple of them; there's also the Zalando operator. We're going to look into both of them.
A: And I believe we have a few folks who did some comparisons between Patroni and Zalando, and at a high level it looks like Zalando may make more sense for us because of how its architecture is defined; it matches the Kubernetes operating model a little more closely.
A: So no, not a done deal yet, but Zalando might win out over Patroni. Patroni was made by Crunchy Data, if I'm not mistaken.
[brief exchange between A and B not captured]
Cool
well,
everybody
have
a
good
rest
of
your
day.
Thank
you
for
attending
and
stay
warm
and
safe.