A: All right, hello everybody, and welcome to another Harbor meeting. It's the 20th of May. Thank you for attending. As always, these are recorded conversations, so please adhere to the CNCF code of conduct. We don't have anything but good news this week. Good news number one: Harbor 2.0 shipped, so we have a slew of media mentions. I believe at the last count we had seven articles, but a couple more are coming up, so we have Container Journal, The New Stack, Techstrong TV.
A: We have The Register, DevClass, Heise, and Software Development Times (it's just the Times for software development), and I think the only one that's missing is TFiR.
A: Is that pretty much accurate, Jonas? I think TFiR is the only one that hasn't submitted it yet. So, lots of media attention. We have a blog post on this: if you want to learn about the new capabilities, which all of you already know, then go view the blog post and go try it out. goharbor.io has been updated with the 2.0 release.
A: So if you want a very quick demo, go to demo.goharbor.io; it's available. You can request an account, or, if you have one already, simply sign in. Remember, the environment gets recycled every other day, but it is there, it's available. Try out the new functionality. The one that I love the most, and have already started using, is the Slack integration: being able to actually set up some Slack endpoints and get notifications on only the specific webhooks that you care about.
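The Slack setup described above can be sketched against Harbor's webhook-policy API. This is a minimal sketch, not the official procedure: the endpoint path, event names, and field names follow Harbor 2.0's API as I recall it, and the host, project, credentials, and Slack URL are all placeholders, so verify everything against your instance's Swagger UI first.

```shell
# Sketch: subscribing a Slack endpoint to only the Harbor events you care
# about. Host, project, credentials, and the Slack URL are placeholders,
# and the field names should be checked against your Harbor's API docs.
PAYLOAD='{
  "name": "slack-scan-alerts",
  "enabled": true,
  "event_types": ["PUSH_ARTIFACT", "SCANNING_COMPLETED"],
  "targets": [
    {"type": "slack", "address": "https://hooks.slack.com/services/T000/B000/XXX"}
  ]
}'

# Wrapped in a function so you can review the payload before firing it.
create_slack_policy() {
  curl -s -u admin:Harbor12345 \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD" \
    "https://harbor.example.com/api/v2.0/projects/library/webhook/policies"
}
```

Calling `create_slack_policy` would register the policy on the hypothetical `library` project; only the events listed in `event_types` get forwarded to Slack.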
A: That's pretty huge. SSL encryption end to end: important, but not as sexy. I mean, it's invisible, so I can't demo that really nicely. But the OCI support, the Slack integration, Trivy as the built-in scanner, all of those are huge and easy to basically view. Then, on top of that, obviously Harbor is really on the last step of the graduation.
A: When I say the last step: a public voting period opened a little bit over a week ago, and on Tuesday the 26th, right after Memorial Day in the United States, Harbor is going to go up for voting by the CNCF TOC. That's when we encourage all of you (John, maybe not you, your VMware vote doesn't count, but everybody else) to go in and say: hey, I use Harbor.
A: I like Harbor, here's my non-binding support for the project, and, God willing, we'll basically graduate. Well, maybe I shouldn't say God, maybe some people are... yeah, I just entered the religious discussion. Jonas, maybe you need to bleep me out when you're pushing out the recording. So those are the two big updates. At the same time, we're actually looking into the 2.1 prioritization; I think I mentioned that last week, or the last time we met, and those things didn't change.
A: At a high level, we're going to implement proxy cache capabilities; there's an issue with a document that people can go comment on. We're going to try to integrate with Dragonfly, we're going to try to have non-blocking garbage collection, and then we're also going to try to create a new type of robot account, called a service account, that's globally available across your entire Harbor ecosystem.
A: So you'll have access to all the projects with pull-only permissions, and that's going to be used by scanners or any other CI/CD tools that need access to Harbor to pull content out. Those are really the major features. We're targeting release mid-August. Well, when I say mid-August, I don't mean the release: the release candidate lands mid-August, right before KubeCon EU (that's going to happen on the 16th, 17th), and then we'll release probably right before Labor Day.
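The globally available, pull-only service account described here can be sketched with a system-level robot account. This is an assumption-laden sketch of a feature that was still being planned at the time of this meeting, not its final design: the host, credentials, and field names are guesses at the schema later Harbor releases shipped, so check your version's Swagger UI for the real one.

```shell
# Sketch: a system-level robot account with pull-only access to every
# project, roughly the "service account" a scanner or CI/CD tool would use.
# Host, credentials, and the exact JSON schema are assumptions.
ROBOT='{
  "name": "ci-pull-bot",
  "level": "system",
  "duration": -1,
  "permissions": [
    {
      "kind": "project",
      "namespace": "*",
      "access": [{"resource": "repository", "action": "pull"}]
    }
  ]
}'

# Wrapped in a function so nothing fires until you call it deliberately.
create_pull_robot() {
  curl -s -u admin:Harbor12345 \
    -H "Content-Type: application/json" \
    -d "$ROBOT" \
    "https://harbor.example.com/api/v2.0/robots"
}
```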
A: So that kind of gives you an estimate of where we're going to be. We are going to have two sessions at KubeCon EU, the virtual conference: an intro and a deep-dive session for Harbor. Our sessions were accepted last week, so we are going to show these 2.1 capabilities around our ability to do proxy caching and Dragonfly, and we look forward to getting a lot of feedback from the conference; the release will come a week or two later.
A: Harbor is the same: if you had a project that was using Clair as a scanner before and you upgrade to 2.0, that's not going to change; you're still going to be using Clair, which is why we didn't phase out Clair. But you have the option to change the scanner to Trivy, for example, moving forward. The only thing that has changed between the two releases is at the API level.
A: So if you're calling the Harbor API, the API changed from v1 to v2, but the structure of the API, the parameters, the return values, everything stays the same. So if you just do a search and replace of /v1/ with /v2/, I'm almost guaranteeing 99.9 percent of things will just work.
C: We did find some other things with some of the APIs around retagging, and, when you get down to actually manipulating images, because of the new structure with the OCI artifacts there's a little bit of a change there. But absolutely everything with Helm charts was the exact same. A lot of the other things were the exact same.
A: It's worth calling that out, you're right, because before we used to have an artifact and then there was a tag, and now it's artifact, index, tag, and you can have multiple tags, not just one. So the big change there is that you can have multiple tags. And on tags, since that topic came up:
A: This is important to note: before, we actually depended on Docker Distribution to manage everything for us, like the tags and the layers, and when you did garbage collection it was a blocking GC where everything basically froze for writes until all the layers were reclaimed. Now we actually use Docker Distribution to serve content (you can push and pull images using Docker Distribution), but the actual management of the layers happens in the Harbor database. It is an important difference.
A: That's it. Any questions, any concerns? Anybody try it out? I mean, I know, John, you said you might have tried it out, so that's good, that's awesome. Anybody else?
B: One of the things we were looking into was the operator, the Helm operator or the Harbor operator. Do you have any idea of when 2.0 might ship there?
A: So they're working on it. Even the Helm chart has not been updated yet; we usually just need a few more days to update that. I haven't gotten an update on when the operator will come.
A: Jonas, can you write a quick note in our channel, so Alex or someone can reply and we can give back the answer? My thinking is that the operator is probably going to be updated within the next couple of weeks, but I don't know exactly when. We also had a discussion in the community meeting at China time,
A: maybe two weeks ago, around what's going to happen with the operator: when are we going to take it to 1.0? I think that's going to be influenced by stability and reliability, and by basically being able to support all the features that we want, around being able to deploy Redis and being able to deploy Postgres, both in HA and non-HA modes. We're probably looking at about a three-to-four-month roadmap.
A: Now, these are community contributions; sometimes people get pulled into another job or something else, things change, so take that with a grain of salt. But we're thinking an October timeframe, if not earlier, we'll probably have the operator be stable, and then we're going to have to make some tough choices: do we maintain two deployment models for Harbor, Helm and operator, or do we basically use the Helm chart to drive the operator and converge on that, so one is a wrapper of the other?
A: Yeah, absolutely. Well, I mean, I had a discussion with the Kubeapps team earlier today, and one of the things is that Kubeapps today supports ChartMuseum only. Well, it supports other things, but the one compatible with Harbor is ChartMuseum. So if you deploy Kubeapps locally in your cluster and you're pointing to Harbor as your repository for all your images, you know, we're not going to support ChartMuseum forever.
A: Our goal is to continue down the OCI path, be the OCI-compliant registry, and have full-blown support for that across the board for everything, which we already do. So we need to figure out how we can enhance Kubeapps so it talks OCI to us, rather than talking to us using ChartMuseum.
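From the Helm side, talking OCI instead of ChartMuseum looks roughly like this. A sketch assuming a modern Helm 3 with OCI support (GA in Helm 3.8; earlier Helm 3 versions used the experimental `helm chart save`/`helm chart push` commands instead), with a placeholder registry host, project, and chart.

```shell
# Sketch: publishing a Helm chart to Harbor as an OCI artifact rather than
# via the ChartMuseum API. Requires Helm 3.8+; host, project, and chart
# name are placeholders. Wrapped in a function so nothing runs until you
# call it with real values.
push_chart_oci() {
  helm registry login harbor.example.com -u admin  # prompts for a password
  helm package ./mychart                           # produces mychart-0.1.0.tgz
  helm push mychart-0.1.0.tgz oci://harbor.example.com/library
}
```

The pushed chart then shows up in Harbor as an OCI artifact alongside container images, rather than in a separate ChartMuseum store.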
A: We have a CNCF webinar on Harbor on May 28th; I can share the link here too. One second.
A: You know, use Harbor as a container image registry, as the trusted registry for Kubernetes. The premise under which we're basically talking about that is that 2020 has one of the biggest jumps in production usage of Kubernetes. You can't really deploy clusters, and you can't operate clusters, without that registry, so Harbor now becomes a key ingredient for all your cloud native deployments, and with the added support for OCI and expanding the types of supported artifacts, we're really the best complement to Kubernetes. So in that webinar we're just basically going to go through an in-depth view of all the capabilities. We're going to talk about OCI, Trivy, the webhooks, all of the things that Harbor has, and very likely I'm going to do a 20-minute flyby on all the features of Harbor.
A: All right, we'll give everybody back 15 minutes. Thank you all for attending, appreciate it. Tuesday, go vote; go submit your non-binding vote for Harbor. Sign up for the webinar, join us; I'll put the link here.