From YouTube: CNCF TOC Meeting 2019-11-05
Description
Join us for Kubernetes Forums Seoul, Sydney, Bengaluru and Delhi - learn more at kubecon.io
Don't miss KubeCon + CloudNativeCon 2020 events in Amsterdam March 30 - April 2, Shanghai July 28-30 and Boston November 17-20! Learn more at kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects
CNCF TOC Meeting 2019-11-05
C
Yeah, I mean, we're just gonna do SIG updates and the Harbor graduation review today, but just a friendly reminder that all of us are focused on KubeCon. It's gonna be quite a large event: I think we're close to about 11,000 people registered with about two weeks to go. I just wanted to remind folks to please book hotels and everything, since a lot of things are sold out, and also that only about, I think, 15 to 20 percent of our attendees so far have registered for a co-located event.
C
So if something like EnvoyCon is of interest to you, or the contributor summit, please register for those. And, more importantly, if you log into Sched, there's the ability to select which talks you're interested in attending. Please go ahead and do that, because it's really hard to plan room sizes in advance based on interest, and we're gonna try to use all that data from Sched to do so. So, any questions on KubeCon before we move on?
D
There were some project presentations scheduled for the TOC which I moved into SIG App Delivery, so we had in there, obviously, the presentation of the Operator Framework, which drew a lot of discussion, so there is a longer discussion of those. As a side note, I also try to keep the discussions on the mailing list, but we also started to file issues in the GitHub repository, so we have better archives of them.
D
Yes, and since this year we are all in the same building, we'll have a discussion on how to proceed with these projects going forward. I think some of them lend themselves nicely to some of the work we're doing right now, which gets us to the second point: the model for application delivery, which has been around for a while.
D
This is the document, and we very deliberately spent quite some work on putting together the different layers of what we look into, which will serve as a basis for how we put projects into this scope, what areas they fall into, and what work they are related to. An example is linked here where Oracle was doing this, and some of the other submissions we're getting map pretty well already too.
D
I think it also serves as the building blocks for the landscape work that we should be doing to group projects into different areas. So, everyone who's interested in the work and hasn't seen this model yet, please provide comments. Going forward, we will use it as a basis for most of the work. That brings us to the next topic, our sessions for app delivery. There will be two: there will be a general introduction, looking at deepening participation, which right now is still a bit lower than maybe expected.
D
We want to get more people involved, and I believe the KubeCon session can help us there. The second one will really be about the delivery model, diving deeper in there, and about the Operator Framework and OperatorHub, our combined proposals. Yes, to Chris's question: I think that's also a bigger discussion to have there, because it's an open source project joined with a bigger infrastructure project around the OperatorHub, and that, I think, is one that definitely needs deeper discussion, because there is the question of where the infrastructure should be hosted, and these kinds of things.
D
That's the next step, then: we should definitely take that one on for a bit more detailed discussion. Yes, it's parallel to Helm charts in some way, so there we are looking, obviously, for guidance from the TOC on how we want to handle this going forward. Should this still be part of one organization owning it, or would this then be something that the CNCF should be owning? I think that's the bigger discussion point, definitely, around the Operator Framework and also OperatorHub.
D
So we can facilitate a follow-up discussion on that. There have been some discussions, but no final conclusion reached yet; but I can definitely learn more, and whoever can share how exactly you handled it for Helm, that would be useful, because with this as a proposal we can then get back to the folks around OperatorHub as well.
G
The other thing that was discussed on the mailing list, and wasn't really closed out, was having OperatorHub really closely tied to the Operator Framework. Given that there were other projects submitting that were also creating operators, that coupling, I think, is slightly concerning, and it raises the question: is OperatorHub for all operators, with operators defined as the community defines them, or is OperatorHub just for Operator Framework operators?
I
No, and I think the team's intent is to have it host all operators regardless of how they were created, and that's already true today. Rook was not created with the Operator Framework and it lives happily out there, as does NooBaa, and I could probably name a couple of others. So, I mean, there are already operators that live there that don't use it. The idea is that any and all can exist; I mean, there are little things required to get into there, but it doesn't require the framework.
I
So that's the first piece. And actually the team kind of lamented whether or not they should even include that with the proposal, because they were worried about exactly that concern, that people would think it's tightly coupled; it isn't. So yeah, Joe, to your point, I think we could work on what we feel is a definition of an operator, if that would be helpful in terms of a cloud native definition, yeah.
D
Let's take out of this one action item: to really define what an operator is, and then get back to the team on whether they want to split it up into two separate things, like one place to store them, which defines operators, and the other one really being the Operator Framework as one way to build them.
J
Do we go with the original definition? If we redefine it, who gets pulled in, and how does this work? There's a little bit of complexity there. I would ask that that get worked out with some traceability to it, so that it doesn't necessarily just go in one person's direction, or one group's direction, because they were the ones pulled into the conversation at the time it happened.
D
Yeah, I think it's two things. On the one hand it might be just a map; on the other hand I think it's also a trusted source for people to go to for operators, and it's a bigger decision how much we want to have something tied to that trust. When it comes to all these operators: are they tested, which versions do they work with, how far do we want to go?
N
We have had presentations from the project, but we haven't collected all of our feedback yet, so I apologize; we will cover this off in our next session. In the meantime, we have a number of different groups that are leading different initiatives on content, and we intend to have drafts, or at least some good milestones, ready for KubeCon.
N
All of it is going to be stored in GitHub, the idea being that we're hoping the community can contribute use cases and different options, and build up a library of use cases which the wider community can use over time, and we'll have a process of sort of keeping it evergreen and archiving old use cases as time passes.
N
We also have a group working on a performance and benchmarking white paper. Some of the new members of SIG Storage have got together here, and we're covering some educational aspects, covering the concepts, documenting some of the common pitfalls, and also documenting some guidance on how to use different tools for testing performance of volumes and databases.
N
Definitely, yep, agreed. And finally we have a document that Aaron has put together, which I believe was discussed with the TOC and, according to the agenda, is probably going to be discussed a little later on, documenting the process for reviewing sandbox projects, which hopefully will give us a way to formalize that process going forward. So I think that's the updates from SIG Storage; happy to take any questions.
M
Right now, I am the only one sort of actively involved as a co-chair here. I am still soliciting helpers for co-chairs, so if you're interested and would like to contribute, your help would be most welcome. Rather than hold up the process until such time as we have that, I'm happy to stand as an interim chair until we find more suitable people, or more people, unless there are objections to that.
M
We're hoping to kick-start it; we will certainly put some effort into kick-starting it at KubeCon, which I think is a good place to get people together. But, no, the short answer is: there has not been much active participation other than my putting the charter together, and there were some comments on the charter, but not much attendance at the meetings.
O
All right, SIG Network. I got some of my updates in late, so we've only got half a word in there. Well, a short and sweet update: we've talked about SIG Network for quite some time. The charter had been drafted and opened for review about four weeks ago, and, near as I can tell, we've addressed, I think, all but maybe one comment. The link to the Google Doc is the first link in this slide, so that's still up.
M
I had one comment, and it's actually not only specific to this SIG; I think it applies in general. The SIGs were specced out, about six or seven of them initially, as a sort of draft to cover the full landscape, and they had, you know, various names that have changed over time, which is understandable. But I've seen a common theme, which is that more than one of the SIGs has actually narrowed in scope. So SIG Network was called SIG Traffic, and it explicitly called out, you know, much more than networking.
M
It was all basically communication-related stuff: things like gRPC and many other types of communication between system components. And it seems like, you know, the charter still says that it covers those projects, but it seems very focused on networking, and it seems to come out of the networking working group, which was much more narrow in scope than SIG Traffic. And SIG App Delivery has kind of had a similar thing.
M
I think it was originally intended to be broader than just CI/CD and delivery, covering application development and so on as well, and, as a result, we're finding a few projects coming along that are saying: I can't find where I fit in, because I don't seem to fit into the scope of these. So here is the question I have for the group, and at least for the TOC.
M
We had one recently: CloudEvents, you know, felt like they didn't fit into SIG App Delivery, and that was clearly not the intention; and serverless felt like it didn't fit into the existing SIGs. Now, you know, that's an indication that things are either miscommunicating what their scope is, or that we defined the SIGs wrong, one or the other.
F
My only opinion is that serverless, you know, doesn't fit naturally into one of the existing SIGs, because I think it probably warrants something like a SIG of its own. And, I don't know if Doug is on the call, but they've been discussing the possibility of, oh, what was it called? Platform. SIG Platform.
P
Runtime, to me; and so that's where I was a bit confused about that proposal. But it makes sense to try to have a discussion at KubeCon about this, because the group of us will obviously be there, and it might be good to have a face-to-face discussion around it. Sure. And, to be clear, I didn't want to, you know, focus on this specifically.
Q
So there is better interoperability, and a better, I would say, full stack that can be created from these projects in related areas, right? So maybe when a SIG grows too big it makes sense to separate it into two SIGs or something; but it's not a good idea, in my view, to separate something just because it doesn't entirely fit into our area of interest. I would separate based on being too big: you don't have enough time to cover it.
D
So maybe my take, from the App Delivery perspective: I think we didn't really narrow the scope per se; I think we leave the scope where it is. But you also have to see who is currently involved, the people interested in doing the written work, and we can't have too many fronts in parallel. That's why I was always pushing for a roadmap, what to focus on first, and I think that's what we're just starting to do right now. For example, for application definition, like the other topics, there's not a lot coming in; the Oracle work is an example of an approach that's being brought in. There was a lot around the CD front, and those kinds of projects came in, which people obviously are interested in working on right now. So it's also a bit community-driven, which areas people engage in.
R
I think this question that Quinton brought up is worth that conversation; I think it's a really valid point. We're gonna learn a lot as we figure out, you know, how to navigate the serverless and app delivery SIG situation, so I think it's just something that we need to talk more about and set examples for.
M
Igor, I agree with you, and that's sort of my perspective as well. The more we can get people in closely related areas talking to each other, the better, and I think we should be more inclined to doing that than to splitting them out into separate pieces whenever we get the opportunity, at least initially. You know, the argument about lack of people to contribute is a reasonable one that was alluded to, but I think, you know, in this case we have, you know, a group of serverless people and a group of app delivery people. So we do have the people; the question is whether to put them together in the same room to talk about it together, or separate them into different rooms and talk about the topics separately, and I lean towards the former. I just wanted to bring this up to hear what the rest of the people's opinions were, but I think it's perfectly reasonable to defer that conversation to KubeCon, when everyone's there face to face, and hopefully we can.
S
The next stage within CNCF is to continue that growth of community we've seen, you know, and get that stamp of approval from the CNCF, so that the users that are putting it into production can continue to do so and have the CNCF stand behind it as well. So, in a nutshell, Harbor is an open source container image registry that secures images with role-based access control, scans images for vulnerabilities, and signs images as trusted.
S
So, if you think about, you know, why you would want to run your own registry; Amy, this is the slide where I'm going to be telling you 'next' every now and then, sorry. So, as a user, you know, you want to have security and compliance around your images, so you want to have comprehensive policy; you want to have registry and data ownership; and, as a user, you also want to have identity federation.
S
So you can use a single set of user logins as you're interacting with your registry, with some form of built-in multi-tenancy. Next slide, please. So Harbor solves that by enabling a set of capabilities: we have vulnerability scanning using Clair, and, as I'll talk about in a few slides, we're also adding Anchore and Aqua Trivy into the mix; we have CVE exceptions; and, with image signing using Notary, we allow you to enforce content trust in your projects.
S
Now, if you think about the infrastructure layer: users want to deploy on any infrastructure; they want to have choice, whether that's private, hosted, at the edge, or on a public cloud; they want data locality; and they want a registry that gives them Kubernetes and Docker compliance. Next slide, please. From a scale and control perspective, you want to have controlled access to artifacts, and to be able to replicate your resources based on business needs.
S
So if you have certain assets that are geo-replicated around the globe, you might want containers to be accessible in those regions, or there might not be enough compute capacity to scan for vulnerabilities at the edge. Next slide, please. So here is what Harbor does: in this case we've added complete replication, where you can replicate your Harbor artifacts, on demand or on a schedule, to a wide variety of targets. We support another Harbor instance, Docker Registry, Docker Hub, Huawei Cloud, AWS.
S
You might want automation and extensibility: how can you put a registry into your environment where it can be plug-and-play with existing investments in infrastructure and services? Next slide. To enable that, Harbor has support for CI/CD integration. We have webhooks, so you can automate Harbor along with your CI/CD pipeline and execute different actions based on the results of things that happen in Harbor, and we have a fully fledged REST API, as well as robot accounts, to serve automation needs. The next slide is the Harbor architecture.
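As a rough illustration of the webhook-driven automation described here, a pipeline endpoint might branch on the type of event it receives from the registry. This is only a minimal sketch: the payload field names below are assumptions modeled on Harbor-style events, not a verbatim schema; consult your Harbor version's webhook documentation for the real one.

```python
import json

# Illustrative payload only: the field names ("type", "event_data",
# "scan_overview", "severity") are assumptions for this sketch,
# not a verbatim Harbor webhook schema.
SAMPLE_EVENT = json.loads("""
{
  "type": "SCANNING_COMPLETED",
  "event_data": {
    "repository": {"repo_full_name": "library/nginx"},
    "scan_overview": {"severity": "High"}
  }
}
""")

def route_event(event):
    """Decide what a CI/CD hook might do with a registry event."""
    kind = event.get("type", "")
    if kind == "PUSH_ARTIFACT":
        # A new image was pushed: kick off a deployment pipeline.
        return "trigger-deploy"
    if kind == "SCANNING_COMPLETED":
        severity = (event.get("event_data", {})
                         .get("scan_overview", {})
                         .get("severity", "Unknown"))
        # Block promotion when the scan reports serious vulnerabilities.
        return "block-promotion" if severity in ("High", "Critical") else "promote"
    return "ignore"

print(route_event(SAMPLE_EVENT))
```

A real receiver would sit behind an HTTP endpoint registered in the Harbor project settings; the point here is just that webhook events let registry activity drive pipeline decisions.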
S
I'm not gonna spend too much time here, but, you know, a couple of key things I wanted to note: you have a fairly componentized architecture that can be deployed on a Docker node using Docker Compose, or deployed using a Helm chart on Kubernetes. Its different components are fairly isolated, and we make use of a lot of other cloud native assets and tools: we use ChartMuseum from the Helm project, we use the Docker registry for our registry, and we use Notary for signing.
S
We have pluggable replication providers that allow us to do the replication I mentioned earlier, targeting the various registry implementations. And then, at the scanning layer, we have support for CoreOS Clair as the built-in, batteries-included scanning capability, and we've also enhanced Harbor to let users use Aqua Trivy, as well as Anchore Engine and Enterprise. So we're giving users a choice: they can bring their own scanner based on their business needs or on the different scanning capabilities they have.
S
So, at a high level: Harbor started at VMware in June 2014, it was donated to the CNCF last July, and it has been incubating in CNCF since November of 2018. We have twenty-plus product implementations of Harbor today; these are different products that are either embedding or shipping Harbor along with their distributions. There are 115-plus contributing organizations and over 300 community members that are contributing, discussing, or using Harbor. Next slide, please. This is kind of our money slide.
S
If we look at the Harbor community and the growth we've experienced ever since we joined CNCF, you can see in the diagram on the bottom right that the number of both contributing companies and developers has been steadily increasing, and if you look at the Harbor activity on GitHub, it shows a steady state of contributions.
S
So that kind of gives you an idea that the Harbor community is fairly vibrant. I'm going to discuss our maintainer diversity a little bit later on, but the community is very strong: every day that we interact on Harbor, we find a new user that has Harbor in production and is using it at a very large scale that we didn't know about, because it just works; they're using it, they're very happy with it, and they don't come back to us with any questions or concerns. Next slide, please.
S
So, one of the new features we're adding in Harbor 1.10, which will have a release candidate at KubeCon in a couple of weeks, is this concept of an interrogation service, which essentially extends Harbor to include pluggable scanners, so that users, companies, or other partners can implement their own scanning capability on top of Harbor. Today that scanning is really concentrated on checking for CVEs and doing static vulnerability analysis, but the API and the pluggable architecture are designed to expand so that they can include a lot more.
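The pluggable design implies a small contract between the registry and each scanner: the scanner advertises which artifact types it can consume and what reports it produces, and the registry dispatches scans accordingly. A minimal sketch of that capability check follows; the field names and MIME-type strings are illustrative assumptions, not the published adapter specification.

```python
# Hypothetical capability metadata a scanner adapter might advertise.
# All names and MIME types here are assumptions for illustration.
SCANNER_METADATA = {
    "scanner": {"name": "example-scanner", "version": "0.1"},
    "capabilities": [
        {
            "consumes": ["application/vnd.docker.distribution.manifest.v2+json"],
            "produces": ["application/vnd.example.scan.report+json"],
        }
    ],
}

def can_scan(metadata, artifact_mime):
    """True if any advertised capability consumes this artifact type."""
    return any(
        artifact_mime in cap.get("consumes", [])
        for cap in metadata.get("capabilities", [])
    )

print(can_scan(SCANNER_METADATA,
               "application/vnd.docker.distribution.manifest.v2+json"))
```

Because the contract is just "advertise capabilities, accept scan requests, return reports," any vendor can slot a scanner in without the registry knowing its internals, which is what makes the choice of Clair, Trivy, or Anchore interchangeable.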
S
So, you know, Harbor has had a healthy number of releases in the last year, but we're not done. As part of being good stewards of the community, we have a fairly well-documented roadmap that talks about some of the investments we want to make, both near term and long term. We use GitHub Projects to track our different releases in terms of swimlanes, and when we look at the roadmap we basically separate it into three different swimlanes. Management, where we want to build Kubernetes operators, image signing policy application, and observability. Enhancements around image distribution, where we want to implement proxy-cache capabilities as well as P2P distribution on top of Uber's Kraken and of Dragonfly by Alibaba, which you guys mentioned earlier today. And then the extensibility front: we already have webhooks, but we want to enhance them, and we want to keep adding more things around automation, enabling users to plug-and-play Harbor with existing solutions and services.
S
And the enhanced interrogation service that we just mentioned on the previous slide. Then, last, we want to be good stewards of the community and get plugged into the right OCI registry and OCI conformance work, so that Harbor can support additional artifacts beyond container images and Helm charts. Next slide, please. So now I want to hand the mic over to Nathan Lowe from Hyland Software, to give us a couple of minutes' overview of what they're doing with Harbor.
K
Thanks, Michael. Hi everybody, I am Nathan Lowe and I'm a platform engineer at Hyland Software, a leader in the content services platform space. My team has pioneered the use of containers at Hyland over the last few years; we've made heavy use of Harbor and have been using it in our day-to-day development cycle since version 1.2. Our instance currently stores approximately 2,400 unique tags across 670 different images and 175 projects. These containers consume.
K
Additionally, many teams, rather than use a separate system for managing their deployments, make use of Harbor for storing their Helm charts as well, and we will most likely see more of our developers adopting this feature in the future as we onboard more developers and applications onto our SaaS platform. The Harbor team has been very receptive to feedback this whole time, and a very good example of this is a tool that we developed, and later open sourced, to automate the cleanup of old tags.
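A tag-cleanup tool like the one described boils down to a retention policy: list the tags in a repository, keep the newest N, delete the rest. Below is a minimal sketch of just the selection step, assuming a simple (name, pushed-at) representation; a real tool would fetch tags and issue deletes through the registry's REST API.

```python
from datetime import datetime, timedelta

def tags_to_delete(tags, keep_latest=5):
    """Given (name, pushed_at) pairs, return the names of tags beyond
    the newest `keep_latest`, as candidates for deletion."""
    ordered = sorted(tags, key=lambda t: t[1], reverse=True)
    return [name for name, _ in ordered[keep_latest:]]

# Illustrative data: ten nightly tags pushed one day apart,
# with nightly-0 being the newest.
base = datetime(2019, 11, 5)
nightlies = [("nightly-%d" % i, base - timedelta(days=i)) for i in range(10)]

print(tags_to_delete(nightlies, keep_latest=5))
```

Harbor later grew a built-in tag-retention feature (mentioned again near the end of this discussion), which expresses the same idea as configurable per-project rules instead of an external script.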
K
We see Harbor continuing to play a very important role in our development lifecycle for the foreseeable future, and we're excited to see how it continues to evolve to support new use cases, as well as improving the experience of existing ones. So, thanks for your time, and now I'll hand it back over to Michael.
S
Next slide, please, Amy. So I wanted to talk a little bit about another customer of ours, VP, which is a leading payment solutions provider. You know, they couldn't make it on the call, but one of the important things I want to show here in their architecture is that they have two instances of Harbor in two different data centers, where they have an active-active Kubernetes environment in each one of these data centers, and they're using Harbor to replicate the resources.
S
One of our biggest users is DiDi, the ride-sharing service in China, and they have an environment with 20-plus terabytes of storage on Harbor. And today we learned that MuleSoft is actually using Harbor in production, with about five and a half terabytes of storage and a thousand-plus unique images, and they told us that they have 17 million pull operations on Harbor; over 17 million. So, you know, every day we're constantly learning about a new customer that has Harbor in production and is happily using it. Next slide.
S
Customers are using Harbor in production, or in various stages toward production, per the adopters file, and we have customer testimonials. Harbor today has a few enterprise distributions with support agreements behind them, mainly by VMware: Enterprise PKS, Essential PKS, and vSphere Integrated Containers. We'd like to believe we have a healthy number of committers.
S
Although our maintainer diversity is not high, we do have five external maintainers on the project, with 104 active contributors, 346 authors of pull requests, and 171 committers today. And, like I mentioned earlier, we have a new release every three months, with over 1,800 PRs merged just in the last year. I want to open it up for questions right now; I know you guys had some concerns, so I want to take the opportunity to give you guys time.
S
Downloads are measured like this, basically: we use Google storage, and we use the Google API to see downloads of the Harbor binaries, and then we also have some of them on Docker Hub as well. Now, Docker Hub is a little bit weird, because we also use it for our CI/CD, and that has, like, over a million downloads, so you can't really use that number. So we're going based on our storage APIs as well as Docker Hub.
A
You know, I'm gonna be devil's advocate, and I'm gonna express modest and respectful concern for the longevity of this project; what can you do about that? And, by the way, I think the user testimonials were great. That definitely helps a lot, and I think it's important to see that there is a use case for an open source registry of this type, one that is not one of the hosted ones, etc., etc. So that's fabulous.
S
Super valid concern; I mean, I was expecting that. VMware has been a steward of, and huge contributor to, Harbor since 2014, and we've shown no signs that we were ever gonna pull out of the project; in fact, we've been increasing our investment into Harbor every six months or so. We've added more contributors, provided more folks that are doing community relationships, program management, and product management, and we show no signs of changing that.
S
However, because this is a concern, we have four or five external maintainers that have also started to build expertise in different areas of the project. So, for example, Daniel from Aqua is becoming our security expert; he's in charge of a lot of the pluggable adapters, and he's basically the steward of that area.
S
We have NetEase, who have taken ownership around webhooks, and we have Qihoo, who have taken ownership around the replication providers for all of the different public cloud providers. So we're starting to delegate some of these responsibilities to folks outside VMware, so they can take the leadership position there and start helping us, both to enhance their knowledge in those areas and so that, should something ever happen to VMware, even though there's no indication that it will, the project would be in good shape. Yeah.
A
I'm more worried about something happening to the project, and I've been at, you know, VMware when things have gotten shut down; it does happen out there, unfortunately, even to things that look pretty strong. So it's great to hear that you have a plan that you're executing on for bringing people up through the ladder, from, you know, initial contributions up to maintainership. I'd probably want to see more about that, and to understand on what timeframe you want there to be, let's say, I don't know, two core maintainers who are able to do a release and who are not necessarily in your organization. I think that would, for me, be a sign that the community has confidence in the project long term; or, alternatively, that there is some express commitment from VMware demonstrating that, at least for some period of time, the company is committed, you know, to give you time to continue to grow the community. Yeah.
S
We can definitely give you that commitment from VMware, and I can answer a couple of the other items as well. One of the things I wanted to mention earlier is that, you know, when the security penetration-testing company Cure53 went over our code, they were impressed at the level of the code, both from a documentation standpoint and in the segmentation of the code into services; the architecture was very easy for them to follow. So that's the rigor, I think, that we've always invested in, so that the seeds are there for the project to continue no matter what happens.
S
Then, on the core maintainers, I wanted to bring up one more thing: they don't have any more power than a regular maintainer. We basically just gave an elevated title to some of the more senior leaders of the project. Anyone can do a release of Harbor today.
S
Everything is well documented, our CI engine and everything, and they should be able to do that; so, just to add some additional color there. We're not trying to gate the project; we're super open to having more contributions from external folks come in, and our goal is to continuously add more maintainers that are actively involved in the project and have an interest in, and a commitment to, continuously helping us improve. So that's been our goal. Correct.
A
I'm not personally in any way trying to advocate that you should not gate the project; that is one strategy that leads you down a certain path and will lead to one set of questions. It sounds like you've chosen a strategy where you want to have a very open, engaged governance, and expand the maintainership and the governance fairly quickly. Is that a correct understanding?
S
I don't know if I would say quickly, but I do want to expand it. A healthy project includes maintainers from many companies, and as we make more and more investments into Harbor, we can accelerate those investments, and accelerate the delivery of those features, if we've got more folks contributing.
A
Okay. So what is your governance model right now?
A
All right, well, I think it would be great if you could tell us, in that case, how you think, over the next six to twelve months, CNCF could help make sure that that process proceeds not too fast, but at the speed that you want it to. At that point I don't have any other questions for now. I want to know if anyone else from the TOC wants to ask a question. I do.
R
Mm-hm, hey, great presentation, Michael; this is Michelle. You know, I was just looking over the contributing document, and this is one of the cleanest contributing documents I've ever seen, by the way. But I wanted to know: how do you make design decisions? The core maintainers of the project have that elevated status because they've been leaders in the project; are they the ones who approve design decisions, or does the entirety of the maintainer group vote? How does that work?
S
Usually we all get together once every three months, based on our release cadence, and we discuss the investments that we want to make, and so far we've never had anybody disagree. So everybody's been in agreement with the investments we want to make, and the decisions have pretty much been made unanimously; we've never had to actually put anything to a vote, which is a sign of a healthy community.
S
But if we had to vote, we would follow the governance, in which basically every company that is a member of the Harbor maintainers gets one vote. So far, though, it's been unanimous: you know, we all agree on the investments, and then we divvy up the work based on availability and the different commitments that folks have, and then we start working on them.
S
Different maintainers have brought different concerns: for example, NetEase, who took leadership on webhooks, were very interested in that and took the lead on it. Nate from Hyland was a big proponent of the tag retention, so he kind of pushed that; we got folks to work on it and created a working group to enable it, and so on and so forth. Okay.
S
We put it to a vote, essentially. It requires a long history of contributions to Harbor, a dedication to continuing to advocate for Harbor, and then usually it's just a period of time. So, as some of our maintainers have now had a significant amount of time on Harbor, we'll start looking at adding them as well. Okay, and who votes, the entire maintainer team? So, everybody.