From YouTube: CNCF TOC Meeting - 2018-06-19
A: If you go to slide 5, we have two community presentations, one from the OpenMetrics community and the other from the Harbor community, and I believe both are on right now. Before I move on to those community presentations, I would like to point people to slide 6, which is essentially the backlog of proposals, or upcoming project presentations to the TOC. So we'd like to invite contributors and the wider community to give any feedback on these.
B: Okay, go ahead. Perfect. Okay, so OpenMetrics. Basically, what we are doing is taking the Prometheus format and putting it into its own little project. So, a quick look at what we had before Prometheus came out: basically, we had a ton of different monitoring solutions which were mostly based on really old stuff. There are a few newer ones, but still, most of those have proprietary formats.
B: There was not a lot of focus on metrics, or on just doing metrics really well at scale, and a few of those newer ones actually tried to address this, but they didn't have the simplicity of operation. There was basically one large standard which actually survived over time, SNMP, coming from the networking world. It works, but it's painful. So this is before Prometheus entered, as most of you will probably know. (There's some audio feedback, someone just... okay, whatever.)
B: Yeah, so anyway, then Prometheus came out. As you all know, it's under the CNCF umbrella itself. Once Prometheus came out, a few things changed. It really has become a de facto standard: both Prometheus itself has become more or less the de facto standard within cloud native, and the exposition format itself has also become a de facto standard.
B: Beyond that, there's actually active upstream work between several different competitors within the umbrella of Prometheus, which is great to see. We have tons of adoption, of course; it's really, really easy to expose data in the Prometheus format, and there's a lot of operational experience as the basis for how to do the Prometheus exposition format.
B: Still, it has been done between only very few people, not like thousands, and not between different vendors. Something we heard was that some vendors are not really keen on supporting something which could be seen as competing, because it carries the name of something you can actually install, instead of just being a standard. And especially for traditional vendors, something we encountered very often is that, basically, unless there is an RFC, they don't really want to support it.
B: By and large, those are not by us; they are by different people. So there is huge adoption. There are thousands of native integrations into various systems, partially even into boxes which you can buy off the shelf, and there is a ton of internal usage. Obviously that's not documented publicly, so we can't really count it or refer to it, but just as a gut feeling, the Prometheus team gets a ton of people who tell them about where they do integrations, but obviously those are internal.
A: Okay.

B: Very quickly: pretty much everything which is under the umbrella of the CNCF is able to expose in the Prometheus format. If you go to slide 12, these are the ingesters which are already, as of today, ingesting the Prometheus format as it's currently defined. As you can see, there are both projects and companies doing this. So, going from there: slide 13.
B: We wanted to basically have something as a neutral brand to put all this under, and to also allow wider cooperation. As I said, we do have upstream work from competitors within the umbrella of Prometheus, but still, we expect more people to be able and willing to work under the OpenMetrics umbrella, which is another reason for doing this. Also, by ensuring that more people have a say, we actually broaden the base of what this can be used for, which is something we want to have.
B: We will also get to this at the end: an official RFC by the IETF, so we can actually point vendors at it ("okay, there's this RFC, please go implement it"), and, coming along with that, an officially registered content type and/or MIME type (depending on which list you look at; there are two, but they're basically the same), and also getting an IANA port assignment for a canonical exporter. The work within OpenMetrics has been mainly done by Prometheus people.
B: There have already been a few commitments by companies to support us, on slide 15. You can read them yourself, I don't have to read them out, but it's a few larger ones, so I dare say it's not bad. There's also already, on slide 16, something going on to increase collaboration within the umbrella of the CNCF, specifically OpenTracing; I talked to Ted Young some time ago.
B: They have a different focus, obviously, on several different levels, but still we want to try and align where it makes sense, to basically have this come together. There are also others which will hopefully be in there. I also talked to OpenEvents some time ago, but that didn't really go very far. And there's also ObservabilityCon, or ObserveCon (the name doesn't seem to be certain), during KubeCon Seattle.
B: So, what's the current state of our little effort? Some of you at least will remember that we had a call last year already about OpenMetrics and, quite frankly, we took more time than we ever thought we would to get to the point where we are now, mostly because there are tons of details to consider, and there was a ton of discussion regarding even really, really specific stuff, which is good, of course.
B: Quite frankly, it means we are covering a lot more bases, but on the other hand, it just ate up a lot of time. That being said, we have reached consensus, which is also why I asked Chris to schedule this call. Basically, while we still have quite some work to do, we agreed on what we actually want to do and how to do it, and for those who are familiar with the Prometheus exposition format, it looks largely the same. Still, there's one breaking change, of course.
B: Now we have second timestamps instead of millisecond timestamps. You get abilities which are totally new, like, for example, having the ability to expose events (arbitrary events, but in our use cases, or the foreseen use cases, mainly trace IDs) to go along with, for example, a certain bucket of a histogram, so that you can then say: this is a trace of something which took longer than 10 seconds. There is a ton of uses for that, and just tighter and cleaner semantics overall. The last slide is the roadmap: to finalize the protocol.
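The histogram-bucket-plus-trace-ID idea can be sketched in a few lines of Python. The metric name, bucket, and trace ID below are invented for illustration, and the `#`-delimited exemplar trailer follows the syntax OpenMetrics later settled on, which was still in flux at the time of this call:

```python
def bucket_line_with_exemplar(metric, le, count, trace_id, observed):
    """Render one histogram bucket with an attached trace-ID exemplar.

    Uses an OpenMetrics-style '#'-delimited exemplar trailer; all
    concrete names and values here are invented for illustration.
    """
    return (f'{metric}_bucket{{le="{le}"}} {count} '
            f'# {{trace_id="{trace_id}"}} {observed}')

line = bucket_line_with_exemplar(
    "request_duration_seconds", "+Inf", 3, "abc123", 12.7)
print(line)
```

A backend that understands the trailer can jump from the slow bucket straight to a trace of one request that actually landed there.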
B: We will have at least one test suite, probably two, to really run against both the ingesters and the emitting data sources, to really make sure that whoever is claiming to be OpenMetrics-compatible really is OpenMetrics-compatible, and doesn't have something explode at some point. It will be versioned, of course, and we will have at least one client library, like a canonical client library per language, which is officially recommended to be used with OpenMetrics.
G: Mostly counters and, like, numeric data coming from applications. There's both the implementation side (how do you get this data out of the application?) and then there's the server side (how do you actually query the data?). So you have OpenCensus, which is a library that implements the client side of all of this.
G: And then you have things like Prometheus and Jaeger that implement the server side, or the monitoring side. And then OpenMetrics is kind of something we're looking for a group like OpenCensus to adopt, so that when they want to collect and expose metric data, they use the OpenMetrics format, because it allows the wider CNCF community to ingest the metrics from an OpenCensus source.
G: So it's the same as the Prometheus format. When an application wants to expose data for Prometheus to collect, it's called the Prometheus exposition format. We didn't have an example in the slides, but it's a very simple way of collecting and exposing data for Prometheus to collect, and this is then going to be basically Prometheus exposition plus-plus.
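Since the slides had no example, here is a minimal sketch of the Prometheus text exposition format being described; the metric name, labels, and values are made up for illustration:

```python
def render_counter(name, help_text, labels, value):
    """Render one counter in the Prometheus text exposition format."""
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return (
        f"# HELP {name} {help_text}\n"
        f"# TYPE {name} counter\n"
        f"{name}{{{label_str}}} {value}\n"
    )

page = render_counter(
    "http_requests_total",          # hypothetical metric name
    "Total HTTP requests served.",
    {"method": "GET", "code": "200"},
    1027,
)
print(page)
```

A scrape target just serves a page of such lines over HTTP, which is why the format is so easy to emit from any language.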
D: One question I do have is on the breaking change to the Prometheus format. I assume that is being socialized back with the Prometheus folks, and that it will be an actual rev of the format, or maybe you can expand on that in a little bit of detail.
B: Basically, you have a different version. Three of the Prometheus team are currently on the call, I'm one of them. Formally, the Prometheus team didn't yet adopt OpenMetrics; of course they can't, because it doesn't officially exist yet. The intention is obviously to do so, else we wouldn't be doing this effort. So it's a breaking change, but you have a version upgrade: if you expose with the old version, it's milliseconds, and if you use the new version, it's seconds.
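In practice the version switch just changes the unit of the optional timestamp field. A sketch of how a consumer might normalize it (the version tags here are illustrative; only the 0.0.4 text-format name is real):

```python
def timestamp_seconds(raw_ts, version):
    """Normalize a sample timestamp to seconds.

    The old Prometheus 0.0.4 text format carries integer millisecond
    timestamps; the newer OpenMetrics draft uses seconds. The
    'version' strings here are illustrative tags, not spec fields.
    """
    if version == "prometheus-0.0.4":
        return raw_ts / 1000.0   # milliseconds -> seconds
    return float(raw_ts)         # already seconds

assert timestamp_seconds(1529400000123, "prometheus-0.0.4") == 1529400000.123
assert timestamp_seconds(1529400000.123, "openmetrics") == 1529400000.123
```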
G: Most applications don't even implement this, because they don't implement the timestamp functionality in the output; the timestamp of the output is "now". This is only really used for Prometheus-to-Prometheus federation, so most applications don't have to change anything.
G: It's mostly so that it's more readable, in terms of... so yes, there is the option of higher resolution for systems that want higher resolution than milliseconds, but also because there are systems, like InfluxData's, that support nanosecond resolution, so we wanted to be able to support them as well. So it will be some kind of float that exposes that level of detail.
B: Well, actually, we already have floats for the actual values, and those are used with every single timestamp; every single value of every single time series is already a float. Timestamps themselves are fully optional, so it's orders and orders of magnitude more usage of floats anyway.
D: There are resolution consequences for that. I mean, I understand it; obviously the values are clear, they can be floating point, that is without question. But to treat time as a floating-point value, that means that the resolution counts. I'll wrap up on this, but yes, you know.
B: We actually went into this in depth during the discussion, and for milliseconds, float64 with seconds does work, that's obvious. For nanoseconds, we're not really certain that you would actually be doing metrics on nanosecond events. We did look at it; some of the collaborators also had concerns there, but in the end we came out with: there is no actual need for nanoseconds for metrics, which is at least what OpenMetrics is intended for. And you can also put float128 in as well.
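The float64 trade-off is easy to check: near a 2018 epoch timestamp, a float64 counted in seconds has a granularity of roughly 2.4e-7 s, so millisecond offsets survive but single nanoseconds are rounded away. A quick illustration:

```python
import math

t = 1529400000.0  # epoch seconds around June 2018

# Spacing between adjacent float64 values at this magnitude (2**-22 s).
granularity = math.ulp(t)

assert t + 1e-3 != t   # millisecond offsets are representable
assert t + 1e-9 == t   # a single nanosecond is rounded away
```

This is the concrete reason nanosecond-resolution consumers would need something wider, such as the float128 mentioned here.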
B: Yeah, and also, really quick: what can happen is, for example, that we have OpenMetrics 1.0, and this is published within the CNCF, blah blah blah, and then a 1.1, if there are any changes, is published both within the CNCF and within the IETF, for example. But this has to be seen; we'll figure it out this week. We can't do anything else anyway.
J: Good for me. Okay, so hello, TOC and community, glad to be here today presenting to you. I think you guys have probably recognized my name from presenting relating to storage and the Storage WG in the past few years. Most recently I actually joined VMware, leading the upstream activities with Kubernetes and the CNCF, and so my focus is storage and a few things beyond that as well. So today I'm really excited to present to you one of the homegrown projects that we have here at VMware.
J: I think it's gonna be pretty important to the cloud native ecosystem. So this is Project Harbor. Harbor is an open source trusted cloud native registry. It originated within our China R&D group as a container registry to support our solutions.
J: Through the assistance of our open source program office, we actually open sourced Harbor back in 2016. So Harbor is production ready, with lots of users in China and across the world, and I'll show you guys the list of them towards the end of the presentation. Supporting me today on this call I also have James Bala, Henry Zhang, and Steven Zhu. So what's the background? Container registries are critical components within these cloud native environments, and, you know, when we're looking at the different environments, whether it's development, testing, or production, they all have some key requirements, or key things that they think about.
J: We also need a certain form of compliance, so we need to be able to validate that the dependencies meet whatever requirements they have: that they've been scanned for security vulnerabilities and that they're likely free from vulnerabilities based on whatever policies we set. And the third thing is that we need to be able to rely on the performance, so the performance needs to be predictable.
J: So if I include my registry as a key component within my CI/CD process, I need to validate that I'm gonna get a predictable level of performance as I push and pull images. And then the last thing is interoperability: we need to be able to integrate our registry within our cloud native environments so that we can leverage all of our existing processes. So it really serves as a key component within these environments.
J: And finally, you know, when we look at the multi-cloud world that we're evolving towards, we need to have a certain form of portability for this type of service. Having registries that may differ in different ways across different public clouds makes it challenging to rely on those things, so, you know, portability and multi-cloud: registries provide an important part of that. So the focus of Harbor is to be a trusted cloud native registry, one that stores, signs, and scans content.
J: The mission is to provide these cloud native environments the ability to confidently manage and serve container images. So what makes a trusted cloud native registry? We think there are three key features we can highlight that Harbor delivers. The first is multi-tenant content signing and validation. This means we can have, you know, many consumers and users all pointing at the same registry, each with their own keys, where they can sign and validate content. The second is that we can form policies and perform security and vulnerability analysis.
J: So as we push and pull images, we can actually accept or deny those pushes and pulls based on policies and based on the status of the scanning of images. And then the third is integration with role-based access control: we have multi-tenancy, and you should be able to hook that multi-tenancy into your existing identity systems to enable role-based access control. There are a couple of other features that our users of Harbor tend to enjoy as well.
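The push/pull gating being described can be sketched generically; this is an illustrative policy check, not Harbor's actual code or API (the field names and CVE IDs are made up):

```python
SEVERITY_RANK = {"none": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def allow_pull(image_scan, max_allowed="medium"):
    """Deny the pull if any finding exceeds the project's severity policy.

    image_scan: list of findings like {"cve": "...", "severity": "high"}.
    (Field names are illustrative, not Harbor's schema.)
    """
    limit = SEVERITY_RANK[max_allowed]
    return all(SEVERITY_RANK[f["severity"]] <= limit for f in image_scan)

scan = [{"cve": "CVE-2018-0001", "severity": "low"},      # hypothetical IDs
        {"cve": "CVE-2018-0002", "severity": "high"}]
assert not allow_pull(scan, max_allowed="medium")   # high finding -> deny
assert allow_pull(scan, max_allowed="critical")     # policy permits it
```

The registry applies a check like this at push/pull time, per project, using the scanner's latest results.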
J: So one of those to highlight is image replication between instances, which allows independent Harbor instances to actually replicate content between themselves. And then the other one is internationalization: we've got a large community within China, so naturally we have two main languages within the project today, English and Chinese. In terms of the operational experience for Harbor today, it is deployed within containers, so we've got beta support for a Helm chart; we also use Docker Compose, and that's what deploys the numerous services that support Harbor.
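At its core, the instance-to-instance replication mentioned above is a reconciliation loop: diff two registries and copy what's missing. A toy sketch, with plain dicts standing in for real Harbor instances (real replication works over Harbor's API with policies):

```python
def replicate(source, target):
    """Copy image tags present in source but missing from target.

    source/target are toy registries: dicts mapping "repo:tag" to an
    image blob. The dict update stands in for an actual image push.
    """
    missing = set(source) - set(target)
    for ref in sorted(missing):
        target[ref] = source[ref]   # stand-in for pushing the image
    return sorted(missing)

primary = {"prod/app:1.0": b"layers-a", "prod/app:1.1": b"layers-b"}
replica = {"prod/app:1.0": b"layers-a"}
copied = replicate(primary, replica)
assert copied == ["prod/app:1.1"]
assert replica == primary
```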
J: So we're looking to expand on that experience and make it even more cloud friendly, where we start taking advantage of the container orchestrators to make it easier to actually operate Harbor in the long term. The other thing to highlight here, in terms of operational experience, is that Harbor really integrates many open source components that are proven components out there today. So I'll talk about that here on the next slide, on architecture.
J: So in the middle of this slide there's a gray box which represents all of the different Harbor components that make the service work. There are some key components, which are the ones in blue; that's the code for Harbor. So here we have a core service for your API and GUI, you've got your job service and your admin service, and then, on the right side, essentially the packaging of everything together. So the blue are the real Harbor components; in the pink we have the third-party components. So we have the Docker registry,
J: we've got the vulnerability scanning from CoreOS through Clair, and we've got the trusted content through CNCF's Notary project. Below the pink boxes you have the green boxes, which represent the persistence; we've got Redis and Postgres supporting these services. And then outside of the grey box, which is really outside of the packaging of Harbor, you've got reliant services such as your local or remote storage (block, filer, or object) to support persistence, and then on the left side you've got the identity integration for LDAP and Active Directory.
J: So, to give you a little peek at the web interface, this is what it looks like for a user who's gonna use the GUI. Now, Harbor is, of course, fully API driven, and the GUI sits on top, but this gives you a little snapshot of what this actually looks like. So a user would go in and go to the top left.
J: You can see it says "Projects", so they'd open a project, which is their tenancy (that's our form of tenancy), and then they'd have their repository in there, which is production/golang in this case. You can see that there's one image listed in this repo, and there's one version, 1.6.0. Of course, we have it go through Clair, where we're highlighting the vulnerability scanning, so you can see that there are seven vulnerability findings to actually be looking at, and this may actually matter in terms of the policy that we set for this user.
J: So we've got two key products that Harbor actually supports with its registry services. Over the past two years since it's been open-sourced, we've actually grown from two thousand to now four thousand stars. Future integrations are actually a really interesting category that we're looking at; in terms of roadmap, I think one of the most important things we're looking at has to do with the OCI initiative. So if you looked at that architecture slide, we've got Docker's registry, and there's an initiative within OCI to start formalizing the registry API.
J: That work on what the registry API looks like is taking place within the distribution group, so we're looking at a future integration for implementing the distribution spec as it gets finalized. Other than that, we've also got other key focuses from the cloud native ecosystem, whether it's Kubernetes, or the open source Open Service Broker API, or Helm, etc. So lots of really interesting stuff to look at. So Harbor is a trusted cloud native registry, but, you know, it also has the ability to focus on certain orchestrators within that ecosystem.
J: So one of those, you know, natural orchestrators for the CNCF is, of course, Kubernetes. I think taking advantage of an orchestrator, or, you know, building a great user experience around an orchestration tool, is something that's very valid for this project, but in the short term Kubernetes is the one we're looking at, working towards our 1.6.0 timeframe.
J: There are some key things we can highlight that we're focusing on. The first is storing Helm charts: being able to expand beyond what we do today, which is container images, for artifacts, and thinking about other things that may be important and aligned with the cloud native ecosystem. So storing of Helm charts, with the scanning and validation and everything we do with container images, is one of the things we're looking at delivering.
J: Beyond that, we have a custom controller that we're looking at, and I think there's lots of opportunity here, whether it's to enable the management of Harbors, to help with availability through expanding and adding service pods on demand, or to take advantage of the networking and storage features to scale out our Harbor instances as, you know, networking and storage requirements change. Another key thing for the controller is to be able to monitor the namespaces within Kubernetes.
J: So beyond that, custom resources could also be important to Harbor. If we're extending Harbor to be more self-managed, all done through the container orchestrator, then leveraging CRDs would allow users to consume and manage their own images. And then, lastly, we're looking at a consistent management experience by way of a harborctl ("harbor-cuddle", right), so we want to be able to provide a Kubernetes user the same type of experience they would be used to, all by way of this custom controller.
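As a rough idea of what the CRD-based experience described here could look like, a manifest might resemble the following. This is entirely hypothetical: the API group, kind, and every field are invented for illustration, and no such CRD is defined in the talk:

```yaml
# Hypothetical custom resource a Harbor controller might reconcile.
apiVersion: harbor.example.com/v1alpha1   # made-up API group/version
kind: HarborProject
metadata:
  name: production
spec:
  # Tenancy and policy knobs mirroring the features discussed above.
  members:
    - user: alice
      role: developer
  vulnerabilityPolicy:
    blockSeverityAtOrAbove: high          # gate pulls on scan results
  replication:
    targets:
      - https://harbor-replica.example.com
```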
J: So Harbor has, you know, a pretty big development team as an open source project. Internally, as far as I'm aware right now, we've got a product manager and we've got eight full-time developers, and others within VMware that help support the project. Over the lifetime of the project, since it's been open-sourced, we've had 18 releases and some 4,700 commits, and there are about 23 non-VMware committers, which is what we'd consider more material, where they've done 50-plus lines of code.
J: So here are the users; there are lots of big names we can highlight. Some are within China, some are across the US, but you see that top line: Trend Micro, BMC, Priority Payments. These are all probably bigger names that lots of us on the call recognize. I'll focus on two really quick: so, OnStar, which we're highlighting, and their Shanghai joint venture.
J: They have many development and test environments, and they leverage Harbor because of its ability to integrate within their CI/CD. So they want to be able to have, you know, their dev/test environments, they want to be able to replicate images among them, and they want a certain level of predictability of performance for the registry, because it's a part of their CI/CD, so they leverage it pretty extensively.
J: So, as a platinum member, they actually wrote up an article, which I have a link to at the bottom here, where they describe their journey from OpenStack to Kubernetes, and in that journey Harbor is called out as one of the key components that helped them make it. So lots of users within China, but we also have lots of other users across the U.S., you know, some referenceable and some not.
J: We believe that a thriving community collaborating on a trusted registry is important. From VMware's perspective, we plan to continue our current management and engineering support for the project under new governance. So that means, you know, that the eight full-time engineers and the product manager that we have, we expect they'll still be in place as we look at the future of it within a foundation. We're looking for collaborators for a few key reasons: we want to get people to help us lead on strategy and direction.
J: We want to accelerate future development, and we want to focus on interoperability with other CNCF projects. I believe pretty firmly that there's definitely some room to improve the user experience, take advantage of the CRDs, and really fit within these cloud native environments; there's lots of work to be done to make that interoperability happen. So we think that the foundation can help grow the Harbor community by providing a vendor-neutral home to support the project.
J: So here are a few other key things. First of all, Harbor is already licensed as Apache v2. It's got a trademark on the name, so, you know, we're definitely willing to transfer that to the foundation. We already have a couple of key TOC sponsors: Quinton Hoole and Ken Owens have already volunteered to help sponsor the project, so we thank these gentlemen for that and for the support we need moving forward, and we'll be targeting to create a proposal and submit it the week of July 9th.
C: This is Henry from China. I see there are a lot of users using Harbor on-prem, as well as deploying in the Amazon cloud and all kinds of cloud environments, Alicloud, all kinds. If they're deploying on-prem, they can use, say, a local file system for storage; if they're deploying Harbor in the cloud, they can leverage any of the storage in the cloud. So there are many different users, in the cloud or on-prem, using various kinds of storage.
C: Right now, the object storage, or the storage in the cloud, mostly uses the cloud credentials, say Amazon S3-based credentials. It has nothing to do with the user authentication part of Harbor, so that's a different authentication. So maybe it's not the same. Okay.
J: This one has been asked in the past, maybe not most recently, but do you have a quick perspective on other similar tools in the space? Granted, Harbor, you know, envelops other open-source tools like Clair and the registry and so on to bring together something of a hub, if you will. Yeah.
J: Can we answer that in the proposal? Maybe we'll follow up and give kind of a snapshot from our view of what's going on. I think my summary is: lots of folks are using the public registries, and some of these services may be provided through public registries, but this is really an open source, portable, trusted cloud native registry. So let's try to follow up with that in the proposal, because it can be a pretty detailed conversation. Is that fair?
J: I think it's the consistent user experience, right? So Harbor has a certain set of features, and Harbor can be run on top of any cloud you want, whether it's on-prem or on the public cloud, and it can also replicate, you know, amongst itself. So you get that consistent experience, you get the replication capability, and there's no guarantee, if you're going from one cloud to another today, as to what services they're gonna provide with their registry.
J: We're consulting with the sponsors right now to decide on what level to introduce it at, so we're open for suggestions and feedback on what the TOC thinks would be the right level. We think that when you look at the status of the project, in terms of the history and the users, we meet the objective criteria to be incubation, but we're looking for feedback from the TOC on that. Okay.
K: So I'll jump in here real quick, this is Matt Farina. One of the things that can be a problem is when something isn't in a vendor-neutral space: it can be hard to get outside maintainers. I've talked to some other projects about this, and that's one of the reasons they sometimes want to go towards a vendor-neutral space, to enable them to bring on other maintainers and contributors who otherwise couldn't do it.

Very well said, Matt, totally agree.
A: Thanks, folks. Alright, moving on: there are a couple of things to discuss, but mostly we're kind of going to wrap up today. If you go to slide 34, just a reminder that we have working groups that meet regularly; slide 35, we have a list of the different project backlogs; slide 36, we have three events coming up over the next twelve months. So we have our first event in China, November 14th and 15th, in Shanghai.
A: December 11th through 13th is our flagship event in North America, in Seattle, and then we're gonna do our European event next year in May, in Barcelona. So the CFP is open for Shanghai and Seattle; please submit talks if you haven't already. In terms of next meetings, our next formal meeting of the TOC will be July 3rd, and we'll be hearing from the TiKV project at that time. So, with that being said, we have about eight minutes.