From YouTube: CNCF TOC Meeting 2021-09-21
D
I've gotten a few regrets, but I think we should have enough to be able to have a really robust discussion this morning. Some folks will probably be joining at the half hour as well, but I know Dawn is here, because this is something that Dawn Foster probably wants to be able to talk about as well.
C
I'm actually just outside of Zurich right now, and I'm heading to Bern as soon as we come off this call. I'm going to jump on a train, but apparently it's only about an hour and a half from where I am, so...
C
Yeah, so I think this is pretty much an open discussion that came out of, you know... we have a few.
C
I was thinking of Saad, just because I know he's been involved in looking at those particular storage projects. But yeah, I mean, I'm using those as an example; the same principles need to apply across other projects, and we do have examples of multiple projects solving the same kind of problem. You know, we have multiple solutions for runtime, for example: containerd and CRI-O. I think that's a great example where they're both being used really heavily by different sectors of the ecosystem, and that seems good.
C
You know, they're kind of competitive, but they both have strengths. So I feel like that's a good example of healthy competition between two, you know, alternatives. But I wouldn't want to see 50 different runtimes, because how would you choose between them? Or maybe there would need to be 50 different use cases.
C
So we were just sort of laying out the scenario of, you know, we have a general problem of the balance between competing projects. Having some healthy competition can be good, but we don't want a giant number of equivalent projects that are hard for end users to navigate between. And then, Saad, we mentioned you because I know you've been looking at some projects that have quite a lot of similarities in storage, so...
F
Yeah, very timely. So I've been looking at Longhorn; they've applied to incubation. I've been doing the due diligence with them and worked with TAG Storage on doing that due diligence. I raised that question with TAG Storage as well, saying: hey, what happens if we end up with a bunch of different software-defined storage systems? As is going to happen, since OpenEBS is already part of the sandbox, and you know we already have Rook Ceph, and there will be others. So what do we do?
C
Just, Saad, this is a question open to whoever wants to get involved and throw in their thoughts on this.
A
I mean, that obviously is an unrealistic number, but if that were true, I personally wouldn't even think that's a bad thing. Because ultimately, given there's a very limited attention span for the whole ecosystem and the industry, that means if there are 50 things, or even 10 things, going on around a specific topic...
A
That's got to be a very, very interesting topic. So I mean, if all 10 projects, let's say, pass the criteria for incubation and even graduation (whatever subjective or objective criteria we've set), then I think they should advance, and we'll just, you know, do our best to shepherd those projects and to help people choose the right project. We've got to do all of that.
A
But ultimately, I mean, let's just say the interest wanes, the hype cycle ends, and then some of these projects will no longer be interesting, and then we will archive them. You know, so I don't know. That's almost like the worst-case scenario, but it still doesn't seem that bad to me, because the premise of all of that is this: there is something that exciting going on, you know. So anyway, that's my thought. Dims?
G
Hi. So I think we talked about this a little bit on another call; I forget exactly when. As a problem statement, I would say: somebody is coming in, and they want to know which project of a certain type they should look at, right? So one example I have of a community which did this in a good fashion was OpenStack Cinder, since we're talking about storage and Saad.
G
So that is the example where, you know, they had a matrix, and they said: here is the list of features that are available, that are possible, and here is a set of the drivers that are available, and you essentially have a check mark if it supports it and an x mark if it doesn't. So if we come up with a generalized matrix, for example for runtimes, where we say: okay, here are the different ways of looking at it, whether it is features or whether it is capabilities; and then we maintain it, and we essentially get input from the folks maintaining the runtimes, saying what are the things that you think are important to your runtime that we can put on the matrix; then we can use it for comparison, right? And then, when somebody comes in to evaluate runtimes, they'll go look at this matrix and say: okay.
G
This is important for me, that's not important for me. So let me pick one of these two, or one of these five, right? So it gives them a chance to look at what are the different things that I should be looking at, and which of these runtimes supports, or does not support, one of these things that I'm interested in, and it gives them a starting point. That is basically what I'm looking for: how do we get somebody started?
G
You know, once you show them: okay, evaluate containerd, or you evaluate CRI-O, because it has the set of things that you need; and then, if you don't really like it, see if there's something else that is available, you know, and go evaluate that. That was how I was thinking about it.
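To make the matrix idea concrete, here is a minimal sketch in Go of the kind of check/x support table being described. The runtime names and features are placeholders invented for illustration, not real TAG data or an agreed format:

```go
package main

import "fmt"

// Feature is one row of the capability matrix; the rows would be
// crowdsourced from the maintainers of each project ("what is
// important to your runtime").
type Feature string

// matrix maps a hypothetical runtime project to the features it
// self-reports as supported. All names here are placeholders.
var matrix = map[string]map[Feature]bool{
	"runtime-a": {"rootless mode": true, "VM isolation": false},
	"runtime-b": {"rootless mode": true, "VM isolation": true},
}

func main() {
	// Render a Cinder-style support table: one row per feature,
	// "v" if the project supports it, "x" if it doesn't.
	projects := []string{"runtime-a", "runtime-b"}
	features := []Feature{"rootless mode", "VM isolation"}

	fmt.Printf("%-15s %-10s %-10s\n", "feature", projects[0], projects[1])
	for _, f := range features {
		row := make([]string, 0, len(projects))
		for _, p := range projects {
			mark := "x"
			if matrix[p][f] {
				mark = "v"
			}
			row = append(row, mark)
		}
		fmt.Printf("%-15s %-10s %-10s\n", string(f), row[0], row[1])
	}
}
```

An end user evaluating runtimes would scan the rows that matter to them and shortlist the one or two projects that tick those boxes, which is the "starting point" being asked for here.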
C
I think that makes a lot of sense. My hesitation on that example is: I think it's really great until you scroll to the right, and you see (I didn't count them) something like 15 that have a tick in the top two columns and then a series of crosses. And I guess, as a naive end user, I'd be thinking...
G
Right, of course. So basically, we are crowdsourcing this matrix to the people who are doing the work, and we are not, as a TOC, maintaining the matrix or anything like that, right? So they work on it together. You know, similar to, let's take the two runtimes, right? So if we say that the TAG maintains the matrix, then all the runtimes, you know, when they have something new and exciting that they are happy about, they go to the TAG and say: hey.
F
Yeah, it's a good idea. The Kubernetes CSI is actually a good example of where we're doing something like this. I think there are over 100 CSI drivers now, and the Kubernetes CSI community maintains a table with all the drivers, links to them, and then a little matrix of what their features are.
F
I think that's working well overall. The interesting thing there is that, you know, the CNCF doesn't necessarily host all those drivers. Some of them are hosted within the Kubernetes project, but the vast majority of them are actually, you know, self-hosted by the companies themselves, and we just link out to them.
F
So that is a good example. I think, overall, what it boils down to is what Liz is saying, which is: what happens as an end user when I'm trying to make a decision? How do we make the end user's life easy, basically? So if we have some sort of guidance on where to start and how to pick, I think that seems like a good solution.
H
Yeah, I think, from the end user point of view, something like what Dims said kind of makes sense: having some kind of matrixy thing, and having the project maintainers kind of say why they think their project is special, like, I don't know, where it's the fastest, or the most available, or whatever; have some metrics. The thing that I think is interesting, given this path we're going towards, having the TAGs maintain something like this, is the scope for an incubating project.
H
When do we go from this thing doesn't exist to it does, right? Like, you have one project that does something, so you don't have it. But then the sixth project comes in, and we say: you pass all the incubation requirements, but we're not letting you become an incubating project until this matrix is created. Or, how do we get to the point that this thing exists, once we've crossed whatever that number is, without penalizing the nth project in its process towards getting incubated?
I just posted in chat: one of the things we've been trying to drive out of the Contributor Strategy TAG is getting people to better define their charters, what functionality is in scope and what's out of scope, and to help people differentiate this. Because, I mean, the problem we're trying to solve is that we're in a complex ecosystem with lots of overlapping functionality, so the better we can get people to document this in the READMEs, the better. I don't think it necessarily solves the matrix problem, but I think it gets us a step in that direction, and Josh Berkus has been driving trying to get this included in the project templates as well. So hopefully we'll get this in better shape. But we do have the document that I linked into the chat, which talks about how projects can better document some of this, so we can encourage people to use some of that.
C
One thing that's just crossed my mind is that sometimes less is more, right? You know, a simple, straightforward project that does one thing really well may be better for some applications than a project with a whole ton of extra bells and whistles. So I wonder if we need to somehow express that through the matrices or an accompanying doc. I think it was someone mentioning things like high performance.
H
That's the sort of thing I was hoping to get out of having, like, a blurb or something from the project. They might say: we check only one box, but we are the best at that box. So if that's what you want, use our project; but if you want something else, obviously use something different. And I feel like that's the kind of thing you can only get if you ask the project: give me a two-sentence elevator pitch for why you're different, as opposed to just the matrix.
J
There might be a concrete example; it's slightly off the beaten path, but there are various sites, usually called alternativeto.net or equivalent, and they're pretty much: oh, some tool they like isn't being produced anymore, or now it's commercial, or whatever the reason. And I kind of like the examples they have. They often have a sort of matrix, but it varies from one category of software, or even just two specific apps, to another. So it might just be a nice visual example, something that kind of overlaps with the matrix that's already been described.
J
It cuts down to: here are the key things you like. And maybe one will have three key features and the other just has two; we can't perfectly map them one to one, but it spells out that this one has more features and excels in these three categories users are interested in, and this one only meets one or two, but does them exceptionally well. So I found that to be a particularly interesting example.
K
Yeah, but I would like to try to manage it in terms of the work to be done. I think, you know, creating a matrix for everything would take a lot of time. So I like the idea of the blurb, yeah. I think it will vary between different projects.
K
They're essentially solving a similar problem, in a similar space, but they're solving the problem in different ways. So I think it might be good for the projects to provide, you know, a blurb of why, you know, those particular projects are better, or how they can be used, as opposed to a different project, for a certain kind of application.
C
I think in both the sandbox and incubation documentation... I think we sort of have that in the evaluation, but we don't really maintain it or make it available to end users in a consumable way.
J
Is there a specific intent to provide all the comparisons up front for consumers, or do we back away from that a bit and just give: here's the blurb, and leave it to end users to do their research, compare them, and arrive at said conclusion? It does kind of remove us from the position of appearing to endorse a particular one versus another.
C
Yeah, I mean, I guess there is already a signal of endorsement by having a project in incubation or graduation in particular, so we are providing some kind of endorsement. But I feel like right now there's a sufficiently small amount of choice that, although it's a huge landscape and pretty hard to navigate, it's not completely insurmountable. But I worry that we will get to a point...
K
And another question to me is whether that should be part of the incubation or part of the graduation, right? So...
J
I was thinking of two platforms that sort of come to mind. One was LinkedIn and one was Degreed, and I remember, with Degreed, employers often have their employees add a ton of skills, and I believe on some similar platforms you can even add, like, a number. So you have just a massive dictionary of thousands of keywords and skills that people want to associate with themselves (data mining, cloud computing, et cetera), and then they can add, like, sort of a score out of 10 to it.
J
I'm wondering if maybe it's sort of a community-driven: here are the key features that OPA, versus some other policy management engine, has that are relevant to that sort of domain. And then they can maybe add some sort of, maybe not numbers, maybe a scale: low, medium, high; good, great, best; or something. I wonder if it's sort of a community- or maybe maintainer-driven set of features that uses keywords, and they can throw the rankings in there.
L
Yeah, sorry, this might be a bit of a naive question, but I'm not sure where the notion of the responsibility of finding the best comes from here. It's been mentioned several times: a responsibility to find the best open source projects, or to pick individuals. That doesn't seem to be mentioned in the charter for the CNCF in any kind of way, and it doesn't seem like it's actually the responsibility of this group.
L
It's more the responsibility of the users to make the determination of what's best for them, and then, through that selection of what's best for them, there might be some aggregate sense of what's the most popular, or most useful, or most broadly applicable. But the judgment of saying "this is best" seems a little bit...
L
You know, sort of outside of the scope of what we're supposed to be thinking about. We're supposed to evaluate whether something is viable, technically capable, useful to the community, those sorts of things. But whether it's better than something else seems a little bit the opposite of what we should be doing, if we're trying to encourage as much adoption of cloud-native technologies as possible.
C
So I was just having a quick look to find where, you know, this is in the charter, and you're absolutely right: the charter doesn't really talk about qualitative assessment. But I know, particularly from speaking with Alexis when he was first sort of laying out his kind of vision for what the TOC was there to do, that it definitely is applying judgment; that, you know, we can't just have a tick box of criteria that it is supposed to be helping assess.
C
That needs to be balanced with the "no kingmakers" principle, and I completely agree; I don't think it's an easy line to draw. But I think what we're definitely not trying to do is accept every project that considers itself to be cloud native. I think we are looking for a quality bar, and that immediately says there's got to be some judgement about quality.
L
But it seems like the criteria we have already established for, sort of, you know, sandbox versus incubating versus graduated is itself a collection of hurdles that provides that validation and judgment, about the viability of the project, inside the guidelines of what we have. So, you know, it seems to me like the door should be really open at sandbox, because the movement to the next level is actually qualitatively assessed, with a process around it.
C
Okay, sorry, Adam, I'd just seen her comment. I don't think this conversation needs to be about changing the criteria for sandbox and the kind of bottom of the funnel, because that is very...
C
We've talked about that quite a lot, and I think the bar for entry for that is pretty low in some respects. But incubation is where we really start seeing people taking notice of what the CNCF is saying. You know, we say sandbox is experimental and we're not making any guarantees about that. But we are telling people that, you know...
L
Yeah, I mean, you know, like I said, this is a pretty naive statement from my perspective, having only been involved with you guys, following along, for the last year or so, so I don't have all the backstory for everything. But my general thinking about this, and my experience in other open source communities, is the marketplace of ideas.
L
But ultimately it is the end user's choice about whether those things are viable, and there are plenty of situations where a very, very reasonable open source project is still interesting and useful for a particular subsection of the industry, despite being completely replicated in another faction somewhere else. And that's completely fine in sort of the general sense of open source land, and in particular it's completely fine in the general sense of getting as many people to use cloud native. You know, it works as always, like...
L
If that thing works really great in that industry, then you don't need to try to encourage people to migrate away from it to something that's equivalent or more popular somewhere else. So that's where, sort of, you know, letting the users make the determination about what is valuable... I keep falling back to that sense of: this is the way it will happen in the end anyway.
L
So, like, you know, we can put our thumb on the scale, so to speak, or we can give them tools to help them make those determinations. But making those determinations ourselves, you know, feels like it's the wrong way around from the way adoption or choice is going to happen, from my perspective.
G
I don't think we're talking about making a determination as much as giving guidance, saying: if you are evaluating projects in this area, then look at these aspects, where things are the same or things are different, so you can make up your own mind. That's basically what I was looking for, rather than the popularity kind of thing, which can be gamified.
C
I think that's exactly right. It's trying to find a way of getting the real qualities of a product expressed in a way that consumers can understand the differences between them. And some of those differences might well come from the experiences of end users. You know, I think some of the metrics... I was just having a quick look at the things that Dawn had pointed out, you know, things like responsiveness to issues.
C
That's, you know, a pretty interesting metric for what it's going to be like for an end user: if they have problems, how responsive is the project going to be to those problems? I can see that being a really useful thing. I don't think that's quite what we want to have in the matrix, if we're still going with the matrix idea, but having this data available is going to help end users.
G
I think one idea we still seem to keep coming back to is getting the TAGs to do this, right? Each TAG should have a page where they have some sort of information about the projects that fall under them, and they do it in consultation with the projects that they are responsible for, right? And it could be a matrix; it might not be a matrix. It could be a set of blurbs with pointers back to the CNCF landscape, or the READMEs of the different projects, or whatever they feel like. But, you know, from our point of view...
G
We
should
say:
okay,
hey
tags,
go,
do
this,
have
a
page
where
a
cncf
end
user
can
come
and
look
and
get
a
sense
of
like
what's
going
on
here.
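As a rough sketch of what one entry on such a per-TAG page could hold, assuming a maintainer-supplied blurb plus pointers back to existing material rather than any ranking, here is one possible shape in Go. The field names and the example project are hypothetical, not an agreed format:

```go
package main

import "fmt"

// ProjectOverview is one entry on a hypothetical per-TAG overview page.
// The TAG curates the list; the blurb comes from the project itself.
type ProjectOverview struct {
	Name         string
	Maturity     string // "sandbox", "incubating", or "graduated"
	Blurb        string // two-sentence pitch written by the maintainers
	LandscapeURL string // pointer back to the CNCF landscape entry
	ReadmeURL    string // pointer to the project's own README
	DueDiligence string // link to the public due-diligence doc, if any
}

func main() {
	// A placeholder entry; every value below is invented.
	entry := ProjectOverview{
		Name:         "example-storage",
		Maturity:     "sandbox",
		Blurb:        "Block storage that does one thing really well.",
		LandscapeURL: "https://landscape.cncf.io/",
	}
	fmt.Printf("%s (%s): %s\n", entry.Name, entry.Maturity, entry.Blurb)
}
```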
K
They can ask the project maintainers, you know, to come up with that information, right? So I think some of the work will just be, you know, lining up all that information and making sense out of it. But, you know, most of the information will come from the project maintainers or the projects themselves.
G
Right, and basically be the referee: when people say, hey, this thing is unique in mine, we say, okay, collapse it into one line item rather than two line items. Something like that, right? Ricardo?
M
The one caution I'd have is: with things like features, either you have the feature or you don't. But if you get into these gray areas of, oh, how responsive am I, who's highest performing, then you're going to get into these battles, and the challenge you're going to have is: where are you going to get data that everyone can agree on as to what the right metric is for a given project? A feature, at least, seems relatively black and white: you have it or you don't, and the features are what end users are looking for.
G
Agreed. So let's take the community aspects out of this and leave it technical, and, yeah, probably that's the right way to do it.
G
So the TAG might come in and say the line item would be "supports high-performance mode," and then the link for each of the projects would be to how they do high performance and how they measure high performance.
N
Should we be measuring... and I kind of ask that question in terms of: I don't think we should be comparing subjective stuff. I think we should be comparing, you know, functionality, or other metrics where, you know, it's the number of end users, or the biggest deployed project, or whatever it is. But should we be measuring things which are completely subjective? Like, you could put 10 engineers in a room and spend three months trying to figure out the best way of measuring performance, and still not come to a conclusion.
C
...we were measuring as a kind of piece of information, you know. And maybe they make a claim that says: we believe this makes us really high performance in a certain scenario. And maybe the TAG can look at that and say: yeah, that's a reasonable claim; or, no, the data doesn't back up that claim; or, you know, it's too subjective and we're not prepared to publish that in our assessment of what differentiates. I think it should really be the TAG making the decision of: this is how we assess the way different projects differentiate from each other.
C
I don't think we need to overthink this too much. It doesn't need to be, like, the full and complete assessment of every possible quality of every project. It's more, I think, about... let's take this storage example, you know, as a concrete example: if you're looking at Longhorn and OpenEBS and... I've forgotten what the third one is.
C
The third one, I think, is the one I'm thinking of, but Rook is also... But if you're looking at those different things, what is it that they do that, you know, means it's viable to have four different projects? Why do we believe that's the case, and why would an end user be interested in one or two of those projects, but not the other two or three?
N
The only one that has some overlap, which we're about to consider, is OpenEBS, which is a block store too, but obviously has some differences. So, you know, out of those four things, given that these things are either sandbox or incubation or graduated, in the case of Rook, we kind of should know that one is just an operator, another is just a file system, right, and another is a block store.
N
I think that's going to be problematic, because we've had a discussion about this in our own TAG call a couple of weeks back, and one of the things that we were all deeply uncomfortable with is, you know, becoming kingmakers. And in some of this stuff, honestly, I'm not super comfortable being a judge of somebody else's marketing, or somebody else's performance claims, or something else. Because when it's functionality, you know, it does this thing or it does that thing, that's easy to talk about. But when you're comparing subjective stuff, it's much, much harder, and I'm not entirely sure we could do this without controversy at a lot of steps.
N
Right, but just to make the point here: we've spent quite a bit of time writing a performance white paper, for example, and we kind of highlight how hard it is to do an apples-to-apples comparison, because there are so many things to consider. And we actually conclude in the document that you should absolutely always ignore vendor benchmarks, and you should run your own tests in your own environment, because that's the only way to measure anything worthwhile. And so, you know, no, I actually don't feel comfortable pointing to benchmarks published by the vendors.
C
And I don't think we should be asking the TAGs to publish anything they're not comfortable with. I mean, you know, in some cases there may be a measure that makes sense, and in other cases maybe there just isn't, or maybe it just comes down to...
C
What is it that would make some people lean towards one and other people lean towards another? And it might be... I don't really know the storage market very well, so I don't... But I can go back to the runtime example: you know, if you're choosing between CRI-O and containerd, a big part of that choice is just going to be the ecosystem you're in. If you're in the Red Hat ecosystem, you're probably leaning towards CRI-O. I think that's the reality of the reasons why people lean one way or the other.
N
Things like ecosystem, or, for example, you know, objective things like scale, or security, or functionality.
N
Those are fine, because they're factual, they're objective. But if we're recommending one thing over another based on subjective stuff, I think that's where we open the proverbial can of worms, yeah.
C
And I absolutely don't want to ask anybody to, you know, do something they're not comfortable with. I think that's part of, you know, the TAG's ownership of that assessment: it should be, you know, this is what we're comfortable with. And it may be extremely factual, and it may even say: honestly, we don't have any reason to prefer one of these projects over the other, but they're both, you know... I don't know, this one's popular in Asia and this one's popular in the Americas, or something. I don't know, whatever the...
K
Yeah, I think it's not in the spirit of making a recommendation; it's more in the spirit of helping end users navigate the ecosystem and the landscape of projects. Like: oh, look at this, this information is here and there, and we can help you, you know, see what information is available, and you can make your own determinations.
H
I think I'd take that a step further and say that it should never be making a recommendation. It's not just not about that; it literally isn't making a recommendation. And, as an end user, I would be okay with something as simple as a list of projects, each one of them having, like, a two-sentence place where their marketing department wrote something. And if five projects each wrote "we are the fastest X in the world," well, as an end user...
H
I just know I have to do all my own research, because they're all saying the same thing. But I think, with a lot of these projects, they will have either slightly different ways of saying it, like "we use the least CPU" versus, I don't know, "we're the fastest over the wire" or something, or they will just say completely different things, like "we have the simplest possible runtime." And that might not be true, but at least it tells me immediately what that project focuses on, and, as an end user...
H
It helps me simplify it a little bit. But, to be honest, I wouldn't trust any of it anyway; I would want to do my own research, to the point that Alex was making. And even if the TAG came up and said, like, I don't know, "MagicFS is the fastest file system for your use case," I would say: I'm not sure that Spotify is doing the same thing as TAG Storage, so I'll still test it. But I'd love some place that doesn't just list all the storage projects.
B
If I see one product that only offers object storage, and all the others offer all three, I might be tempted: okay, maybe in the future I'll need them, so I'll pick that one. But maybe the one that has only object storage is the one that I need, because it's more performant or something. So those metrics are really important for end users as well. So maybe having these two lines where the project explains why they do things and what they focus on is also important.
C
There are some comments coming in about public visibility of due diligence documents. I think it's a great point. A lot of work goes into those documents; they are public, and people can look at them, but they don't, because they're not easy to find and it's not easy to compare them. And I think maybe having some way of pulling the salient pieces into, like, one page, where you can say: yeah, here's the sort of, you know, two-sentence description, and here's the kind of really high-level feature matrix.
C
Yeah, like maybe, as Ricardo is saying, a GitHub file with links to all of them, yeah. Whether we have them all in one place or per TAG... at the moment I'm picturing, like, a per-TAG, just an overview of the projects, and that could link into those due diligence documents.
G
Would it be better to make this easily web-searchable, rather than GitHub? That might be a barrier to...
D
The tricky part about that is that data is actually coming from the landscape. So if we want to be able to somehow put this... and I recognize we're getting into procedural pieces here, rather than the substantive "oh yeah, we should do this," so we might have to take this one offline.
C
For each area... I don't know whether this duplicates something that's already on the landscape, but I think we nevertheless need something somewhere that says: here are the projects that are incubating and graduated, and here are the key characteristics of them, to help you understand: this is block storage, this is, you know... And we don't necessarily need that for every project; it's more that, as soon as we start having similar projects that people get confused by, let's try and help people navigate that. Okay.
D
Well, the part where we say which ones are incubating, which ones are graduated, which ones are sandbox: that's already kind of done.
C
Yeah, no, I meant more in the landscape, where it says things like: this is storage, or...
N
Yeah, I think this is somewhere the TAGs and the TOC can help. I've found, when talking to projects, especially, for example, when they're doing things like submitting a sandbox proposal or sandbox form, that putting two or three sentences together that actually describe the project in a way that's not either technically obtuse or, you know, marketing-overloaded is really, really important. So to be able to say: look, this project does this, in this way...
N
For
this,
this
sort
of
use
case
is,
is
really
valuable,
and-
and
sometimes
the
project
needs
guidance
on
on
that,
because
you
know,
we've
had
a
fair
few
instances
where
projects
making
an
application
sandbox,
for
example,
just
completely
you
know
the
toc
actually
got
the
wrong
complete
end
of
the
stick,
based
on
the
description
that
the
project
supplies
so
so
actually
being
able
to
to
help
them
with
this,
I
think,
is
super
valuable.
C
I'm wondering whether, like, on the CNCF site, where we have places with, you know, like, project logos, we should be crafting those, you know, with the projects. Maybe, you know, the projects come up with it and the TAGs help review, like: what's the two-sentence description of each project?
C
Does it make sense... and I think this is a question really for Saad and Alex, and I can't remember who else; is Leo on, for storage? But maybe to look at storage as an example (sorry, we keep using storage as the example) and kind of flesh out: if we did a feature comparison chart, would that look right or not? And does it make more sense to have it as, like, a couple of sentences for each?
F
Yeah, got it. I think the overall guidance seems to be fairly clear, which is: let's, you know, let the best projects rise up; we're not trying to play kingmakers here. That's the ultimate goal. At the same time, we're trying to balance that with: let's make sure that end users have a clear idea of what they should use. That's where it gets a little bit tricky, especially if we get into head-to-head comparisons and, you know, subjective things, and we'll kind of try to use objective criteria as much as possible.
C
Awesome, thank you, Saad. That sounds like a really good summary. Brilliant. I think that was a really useful discussion. I think we've hit the hour on the head, so thank you so much, everyone, and see you again soon.