From YouTube: CNCF TOC Meeting - 2018-08-21
C
Because I was just curious, I had a question. It seems like the review has sort of been open, with no specific reservations expressed, since February. I was just wondering if there was any context there: why is it being held up for so long, without concrete objections that were raised, that I'm not seeing in the PR or the...
D
To focus on Kubernetes first, and then Prometheus came up after that. I did take a look in more detail at DevStats and some other information directly from GitHub and so on, and had a thread with Eduardo. Eduardo, yes, thank you. And I don't really have any reservations; there's information from that thread that might be useful to post to the PR comments, which explains some of the things, like that a lot of the activity in Fluentd has been in the ecosystem rather than in the core, which is useful.
D
They do have two active contributing companies as maintainers on the core project, though, and that seems to have a reasonable number of contributors for the size of the project. Fluent Bit, which is the smaller, newer project, doesn't have as much contributor diversity, but subprojects in Kubernetes don't all have equal numbers of contributors either, so I wasn't that concerned about that. Eduardo also felt that the number of contributors was in good shape and is sustainable, so I didn't really have any red flags.
B
Then we took it to talk about the auditing piece, but I think I'm in favor of adding that as a requirement for graduation going forward. Chris, I don't know; I don't want to force it on them if we make it a requirement, if we want to force it on projects that have already graduated, right? But I think it's good. It's a good capability to have available for companies to request, so maybe we make it like they suggested, you know.
A
Yeah, I'm happy to have Kubernetes go through it. CoreDNS went through it, which is technically, I guess, part of Kubernetes now. So, you know, it's up to the community really to make this request. We piloted it with a few projects in CNCF, and it's been actually super successful in my opinion, so happy to do it.
B
I think that's sort of a general thing. There have been some emails we've had back and forth the last couple of days related to that reading, and I think, you know, Brian Grant brought it up, and we seem to agree: I think we'll review those requirements we have for graduation and make them a little bit more clear, so yeah.
D
Or online, yeah; I just sent an email about that. Yeah, I would like to have a discussion about what we're looking for in some of these categories, since similar projects and similar categories have come up in the past, or we expect them to come up more frequently in the future, just so that we're prepared a little bit with what to ask some of these projects.
F
Awesome, thanks. Hi, so yeah, I'm Terence. I work for Heroku, a salesforce.com company, and am one of the co-creators of buildpacks. With me is Joe, who works at Heroku, is an architect on my team, and is the Java experience owner at Heroku. We have Stephen, who is a product manager at Pivotal and the Cloud Foundry buildpacks tech lead, and then we also have Ben, who works at Pivotal and leads the Cloud Foundry Java experience.
F
So, for those of you who aren't familiar with buildpacks: buildpacks turn your source code or artifacts into a running application on the cloud, and they really want to meet developers where they're at today, which is at the source code. It's been around for over seven years, so it's an old project at Heroku. We use it to basically turn our Ruby-specific PaaS into a polyglot PaaS.
F
So this enabled us to support other programming languages on the platform, and since then there have been a number of other players in the space, Cloud Foundry being one of them, that have really endorsed and adopted buildpacks. More recently, Knative added support for it as a way to handle source-to-container building, or one of the ways that you can go ahead and do that. Next slide. We have customers across both startups and enterprises, and the reason startups really love using buildpacks...
F
...is that it enables them to get this really quick time to production, have these fast, rapid iterations, and push out features and bug fixes. On the other end of the spectrum, for enterprises, it allows them to have a security and compliance story. The way this is achieved is by separating the application concerns from the rest of the infrastructure and operations concerns, and so this allows application developers to focus on actually building their application, and the application operations people to focus on actually operating and managing it running.
F
Next slide. Another reason people choose buildpacks is that there's a community behind it. Between both Pivotal and Heroku, we have 13 people that are paid full time to work on buildpacks, maintain them, and support them, to ensure that when you choose these buildpacks they're well supported and up to date.
F
So here's a list of the buildpacks that both Pivotal and Heroku officially support, and most of this stuff is centered around languages. But since we've kind of open-sourced it and introduced it into the community, the community has really shown us the thirst for utility buildpacks. So beyond just adding new languages, which they've done for ones that we don't actually support, they've added support for various tools, as well as off-the-shelf products that you can kind of just install without having to deal with the integration work...
F
...it takes to get this stuff up and running. These are examples of buildpacks being implemented, and so if you need to go off and build your own, it's not too complicated. But what we're proposing here isn't the actual buildpacks themselves that we want to contribute, but the API and infrastructure that actually power the buildpacks. Next slide. So, kind of stepping away from the high level and digging more into the buildpacks themselves...
F
...there is a big ABI compatibility guarantee with the underlying libraries in the operating system, and so this allows application operators to provide underlying OS image updates without the need for the application developers to really do anything, or rebuild their application at all. So if you need to patch a CVE, the app operators can do that in production across all the applications, and this is what Heroku and Cloud Foundry have been doing over the last seven years without incident.
F
And if you compare this to doing Dockerfiles, the whole image has to get rebuilt for every single app across your entire fleet of applications. So again, this allows app developers to focus on building the application, and app operators to focus on actually operating their application.
G
Hey, so thanks, Terence. So I'm going to talk about some of the drawbacks we have with buildpacks right now. Each platform has sort of a custom buildpack interface, so Heroku buildpacks often don't work on CF, and CF buildpacks often don't work on Heroku. It's possible to get that to work, but it's a lot more effort. The contract between the platform and a buildpack is very basic, so it really doesn't say much about how buildpacks behave. People think they feel like black boxes; advanced devs, you know, they...
G
...they say this is maybe not enough control for me. The simple contract can also make authoring and extending buildpacks, you know, not the easiest thing. They all work very differently; you may be working on something very complex or very simple. It's not an easy thing to make a buildpack really quickly and get it out there. Buildpacks often involve a lot of unnecessary rebuilds and data transfer, so if you have a thousand Java apps, you may end up storing a thousand JVMs on your platform.
G
Just because of the way the model works, you may end up transferring those JVMs back and forth a lot. There's not very much data deduplication, like I mentioned. It's also difficult to provide additional OS packages; you know, everything is sort of unprivileged, and the model works differently to sort of prevent that right now. Next slide, please.
G
So in January, the buildpack leads from Heroku and Pivotal sort of got together in New York and found that we had really similar problems and really sort of similar future goals. So we sat in a room for two days, and out of that we got this idea of Cloud Native Buildpacks, which is sort of this idea that buildpacks should be universal, use container standards, run anywhere, and take advantage of more modern and more uniform...
G
So we use a new technique that was sort of pioneered at Google; it's not Kaniko, but it lets us manipulate images inside of a Docker v2 registry. It comes from a newer feature of the Docker image format where we can just swap out individual layers to update them, without having to re-upload previous layers or regenerate layers that don't need to change. So this lets us really easily take advantage of, you know, ABI compatibility of OS libs, or other kinds of compatibility guarantees provided by different languages, and also really minimize...
G
...build time and data transfer compared to the previous model. We're targeting Kubernetes for this effort, but this should be compatible with any image-based platform. We really want this to work well with Helm and run on any OCI container runtime. The whole thing runs unprivileged, and it's just a series of images; it doesn't require Kaniko or img or buildah or anything like that, and there are no nested containers or anything like that needed. Next slide, please.
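The layer-swapping idea Stephen describes can be sketched in a few lines. This is an illustrative model only (the dict shapes and the `sha256:...` digests are made up, not the real OCI manifest schema): a registry manifest is essentially an ordered list of layer digests, so rebasing an app image onto a patched base image only requires writing a new manifest, and no layer blobs are re-uploaded.

```python
# Illustrative sketch of registry-level image "rebasing": swap the OS/base
# layers for patched ones while leaving the app layers untouched.

def rebase(app_manifest, old_base, new_base):
    """Return a new manifest whose base layers are replaced.

    Each argument is a dict with an ordered "layers" list of digests;
    app_manifest must have been built on top of old_base.
    """
    n = len(old_base["layers"])
    if app_manifest["layers"][:n] != old_base["layers"]:
        raise ValueError("app image was not built on this base image")
    return {
        # only manifest metadata changes; no layer blobs move
        "config": app_manifest["config"],
        "layers": new_base["layers"] + app_manifest["layers"][n:],
    }

old_base = {"layers": ["sha256:os1"]}
new_base = {"layers": ["sha256:os1-patched"]}
app = {"config": "sha256:cfg",
       "layers": ["sha256:os1", "sha256:jvm", "sha256:app"]}

patched = rebase(app, old_base, new_base)
print(patched["layers"])  # base layer swapped, jvm/app layers kept
```

Because only the manifest changes, patching an OS-level CVE across a fleet of app images becomes a metadata-only operation in the registry, which is the property the talk is pointing at.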
G
Happy to talk more after a few more questions. So our goals for this effort: one goal is to alleviate enterprise app dependency management pains. You know, if you're a large enterprise, patching a critical vulnerability for hundreds and hundreds of applications can take a really long time and rely on a lot of pipelines being green. With this, you can update lots and lots of images simultaneously, in ways that are safe.
G
So if you have an OpenSSL critical CVE, you can patch that really quickly, compared to, you know, hoping that your thousands of apps eventually get updated. We want to make all app developers' lives easier, not just enterprises. App developers don't want to worry about the patch version of Node or the patch version of Ruby they're running.
G
We think we can, you know, make a system that manages that for you. We want to unify the buildpack ecosystems between Pivotal and Heroku and the community, so that all buildpacks run everywhere, and, you know, encourage more contribution to buildpacks and creation of buildpacks. We want to sort of narrowly cover application builds, but we don't want to have a lot of opinions about how you deploy your images; there are great tools for that, like Helm. And we're not saying that we want to replace Dockerfiles.
G
This is just an alternative to Dockerfiles to meet, you know, certain use cases. Next slide, please. So now I'm going to talk a little bit about how they work, and I'll go over this quickly. There are four steps. The first thing that happens is detection, where a series of candidate groups are, you know, run against the app source code, and the first candidate group that says "yes, this works" gets to run. So in this example, this is an NPM buildpack, a Node buildpack, and Ruby buildpacks.
G
Let's say you have a Rails app that has Node.js on the front end and needs New Relic to, you know, do performance monitoring. These buildpacks would, you know, say "yes, I'm compatible with this application", and if they all agree on that, the group is selected. They work together to come up with a build plan that has the dependencies they're going to provide, the list of dependencies and dependency versions that they'll install during the build process. The next step is analysis.
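That detection step can be sketched roughly like this. The structures are hypothetical (not the actual v3 lifecycle API): each candidate group is tried in order, and the first group whose buildpacks all detect against the app source wins.

```python
# Illustrative sketch of buildpack group detection: try candidate groups
# in order; the first group where every buildpack detects is selected.

def select_group(groups, app_files):
    for group in groups:
        if all(bp["detect"](app_files) for bp in group):
            return group  # first fully passing group wins
    raise RuntimeError("no buildpack group matched this app")

# Toy buildpacks whose detect check is just "does this file exist?"
node = {"name": "node", "detect": lambda files: "package.json" in files}
ruby = {"name": "ruby", "detect": lambda files: "Gemfile" in files}

# Candidate groups, most specific first.
groups = [[node, ruby], [node], [ruby]]

# A Rails app with a Node.js front end matches the combined group.
rails_app = {"Gemfile", "package.json", "config.ru"}
chosen = select_group(groups, rails_app)
print([bp["name"] for bp in chosen])  # -> ['node', 'ruby']
```

A plain Node app (only `package.json`) would fall through to the node-only group, which mirrors the "first candidate group that says yes gets to run" behavior described above.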
G
So, during the build process, the buildpacks all individually take the build plan and metadata you saw in the previous step. They run in order, and they decide whether or not they're going to replace layers. If they want to replace a layer, they just create a directory with the layer contents; if they, you know, don't want to, they just don't create that directory. They can update the metadata about the layers in a series of TOML files that the metadata is stored in. They also have a local cache...
G
...they can use to speed things up even further, and the cache can also be used to supply dependencies to other buildpacks; there's a particular contract for that I won't go into too much detail about. At the end, you end up with a whole bunch of directories that replace the remote layers that need to be replaced, and the platform, the sort of lifecycle part of the API, replaces those layers remotely in the remote image. This uses that image layer rebasing strategy with the Docker registry.
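A toy version of that build/export contract might look like the following; the directory layout and file names here are illustrative, not the exact v3 spec. A buildpack creates one directory per layer it wants to replace, plus a TOML metadata sidecar, and the platform exports only the directories that were created.

```python
# Illustrative sketch of the build/export contract: a buildpack writes a
# directory per layer it (re)creates plus a metadata sidecar; layers with
# no directory are left untouched in the remote image.
import os
import tempfile

def run_buildpack(layers_dir, name, contents, metadata):
    layer = os.path.join(layers_dir, name)
    os.makedirs(layer, exist_ok=True)
    with open(os.path.join(layer, "bin"), "w") as f:
        f.write(contents)                  # the layer's filesystem content
    with open(layer + ".toml", "w") as f:  # TOML-ish metadata sidecar
        for key, value in metadata.items():
            f.write(f'{key} = "{value}"\n')

def layers_to_export(layers_dir):
    # Only directories that were created become replacement layers;
    # the .toml sidecars are metadata, not layers.
    return sorted(d for d in os.listdir(layers_dir)
                  if os.path.isdir(os.path.join(layers_dir, d)))

tmp = tempfile.mkdtemp()
run_buildpack(tmp, "jvm", "openjdk-8", {"version": "8u181", "cache": "true"})
print(layers_to_export(tmp))  # the platform swaps just these layers remotely
```

The exported layer list is what the lifecycle would then push into the registry using the rebasing strategy described above.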
G
So we have a bunch of planned contributions here; a lot of these are already finished or nearly finished. We currently have a working document for a v3 API specification for buildpacks that's stable, but it hasn't been completely, formally specified yet. We have a reference implementation of this that's finished already, called the buildpack lifecycle v3. It's just a series of images; they don't require privileges, as I mentioned, and you can plug them into any platform if you're willing to coordinate them yourself, but we'd like to have some tools to do that for you too.
G
So we are working on a pack CLI that has just an early alpha out. You can barely do anything with it, but it does work with one sample v3 buildpack at this point, and it coordinates these images. The pack CLI does this locally on a Docker daemon for now, but we're also going to work on a controller; this hasn't been started yet, a controller for buildpack cloud builds on Kubernetes. That's the next step.
G
We want a controller that will automatically coordinate this for you on your Kubernetes cluster and potentially update images or, sorry, update deployments. We also plan to provide cloud builder images, not just for the sample v3 buildpacks, but for the current v2s: we call the Heroku buildpacks v2a and the Cloud Foundry buildpacks v2b in our specification. We plan to make cloud builder images for them as they are today, so that you can start using these quickly without us needing to port them to v3, which will happen...
G
...you know, over some amount of time. We plan to do that soon, but we wanted to make this compatibility layer there so you could use them right now, until we finish that process. And you can use those today, right now, too, if you want to; the Cloud Foundry ones work in Knative, they're already in the project. Heroku also has an open beta of a new curated registry for community-created buildpacks that they want to contribute to this effort also. It's really exciting.
I
This is Dan Kohn, just a quick question on that that I'm unclear about: the existing Heroku buildpacks that are in production today, like the Ruby or the Node ones that are used by thousands or tens of thousands of folks, are those getting contributed into this project? Or are you suggesting a model for how they can be built, but they would still live outside...
G
...of this project? It's the latter, so currently we're not contributing the Cloud Foundry or Heroku buildpacks; they're sort of large projects that have, you know, individual communities of their own. You know, we may have buildpacks in this project later; it's just not an initial goal. The things that I'm talking about, the cloud builder images, are just...
G
...a compatibility layer we made to run CF or Heroku buildpacks and images in an unprivileged way on these platforms. It doesn't quite use the v3 API yet, but it will let you use those buildpacks right now outside of Heroku or outside of Cloud Foundry. So it's the compatibility layer; that's the contribution.
G
Thank you. Next slide, please. So we have two sort of key needs if we were to join the CNCF. One is we need a neutral third party to foster collaboration. We don't want this to be a Pivotal thing or a Heroku thing or a Cloud Foundry thing; we want this to be an open project that anybody can contribute to fairly, and, you know, we want buildpacks to run everywhere, not just on particular platforms. And we need adequate vendor-neutral infrastructure, you know, if we're going to host a registry, build packs for CI/CD, all that stuff.
G
I want to be able to ship buildpacks and their dependencies quickly to people. Next slide, please. Finally, we think there are a lot of benefits of CNCF inclusion for us and, hopefully, for the CNCF. We think a uniform buildpack interface specification, as part of a CNCF project, would allow the CF and Heroku buildpacks to run on any platform, and greatly improve cross-compatibility and interoperability of buildpacks and community buildpacks.
G
Also, this would facilitate the adoption of container standards, because there are a lot of users using buildpacks right now, and buildpacks don't use all the container standards that, you know, we would currently like to see. And so, by making those users use this new buildpack specification that uses those standards, we will pull more people into this ecosystem. The association with Kubernetes and other...
G
...CNCF projects, we also think, will encourage wider contribution, and hopefully dispel myths that buildpacks are very platform specific, that, you know, you have to make your app a Heroku app or a Cloud Foundry app. You just make a buildpack app; it doesn't have much in the way of very opinionated configuration. And finally, we are looking for TOC sponsors for the CNCF sandbox, so a sponsorship ask, and that's about it. That's it, yeah.
G
So it actually creates a new image SHA, so you can choose whether or not you want to upload it with a different tag, or, if you want, replace the tag and point the tag at the new SHA. That means if you use SHA-based deployment, you do have to update your deployment, or, if you're using tags, you have to make sure the image gets to the edge somehow, right, so there are different strategies for that. Could be nice. Okay, thanks.
J
You know, there's a bit of knowledge in the community about it already, but we'll do a quick introduction. So Rook can be thought of as a cloud native storage orchestrator, and what I mean by that is that Rook provides a platform and support for a broad set of storage solutions to be integrated into cloud native environments, and the way it accomplishes that is with a lot of automation.
J
Next slide, please. So we're going to be talking mostly today about the growth and the progress of Rook over the last seven months, since we were initially accepted into the sandbox stage. So since then, looking at a lot of the numerical metrics and data growth in the last seven months, what we see here is that a lot of these metrics have doubled or tripled...
J
...in the last seven months, like the GitHub stars, the number of contributors to the project, and, you know, Twitter and Slack members and followers. There's also been some 10x growth, which I find very interesting, in the number of container downloads in the last seven months, which speaks to, you know, the community growing and the popularity of the project growing. Another thing to note here in the growth of the project over the last seven months is that we have added another maintainer to the project, to bring us up to four maintainers...
J
...from three organizations now. Next slide, please. So we can talk about some of the specific accomplishments now, since Rook was accepted. Since then we have done two releases: the 0.7 release in February and the 0.8 release in July, about a month ago, which represent a total of 545 commits between the two of them. Another thing that we have done in the last seven months is that we have gone ahead and implemented a formalized project governance policy, and, you know, that covers things like adding and removing maintainers from the project.
J
As we talked about earlier, we had originally focused exclusively on Ceph, and now we have a framework for other storage providers as well, so you can start thinking about Rook now as a general-purpose cloud native storage orchestrator. And, you know, some of the benefits or the aspects of that framework are kind of normalizing the way that storage resources for a distributed storage system would be declared, and some of these patterns around operators...
J
...you know, software automation to deploy and maintain distributed storage systems; the plumbing for those operators to talk to the Kubernetes API; a bunch of common resources, policies, and logic that can be shared amongst the various storage providers; and then also an integration testing framework and environments that these storage providers can all reuse. So with that framework, we have added in the 0.8 release support for both CockroachDB and Minio, and NFS, Cassandra, and others are all coming along.
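For context, the kind of declarative storage resource Rook normalizes looks roughly like the following Ceph cluster custom resource. This is illustrative only; the field names approximate the 0.8-era CRD and may differ in other releases, so the Rook documentation has the authoritative schema:

```yaml
# Illustrative example of a declarative Rook Ceph cluster resource;
# check the Rook docs for the exact CRD schema of your release.
apiVersion: ceph.rook.io/v1beta1
kind: Cluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3            # number of Ceph monitors to run
  storage:
    useAllNodes: true   # let the operator place storage on every node
    useAllDevices: false
```

The operator watches resources like this and drives the deployment and ongoing management of the storage system to match the declared state.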
K
Sure, that's great, because that was definitely a subject of discussion earlier in the year, and early last year when we talked about it for the first time: are people actually deploying on other storage backends? I assume that most of your production deployments are still on Ceph, but are you seeing interest in production deployments on these other backends?
J
Yes, we have seen interest in CockroachDB and Minio, and then some of the other platforms as well. They're both, you know, very early alpha-stage support, so they don't have nearly the maturity that Ceph does, or the community built around it. So, you know, the majority of our downloads are definitely Ceph right now, but we are seeing some traction growing from some of the other platforms. And something I find interesting as well is that a lot of the support for more providers that's coming along has all been community driven.
J
That's a great question. So I would say that, you know, the generic, kind of general abstractions across all these providers are still early in their implementation, and we're iterating over that, and it is growing. There is absolutely platform-specific logic that goes into the specific deployment and management of the various storage providers. So there's plenty of specific logic that, you know, you would use to bootstrap and manage a Ceph cluster over time that does not necessarily have applicability to other storage providers.
J
So, amongst different backends, there are a lot of ways to kind of specify how they should be run and managed: you know, how to select and provision the raw storage resources, how to specify placement, or, you know, resource consumption, or how to set up networking. So a lot of it is more deployment-time focused, I would say, and less of it is ongoing management time.
J
I would say that there is a separation there: the, you know, infrastructure sort of bootstrapping or initial provisioning of resources is definitely a different layer than the later runtime, day-two operations, and then that kind of bleeds into where the other layers, or higher layers, of provider-specific implementations do more of the ongoing management tasks. So there is definitely some clear separation of layers between them; it is stacked that way.
J
So, yes. So then, back to the progress since our sandbox entry: the Ceph support that we have has been graduated from alpha to beta, and, you know, what that means for us is that the community has put a fair amount of miles into the Ceph implementation, and the reliability of it has increased a lot. So going forward, the API for specifying and managing a Ceph cluster will remain stable, and any updates to it will be done in a way that honors backwards compatibility.
J
Some of these storage providers have more control over the specific permissions and operations that the operators will be performing, and then we're also able to incorporate, you know, pod policies and things in other environments, like support for OpenShift. Next slide. So let's go ahead and start talking about the adoption that we're seeing for Rook. So this first slide here is about some of the bigger names that are in the process of evaluating Rook, and they have deployments in their environments; a lot of those right now are running internal workloads.
J
You know, they're relying on them internally for some of their business data, but they are not yet running customer-facing services on their deployments. One thing we have seen is that, you know, storage systems in general can tend to have longer evaluation periods, and they're put through a bit more rigorous validation when they're being evaluated to take them to production. So we're seeing folks get some, you know, good mileage on them, and really kind of vet them for a good long period to get their confidence up.
J
Something that I'd like to share too is that there is an upcoming CNCF survey that was taken at KubeCon that's going to be released, I think, next week, and amongst some of the questions there about adoption of various cloud native storage solutions, Rook had the highest rates amongst all the projects that were in the selections there. It also independently confirmed for us that there is, you know, production usage out there; I think it was between 10 and 15 percent of deployments are now seeing production usage.
J
So let's go ahead into the next slide. So these are some of the specific use cases that we have from, you know, enterprises that are part of our community, that we reached out to or that have reached out to us, and we'll go into some of the details of these specific companies' deployments and their use cases. But, you know, these are some of the users or adopters of Rook that we've had a lot of experience with, and we're interested in what their deployments are.
J
There is, you know, a note at the bottom there: there are also additional adopters of Rook, especially those that have on-premises deployments that are running this solution on site, that are not yet ready or willing to share some of those details publicly right now, but, you know, there are more adopters there. So let's go ahead to the next slide and start getting into a couple of the details of some of these specific adopters.
J
So SAP Concur is, I believe, the biggest deployment of Rook that we're aware of right now, both in terms of the node counts and the end users being serviced by it. So they're evaluating Rook right now across three hundred nodes in their environments, and about 10,000 or so users are being serviced, you know, with Rook providing the underlying storage for about 400 apps in their environment.
J
So this kind of speaks to a couple of their experiences here; it kind of speaks to the, you know, the ease of use and the reliability that people are experiencing with Rook, where a lot of these, you know, administrative tasks or operational tasks for storage systems are automated, and they're able to take a more hands-off approach to running, you know, a fair amount of scaled-out storage. I also like that quote from one of the senior systems engineers over there...
J
...at SAP Concur, that really speaks to the healthy community that we've kind of built in Rook: you know, we have a lot of folks who are, you know, starting to help each other in the community. It's a healthy growth of a community where people are, you know, solving each other's problems and really helping each other out. So let's move ahead a little bit quickly here now, because I think I'm running out of time. Well...
J
Yes, let's get quickly through this. So some of the other users here: the Pacific Research Platform is funded by the National Science Foundation, and it is a large platform that's being built for researchers, amongst a lot of the University of California schools and other universities around the country, to collaborate with large datasets and machine learning and simulations, you know, image processing, all sorts of stuff there. Let's go ahead to the next slide.
J
Another one of our adopters is the Centre of Excellence in Next Generation Networks; it's in Canada, and it's a consortium of member organizations, like the big telco and device companies such as Bell Canada, Rogers, Cisco, and Nokia, that are working on an ecosystem to grow the Canadian IT sector.
J
They have, you know, both storage-focused nodes and compute-focused nodes, and, you know, the ability to select and, you know, isolate or use these storage resources, while being able to also take advantage of running workloads on the heavier compute-focused nodes that, you know, can access the storage from the storage nodes. It's kind of a hybrid mix, you know, a hyper-converged environment, to be able to take advantage of both storage and compute.
J
So they have the world's most advanced digital everyday assistant, where documents, invoices, forms, all sorts of things, with more than 10 million, you know, user uploads of those types of files, get uploaded to their system and stored on this persistent storage. Here, you know, it's the smarts or intelligence that the operators provide that kind of gives them, you know, the ability to manage their data with ease.
J
You know, they can run with all that automation in a very easy manner, but they also have the ability to dig into some of the finer configuration options, you know, in more advanced scenarios as needed, too. So that flexibility of what the operators can provide is helping the Gini folks a lot. I think that was the last slide, so we can have time for any questions, or move along to the next agenda item.
L
Thanks for the time. So security has been a cross-cutting concern across multiple infrastructures; I think we brought this up before, and we started pulling together a whole bunch of people who are aware of security, and then we started this working group about a year back, I think, maybe less. So the people that came together understood that this is a cross-cutting concern, and then we have the various folks from different organizations...
L
...Google, startups, and, in addition to all these organizations, I think we've also tried to bring in an external perspective from NIST and the NSF, who have been wrangling with this problem for quite a while, you know, whether it's their big data team or their security team. So all of this has led to us understanding that there is a broader thing here in security to understand and disseminate; obviously, some of these will be far advanced of many other infrastructures that are part of CNCF.
L
So one of the goals here is to enable cross-pollination of the learnings from one infrastructure to the other infrastructures that are going through the same problems. And we have been running this for a while, so there's a lot of information on GitHub; the link is dropped there. And thanks again for signing up for the sponsorship for this. I'd be happy to answer any questions.
B
And then, just before we actually get to questions, just so everyone knows: in the reference architecture discussions we're having, we have split out security as its own category now. I know we need to review the categories with the TOC at some point soon, just to get some feedback on them, but security is sort of its own area now, because of all the obvious implications about security in the industry.
D
B
D
D
M
We actually just had a great conversation with Tim Dempsey goth about the opportunity to create a subgroup, combined of members from each group, who'd like to brainstorm some of the open questions. You know, there are some identity concerns, like how do we articulate how identity and authentication work, because that has to ideally work across, maybe broader than, the Kubernetes ecosystem. And if we can get this stuff to be interoperable, it would be ideal.
M
H
L
H
The one thing, you know, that also reinforces this: the Kubernetes policy working group had a proposal to create a working group here at the same level as well, and we've joined forces with that group to form one. So policy, you know, policy controls, are in scope after that merger.
L
To address Brian's question a little further: no, it's not full, it's not completed, but I think in terms of laying out the security problem itself, there is identity, there's authentication, there's authorization, there's policy (which cuts across, like authorization, to some extent), then access control is a separate part, and auditing is another piece.
L
Compliance is a big beast by itself. So there are various aspects of security, and some of them are touched on in some parts of the infrastructure to different depths. But even the understanding that these are layered problems, versus some of them being overlapping problems, is less clear to many people, and then there are endless discussions around the various models and the various overlaps that exist in this, right.
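The layering described above (identity, authentication, authorization, auditing) can be sketched as a minimal request pipeline. Everything here is an illustrative toy; the names, the in-memory stores, and the `Request` shape are assumptions for the sketch, not any CNCF project's API:

```python
# Minimal sketch of layered security checks: each layer can reject a
# request independently, and auditing records the outcome of every layer.
from dataclasses import dataclass, field

@dataclass
class Request:
    user: str                 # identity: who is asking
    token: str                # credential presented for authentication
    action: str               # what they want to do, e.g. "read", "delete"
    resource: str
    audit_log: list = field(default_factory=list)

VALID_TOKENS = {"alice": "t-alice", "bob": "t-bob"}            # authn store
PERMISSIONS = {("alice", "read"), ("alice", "delete"),
               ("bob", "read")}                                 # authz store

def authenticate(req):
    return VALID_TOKENS.get(req.user) == req.token

def authorize(req):
    return (req.user, req.action) in PERMISSIONS

def handle(req):
    # The layers are ordered: a failed check short-circuits, but the
    # audit trail is written regardless of outcome.
    for layer, check in (("authn", authenticate), ("authz", authorize)):
        ok = check(req)
        req.audit_log.append((layer, req.user, req.action, ok))
        if not ok:
            return "denied"
    return "allowed"
```

The point of the layering is visible in the audit trail: a bad credential never even reaches the authorization layer, while a valid user with the wrong permission fails one layer deeper.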
M
You know, we've had a lot of conversations about RBAC versus ABAC, and they're complementary concepts, but really clarifying when and how and why one would secure a system in different ways. And I think we're exploring, we're focusing on the big problems first, and there are these fuzzy edges. You know, we've had conversations: physical security, probably not. However, if, in order to move to cloud native, part of your infrastructure has to connect with systems that ensure physical security on-prem,
M
Well, maybe we have to kind of touch those systems in some ways. So it's really about: how do you secure, how do you reason about whether your cloud-native deployment is secure? And how do you keep your end users safe in that, right? So we are explicitly touching on: how do applications become secure? How do they ensure that their end users have the right security controls and the developers have the tools they need?
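The RBAC-versus-ABAC distinction raised above can be shown in a few lines. This is a deliberately tiny illustration, not any project's real authorizer: RBAC grants permissions through roles bound to users, while ABAC evaluates a policy over attributes of the request itself.

```python
# Toy contrast of RBAC and ABAC authorization decisions.

# RBAC: permissions hang off roles, and users are bound to roles.
ROLE_PERMISSIONS = {"viewer": {"get", "list"},
                    "editor": {"get", "list", "update"}}
USER_ROLES = {"alice": {"editor"}, "bob": {"viewer"}}

def rbac_allows(user, verb):
    return any(verb in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, ()))

# ABAC: each policy is a predicate over attributes of the whole request,
# so it can use context (ownership, sensitivity) that roles never see.
ABAC_POLICIES = [
    # anyone may read a non-sensitive resource
    lambda a: a["verb"] in {"get", "list"} and not a["sensitive"],
    # an owner may do anything to their own resource
    lambda a: a["user"] == a["owner"],
]

def abac_allows(attrs):
    return any(policy(attrs) for policy in ABAC_POLICIES)
```

As the discussion notes, the two are complementary rather than competing: RBAC answers "what may this role ever do", while ABAC layers request-time context (who owns this, how sensitive is it) on top of that baseline.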
A
D
A
I want to be sensitive of everyone's time. Since this has a TOC sponsor already, I'm going to just move the discussion to the mailing list and get a little bit more feedback before seeing if we call for a formal vote to accept this. Okay, other than that, I think we need to wrap things up, since we're a couple of minutes over. So I appreciate everyone taking the time to present today, and check the mailing list for follow-up. So thanks again, Ken, for taking the place of Alexis today. Thanks, guys, take care.