From YouTube: CNCF TOC Meeting 2017-07-11
So, on the agenda: we've got some project presentations today, and a few reminders first on the projects side. Chris and I have updated the project priorities and review-backlog sheets, and I would be grateful if people who are interested in the question of which projects should be joining or applying to join, or who are interested in putting forward their own projects,
would take a look at those sheets. If you see gaps or have questions, either comment in the sheets or get in touch with me and Chris directly, and we'll try to get things scheduled.
terms
of
storage
and
networking.
We
are
hoping
like
a
readout
from
the
storage
working
group
and
next
Tuesday
on
the
strategy
for
onboarding
projects
in
those
categories
and
there's
been
requests
from
right
discussion
of
that
strategy.
So
it's
important.
If
you
are
somebody
with
a
dog
in
that
race.
Thank you. So please be on that call next week if you care about storage or networking. I don't believe we'll have much time for discussion of those topics next week, but we can get it rolling, and I'm very keen that we have a face-to-face discussion about those things in Austin in December. Okay.
Sounds good. Well, thank you for making the time this morning. I know we have a tight schedule, so I'll jump right in. Let's go to slide 11, which is designed to give you a little bit of context for what this is all about.
SPIFFE is a set of specifications that try to describe how we think about identity, specifically in the context of cloud-native workloads. It's comprised of a few items. The first is the SPIFFE ID itself, which describes the naming format and conventions of the actual ID.
We then also describe how we think that ID should be encoded so that it can be cryptographically verified in a production system; as of now, we're using X.509 as the basis in which we're going to encode these identities. The specs also describe how we think workloads should be able to retrieve these identities, the SVID, this SPIFFE Verifiable Identity Document, from an actual running system, and that's described in something we're developing called the Workload API.
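As a rough illustration of the naming format, a SPIFFE ID can be read as a URI with a spiffe scheme, a trust domain, and a workload path. This is a loose sketch of the convention as described in the talk, not the official validator; the specific checks and the example ID are our assumptions:

```python
from urllib.parse import urlparse

def parse_spiffe_id(s):
    """Split a SPIFFE ID of the form spiffe://<trust-domain>/<path>
    into (trust_domain, path), or raise ValueError.

    The checks here (spiffe scheme, bare host as trust domain, a
    non-empty workload path) are an illustrative reading of the
    naming conventions, not an exhaustive validator.
    """
    u = urlparse(s)
    if u.scheme != "spiffe":
        raise ValueError("scheme must be 'spiffe'")
    if not u.netloc or "@" in u.netloc or ":" in u.netloc:
        raise ValueError("trust domain must be a bare host name")
    if not u.path or u.path == "/":
        raise ValueError("workload path is required")
    return u.netloc, u.path

print(parse_spiffe_id("spiffe://example.org/billing/payments"))
# → ('example.org', '/billing/payments')
```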
SPIRE is a toolchain designed to actually implement these specifications, so that you can provide identity and attestation for workloads in a myriad of computing environments, starting with Google Cloud Platform, working very much with Kubernetes, and going beyond that as well.
Let's go to slide 12. SPIFFE started off last year; you may or may not remember this document from Joe Beda. I picked this up with Joe early on last year and continued with it, and we decided that it made sense to really try to make a go of this project, because we felt there was a critical mass of people interested in this, and that there was going to be an even greater set of people interested as well.
Going to slide 13: SPIFFE is based on a system at Google called LOAS, which stands for the Low Overhead Authentication System.
This is a picture of Google's Niels Provos last year, who publicly mentioned LOAS officially for the first time; up until then, LOAS had not really been discussed publicly, except for a couple of instances in various blog posts and things of that sort.
It is a critical platform at Google that provides a tremendous amount of scalability, security and efficiency for developers throughout the platform as a whole.
Let me hand off to Andrew, who'll take us through from slide 15; at this point we'll jump into SPIRE a little bit.
Hi folks, can you hear me? Fantastic. So it's worth spending a little bit of time on what LOAS and, by extension, SPIFFE and SPIRE actually do. We talk about these things in terms of identity issuance, but really what SPIRE, and SPIFFE, is doing fundamentally is providing a framework for bootstrapping trust in the kinds of deeply heterogeneous environments that we're starting to see
modern enterprises needing to tackle. SPIRE helps simplify that problem pretty significantly in a couple of ways. Firstly, for services that need these identities, that need this initial bootstrap, this initial trust signal they can use to establish trusted communication between themselves, we expose this low-level, local (that is, on the same kernel) API called the Workload API.
It's a very simple API. It doesn't require any authentication, which in turn means the calling system doesn't need any pre-shared keys in order to access it.
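That same-kernel, no-credential property can be sketched with a toy Unix-domain-socket service that identifies the caller from kernel-reported peer credentials rather than a presented secret. This is an illustrative sketch only: SO_PEERCRED is Linux-specific, the UID-keyed registry and the SPIFFE IDs are invented, and real SPIRE does far richer attestation than this:

```python
import os
import socket
import struct
import tempfile
import threading

SOCK = os.path.join(tempfile.mkdtemp(), "workload_api.sock")

# Toy registry mapping a caller's UID to a SPIFFE ID. In SPIRE the
# mapping uses much richer process, node and platform selectors.
REGISTRY = {os.getuid(): "spiffe://example.org/demo/workload"}

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK)
srv.listen(1)

def serve_one():
    conn, _ = srv.accept()
    # The caller presents no token or key: the kernel tells us who it is.
    creds = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                            struct.calcsize("3i"))
    pid, uid, gid = struct.unpack("3i", creds)
    conn.sendall(REGISTRY.get(uid, "spiffe://example.org/unknown").encode())
    conn.close()

t = threading.Thread(target=serve_one)
t.start()

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(SOCK)
identity = cli.recv(256).decode()
cli.close()
t.join()
srv.close()
print(identity)
```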
In a lot of interesting ways, the Workload API is consistent wherever it's accessed, no matter where it's called from, but the crucial capability is that we've been able to abstract trust across a range of different environments.
We do that by having a central registry of identities, and policies that describe how those identities should be issued. We describe those policies in terms of both process-level metadata and platform-level metadata, whether a workload happens to be in a particular AWS auto-scaled instance group, for example.
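The registry-plus-policies idea can be sketched as entries that map selector sets to SPIFFE IDs, where an ID is issued only if the observed selectors satisfy the entry. The selector names loosely mimic the kinds of process- and platform-level metadata mentioned here; the entries and values are invented for illustration:

```python
# Toy registration entries: an ID is issued only to a workload whose
# observed selectors include every selector listed on the entry.
ENTRIES = [
    {"id": "spiffe://example.org/payments",
     "selectors": {"aws:asg": "payments-asg", "unix:uid": "1001"}},
    {"id": "spiffe://example.org/frontend",
     "selectors": {"k8s:ns": "web", "k8s:sa": "frontend"}},
]

def issue(observed):
    """Return the SPIFFE IDs whose policy matches the observed selectors."""
    return [e["id"] for e in ENTRIES
            if all(observed.get(k) == v for k, v in e["selectors"].items())]

print(issue({"aws:asg": "payments-asg", "unix:uid": "1001",
             "unix:gid": "1001"}))
# → ['spiffe://example.org/payments']
```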
More broadly, what we've provided is a rich plug-in infrastructure that allows this Workload API to be served securely in a broad range of different environments. On slide 16, we'll start diving into some of the specific use cases, Kubernetes specifically. It can be used in a couple of interesting ways.
One of them is, of course, to issue identities, these SVID credentials, to workloads that are running inside Kubernetes, that is, to a container that's been scheduled by Kubernetes.
We can issue identities to workloads running in a single cluster, across multiple clusters, and indeed between Kubernetes clusters and workloads that are running outside of Kubernetes.
Another use case is to actually simplify the deployment of distributed systems like Kubernetes themselves: being able to identify the nodes and the role they play, and helping bootstrap the PKI infrastructure that's necessary to run a system like that.
You can extend that same capability to any other kind of distributed system as well; tools like Cassandra, Chef, Puppet and Ansible can all benefit in the same way.
On slide 17: another use case, of course, is just direct authentication between services, and there are a number of different ways you can do that.
We've actually been working pretty closely with the gRPC team, particular kudos to Justin Burke, on explicit SPIFFE and SPIRE support out of the box for gRPC. It's effectively just an authentication backend that can be leveraged transparently.
On slide 18, that same model can then be extended to a range of others, more or less every other square in the cloud-native landscape.
Basically, one of the most important use cases we see is secure introduction to secret stores, and therefore avoiding the turtles-all-the-way-down pre-shared key that's otherwise necessary in order to access the secret store. And then finally, slide 19: with this growing world of service meshes,
we've been working with William and Buoyant, the Linkerd team, and a few joint customers as well to provide secure identity bootstrapping for those products, so they then have a way of establishing trust between different meshes. And then finally, slide 20.
There's a lot of text on this slide; really, the takeaway I'm hoping everybody gets is that SPIFFE and SPIRE are designed to be extremely extensible from the ground up. In fact, providing that extensibility to the community
is a core design goal of the project. We talked about being able to establish trust on different platforms, different orchestrators and different cloud providers, so we've provided a rich set of node and workload attestor plug-in points that provide that functionality for specific platforms and specific environments. And we have
a number of additional extension points as well. For example, if you want to use a different certificate authority, say you have a Vault implementation or a Thycotic implementation, you can back out to that in order to actually issue and mint certificates. And we also have a number of plugins around storage.
Okay, great, so let's go to slide 22. Andrew's given a good overview of how we think SPIFFE and SPIRE are going to be applicable to the broader ecosystem, and a lot of other folks thought this as well.
The image you're looking at on slide 22 is from the first SPIFFE meetup, which took place on December 6th at Netflix HQ down in Los Gatos, California. It included folks from Netflix, Twilio, Salesforce, Google, JPMorgan Chase and Cisco.
One of the bigger parts of our history was the announcement in April that Istio had decided to publicly support the SPIFFE specification, and that created a landslide of interest in the broader concepts around service identity.
Looking forward, as you all know, we'll be presenting SPIFFE and SPIRE at CloudNativeCon in December. We're looking forward to that, and from there on out we'll continue to build out our long, robust roadmap, which Andrew will get into in just a few minutes.
Slide 24 talks a little bit about the scale of interest in the project. More than anything else, I should make very clear that this slide does not intend to represent that the organizations behind these logos are actively committing code.
It's meant to represent that these are all organizations that have either participated in these events, reached out to Scytale directly to ask what the heck SPIFFE actually is, or are possibly even contributing code and time to both the specifications and the SPIRE code base itself.
It's a very compelling group of individuals and organizations, many of whom you have first-hand experience with at the CNCF. One of them is AWS, which Alexis introduced us to back in August 2017.
After joining the CNCF, they expressed interest in learning more about the SPIFFE project, so we've been working with them and a number of their folks to figure out
what the value of SPIFFE and SPIRE is in this world of service identity more broadly. Slide 25 is designed to talk a little bit more about the governance structure. A few months ago, actually probably about two months ago at this point,
we put together our own technical steering committee, which is basically the governance board for the spec project itself.
This slide showcases the members of that group, including folks from Scytale, Buoyant and Twilio, among others, and we've been having regular monthly meetings.
In fact, our last meeting was last night. We record those meetings, and we make the notes available for others to consume as well.
Slide 26 is designed to showcase the licensing model, which is very much aligned with the CNCF's model around Apache 2.0.
Additionally, all code submissions are accompanied by Apache- and Google-based CLAs, either individual or corporate. And last but not least, trademarks and domain names have been clearly delineated and owned, and are ready to be transferred on a moment's notice.
Sure, and I'll skip over some of the detail here and save it for the Q&A. But if you recall the previous slide, slide 15, we talked about how we think about the product in terms of workload-facing functionality (integration into different products), platform-facing functionality (that is, integration into different platforms, such as cloud providers), and then core functionality, which is functionality that's core to the overall framework.
We'll also start to improve the robustness and user experience of the core product. Coming to slide 29: this is looking at the roadmap as it extends into 2018.
The high-order bit of this roadmap really is the release of SPIRE 1.0, which we're targeting for the end of Q3 next year. SPIRE 1.0 fundamentally means a number of different things: it means rounding out not just the
roadmap here (we expect to revise that over time as well), but a number of items that probably look fairly common to any project that's considering production readiness: stability of the APIs, a release-management model that guarantees that stability and integrity and prevents regressions, and, crucially...
There are a couple of appendix slides in there for fun if you want to go through them. I also want to say thank you to Brian Grant from the CNCF, who has agreed to sponsor SPIFFE as a project. We're looking forward to working with him and the rest of you to determine the path into the CNCF. With that, happy to take any questions you might have.
You mentioned Kubernetes in one of your slides. Is that just simply for deployment? You mentioned cross-cluster communication; does that therefore sit on top of the RBAC that's already in there to maintain roles? How exactly does that integrate fully with Kubernetes?
And my second question would be: if you're talking about full application integration in the Kubernetes space, have you also considered authentication out to storage, and how would that work?
There are a number of ways you could consider extending it so that it does. In terms of secure introduction to storage: one of the nice things about SPIFFE is that it's sufficiently low-level.
We see this as the underpinning of establishing communication to more or less any third party, third party, that is, to the running process infrastructure. So yes, you can absolutely see a world in which these identities can be used to establish communication to a storage system.
The plan at this point is to spend some time studying that. The companies we've been talking to that have expressed interest in SPIFFE thus far do not necessarily have massive Kerberos deployments.
It's only when we started to engage with some of the folks in the City and some of the larger enterprises that this has come up. We're actively studying how to think about using Kerberos as a backing store for generating SPIFFE certificates, potentially having a world where we can actually use Kerberos to connect infrastructure.
A little bit of color on that too: the customers we've spoken to have typically been using Kerberos to identify single physical nodes on static infrastructure, and Kerberos is amenable to that, because you typically want to pre-share a key to establish that identity and that trust.
That's useful, and actually something that SPIFFE and SPIRE can tap into: we can use it as a way of bootstrapping node identity and, by extension, workload identity. But SPIRE provides a couple of things on top of that.
One of them is that if you're running on a cloud provider where you can attest identity directly, through a signed instance identity document or GCE's metadata API, you can avoid that pre-shared key altogether, and that's what we're finding a lot of Kerberos customers are doing.
The other advantage is that we can do process-level attestation as well as node-level attestation. That's particularly important when, say, I'm running a Kubernetes cluster and using Kerberos to identify the node: I still need a way to understand specifically which container, running in which pod, is able to access its identity, and SPIFFE provides that API as well.
Yeah, and in the ideal end state, how much does the developer of the workloads being identified need to learn about SPIFFE? Is it something that's expected to become part of the standard developer toolbox, or should it be consumed through a selection of third-party frameworks, or is the goal that it's completely transparent?
I think in the fullness of time, when we look at a place like Google, where it's become part of the underlying fabric for all developer services, it's fairly transparent to the developers. They know it exists, they know what it's supposed to do, but it does what it needs to under the covers.
The goal in the near term is to make SPIFFE available in as many development frameworks and platforms as possible. We don't presume to know exactly what's going to be necessary for each developer group on every single platform, so we're working actively with a wide variety of folks. In fact, we've been spending some time with the folks over at Docker to understand and think about how SPIFFE can be injected
for use by Docker's customers; we've been spending time with Kubernetes, and with VMware and a few others as well. I don't think that in the long term we expect SPIFFE, SPIRE or any of the reference implementations to require active care and feeding by developers themselves; that kind of defeats the purpose of what we're trying to do here.
It's incredibly empowering. It's hard to overstate how important a mechanism like SPIFFE is: it's the single biggest missing puzzle piece for enabling microservices in the cloud-native ecosystem, in my opinion. As soon as you break one application into multiple applications,
now suddenly you have the challenge of how they securely authenticate with one another. You can just turn off authentication and rely on network security, but in larger deployments that doesn't fly, and in multi-tenant deployments that doesn't fly.
A couple of things. Brian, I think you may need to write that down, because that'll be the first paragraph of the submission document, so don't forget it; please write it down.
Can we have a show of hands, as it were, from voting TOC members on SPIFFE and SPIRE? I'm particularly looking for anybody who objects to moving forward with this to a written proposal, with Brian as sponsor.
Any objections? No? Okay.
Thank you very much. All right, everyone: I will take the slides from slide 40, and Nilesh Patel is also online; he will be presenting some of the slides as well. I'm Dan Ciruli, a product manager who works on API infrastructure and open source at Google. So, moving to slide 41.
The reason Istio exists is that we see several trends happening right now that are making companies increasingly develop distributed applications.
You see this move from monoliths to microservices. Certainly containerization is a big part of the reason this is happening: people are now able to build their applications with containers and deploy them with containers easily and repeatably, which is good for velocity, scalability and so on. And they're also moving toward hybrid deployments, where different components of applications are not all running in the same place.
Monoliths are going away, and you have all these applications that are very complex and feature a lot of communication between the different services, or microservices, or nodes, whatever you want to call them, that really make up the application. Because there's so much more of this communication, the network is now part of the application; the communication is intrinsic to the application. And there are some things you need.
You need to be able to monitor all of that communication more than ever before. It's very important to get good monitoring: not just how many bits are going back and forth, but what your service dependencies are, what your latencies are, what your long-tail latencies are. These are the kinds of things that traditional network monitoring can't give you. Security is more important than ever:
these things aren't running on a single box; they're running on many boxes, across data centers, sometimes across clouds, and securing that is very different from securing a monolith. And then finally, you need to have good control between your services. You need to be able to do rollouts in a repeatable manner, to do canary testing, to do blue-green deployments. How you control the traffic between your nodes has become very important.
That's the problem statement. Moving to slide 42: Istio is a service mesh, and its goal is to connect, manage and secure microservices. Now, we say microservices here, although in reality the size of a service doesn't matter, and I'm not sure there's a precise definition of what a microservice is. The fact is that we want to be able to help people with all of those communications, and really separate the things that should be done by developers from the things that should be done by operations people.
Now, in many companies these are actually the same people, but there's still a separation of concerns, and it was referenced in the SPIFFE and SPIRE conversation: there are things you don't want your developers to have to worry about. You want those things to just be available, and one of the goals of the service mesh is to make them available so that you're not building into each application or service
these intrinsic things that you want to be there, as Brian said, like dial tone. Istio is being designed for both containerized and non-containerized workloads. We've done a lot of work with Kubernetes so far, but early adopters are telling us very loudly and clearly that they need this security, this monitoring and this control not just for the services they've already moved to Kubernetes.
Moving to slide 43, I'll talk a little bit about the architecture and what we're actually shipping with Istio. As I said before, it builds on top of Envoy, but it adds three control-plane components on top. Pilot is used to give configuration information to Envoy: every Envoy, as it starts up, talks to this Pilot component to get the configuration it should be enforcing. Mixer is a runtime component:
it's pluggable, and it accepts all telemetry, so it can report to downstream systems all of the logs and telemetry that are collected. Mixer is also used for policy checks, so it can let you do authorization: say, is a particular service allowed to call another service. And then there's a third component, Istio Auth. Istio Auth is used for distributing TLS certs to Envoy, so that all the calls on the service mesh can be secured with mutual TLS.
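The Mixer check described here, an allow policy plus a quota evaluated per call, can be sketched in miniature. This is a toy illustration, not Mixer's actual adapter model; the service names and quota numbers are invented:

```python
# Toy Mixer-style policy check: an allow-list of caller->callee pairs
# plus a simple per-callee quota counter.
ALLOW = {("frontend", "payments"), ("frontend", "catalog")}
QUOTA = {"payments": 2}          # max calls per window (invented)
used = {}

def check(caller, callee):
    """Return (allowed, reason) for a proposed service-to-service call."""
    if (caller, callee) not in ALLOW:
        return False, "denied by policy"
    if callee in QUOTA:
        used[callee] = used.get(callee, 0) + 1
        if used[callee] > QUOTA[callee]:
            return False, "quota exceeded"
    return True, "ok"

print(check("frontend", "payments"))   # → (True, 'ok')
print(check("catalog", "payments"))    # → (False, 'denied by policy')
```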
One thing you'll note in the picture is that we have Envoy typically deployed as a sidecar proxy, and all incoming and outgoing traffic on the service mesh goes through Envoy. So when service A goes to call service B, it goes through an Envoy, which uses the configuration information it retrieved from Pilot to know how to route the call. Whether that service is running in the same Kubernetes cluster, in a different Kubernetes cluster, or not on Kubernetes at all, the Envoy knows how to route it.
It also applies routing rules, like: should I be diverting one percent of my traffic to a canary instance? Calls are placed from Envoy to Envoy securely with mutual TLS, as I said, and Envoy does policy checks and can even check in with Mixer to verify: is this service allowed to make this call? Is there a quota restriction, and has it been exceeded? Assuming the call is allowed to go through, it forwards the call to service B.
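The one-percent-canary example boils down to weighted selection among backend instances. A minimal sketch follows, with invented instance names and weights; real Envoy/Istio routing is configured declaratively rather than coded like this:

```python
import random

# Toy weighted route: 99% of traffic to the stable instance, 1% to the
# canary, in the spirit of the example above. Names/weights invented.
ROUTES = [("service-b-stable", 99), ("service-b-canary", 1)]

def pick(routes, rng=random.random):
    """Pick a backend name with probability proportional to its weight."""
    total = sum(w for _, w in routes)
    r = rng() * total
    for name, w in routes:
        if r < w:
            return name
        r -= w
    return routes[-1][0]

counts = {"service-b-stable": 0, "service-b-canary": 0}
rng = random.Random(0)
for _ in range(10_000):
    counts[pick(ROUTES, rng.random)] += 1
print(counts)  # roughly a 99:1 split
```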
One thing to note: service A and service B are completely unaware that Envoy and Istio are there at all; the service mesh is transparent to them. It can be a standard HTTP call, a gRPC call over HTTP/2, or just a TCP connection. The Envoys all report telemetry to Mixer out-of-band, so without hurting performance, and that telemetry includes logging, monitoring metrics and tracing, for whatever downstream systems you want. This is a pluggable architecture.
Mixer can report that data to downstream systems, so it's meant to be transparent. It allows the developers to just concentrate on developing their services, and gives the people running the services the tools they need to monitor them: great information about what we consider to be the golden signals (what's your traffic, what's your error rate, what are your latencies across different distributions), and not only that, but at a fine-grained, layer-7 level.
Is there a particular method whose latency is spiking at the 99th percentile, for example? Can we require auth on a specific method? You can do things like route based on content, and say that calls with a particular user agent need to go to a specific instance of the service. I didn't show one in the diagram here, but you can also place an Envoy as an ingress proxy for traffic that's coming into the mesh from clients that don't have an Envoy sidecar.
So that's a run-through of the technical portions. Moving on to slide 44: how does this relate to other projects? Obviously, we're working very closely with Matt and the team from Lyft as they've developed Envoy for the data plane; we've been working with them for quite a while now and have found them to be a great development partner.
We're working closely with the Kubernetes team, not surprising, since we're at Google and they're at Google, and working with Kubernetes lets us take advantage of things like Deployments: we can transparently put the proxies in there as sidecars, and we can use the Kubernetes service registry to populate the list of services. We're working with SPIFFE, and we're already using the SPIFFE specification for our security as well. And since we announced in May, we've started working with a lot of other projects that have actively been doing their own integrations into Istio.
We started working on Istio on GitHub in December of 2016, with IBM and Lyft, and we worked on it for about six months before we made the announcement at GlueCon last year. It really has been very well received; you can tell there's a need out there, and as you can see from the stats, it's gotten a lot of attention. Three organizations right now have commit access, between us and IBM and Lyft.
We've got about 35 people working on this full-time, and counting the teams at the other companies that are out there, we estimate another 50 or so from a broad variety of companies. As you can see, in addition to those I've already mentioned, Red Hat, Cloud Foundry and Microsoft, the community is really building, and we have meetings very frequently.
From slide number 46, talking about the competition: today we do not see any head-to-head competition as such, but there are various aspects of Istio, as Dan mentioned earlier, the security, the management piece, the routing, the traffic management, and some of those pieces do have vendors today.
For example, if we talk about the security and routing pieces, there are definitely commercial firewall vendors which can fit into that space; if we talk about the fabric or microservices-management piece, Azure has its own Service Fabric in place, and there are other vendors also doing things in a similar space around security, routing and traffic management. So we don't see head-to-head competition today.
Moving on to slide number 47, the roadmap: for the upcoming Istio 0.3 release we have listed the whole set of features on the website, istio.io, but maybe I can talk about some of those features here. Before the end of this year, there are really two main goals that drive us: to make sure that overall, Istio is really production-ready.
We as a team basically want to feel comfortable that when our customers run Istio in a production environment, they actually get the level of scaling and performance they expect from any production-grade application. We are also supporting features around VMs, and we are doing basic authorization with RBAC. As Dan mentioned earlier, we do not want to support only Kubernetes; we also want to support the rest of the environments and runtime platforms, so we are making various improvements in that space.
These are real events which are getting organized at more of a grassroots level by the developers, so it's really good to see that. In the last month we have also started seeing similar meetups popping up across Asia and Europe, so it's certainly good to see that people are finding Istio quite useful and want to educate the rest of the community about it. There are various blogs and Istio sessions which are being presented at various meetups and webcasts,
you name it, and most of them are happening without any involvement from Google and IBM, so that's also really good to see. Istio is also being evaluated by many companies and enterprises; to name some of them: SAP, Cloud Foundry, JPMorgan Chase, Lightbend and Starbucks.
All of these folks are really exploring how they can take advantage of Istio's features in order to better manage their microservices mesh. Those are just some of the ones we have already talked to, but there are a bunch of others that we are actively talking to today. As for the benefit to the community from Istio: the community will really get the engineering commitment both from Google and IBM, as well as from the rest of the Istio partners.
Moving to slide number 49, which talks about how Istio benefits the CNCF: we helped build some of the deep integrations, and will continue to build deep integrations, with the other CNCF projects, and we feel that this will help foster the adoption of the entire CNCF stack, not just Istio. Talking more from the community point of view, it will help drive the adoption of other CNCF projects.
The first one is marketing: we would like to see the CNCF take over our Twitter account, as well as help us coordinate with various publications and enterprises to make sure we get even more exposure, and more word out there, around Istio. We would also like the CNCF's help in organizing events; specifically, it would be great if we could get microservices or Istio-specific tracks at KubeCon and CloudNativeCon. I think we can...
We hear you loud and clear. Sorry, go ahead. On to the governance side: I think we can also use your help in order to formalize the process around project governance.
talking
more
around
the
CIC,
despite
I,
think
we
can
use
your
help
to
cover
the
centralized
CI
Institute
that
we
can
specifically
use
for
to
test
the
issue
across
various
platform.
F
Under
the
security
side.
We
currently
do
not
have
any
kind
of
4
cv,
g'v
processes
in
place
at
all.
Today
we
can
use
help
to
formalize
this
process
and
also
the
CLS
side.
They
would
like
to
see
and
CSKA
take
over
the
CLA
from
the
IBM
and
Google
to
make
sure
that
it
is
basically
being
managed
at
the
org
level
and
that
that's
all,
that's
all
from
us.
F
Thank you, Brian, for being the TOC sponsor. If you would like to learn more about Istio, you can go to the GitHub, or you can just go to the istio.io website, and we also have our blog, where you can read more about it. Okay, with that, I would like to open it up for Q&A, and we also have some of the lead engineers from Google and IBM on the call, so Lin, Shriram, and Louis, feel free to jump in.
E
A question before I go, in about one minute: because there are so many different things here, it would be really good to understand better how they fit together, because that, I think, directly addresses the question of competitors. Just in the chat thread you can see APM companies, hardware companies,
E
you know, all sorts of companies converging around this set of functionality, which is a mesh, a control plane and data plane rolled up together, plus some monitoring. It's a sort of chimera project, and it would be really good to unpack that a bit, because I think, if it is to be successful, it will have to have a good story for why you should use it rather than some simpler things that don't do as much. I think that's really the key to understand. So as long as you can do that, I would be supportive of moving forward with it, I think.
L
Alexis, one thing we'd like to do is make clear where things are pluggable, because APM came up there, and we're certainly not trying to, you know, disrupt the APM market so much as provide a way to get information from, we hope, every service running in an enterprise into that APM solution as easily as possible, with a single place to plug in. So I agree that we can take that on; I'm not sure what the right format for that is. Yes.
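The "single place to plug in" idea described here can be sketched as a small fan-out design: the mesh collects telemetry once and forwards each record to whatever backends are registered. This is an illustrative sketch only, not Istio's actual adapter API; all type and field names below are invented for the example.

```go
package main

import "fmt"

// Record captures one telemetry data point emitted on behalf of a
// service. Field names are illustrative, not Istio's real schema.
type Record struct {
	Service string
	Metric  string
	Value   float64
}

// Sink is the single plug-in point: any APM backend implements it.
type Sink interface {
	Report(Record) error
}

// logSink is a trivial backend that just prints records.
type logSink struct{}

func (logSink) Report(r Record) error {
	fmt.Printf("%s %s=%v\n", r.Service, r.Metric, r.Value)
	return nil
}

// memSink buffers records, standing in for a real APM exporter.
type memSink struct{ records []Record }

func (m *memSink) Report(r Record) error {
	m.records = append(m.records, r)
	return nil
}

// Dispatcher fans every record out to all registered sinks, so data
// is collected once and each backend plugs in independently.
type Dispatcher struct{ sinks []Sink }

func (d *Dispatcher) Register(s Sink) { d.sinks = append(d.sinks, s) }

func (d *Dispatcher) Report(r Record) {
	for _, s := range d.sinks {
		s.Report(r) // a real system would handle errors and retries
	}
}

func main() {
	mem := &memSink{}
	d := &Dispatcher{}
	d.Register(logSink{})
	d.Register(mem)
	d.Report(Record{Service: "checkout", Metric: "request_count", Value: 1})
	fmt.Println(len(mem.records))
}
```

The point of the design is that adding a new APM vendor means writing one adapter, not re-instrumenting every service.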
C
Just to say, from the Red Hat side: as you might all know, Red Hat has been very involved, and this is of course non-binding support, given my participation here on the TOC, but we expect Red Hat's active participation, through an impressive number of engineers, to ramp up. So I'm just stating our support in this. Cool.
A
Thanks. I think in the interest of time, I know we have Ken, who is going to go over some serverless things, but, me channeling my best Alexis impression: Brian Grant is going to be the acting TOC sponsor, I guess. Is that correct, Brian? Yes. You know, so I'll just do a straw poll of the TOC: is there anyone opposed to us formally inviting the Istio project to a project proposal?
A
Okay, so I'll take that as a yes to have them formally move to the project proposal stage, with Brian Grant as the sponsor. So thanks to the Istio team and the other folks for presenting. Hopefully we'll move discussions to the actual project proposal, where people can chime in and work with the community to optimize the proposal. So, with about five minutes left: Ken, are you on the line? I am, yeah. So let's just jump to slide 56 to get a brief update from the serverless working group.
B
Definitely, I'll make this fast. So slide 56 is just sort of an overview of how we got here, right. We started back in the May/June timeframe. We did an update about three months ago now, in August, but a lot of my fellow TOC members were on vacation or otherwise tied up, so over the next few meetings we asked for participation and for comments on the white paper.
B
This is just restating that our goal was to basically create a white paper, which we've done, and so I feel like we've accomplished what our initial goal was. We would love feedback on that. But basically, the themes of that white paper were trying to define serverless, which has been a lot more difficult than I expected it to be, with meetings dedicated just to what the definition of serverless is and what the definition of functions-as-a-service is, and fun stuff like that that I'd rather shoot myself over.
B
But it's important to have those discussions, and we've been having them, with good debate, and then trying to differentiate what serverless is against things like FaaS and PaaS and CaaS and BaaS and any other aaS you can think of. So those are sort of two of the major themes. We've also tried to describe many use cases for why you would care about something like serverless, and then, as part of this effort...
B
We've provided, I think, pretty good definitions of our functions and services, and things like eventing, and so I feel like we're at a good point in the white paper to sort of call it done, which is on slide 59. What we'd like to do is work with the Linux Foundation and the CNCF channels to look at how we promote this and publicize this at or before CloudNativeCon.
B
Okay, cool. And then the next step is looking at a couple of different options, and we don't have time to discuss this today, Chris, but maybe we can meet in Austin to finalize it. It is kind of looking at: do we want to continue to foster this ecosystem? Because serverless is very early days, as we all know, and so we have opportunities to, you know, create this ecosystem that could be part of the CNCF.
B
The next slide further describes that we could just encourage more participation in the CNCF by having vendors and open-source tech developers try to propose their projects to us. There's an opportunity for education, which is, I know, one of the things that the Linux Foundation likes to try to do, so we have some education opportunities. And then there's also just general guidance we could define in terms of how you evaluate the functional and non-functional aspects of the serverless architectures out there.
K
Good, glad to hear that. I think the most useful area would be developing a common event broker and event schema, and that's where I could see, you know, the potential for an actual project, and this means yes, because that's the number one thing that's going to facilitate interoperability amongst the different solutions.
B
K
Application deployment is a very different beast, because, you know, there are lots of different systems for that: there are dozens of CI/CD systems, and there are approaching a dozen functions-as-a-service projects that run on top of Kubernetes already. You know, it's very opinionated in terms of how you build, package, and deploy your app. But on the event side, you know, we really need the equivalent of HTTP for event-driven applications, and we don't have that. Awesome.
B
So if you go to my last slide, slide 60, that basically sums up the asks Brian just described: we don't want to continue this same workgroup, but to create a new workgroup that would basically focus on the eventing aspect of it, and then, obviously, to review and agree to at least the white paper as an official CNCF document. Those are really the two main asks, and I'm happy to take comments offline and continue the conversation next week and in Austin in December.
A
Thank you, Ken. I don't know how you got through that so fast. Thank you, and sorry for giving you only five minutes. So, just in the interest of time: we're meeting again next week. Due to the Thanksgiving holidays in the US, we bumped up the TOC meeting to November 14th. We'll be hearing from the Open Policy Agent project, and we're going to get a readout from the storage working group. So thanks again, everyone, for your time, and look out for the official project proposals. Oh yeah.
D
Just going to make one last pitch to remind everybody that CloudNativeCon + KubeCon is one month away, and so, if you have not already booked your tickets to it, please do. We have a bunch more hotels that we just added, and also the sponsorships are almost closed, but we do have some openings for the big party event on Thursday night. So if you have a little bit of marketing budget left for the year, please reach out to us; it's the last couple of days to do so. Awesome.