From YouTube: CNCF Serverless WG Meeting 9/7/17
A: And so, Dan, I'm living on the edge right now; the power's back on for the minute, and I decided to switch over to use Wi-Fi. So if the power goes out and you lose me, I should be back in about 15 to 30 seconds, however long it takes me to switch over to using my phone to tether.
A: Yeah, let's get started; that's good enough, so go ahead. And so, like I said on the agenda, we just have two things: one, have Dan walk through the changes that he's made so far; and then, Chris, if you're on, maybe we could talk about Chris's status on his write-up on the difference between services and functions as a service. So go ahead and start, Dan; walk us through the changes that you've made and we'll see what you all think. Sure.
B: So last week we got the feedback from Chris Munns from Amazon that we wanted to soften the language on containers, right; it's kind of an implementation detail, and if the point of view of the developer is serverless, it really doesn't need to be at the forefront. We kept it there, of course, because the Cloud Native Computing Foundation does focus on, you know, the spectrum of the developer, the operator and other folks, and obviously has the legacy of Kubernetes being a core part of the projects that are governed there.
B: I'm on the agenda right here: okay, yeah, so I'll switch over to the second item. So that was the context behind the changes I made, and then Chris Munns had those comments too. So I tried to integrate what I knew based on his comments at a conference a couple of weeks back; I had his perspective already, and I included that, not deeply, but enough, hopefully, to cover his concerns.
B: So, the abstract: I just spaced it out so it was easier to see some of the concepts in here. One of the first things I did was ensure that we did address this idea of a service provider versus a functions-as-a-service engine, which was a core concern, and also the positioning versus infrastructure as a service and platform as a service. I did add a piece here on, you know, the fact that one of the outcomes we do want is a set of recommendations.
B
The
wording
here-
I'm
not
happy
with,
but
that's
here,
and
also
that
we
talked
a
bit
more
about
the
the
target
end
user
of
this
paper.
What
I
found
is
with
serverless.
You
can
quickly
start
to
mix
your
audiences,
particularly
when
you're
coming
from
the
legacy
of
having
built
an
open-source
platform
for
this
you've
got
the
end
user
who's,
your
consumer
of
the
serverless
service,
and
then
you
have
other
folks
that
are
interested
in
operating
or
installing
that
system
themselves
in
order
to
provide
it
to
end
users.
B: It occurred to me that perhaps we should just leave the container references out up front, right; so you'll see later in the doc I struck out those areas. I left them in the doc, and what I did was bring up instead, because we have that end-user focus, the workloads from what used to be Appendix A, and I pulled that into this main section, so that you get this idea of: this is serverless, this is how it differs from FaaS and BaaS, and these are the workloads where it really is promising.
B: Yeah, yeah, and so I think we addressed that point. Somebody else had actually a nice little section up front in this first area, so let me hop over to it. I think the reason it's done is, number one, yes, we do have a nice comparison there already that somebody added; we do have the FaaS vs. BaaS piece; and then we work into the use cases, where we describe why those are a good fit for serverless versus other implementations.
B: A good point; then let me pop over to that main section now, that first section. So up front we basically define serverless computing, and we do a bit of the positioning versus IaaS, containers as a service, and PaaS. Sarah had added a comment here which basically captures the idea of backend as a service, which was kind of this little elephant in the room that we had been avoiding before, because we were focusing on the developer creating applications themselves. Within this context, basically, I've added two paragraphs. There was one about this.
B: This idea of the association between functions as a service and serverless, and that backend-as-a-service piece, to address Sarah's comment. Okay, and then someone had created this section; I don't know what the provenance of it was, but this was pretty nice, because this now positions it as well against platform as a service at a high level. We get into the details in the second section of the paper.
B: Okay, the history of serverless was here; I think, Doug, you added this. This is almost like an aside in many ways, but it's here; I don't think there's any better place to put it, and maybe it does address some of the controversy of the name and explain why it came to be. Yes, to be clear, I actually just filled it in.
B: Okay, and then that change that I mentioned: the workloads section was formerly appendix 2. It was its own standalone document before we merged everything, so it seemed to fit much better here, right. This basically tells you: okay, these are some workloads, theoretical approaches here, that now can be addressed by cloud computing with serverless, and where they've shown improvements, ideally in a measurable way, over those existing deployment models.
B: There were a few comments on, you know, performance; perhaps this should be dug into a little deeper, maybe in an appendix, but I kind of addressed it in a vague way: that there's a startup cost that might happen here, be it in a container-based system or something where you don't know what's running the functions behind the scenes, yeah.
D: I don't know if that's the real definition of serverless, because for some applications, or even in a way with serverless, if you're working on stream processing, there's always something in the stream, or at least once every five minutes, and it will always be up. You know, we've seen the articles on the web about AWS Lambda, yeah.
A: We can work on that; that's a minor change. But so, Dan, as you walk through these various workloads, did you add text to explain why serverless is either the only fit for that particular workload, or why it's a better fit for that workload? Because I think there have been lots of comments in there talking about, well, why is this new for serverless? I could do this using other technology. So why are you bringing this up as a serverless benefit when I could do it on IaaS, for example, right? Right.
B: Right, and that brings me to this paragraph here, right. So when I originally created this section, this little piece was almost like speaker notes, and the goal of this section was to frame serverless, the examples below, the use cases, as having done something some order of magnitude better than an existing model, be that performance, cost, something there. So this may not stay as is as text, but that's the spirit of what follows, and that was why, you know, it's supposed to encourage the reader to think: hey.
A: Would it be possible to elaborate a little on that? Because to simply say serverless is faster is kind of a vague statement. It'd be nice to be able to say in what ways it's faster, because, as a newbie, if I look at this, I'd look at it and say: okay, why is serverless inherently faster? You know, once the application or the function is up and running, it ought to be able to process things at the exact same speed as if it were running on an IaaS. Because to me, functions as a service versus IaaS is more about the infrastructure on which you're hosting it, not necessarily the application itself, except maybe talking about, you know, auto-scaling, stuff like that; but even that's similar enough. So what is it about serverless that makes things faster and better, exactly, right?
B: Those were always kind of a gap in the paper, and I know I never had the time to fill those out, but that's kind of where this example section of each was supposed to get you, right. You'd say, for example, in the Santander use case: you know, by handling a billion checks with $32,000, this is X degrees more efficient than their existing system. Things like that. So that's where I think it would benefit the paper to add those numbers, right.
E: Those examples do help, and they make concrete some of the speaker notes, if you will. Some of the speaker notes, you know, depending upon the area, can put others, whose worldview isn't within serverless but maybe, you know, PaaS or containers, you know, CaaS or something else, on the defensive; and in fact they're not always true; it just depends on the use case, and that's why the examples are good.
B: And as well, the same example with chatbots: something where perhaps some third-party developer created some chatbot that suddenly plugged into Facebook and became incredibly popular, right, overwhelming what normally would have been some pretty static back-end infrastructure to support it, and instead was able to scale up to the, you know, millions or billions of requests needed.
B: Yeah, yeah, and so, yeah, each of these use cases here is again targeting the end user, talking to some of their current pain points and then working from there, so yeah. So for each of these sections it would be helpful if people could come forward with specific examples with those numbers, those dimensions of goodness, as I'd call it.
B: Okay, and I'd noticed, you know, when we had done the merge of the paper, each of the sections kind of had an explicit summary that should probably end up more at the conclusion of the paper; this is one of them as well. And below here was the strikeout of that container focus of the paper that was a bit controversial, so I just crossed it out. There may be parts here to salvage for other parts of the paper, such as the bit about twelve-factor applications.
B: I know that's a hot topic for Chris Munns; it's also one of my favorite talking points for developers. You know, with a serverless architecture you can get halfway to twelve factors by doing nothing, because the platform does that for you in terms of scaling and operations. I think that's a good point that we should probably bring in somewhere else, but for now it's crossed out here.
B: And as we move into the second major section, I did try to soften the wording around containers, because the assumption in the earlier section was that, okay, here are the other cloud native application deployment platforms; they are all based on containers, but they approach the developer experience differently. If we're going to soften that, again, there are probably some more wording updates here; I put a bit of a disclaimer in, right, so they tend to focus on containers.
B: And there were a few other comments in line that I think I addressed: distributed systems instead of container interaction, platform as a service, yeah. There was a good comment here from Justin; I think there are some places to make that clearer, or to interact with the earlier section of the document where we do the explicit comparison to platform as a service, I think.
B: Yeah, the rest of these, I believe, were just comments in line. Peter had a few comments here that were more around, right, I think, the point that this isn't a magic bullet. There are a lot of benefits here; the developer has shifted away from operations, but they still have to be aware that there are some servers behind the scenes, of course. So there are some things that still bleed out of that abstraction, including perhaps things like processor types; there was some talk about that recently.
B: This is all existing text, except for the specific comments, right; so I slightly reworded things to address the comments, but I didn't close out the comments. Okay, and I think that's basically the rest; again, the summary should probably go to a back-end chapter. Okay, so that covers my changes. I know that there was also an open change, that maybe we'll get to later in this call, about this whole spec chapter; the guys from Serverless, the company, were interested in maybe abstracting that out. But any other comments on the up-front part, about talking to the use cases and the developer, and then working into that second section?
J: Also, I think Pete's comment that this isn't a magic bullet is something that's really important for anyone evangelizing serverless: if you evangelize it the wrong way, it can alienate the sysadmin community a bit. In the early days, when we were all super excited about this, I think we went a little too far in that direction, and there was a bit of backlash. So this is why I really appreciate what Pete is trying to say there, and I think this is important, because we want to work with those people, and that job is certainly not going away; it might change a little bit. I think the best approach here is just to figure out how to kind of bring them into this tent and be open to them.
J: There's definitely always going to be operations and administration, but I think these things are just going to change, and I think everybody went too strong on the buzzword hype there. So I'd recommend just being a little bit more sensitive about that in the future, because it does result in a lot of backlash if you're not careful.
I: Yeah, you know, I've been developing a lot of conjectures around, you know, on-prem serverless, and one of them is around how this actually gives control and governance back to IT ops. If they offer a functions as a service, they now have visibility into the workloads that are being run within functions as a service.
B: No, I think the value would be in at least saying: hey, listen, you might have heard this term, just like you hear these other terms; in this context, it's no longer the politically correct term, or whatever, you know. Sure. Well.
A: Hey, I can go either way on that one. I can see someone wondering about this other term they've heard about and how it relates to this, because we don't want them thinking it's a completely separate piece of technology; but then introducing it may add the confusion that you're talking about. Mark, I'll let you tell us. Yeah.
A: I'd say that, when I've been talking with people, there's been a lot of questioning, you know: why would I run it on-prem, because it's not truly serverless, right? And so, just talking through some of the use cases, you can say there may be different reasons to run it on-prem: for compliance, privacy, security, or even low-latency delivery of the same types of functions; not to mention, you know, you want to have portability of cloud native architectures, and serverless is a cloud native design pattern.
D: Would you consider things like, you know, Amazon Snowball Edge, which runs Lambda, still serverless, compared to someone installing a FaaS stack, you know? Yeah. There are actually three types here: one is sort of a public cloud, pre-integrated, functions-as-a-service offering; the second is the same thing extended to the edge but still managed from the cloud, where they have extensions, say a server on a ship; and the third is DIY, which can actually also be on public cloud or on-prem.
B
Right
and
I
know,
if
there's
a
bunch
of
use
case
in
here
right,
so
this
this
comes
up
quite
a
bit
with
with
the
open
list,
naturally
right
that
that
is
one
of
the
selling
points.
I've
got
a
I've
got
a
few
points
that
can
I
can
add
in
this
little
section
here
and
then
maybe
we
can
discuss
it
for
next
week,
but
yeah,
there's,
right
and
I
know
the
controversy
here.
Is
you
know
it's
not
service
if
you're
wanting
it
yourself
right,
which
is
clear,
but
what
that's
missing
is
that
to
the
developer?
B
It
doesn't
matter,
that's
their
point
of
view,
whether
it's
on
a
public
cloud
or
another
team,
they
may
know
that's
running
it
for
them
right.
So
I
think
that
that's
a
fair
point
controversy,
and
maybe
we
just
we
can
refer
to
it
somehow
I,
don't
know
why,
but
I
can
I
can
put
in
a
little
placeholder
for
that.
Okay,.
B: Okay, Doug, I know another big point, I think, on the agenda was around.
D: We're not here to write a spec, but I think we have to have, beyond just positioning and, you know, use cases, a way to pave the way for the next step. You know, eventually, my view is that what we're doing here is not just all sitting around and saying, you know, serverless is great; it's also trying to get to some consensus around how you use it, you know, how you drive events into it, its error handling.
E: It's a bit, you know... so, having spent a lot of time with them on CNI, and also just having seen how that went: yeah, we generally didn't use the term on CNI; sorry, the term "spec". By and large, and Jeff and Christian chime in if you're listening, it isn't necessarily about producing industry standards, I guess; you know, insomuch as if you consider an industry standard to be measured by how broadly it's adopted, then you might.
A: So, on to next steps here: I know, since I don't think Chris Munns is on the call, we can't ask him for his status. I'm wondering... well, actually, before I get to that, here's a very directed question. Dan, you were looking at appendix B a second ago, the serverless implementations; I can't remember who owned this section, Mark; it may have been you and Lee. I'm just wondering what your status is on filling out these things here.
I: So I had started that section, and the intent had been for some of the open source teams to be able to fill it out more fully about their projects. In terms of the services section, I don't know; I think it makes sense to mention services, but that landscape is moving so quickly that I don't know that I could do justice with respect to all the services, except to say, you know: here's the CNCF landscape doc that shows all these services that everyone's providing.
A: Controversial question: what would you think about actually dropping the whole services section? Mainly because it seems to me that that's getting into almost a different topic, in the sense that it's talking about sort of the services that these functions may actually use. And while we acknowledge that those services exist in other places in the paper, I'm wondering what the value is of actually mentioning the exact list of services that are out there?
I: My original intent was to drive the conversation around: we need common eventing; we need to have interoperability of the services with all of the functions-as-a-service open-source projects, to be able to drive the entire ecosystem around serverless; and that isn't just the functions but also the services. So that was the intent. I can be talked into removing this section, but I think that we still need to leave readers of this document with the thought that it is more than just, you know, "how do I run a function?"
A: That raises an interesting question to me, then, because we've talked about interoperability around this space. I think for sure everybody would agree that, if there was any place ripe for interoperability, it's at least at the topmost layer; in my mind, anyway, the way I picture it, meaning, yeah, you know: how do the functions get invoked?
D: I agree with Mark; I think it's a very key thing, because the serverless function is an event-driven function, and your function is getting called with an event context. So if you're just building your own IaaS application, you're not forced to use events or a certain, you know, model; no one tries to create serverless interoperability. Also, I had an interesting discussion yesterday with the folks from Azure; you know, they have their Event Hub, and there are a few efforts in the industry around this. Most of those are around HTTP.
D: You know, invocation that's pretty slow, pretty synchronous; and maybe we need to have something like that which is more generalized. You know, we were thinking maybe to come up with a strawman proposal, because, I think, you know, just take even just the open source solutions that are running on top of Kubernetes, like Nuclio, Fission and Kubeless: they all use the NGINX ingress, which is lacking many features. So instead of any one of us, you know, going and beefing up or creating custom, you know, add-ons...
D: If we all agree on how an event is described first, whether it's an HTTP thing, or something that drives an asynchronous pub/sub message, or maybe streaming to Kafka; if we have some sort of a schema, I think it's, by the way, even good for Amazon, because if you go and look at the different events in Lambda, triggered by the different services, there's no schema. There's no way for you, as a user, to know how to use those without reading the full details.
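[Editor's note: to make the common-event-schema idea above concrete, here is a minimal sketch. Every name in it is invented for illustration; nothing here was proposed or agreed on by the group. The idea is simply that every trigger, whether HTTP, pub/sub, or a Kafka-style stream, would populate the same few envelope fields, carrying the source-specific payload opaquely, so a function can be written against the envelope rather than against each trigger.]

```python
import time
import uuid

# Hypothetical required fields for a common event envelope.
# These names are illustrative only; no such schema was agreed on.
REQUIRED_FIELDS = {"event_id", "event_type", "source", "timestamp", "data"}

def make_event(event_type, source, data):
    """Wrap a source-specific payload in the common envelope."""
    return {
        "event_id": str(uuid.uuid4()),   # unique per delivery
        "event_type": event_type,        # e.g. "http.request", "stream.record"
        "source": source,                # which system emitted the event
        "timestamp": time.time(),
        "data": data,                    # opaque, source-specific payload
    }

def validate_event(event):
    """Check that an event carries every required envelope field."""
    missing = REQUIRED_FIELDS - set(event.keys())
    if missing:
        raise ValueError("event missing fields: %s" % sorted(missing))
    return True

def handler(event, context=None):
    """A function written against the envelope, not the trigger:
    it can be fed by HTTP, pub/sub, or a stream without changes."""
    validate_event(event)
    return "%s from %s" % (event["event_type"], event["source"])

# A Kafka-style record and an HTTP request look the same to the function.
evt = make_event("stream.record", "kafka://orders", {"order_id": 7})
print(handler(evt))   # prints: stream.record from kafka://orders
```

The point of the sketch is the one made in the discussion: with an agreed envelope, each platform implements one trigger-to-envelope mapping instead of every function author reading the full details of every event source.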
I: Right, I'm not disagreeing with you, but, you know, I could also see where we could use this as a way to drive what the next steps would be: understanding what the landscape around services is, and, at some level, common eventing. You know, we've been struggling with what the next steps are, and perhaps that's the perfect way to frame it.
D: So maybe let's work together, some of us, to come up with what we see as a potential common eventing model. I heard the Serverless.com guys are also talking about something like that, and, you know, I think it's a big necessity, because we want to drive all sorts of events, and I don't want to start implementing plugins for every source of data that exists. Okay.
A: Yes, yeah, you bet. All right, cool. And, Mark, you'd mentioned at the beginning of that section that you were hoping the open source folks themselves would sort of fill out those sections. I'm wondering whether it makes sense for us not to wait for them to do it, and instead take an initial strawman proposal, just so we have some text there that's as complete as we can make it; and if they don't like it, then they can go ahead and edit it.
B: For the 3D version, that's good, yeah. So in theory, you know, we could have something kind of like this, maybe a little prettier; but, you know, if we had something similar, possibly for, you know, the serverless landscape, right. So something similar to this.
A: For the sections that you guys own, please do your best to try to resolve those; and if you are on the call and you are adding comments to the doc, I would really, really prefer, rather than just adding comments saying "please talk about this", that you actually mark up the document with your suggested text. That makes it so much easier for someone to see exactly what you're thinking of, and that way, if they like it...
A: Anything else you want to talk about? I know we still have to really sit down and talk about the conclusion and the recommended next steps, but I feel like we've got to get the body of the text in there first. We are slowly adding more stuff to the conclusion section; we just need to spend one or more of these calls fleshing it out. Other than that, I'd rather hold off on that first and get the body there.
J: This is Austin here; I have a quick question for everyone. I think that using this document to set the stage for a standardization effort is a fantastic goal. Standardization is something we've been focused on for a long time at the Serverless Framework, given that we seek to provide this uniform experience across all the serverless compute providers. We have, I think, about nine different serverless compute vendors integrated into the framework right now, from large cloud providers to smaller vendors.
J: A lot of them have written and maintained those integrations themselves, and we want to make that experience better, as it needs to be. Of course, everyone went at this with their own opinions, and it's just created a lot of usability and onboarding issues. So, that said: we're still a bit new to the CNCF and trying to get to know everyone here, but what is the status of this standardization effort? Everyone here is interested in that.
A
Nice
way
to
put
it
but
yeah
I
think
everybody
would
like
quote
harmonization
and
that's
and
that's
why
we
may
recommend
certain
efforts
heading
down
that
path
of
code,
harmonization
but
everybody's
afraid
of
using
the
standards
word,
because
it's
a
it's
like
the
third
rail.
This
work,
even
though
people
are
doing
it
with
CSI
CNI.
A
So
at
this
point
in
time,
I
think
it's
best.
If
we,
if
we
focus
less
on
looking
for
quote
standardization
and
just
finding
areas
where
we
can
work
on
interoperability
or
harmonization,
and
so
if
you
have,
if
you
have
a
specific
areas
where
you
think
we
should
do
that
and
add
that
to
the
conclusion
of
the
doc,
for
example,
inventing
keeps
coming
up
a
lot
right.
Is
there
something
we
can
do
in
that
space?
D: Awesome. Take, for example, Serverless: they came up with those YAMLs and try to abstract things away. So, you know, just try and see how what we wrote in the implementation section here aligns. There are things that bother me with your model, for example the lack of versioning and things like that; but, you know, maybe we can get to a consensus on what would be the right model. And AWS presented SAM a couple of weeks ago.
G
Sam
yeah,
you
know
for
my
from
the
CNCs
perspective.
You
know.
Eventually,
we
would
expect
an
output
from
the
group
that
hopefully
would
consist
of
a
project
or
set
of
projects
that
would
help
kind
of
do
some
harmonization
in
this
space,
whether
it's
seated
from
work
from
an
existing
company,
that's
modified
to
be
more
inclusive
of
their
technology
or
just
something
new
from
scratch.
I
think
it's
kind
of
up
to
this
to
this
group
to
to
decide.
D: The two critical things we can work on, continuing with Chris's point: one is the event model; the other one is the model, just like what Serverless has, which allows you to define the function. I think, beyond that, the other things are needs as well; but if we have a common event model, and a simple way to describe a function and its dependencies, I think we're in good shape.
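[Editor's note: as a concrete illustration of that second piece, the function-description model, here is a minimal sketch, loosely in the spirit of serverless.yml-style definitions. Every key name and the validator below are invented for illustration and were not agreed on by the group; the point is only that a function, its triggering events, its dependencies, and an explicit version could be captured in a small common structure.]

```python
# A hypothetical minimal model for describing a serverless function,
# its triggering events, and its dependencies. Keys are illustrative.
function_spec = {
    "name": "resize-image",
    "version": "0.1.0",            # explicit versioning, noted above as
                                   # missing from some existing models
    "runtime": "python3",
    "handler": "resize.handler",   # module.function entry point
    "dependencies": ["pillow"],    # what the platform must provide
    "events": [                    # how the function gets invoked
        {"type": "http", "path": "/resize"},
        {"type": "stream", "source": "kafka://uploads"},
    ],
}

# The minimal set of keys a spec would have to carry.
REQUIRED_KEYS = {"name", "version", "runtime", "handler", "events"}

def validate_spec(spec):
    """Reject specs missing the minimal keys or without any trigger."""
    missing = REQUIRED_KEYS - set(spec.keys())
    if missing:
        raise ValueError("spec missing keys: %s" % sorted(missing))
    if not spec["events"]:
        raise ValueError("a serverless function needs at least one trigger")
    return True

print(validate_spec(function_spec))   # prints: True
```

Paired with a common event envelope, a description like this is the kind of small surface area that each platform could map onto its own deployment machinery.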
J: I totally agree. So how much are we going to put in this white paper? It sounds like we're just kind of highlighting these areas where we'd like to harmonize. Well, here's where we're at as a company: we've been working with a lot of these providers on this for a little while, and we've got a few things down, from event schemas to function models, down to pretty low-level kinds of specifications and gestures; and I don't think the white paper is going to be the right place to put in all those low-level details.
J: I think the goal of just kind of outlining the major areas and talking about them at kind of a high level is the right approach for the white paper. Is there a separate place where we could get into the low-level details? Because we've made a lot of progress here, and we'd love to just start getting into this stuff with everybody here in the CNCF.
G: Yes, you could create a GitHub repo, call it whatever you want, "common serverless event model" or something, and kind of collaborate that way. That seems to be a common approach; that's what happened with other initiatives like CSI and CNI and so on. But it's really up to this group to decide; I can just kind of give you examples of what has worked for other CNCF projects.
J: Fantastic, that sounds great. I just joined the CNCF Slack channel, so, you know, we'd love to chat with all the people interested in this. We're also working with some providers who are interested in moving pretty quickly on this, because there's just a lot of innovation coming out in the space and they have tight deadlines. So we could do this all together and come up with something that works for everyone, of course.
A: Okay; although I'm not hearing a huge list of people saying they can make it, so let's keep it on the schedule for right now. If you can't make it, please let me know, and if we get enough people saying they can't, then we'll reconsider; but as of right now, we'll keep it on for next week. All right, thanks a lot, guys; we'll talk to you later. Thanks, thanks.