From YouTube: CNCF Serverless Working Group 2020-03-26
Do me a favor in the Zoom chat — sorry, Jim — can you just write down the company you're with?
Okay, okay — alright, let's go ahead and get started. There you go, Vlad — thank you. Alright, we'll get started and catch up with that stuff later. Hi everyone.

So, just to let you guys know, we are working on a platform charter — we are doing a serverless charter. We decided to bite the bullet and go ahead and do it. That will give a proper home to, hopefully, CloudEvents, the Workflow spec, and the two new specs that we're working on. It's not approved — it hasn't even gone to the TOC yet. I'm still waiting for a couple of internal reviews from Mark and Ken, but that's the paper currently. I will probably make that document available to you guys in a couple of days, once I get those reviews done, so I can make sure it's not horrible before I show it to you. Anyway, I just wanted you guys to know.

That's it for community time — so, anything from the community that people want to talk about that's not on the agenda?
Thank you. What I tried to do was write it up to talk about more of a developer-experience kind of SIG, because I was thinking about what serverless means to a lot of people, and when you look at it from a technology perspective, it overlaps with a ton of other stuff, right — the other SIGs in particular. But what those other SIGs don't necessarily focus on, I think, is how to make a developer's life easier — because to a lot of people, that's what serverless is really about, in some ways. Right, yeah, there are some features like scale-to-zero, stuff like that, but a lot of it is letting the developer go back to being a developer, as opposed to an infrastructure expert. So I tried to write it up from that perspective, to differentiate us a little.

However, having said that, I did add a paragraph in there that says that, realistically, there's probably going to be some overlap with other SIGs, and so we're going to have to work with the other SIGs on a case-by-case basis to see which project really belongs under which SIG. So it's going to be a little bit of give and take. That's the current wording, anyway.
Let's say, in the future, some project — like, say, Knative; I know it's not heading here now, let's just use it as an example so everybody knows it — say Knative decides to try to join the CNCF. Should that go into runtime, app platform, or serverless? I think you can make an argument for any of those three.

In my mind, I would put it towards our SIG, mainly because, while Knative obviously is a bit of an infrastructure kind of platform, its main focus is to make life easier for the developer, by trying to abstract and reuse the underlying technology and just simplify it for the users. Even though it does have some features of its own, that's not its main point, in my mind. So that's my current thinking, anyway.
Yeah, sorry — first of all, you can probably change this from Kathy to my name if you want; I'll be doing the updates. I'll probably start joining on a weekly basis — whenever you feel like it, yeah. We're doing a couple of things currently. There's a proposal, as you guys mentioned, so we're kind of waiting on that decision, and having a serverless SIG, I guess, will definitely help with that. So that's one thing we're hoping will happen. I don't know if anybody might have a time frame for us, as far as when or how — I don't know if anybody on the call today knows, or might know.

The second thing: we're meeting weekly and discussing the Workflow primer — the specification primer. So that's going on; that's a big thing. And the third thing is we're working on our first version, 0.1, so we're updating all the documents and setting up the Git branches and everything for the first version.

So those are the three big things going on with Serverless Workflow right now. And also, I just wanted to thank the community — just like in this meeting, we've been having a lot more people starting to join and starting to get interested during the meetings. So that's a big thing. Cool.
All right, cool — moving forward, then, into the PRs. Okay, is this the one? I thought we resolved this one, didn't we?

So, a couple of things came out — in particular, the idea of expanding the available CE source attributes. I got a lot of feedback through different channels that that wasn't necessarily doable for some implementers, so I think we should probably debate that a little bit further before we decide to put it in. I also took out the source structure, because we didn't have a template for it — there was really no way for it to be interpretable by machine. So that needs a little bit more thought.

You know, I think this is fine for, like, an RC1. What I would plan to do is immediately send another pull request, for an RC2, that puts this more in a REST structure — thinking about a hierarchy of resources. I've got it looking at me on my desk right now; I sketched it out on paper last night and just need to type it up. I think it makes it a little more palatable.

The other thing is — I don't want people spoiling it on Slack, but I think Jim suggested GraphQL last week, so I went and learned GraphQL and implemented this as a draft. I think it's actually probably slightly more elegant, but that's more a question for the group: how do we feel about pushing a GraphQL API?
A quick discussion point about process, since you mentioned an RC2: the way we've been working in the past for CloudEvents is that the next version is typically labeled just RC-N or whatever, even long before we think about officially releasing it or publishing or anything like that. So if we accept this pull request, any additional pull request made to this document would still be under the banner of RC1, because it's not actually released for anything yet.

Yeah — so again, from a bit of a process perspective: obviously I'm assuming we're going to accept this pull request, and once we do, I would hope that people will now either open up issues or pull requests against this document. So, for example, the GraphQL discussion — I would actually think the right place to start that is as an issue. So maybe Mike, or Thomas — either one of you guys want to open up an issue to start that discussion? Yeah.
Okay, cool — so we will now make that the official first draft, so I'll accept it. It's all approved. Okay.

So now — as Mike mentioned, and thank you, Mike, for all the work on that, it's appreciated — as Mike mentioned, Clemens is actually out sick this week. I don't believe it's the virus or anything like that, so that's good, but he did get a recommendation to take it easy, so he didn't get a chance to address any of the concerns or the raised issues here. So, ignoring the outstanding comments in there:

What I'd like to do is propose that we conditionally accept this as the first rough draft, contingent upon Clemens, when he feels up to it, addressing all the comments in here, and then merging it. The reason I don't want to merge it now is that I don't want to run the risk of losing people's comments in here, because once you close the PR, sometimes people tend to forget about it and move on, whereas if you leave it open, it's a nagging reminder for them.
So, we talked briefly about this last week, but the changes were made too soon for us to approve it. Just to refresh your memory: there was a little bit of a question about how a receiver should know whether a binary message is actually a CloudEvent or not. So what I did is I went through and modified all of the transport bindings to basically add text.

Nothing in this text is normative — it's just guidance for people, to help them make a guess as to whether they should even try to parse a message as a CloudEvent when it's in the binary format. I believe this applied to all transports except NATS. Basically, the language is pretty much what you see here for every single transport binding, except I made it specific — for example, some are called properties, some are called headers; minor differences like that. Basically, it's the same thing.
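To make the guidance concrete — a minimal sketch of the kind of guess a receiver could make for the HTTP binding, using the `ce-specversion` header that binary-mode CloudEvents over HTTP carry. The helper name and logic are illustrative, not the spec text being discussed:

```python
def looks_like_binary_cloudevent(headers):
    """Heuristic check: a binary-mode CloudEvent over HTTP carries its
    required context attributes as ce-* headers, so the presence of
    ce-specversion is a strong hint that parsing the message as a
    CloudEvent is worth attempting. This is guidance, not a guarantee."""
    normalized = {k.lower() for k in headers}
    return "ce-specversion" in normalized

# A message with CloudEvents headers is flagged; a plain JSON POST is not.
assert looks_like_binary_cloudevent({"ce-specversion": "1.0", "ce-id": "42"})
assert not looks_like_binary_cloudevent({"Content-Type": "application/json"})
```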
A
All
right,
thank
you
all
right
now.
This
one
is
something
that
Jim
put
in
there.
Actually,
how
long
do
you
put
in
there?
Okay,
technically
it's
it's
too
soon.
However,
let
me
view
it
this
way.
I'd
like
to
do
is,
since
this
is
completely
non-normative
and
it's
just
sort
of
editorial
type
text.
What
he
did
is
he
added
this
section
right
here,
I'll
give
you
guys
a
chance
to
read
them.
Okay, I had maybe a comment here — sorry, this is my fourth meeting. I was wondering: does it make sense to have a master document that pulls together and shows the relationships between a lot of these efforts, to give a little bit more context, so that we can appropriately dig deeper? Does that make sense?
That would be great, thank you. Yeah — actually, I'll just write this down, but I have an action item to, at some point, look at restructuring our directories, because everything you see is currently focused on CloudEvents, and if we're going to have three different specs, it makes more sense: each spec might have its own primer or ancillary documentation, and a single flat directory structure isn't going to be as nice for people. So moving stuff around into subfolders would be good — and then having this document that you're proposing at the top.
— event formats. So that could be JSON, could be Avro, could be whatever format, and each event is put inside a single part of the multipart envelope. So we can send multiple events, and we can also optionally give a name to them. That's the first idea. The second idea is to create a custom multipart content type, which is also based on RFC 2046.
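A rough illustration of the multipart idea being described — each part of a multipart body carries one event in some event format (JSON here), optionally named. The boundary string, part headers, and helper are assumptions made for illustration; the actual framing is what the proposal would define:

```python
import json

def build_multipart_events(events, boundary="ce-batch"):
    """Assemble several JSON-format events into one multipart body,
    one event per part, with an optional name carried as a Content-ID
    part header. Purely illustrative of the proposal being discussed."""
    parts = []
    for name, event in events:
        parts.append(
            f"--{boundary}\r\n"
            "Content-Type: application/cloudevents+json\r\n"
            + (f"Content-ID: {name}\r\n" if name else "")
            + "\r\n"
            + json.dumps(event) + "\r\n"
        )
    return "".join(parts) + f"--{boundary}--\r\n"

body = build_multipart_events([
    ("evt-1", {"specversion": "1.0", "id": "1", "type": "demo", "source": "/x"}),
    ("evt-2", {"specversion": "1.0", "id": "2", "type": "demo", "source": "/x"}),
])
assert body.count("--ce-batch") == 3  # two part openers plus the closing marker
```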
So I think authorization and authentication can just live in the headers on top of the request. I mean, the assumption is that when you start receiving multipart CloudEvents, you can receive them, so the global headers of the whole request should contain the authorization and authentication headers.

That means we need to investigate how easy it is to actually implement this — for example, in Go it's quite simple to do, but I mean, I still need to do better research. Well, the first one, the multipart structured proposal, follows the RFC — I don't remember the name; if you look at the top of the document, you'll see it.
So yeah — there it is: RFC 7578, the RFC for multipart/form-data. The goal of the multipart structured proposal is that, since we said we're using the multipart/form-data content type, it's hard for an implementation to understand what you have inside each part —

— with multipart/form-data it's not hard, because HTTP implementations handle that; it's different in that not every HTTP implementation can implement full multipart — some of them just implement multipart/form-data. Again, this requires some investigation. Yeah — if you're interested, I can go ahead and try to create some prototype implementations, check how it works, and we can go forward from there. I mean, for now.
Yeah — that's also what I wrote on the original ticket. So, the concern here is efficiency, but from my point of view, the main goal should be interoperability, and what follows from that, for me, is simplicity — an approach that's easy to implement. Those are the primary goals for me, and sure, if we can also achieve efficiency, that's good. But if it relies on doing some things that are clearly more efficient — I'm not questioning that — but that are maybe harder to implement, I'm wondering whether that is really a good idea and worth it.
That leads me to another point: does it make sense to have these, together with the batch mode that we now have in the HTTP binding, in another sub-spec — like an independent HTTP binding for multiple events? Because right now we have the HTTP binding that supports binary and structured modes, but a good part of the SDKs we have now don't support batch — the Go SDK, the main one, doesn't support batch. So maybe it makes sense to put those into a different HTTP binding.
I mean, your question was whether it was a mistake to add it. So here, for this one — I think the batch mode itself is optional, so yeah. I think that's also one of the points Francesco just made: maybe we should have multiple specs, where we say one is the one that everyone has to support, and here's all the fun stuff you can do — which, you know, we probably don't know how widely it's implemented, and so on. So that is one way we could go. Pretty cool, yeah.
What if your function depends on getting multiple events in the same request? Yeah, yeah — because I think that is conceptually something we can maybe implement with HTTP that we won't have with any of the other transport formats. So you're building a function that can only work with HTTP, because if you look at all the other transports, the grouping or the batching of events is more or less random. It's more an optimization at the transport level: I need to send you, I don't know, fifty events.
So, and I think for that use case — to keep the interoperability and to make sure all transports can support the same thing — I would really like to keep this use case out of the spec, because there's no way we can implement it with the other specs at this batching level. We really need to have a different terminology for this kind of thing, like grouping or nesting or whatever. It's a valid use case, I'm not questioning that, but it's different from batching, which is purely a performance optimization — if that makes sense.
— a transition of, you know, a phone call or a message or whatever happens within their account, and a lot of customers want batches of those, because they basically don't want us to denial-of-service their web server. So batching is definitely a use case that we need, and that we're going to implement regardless of whether it's in the spec. I'm not arguing that it should be in this spec — maybe there's an argument for it to be in the HTTP-binding-specific part of the spec — but it is definitely a valid use case for us.
Well, so there's a workaround in the SDK — or sorry, in the specification. It means anything that supports JSON structured mode can support batched mode, meaning you can always send an array of CloudEvents. It's just not very efficient for things that don't natively support batched content, like Kafka.
What he is essentially trying to propose is a stream of events over HTTP, so that we don't have to do JSON marshalling and produce this JSON array, which is a very inefficient process.
What he asked to discuss was the earlier comment — that processing a batch is hard. So I think if you look at other protocols, like Kafka or Pub/Sub or whatever, if you send a batch of events you basically get — as Doug said — one response, which is: okay, you sent me ten messages or events, I got them, that's it.

I'm not talking about the producer — it doesn't need to worry about this. I'm talking about the case where the SDK is acting on behalf of middleware. So it's not the producer — maybe it's some other thing that's taking a batch off Kafka and trying to deliver it as a batch over HTTP. That consumer is going to take in that whole array, and what is the processing story for that situation? It became very unclear.
You get in this blob of JSON that's ten events, and that connection is going to stay open until the consumer says "I've stored it", right? Like, you don't want to lose messages, because it's either going to ack a Kafka stream — and an ack means I'm going to move the index, or I'm going to drop the events, or I'm going to erase them from disk. So there's more at stake on HTTP, because it doesn't have that centralized broker that's going to stream things out for you.
I guess the question — and I don't have a strong opinion here yet — is whether we want to prescribe delivery semantics in this spec, because a lot of that's going to be technology- and implementation-dependent. And even within the technologies that do support, you know, at-least-once delivery, like Kafka — a Kafka cluster can be configured in lots of different ways in terms of how that delivery and how the persistence works, right? Like, there's a TTL on a topic.
I was also going to channel Clemens and basically say the same thing, right? For people who are new: this is a topic that comes back every so often, because of this semantics issue, and people bring different semantic assumptions based on their use cases or their tooling, and trying to harmonize this across all these different use cases is seriously quicksand, yeah.
And I will point out that both of the proposals that we've seen so far from Francesco do not get into semantic processing. His follows the same pattern that the rest of CloudEvents does, which is that it just defines the format — which, you know, obviously leads into the question that Scott asked: what do you do with errors and stuff like that?
Okay, so you talked about possibly going off and implementing these things. Yeah — obviously you can do whatever you want to do. Personally, if I were in your position, I would wait at least a week, because since we just presented these on the call today, I'd wait until next week to give people a chance to look at these, think about it, and see what their reaction is next week.
Does anybody have any high-level comments, questions, or concerns? I mean, does this seem like something we should consider going forward? And maybe it's more a question of how we position it in the specifications. Like, for example, do we include it as a separate spec? That way, it's more clear that it's optional and not part of what we consider the core, whereas if it was part of the HTTP spec, it may not be quite as clear.
— at all, for CloudEvents. I wasn't expecting that when I read the specification for the first time, so I was really wondering: does it really make sense to have batching for small amounts of data in CloudEvents at all? I mean, if someone wants to go out and implement it for themselves, fine — but referencing it from the specification, I'm really wondering whether that makes sense or not. Okay.
Plus, like, you could do outbound streaming. If you have a very big list of events you want to stream out — one that maybe doesn't fit in a normal-size buffer, like, think IoT cases — you could stream out single processed events one at a time on the wire, and you wouldn't have to produce that giant list of all the events that you're going to stream.
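The streaming point above can be sketched with a generator that yields one serialized event at a time, so the producer never has to materialize the whole array in memory. Newline-delimited JSON is just one illustrative framing here, not something from the proposals under discussion:

```python
import json

def stream_events(events):
    """Yield events one at a time as newline-delimited JSON, so a large
    sequence can be written to the wire incrementally instead of being
    marshalled into one giant JSON array first."""
    for event in events:
        yield json.dumps(event) + "\n"

# The source is itself a generator, so nothing is buffered up front.
chunks = list(stream_events(
    {"specversion": "1.0", "id": str(i), "type": "iot.reading", "source": "/sensor"}
    for i in range(3)
))
assert len(chunks) == 3
assert json.loads(chunks[0])["id"] == "0"
```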
One of the proposals had an option so that you could do multipart streaming with binary mode, right? I think there aren't three proposals — there are two on the table: there's a multipart for binary, and there's a multipart for JSON.
They are not the same, but the thing is that they can be joined, because as soon as you know that you are in a multipart CloudEvent, for every part you can send an event as either structured or binary — you can check which encoding you're using, right? Does that answer your question?
Yes, yes — sometimes, yeah. Well, I'm going to put it on the agenda for next week as well, so it's all still there. Yeah, thank you. Okay, all right — thank you, Francesco. Any other topics for this week's call? All right, in the last minutes — did I miss anybody on the agenda? Okay, on to the attendee list, I think.