From YouTube: CNCF Serverless WG Meeting - 2019-08-01
A
Alright, we're not doing an SDK call today, so no updates from the SDK, other than that we do have a call planned for right after this one, I know. Obviously Clemens can't make it, but Scott, hopefully you can. I think Tim may have some issues he wants to bring up, so Scott, even if you didn't complete your action items, if you or anybody else who usually joins the SDK call can try to join it, we can answer Tim's questions, because I think you added some. So that'll be a good reminder, right.
A
...that we want to talk about first, so I'm going to try to write something up, hopefully before next week's call, so we can start talking about it, and that should give us a clearer picture of what we're going to do. Chances are it's going to look very similar to what we did in the past, but I just wanted to have some talks offline about it first. We do have some time; it's not due until, like, November or something. Alright, moving forward. Before we jump into the PRs or issues and stuff, are there any other topics?
A
Okay, cool. In that case, what I'd like to do first is just a little bit of cleanup. Tippy knee opened up this issue, or PR, a long time ago, and there have been some comments on there that he hasn't addressed. I think I've pinged him many, many times asking what's up with that, and he hasn't responded, so I'm inclined to say that we close this PR right now. It is just an extension, so it's easy to add later if we need to, and we can also reopen it if he reappears.
A
So let's see. Evan did modify his PR, I believe on Tuesday evening, and I think the only significant change was that he removed this section here, which defines the mapping of how to serialize a map into, say, an HTTP header with the dash notation. I think that's pretty much the only change he made since then. So, basically, the general gist of this, to refresh people's memory, is to get rid of maps as a valid attribute type. It does not touch the data attribute, just all the other attributes, including extensions.
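The dash-notation mapping being referred to can be pictured with a small sketch. This is only an illustration of the idea (the exact rules lived in the removed PR section); the extension name, header prefix, and separator below are assumptions, not spec text:

```python
def map_to_headers(ext_name, values):
    # Hypothetical flattening: each entry of a one-level map-valued
    # extension becomes its own HTTP header, "ce-<extension>-<key>".
    return {f"ce-{ext_name}-{key}": str(val) for key, val in values.items()}

# Example: a map extension named "traceinfo" with two keys.
headers = map_to_headers("traceinfo", {"spanid": "abc123", "sampled": "1"})
print(headers)  # {'ce-traceinfo-spanid': 'abc123', 'ce-traceinfo-sampled': '1'}
```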
A
So
on
last
week's
call,
there
seem
to
be
general
consensus
to
head
in
that
direction
of
removing
maps.
However,
I
want
to
pick
on
Vladimir's
a
second
here.
Wladimir
Jim
did
ping
me
offline.
He
still
feels
quite
strongly
that
we
should
be
able
to
keep
maps
if
we
just
simplify
them
down
to
things
like
only
one
level
of
depth
and
stuff.
Like
that,
do
you
want
to
speak
to
that?
B
You mean, like... okay, so let's say I want to make a three-level-deep map. So now I have to write some sort of custom convention that splits my map keys into the CloudEvents attributes. So I think, you know, if you're going to write something custom anyway, just use CloudEvents and be able to use the keys, and maybe it's not so bad if we use a different string in your custom extension to split the keys in the map.
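The custom convention being described (splitting deep map keys yourself) might look like the following; a minimal sketch, where the separator is the "different string" mentioned, chosen here arbitrarily:

```python
def flatten(nested, sep="_", prefix=""):
    # Recursively collapse a nested map into a one-level map whose keys
    # are the original key paths joined by `sep`.
    flat = {}
    for key, val in nested.items():
        name = f"{prefix}{sep}{key}" if prefix else key
        if isinstance(val, dict):
            flat.update(flatten(val, sep, name))
        else:
            flat[name] = val
    return flat

# A three-level-deep map collapses to single-level keys:
print(flatten({"a": {"b": {"c": 1}}}))  # {'a_b_c': 1}
```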
A
Okay, that did it, you forced me into it. I'm going to pick on Christoph, for two reasons: one, to make sure he gets on the roll call, but two, because Christoph, you tend to have lots of really cool ideas and deep thoughts on things. What's your opinion on these two PRs? Have you had a chance to review them?
D
Yeah, I think that... so James's 470 is, I'll try... it's more simplified, you know, in terms of text, than 471, the alternative one that Clemens came up with. I think the problem that James is trying to solve is being able to have binary data transported in such a way that it's well known how to transform it as it goes through multiple hops, you know, based on some other text that I've read from him.
D
Yeah, let's just say that there's a lot more text, yeah. And when I looked through this, the implementer in me starts saying, you know: what's the flowchart that I need to follow in order to be able to correctly decode and re-encode a cloud event? And I'm worried that that's not simplified enough, and if it's too complex, then people will get it wrong. So I'm up in the air on this one.
D
470 is likely more, you know, simplified and straightforward in terms of its normative discussion. So really what I'd like is for other people to take a look and comment on this as well, because without either James or Clemens to help with some of the discussion, it's more difficult for me to know the exact content, yeah.
A
I appreciate you speaking up, thank you, because I spent quite a bit of time this morning going back and rereading both PRs and the original text in the original issue, to see what was really, you know, the genesis of all this, and I feel like I have a better understanding now that I've gone back and refreshed my memory. But I also get the sense from reading both PRs that it is...
A
It just feels like there might be a simpler solution, even though both of them may be a hundred percent accurate. It's just, from an outsider's point of view reading the text, I may be asking myself: why is there so much text to explain something that should be easy to do? And the fact that there's so much text leads me to believe I'm missing something, and that makes me scared. So I'm wondering whether... Mark, you said something interesting in there.
A
You said that you thought that James's was easier from perhaps an implementation point of view. I'm wondering if what that means is that what we should do, when Clemens gets back from vacation (I believe he's back next week), is ask him to perhaps look at making minor editorial tweaks to James's. That way we get the simplicity from the implementation point of view, but maybe also the sort of deeper, in-depth discussion points from Clemens, and sort of merge it.
A
Okay,
well,
it
sounds
like
we
can't
come
to
a
vote
on
this.
If
only
a
few
people
actually
read
it
and
we
do
want
to
try
to
skip
the
other
guys
involved.
Yeah
I
know
James
is
gonna,
be
real
difficult
to
get
on
the
call
here,
mainly
because
of
the
time
difference,
I
think
he's
in
Australia.
A
Obviously, when Clemens gets back, then we're going to have, you know, his point of view, which is obviously going to be biased toward his PR, so it's going to be kind of a challenge not having the opposite party on the call. But I guess the best we can do is just hold off and see whether Clemens can do some magic in terms of merging the two, because at this point in time I don't feel comfortable trying to push a vote on this. I just don't feel like we've had enough review of this stuff.
A
So
unless
someone
has
a
more
brilliant
idea,
there's
a
way
to
go
forward.
We
may
just
have
to
defer
next
week
and
just
ask
you
guys
to
please
look
at
these
for
next
week,
because
we
got
to
have
a.
We
gotta
have
a
deep
discussion
on
this
stuff.
This
is
and
the
map
one
under
the
I
think
biggest
issues
outstanding
41.0.
A
Okay, in that case, I'll see what I can do to move it forward. All right, Scott, your batching one. So I think there are actually two different discussions here in the same issue. One is where we went off on a slight tangent with the webhook specification, and then, of course, there's the batching issue itself. So let's focus first on the batching.
A
...since that's what the issue is about. This morning I tried to summarize four different options here, as I see it. If I missed one, please let me know, but I think the four options are: one, go full-bore and completely define batching, and that means not just from a syntax perspective but including the processing model definition. So, for example, one of the things Scott thinks is missing is some sort of response back to the sender to indicate whether each individual event itself was processed in some way.
A
Another option is to remove batching from the spec but talk about how you can do batching if you really wanted to; it becomes an application-level definition, meaning the batching gets shoved inside of the data attribute, and then it becomes up to the application to figure out how to extract it and process each one. But from a transport-level perspective, this is still just a single cloud event that gets sent over the wire, so it's kind of like doing nested cloud events in some fashion. And then, finally, just remove batching entirely and say nothing at all about it. I think those are the four options that I could see people possibly mentioning. Are there other options people can think of?
F
Could we keep batching non-specific, in the sense of option four, and then do something for CloudEvents 1.1? I mean, batching is something that isn't addressed in any other transport, as far as I know, at the level we want to address it, and implementing responses about "hey, which one failed" and all that would make CloudEvents way more complex, in the sense that, as you said last week, we're not going to be able to just describe it as "hey, just add these headers, and now we have a cloud event." It's going to be "hey, add these headers..."
A
So Mark and I actually talked about this a little bit yesterday, and correct me if I'm wrong here, Mark, but I think when we talked about this, we couldn't come up with a way to add it to the spec without it being a major version bump, meaning we'd have to go to version 2.0 in order to add it. And I believe the biggest reason is because a receiver of a cloud event that has batching... I mean... oh no, this is the map discussion, never mind.
A
My initial take on it is, for both maps and batching, I don't see how to add that without it being a breaking change, because in most cases I think people are expecting cloud events to be sent as one-way messages, which means you have no guarantee that the other side actually got it, aside from maybe a 202, right, in the worst-case scenario. And at that point you don't know whether the other side understands maps or batching, so you're kind of in the dark, and there's no reliable way for the sender to know.
E
Okay, so the thing that I think is difficult to understand is that there are basically two types of batching. One is batching at the transport level, where you have no semantic grouping of the components together, and the other one is more a semantic batch, where those events belong together for some reason. So let me try to explain them a little better: if you have a bus, you just put people in there and they don't know each other; they just commute together, and they have no relation to each other.
E
That would be transport-level batching, so it's just random, by chance, that they are together, and I think that part we cannot really remove, because there are transports that do this today. For example, Kafka just does it for you; you can configure it, but by default Kafka just does it. It just waits for a defined time in your client, bundles the messages together, and sends them to the server, and that's about it. And I don't think we should remove that or forbid a transport to do that.
E
The other thing is to have more semantic grouping of things. So if you say, okay, this group of persons, they're actually a family, or they're part of the same group or whatever, then there's a semantic meaning to why they belong together, and that's a really different concept. And then for events, that would be, maybe, okay...
E
...these events have been collected by this IoT device over the last minute or so, and then you want to group them together, and the fact that you grouped them together should also remain if you move them across several transports. So I think when we discuss this, we should really make a distinction between those two. And what I did when I made the original comment in the spec was adding this transport-level batching, where it says...
E
Yes, a single transport can batch events, but as a sender and as a receiver, you just take them as a random group and you process them one by one. And then, if you hand them over to the next one, if you're just an intermediary, you're free to break up the batch or create new batches and so on, yeah. So this is kind of what I'd like to keep, and then the next question is: how do we deal with this at the HTTP level?
C
I think Christoph explained it much better than I would have, and I agree with him a hundred percent that we can leave it as is, where we have defined it at the transport level, which is what I would like to do, and not change the syntax of a cloud event to include the concept of batch in the CloudEvents specification itself. So my vote, my strong preference actually, is to leave things as is; I'd leave it at the transport level.
A
I want to make sure I understand what you guys are saying when we say "leave it at the transport level," because my interpretation of what's in the spec right now is not necessarily batching at the transport level; all we're really doing is saying, if you want to send a batch of them, here's the JSON for what it looks like, right?
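The JSON batch wrapping being referred to is essentially just an array of event objects; a minimal sketch, where the event fields shown are illustrative:

```python
import json

def encode_batch(events):
    # A JSON batch is simply the events serialized as one JSON array.
    return json.dumps(events)

batch = encode_batch([
    {"specversion": "0.3", "id": "1", "type": "com.example.ping", "source": "/demo"},
    {"specversion": "0.3", "id": "2", "type": "com.example.ping", "source": "/demo"},
])
# The receiver decodes the array and handles each element as an
# independent event.
events = json.loads(batch)
print(len(events))  # 2
```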
A
It's
an
it's
a
it's
an
array
exactly
and
that's
good
Innes,
okay,
because
I
wasn't
doing
that
transport
level
as
much
as
because,
because
it's
not
like
we're
actually
interacting
with
the
transport,
it's
just,
we
just
defined
sort
of
the
wrapping
for
it.
Okay,
one
meter
was
on
the
same
pages
you
so
you're
advocating
for
basically
number
two
leaving
it,
as
is
you'll,
be
being
in
SAS,
exactly
yes,
okay
and
just
to
for
clarity
sake,
Christoph
you're,
you're,
basically
saying
keep
it
as
is
number
two
yeah:
okay,
cool.
Thank
you.
Okay,
Scott
your
hands
up.
E
Hey, I couldn't respond earlier, I think. So if we just talk pure HTTP, not the webhook spec, then, and maybe I'm saying something wrong, but my point here is we have the same problem: we don't even know what is going on. So from a pure HTTP transport layer, all we define is that there are some headers we add, and then the response you get back is up to you. There is not really a definition of an error code, so at the HTTP level it implies...
E
...if you get a 400 error back, you did something wrong, but we don't have a definition saying, I don't know, "your JSON is broken" or "I do not accept that format" or something; we did not define these errors. So there's not really a way to acknowledge or not acknowledge a message fully, either, at the level that we have it at right now.
E
The other thing I'd like to say is that what you still can do is acknowledge or not acknowledge the whole set of events, which is obviously not as good as what most other transports do. Well, yeah, I think once we go into that, we should focus on the webhook spec; that's where it truly belongs, in my opinion.
C
I'll just say how we implemented it in Adobe with batches: we just send a whole bunch of events, and if the response is a 2xx, we say it's done. If it's not accepted, we say something failed and we'll deliver them all again. So we don't need to have an individual acknowledgment for each one of the events in the batch; we just treat it as all or nothing, yeah.
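The all-or-nothing approach described can be sketched in a few lines; `send` stands in for whatever HTTP client is used, and everything here is illustrative, not Adobe's actual code:

```python
def deliver_batch(send, events, max_attempts=3):
    # All-or-nothing delivery: any 2xx response settles the whole batch;
    # anything else means the entire batch is redelivered.
    for _ in range(max_attempts):
        status = send(events)
        if 200 <= status < 300:
            return True
    return False

# Example: a fake endpoint that fails once, then accepts.
responses = iter([500, 200])
ok = deliver_batch(lambda events: next(responses), [{"id": "1"}, {"id": "2"}])
print(ok)  # True
```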
A
But look, my point in saying that is, these are supposed to be one-way, right? So in the worst-case scenario, you know, assuming you don't get a 500, let's assume you get at least a 202 back saying "yes, I got it." That doesn't tell you anything beyond "yes, I got it," all the way up to maybe a 200, which probably means you successfully processed the whole thing.
A
So let me poke at that a little, just to make sure I completely understand it. Let's say you get back a 202 in the single-event case. All that means is "I got it." It could have been dropped on the floor by mistake immediately afterwards, but from your point of view as the sender, all you have is the 202. If I now turn on batching, why does that 202 mean something less significant to you? You know, it's already pretty insignificant, other than "it got there."
D
So Scott, it sounds like you're assuming the middleware portion of it will do the delivery of each event and give you a response inline, while you're waiting for a response back from that server, and I don't know that that's necessarily the case. More likely it would give you a response code saying "I accept it" or "I don't accept it," but then would enqueue each of those events, possibly, for later processing and later delivery.
B
I think that's what most systems would do. There's a persistence receipt, so you can say: okay, I got the event, I've successfully unbatched this batch and I've put it into my persistence, or I've processed it in some way, and the original request is going to be held open for most processing models.
A
...that says that a 400 does not have side effects, right? So it's possible that it started processing, that the one event did some changes to the back-end system, and then things died, and it did not roll anything back. And the reason I'm mentioning this is because I'm trying to equate that with the batching case, where you process the first five and then the sixth one dies and you get back a 400, and in my mind I'm trying to see if those two line up, to say: well, it's pretty much the same thing.
A
Okay, so to try to narrow things down and move things along, let me ask this (and I'm going to say it in a very biased way, so forgive me): is there anybody on the call who would like to advocate for position one, which is to fully define the processing model, rather than leaving the boundaries of cloud events as just a syntactical thing of how an event looks on the wire? You'd actually get into the processing model and semantics there. Anybody that wants to advocate for number one?
E
So this is kind of my, you know, compromise, whatever you want to call it: do it, but do it outside of cloud events, and make sure that for the HTTP processing model we have one way, a very well-defined one, and people can standardize on that, or they can also do their own thing, which is also fine. So we don't force people to use our webhook spec, because there are a hundred different ways to do HTTP calls anyway. Okay.
D
I was going to comment: if we truly want this to be number one, then we likely should expand the HTTP transport spec to include, you know, the error codes or the result states being returned. For example, you know, I just pulled up the standard list, and, you know, 202 Accepted: a receiver can just say "okay, I accepted it" and you don't get any other information. But then, if it's like a 200 OK, we would have to think about what the payload is in the batch case.
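The distinction being drawn here (a 202 with no detail versus a 200 carrying per-event results) might be sketched as follows; the result-payload shape is entirely hypothetical, since the spec defines no such thing:

```python
def batch_response(accepted_only, results):
    # 202 Accepted: the batch was taken in, no per-event information.
    # 200 OK: reply with a (hypothetical) per-event result payload.
    if accepted_only:
        return 202, None
    return 200, [{"id": r["id"], "ok": r["ok"]} for r in results]

print(batch_response(True, []))  # (202, None)
```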
A
...versus just dealing with the singleton cloud event case. If you think that the webhook spec should handle batching as well, all right, but do you think that the definition of batching should be in the webhook spec as well? Because I viewed the webhook spec as a very generic HTTP spec, basically, with pretty much nothing to do with cloud events. But if we push the batching stuff into that spec, then it becomes a little bit of both.
E
That's a good question, but I think the webhook spec is missing a proper response definition right now anyway; as I said before, it has some error codes, but it doesn't go into details about what a 400 means. So once you go into these details, you can also do a special subsection for when something is batched, and I think that holds whether you transport cloud events as a batch or whatever else as a batch.
E
What I'd suggest is to say: batching is a thing, but we at the spec level do not define it. Each transport can do whatever they want and support batching, but they need to make sure that it's transport-level batching, not a semantic meaning of a batch. As long as transports do that, everything is fine; we don't have a problem. Then we go in and say: we have JSON as a format, because we define it.
E
We also say: here is how you can do a batch in JSON. And then we have HTTP, where we define how we send it over, and basically the only thing we say is: whatever your format is, if it happens to be JSON, then it looks like this. If it's something else, like, I don't know, XML, fine, you just add "batch" at the end of your content type, and that's it. And we still, in neither place, define a processing model.
E
So far... so I think you will need that processing model anyway, and once there is batching, that processing model, or at least the responses, need to look a bit different. And then in the webhook spec we actually go in and define a concrete processing model for what delivering a single event, or a batch of events, looks like.
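The content-type convention mentioned above, adding "batch" to the event format's media type, works out like this; a sketch of the naming rule, not normative text:

```python
def batch_media_type(media_type):
    # Insert "-batch" before the "+suffix" of a structured-mode media
    # type, e.g. application/cloudevents+json becomes
    # application/cloudevents-batch+json.
    base, sep, suffix = media_type.partition("+")
    return f"{base}-batch+{suffix}" if sep else f"{media_type}-batch"

print(batch_media_type("application/cloudevents+json"))
# application/cloudevents-batch+json
```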
B
I just want to put in one more point. I think the issue that I'm having is that, given the current specification, I cannot implement something that would do things like delegate a batch of pub/sub events, send them off, and then nack upstream or ack upstream. Like, the current definition doesn't allow me to do what I would like to do, and so I guess I'm arguing for option one.
E
I'm not sure we do... to some extent, HTTP automatically does it, because it has the status codes, but maybe not fully. Like, for example, again, the case where inside the event itself the formatting is broken: there should be something more particular than the 400 error code, something more specific, so you can react to this. Right now it's just like: here's the 400, whatever it is.
A
So
so
I
want
to
circle
back
around
to
the
question
asked
earlier,
which
is:
does
anybody
advocate
for
number
one
and
kristoff?
You
raised
your
hand
but
I,
based
on
what
you
said,
though
I
don't
think
you're
actually
advocating
for
number
one
as
much
as
you're
advocating
for
number
two
with
a
follow-on
piece
of
work
of
moving
the
web.
Who
expect
some
place
else
on
expanding
at
the
cover
batching?
A
And I think if we choose to do something like number two, or technically anything but number one, that doesn't mean we couldn't do something else later (absolutely, yeah), either in a follow-on spec or even in our spec later. But I just want to make it clear that you're not actually advocating that we do number one within our spec itself.
A
Yeah. Is that right? Right? Okay. So let me go back to my original question: is there anybody advocating that, within the CloudEvents spec or one of our transport specs that we have, you know, within our scope right now, we actually define a full-fledged processing model, either for batching or for single events?
A
Okay, not hearing that. So it would then seem to me, if you want to boil it down to a boolean choice, we have a choice of either keeping batching as people have spec'd it, or removing batching, and we can have different flavors of removing batching, but it basically comes down to: remove batching, or at least the defined syntax for batching. Is that what the choice comes down to, or am I oversimplifying it?
A
Okay, just to get a feel for the group's sense: this is kind of a big decision as well and would probably require a formal vote, not an on-the-fly vote, but I'd like to just get a sense of people on the call. I know, Scott, given those two choices, your preference would be to remove batching. Other people on the call, what's your current take on this? And I will pick on people who have been quiet, so step forward and have your say.
G
I'd vote to remove it, or suggest we remove it, since we're not voting. This is a slippery slope: it's batching today, with an ack- or nack-based processing model around it, and then these things typically lead to something more complex, and in the end, you know, you'll be looking at distributed transactions, which are a nightmare. So I vote keep the spec pure and clean and, you know, keep batching out of it. Okay.
J
Trying to find the window to unmute, problematic. Unfortunately, I don't know enough about this to give my opinion. Obviously my colleague Colin gave his, so I'll just +1 him.
K
I would vote two, or I would recommend we keep it, just because, while I understand the concerns with overcomplicating things and making the spec a little bit less clean, as a practical matter, when you start doing pub/sub at scale, people are going to have to start implementing batching, and so if we have a way to guide people with the spec, I think it's going to be something we have to deal with eventually, so we might as well, you know, hit it as we're defining the rest of this right now. Okay.
A
Okay, it sounds to me, based upon the informal questioning, like we actually might be kind of evenly split, and that's unfortunate, because there's no overwhelming opinion one way or the other. And because we're kind of evenly split, I'm wondering whether that basically means we do a vote and see where things lie, because I don't know how else to move forward here. It seems like it's a very easy choice in the sense that it's very clear what the choices are; we just need to decide one way or the other. Is there...
F
Could we get some feedback from Clemens (Clemens is at Microsoft) and, I forget, Tim, from AWS? Because if they are to implement cloud events, they're going to implement it at scale and might be affected by this. I would like to make sure that they are not impacted by us removing batching. Okay.
A
We can do that. As I said, I believe Clemens is back from vacation next week, and unfortunately, as you can see, Tim isn't on the call. But what I can do is take the AI to reach out to both of them, to make sure that they are at least on the call next week or, if not, at least voice their opinion through email in advance.
A
I can try to force that, and then, I guess, try to avoid repeating what we talked about this week, but give new people on the call, as well as Clemens, who obviously hasn't had a chance to voice his opinion, a chance to voice their opinion. And then, if we don't sway people to one side or the other, start the vote next week.
K
Just quickly, from the Microsoft perspective: I work on implementing this stuff; I'm going to be the one kind of taking care of actually implementing cloud events for Microsoft. And at least from our perspective, if we don't have batching defined, well, we already do it in Azure. So if it's not defined in the spec, we'll just have to come up with our own, because it is a critical path for us. So from the vendor perspective, that's kind of Microsoft's stance, but I'm sure Clemens can give more details.
A
Okay, thank you. And, speaking just as Doug from IBM: that's actually been the entire reason that I was okay with it going in the spec to begin with, because I felt like enough people were going to do batching that it'd be great if we had a single way of doing it, as opposed to everybody rolling their own and having zero interop on it. Even though I do kind of agree with Scott and, I guess, Colin, when they said it's not fully defined.
A
Okay, anyway. So I'll take an action item to send out a note to poke, in particular, Microsoft and AWS, or, I guess, Clemens, as well as the other, you know, big guys, to get their point of view, and to warn people that we may be doing a vote next week, or a vote starting next week, I should say. Let me take some notes here.
E
So when you say "remove from the spec," do you mean the sentences that are in the primer, which basically say it's defined at the transport level, or do you want to keep those, since they basically say we don't define it in the spec? Or do you mean removing the JSON format that defines the batching, and the one thing that we do at the HTTP transport level, so basically removing it from those two, the format and the transport, but keeping it in the primer?
A
But
that's
because
you're
advocating
it
but
hey
you're,
you
bet,
because
you're
advocating
for
number
two
I
think
I
think
I
think
what
Christophe
is
asking
is,
if
it,
what
does
remove
actually
mean
my
interpretation
of
it
is
to
pretty
much
remove
it
from
the
spec
in
tours
even
talking
about
it,
but
that
doesn't
mean
that
a
transport
couldn't
batch
it
up.
If
they
do
it,
we
just
don't
talk
about
how
to
do
it.
That
was
my
interpretation
of
it
anyway.
A
The reason I'm leaning toward that is because this almost sounds kind of contradictory to what we put in the spec. I know Clemens doesn't like the word "punting" for it, but it's weird that this thing basically says we're punting, but then we turn around and define a syntax for it. So it feels a little bit awkward. Interesting, okay, okay.
A
Think
we
have
a
path
forward,
we'll
see
how
it
goes
next
week.
I
did
want
to
talk
people's
attention
to
something
and
I
just
even
though
we
only
had
three
minutes
left
I'm,
like
obviously
gonna
push
for
vote
on
this
and
I
just
opened
it
yesterday,
but
we
have
this
property
called
schema
URL,
which
to
me
is
inconsistent
because
we
have
data
content,
type
and
data
encoding
or
something
like
that
data
counting
it
in
code.
A
I,
don't
like
that,
but
we
have
these
two
fields
called
data,
and
then
we
have
schema
URL,
which
relates
to
data,
but
it's
not
called
data
and
so
I'm
advocating
actually
changing
the
name
to
add
the
word
data
in
front,
so
that
were
consistent.
So
that
way,
every
property
that
points
directly
to
data
itself
has
the
word
data
in
front
of
it.
Obviously
this
is
a
breaking
change,
but
I
wanted
to
get
a
general
sense
for
what
people
thought
about
this.
Do
you
want
to
be
consistent,
or
do
we
not
care?
A
I'll double-check on the spec, but I'm pretty sure that's the way it is, and if so, if I'm right, then I think it'd be a really good thing to clarify this with the word "data" in front. But I'll double-check the spec; maybe I'm wrong. Okay, with that, I think we're technically out of time. Okay, I just...
A
So this may be good, not just because of consistency for my OCD, but for understanding as well. Okay, so with that, we're technically at one o'clock, the top of the hour. Did I miss anybody on roll call? I think I got everybody. Okay, thank you guys. Please do review the two data PRs, one from James and one from Clemens, and we'll talk about those again next week. And if you are involved in the SDK work, please stay on the call; so that means maybe Mark and Scott, if nobody else.