From YouTube: CNCF Serverless WG Meeting - 2019-01-10
A: All right... no audio... yeah, okay. All right, why don't we get started? Let's see, short list of people today. Hey, I think the only thing I wanted to call out was, Rachel, since you're on the call: to remind you of your action item to create a PR for new categories for transport bindings. I don't fully remember that one, we'd have to check the meeting minutes, but I'll remind you of that. And the one about ... on your website. Yeah, oh cool, okay. I think that's it. All right, so: community time. For those of you, and I think we might have one or two new people, this is just a time for people who don't normally join the call to bring up any sort of broader topic that they want to bring up for discussion that isn't already on the agenda, I should say. So, would anybody like to bring up a community-related topic? All right, cool. All right, moving forward then. I don't see [?] on the call.
A: Let's talk about the SDKs very quickly. From what I can tell, SDK work is going along really well; there are lots of PRs getting merged across those repos, I guess. The question is, for the SDK people who are on the call: do you guys want to have a regular meeting? I believe, I believe the German gentleman, name of Mathias or something like that, was suggesting maybe every other week they have a phone call just to sort of keep in touch. For the SDK folks on the call: what do you guys think about that?
A: So, tell you what: there must not be many SDK people on the call, but why don't we suggest a 30-minute call before this one, every other week, and I'll put that out there as a sort of discussion point, or a vote, call it what you want, in the SDK Slack channel. If that's okay with people?
A: Okay, cool. All right: Kathy. I don't see Kathy on the call, and I haven't seen any activity on the workflow stuff, so there's nothing to mention there; we'll keep moving forward. Okay! So let's start talking about PRs. Christoph, you are on the call, so maybe you could quickly talk to this one.
D: Yes. So there were a couple of issues where we discussed batching messages, and, well, it's kind of my opinion, and then no one really disputed it, but it seems to be a question that comes up a lot, so I decided to make a pull request for it. So basically, people want to batch messages, or batch events inside a message, which generally makes sense. But if you look at a CloudEvent as such, given the specification, I don't think we should specify how batching is done.
D: That should be done at the transport level, and the reason is that many transport layers already support batching natively. For example Kafka: you don't even see it, it kind of happens behind the scenes, and it will batch messages for you. So what we should really do, in my opinion, is say: our spec says what a single CloudEvent is, and then it's up to the transport layers to define if and how they support batching of multiple events.
B: No pushback, but a thought, basically from a product-implementation perspective. So we have the case, and that's actually in Event Grid, where we make it possible for customers to send us a bunch of requests with a single interaction, and that happens in cases where there is one underlying event. In a multi-tenant system that one event basically shows up as, whatever, 50 different projections, if you will. So there is in fact one, one event happening, but you're really handing 50 events to the system, and they are all very similar. So in that case we use not the binary mode but kind of the self-contained mode, the structured mode that we have, and then go and send one request.
B: But we send that one request with an outer JSON array, and in that outer JSON array are the individual events, and that's kind of the batch operation. And that has wire impact. It's not that we must do that; there's obviously a way for us to keep doing that in a proprietary fashion.
B: But that's, that's a scenario that I think is valid: you have the need to kind of send a bunch of events into the system at the same time, and you want to make this work over, you know, the defined transport channel. And I think the one change that we would need, for the structured mode only, is to say: you know, send those events as an array, and that's what a batch is. But I'm not, I'm not in strong opposition to this.
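A minimal sketch of the outer-JSON-array batching described above. Assumptions: attribute names follow the 0.2-era JSON format, and the event contents and batch shape are illustrative, not an official batch format.

```python
import json

# Build two self-contained, structured-mode events (contents are made up).
events = [
    {
        "specversion": "0.2",
        "type": "com.example.tenant.updated",
        "source": "/example/source",
        "id": "event-%d" % i,
        "contenttype": "application/json",
        "data": {"tenant": i},
    }
    for i in range(2)
]

# The batch is one request body: an outer JSON array whose elements are
# the individual structured-mode events.
body = json.dumps(events)

# A receiver can peel the batch back into single events.
decoded = json.loads(body)
print(len(decoded))  # → 2
```

The point of the array being the outermost element is that each entry stays a complete, self-contained event, so intermediaries can split the batch without re-interpreting anything.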
D: I think maybe I didn't get my point across. I think batching is totally useful, and I think it should be implemented. It's just that we shouldn't tell Kafka how batching is implemented, because Kafka, or whichever transport, knows how it does batching in general, yeah. But then the question for me is: if, if we think that batching is useful...
D: So I think if we go look at HTTP, or also at the JSON level, we could define how that looks at the JSON level, right? Well, what I don't want is people taking a JSON that contains five events and then putting that, basically, into a single Kafka message, because then sort of the semantics of what a single event is inside Kafka don't work anymore, yeah.
B: That's right, and I think that's the same, the same argument I would make for, say, AMQP, where you probably don't want to have batches, because you want to go and put, you know, 500 messages in a row and then handle them as one, more or less, by flowing them correctly. But for HTTP, that's specific. So rather than making a general rule: in HTTP I can see it being useful to put multiple messages into the message body; with more sophisticated protocols such as Kafka or AMQP, I'd be rather against it.
A: I was gonna raise a very similar point, because I actually interpreted this pull request as the first of potentially many: basically saying, at the spec level, we're not going to touch batching, it's instead a transport-level thing, and each transport specification may or may not choose to add normative language that says, for our particular transport's case, here's how you do it. So, for example, in the HTTP case we may decide it's done this way.
A: So it felt like we're all on the same page. So let me ask this question: does anybody else have an opinion or comment on this particular PR, potentially the first step towards other ones? Obviously it's now up to people to create follow-on PRs if we actually do want to add batching to particular transports. But relative to this one PR: are there any questions or comments on this?
A: Actually, thank you for bringing that up; I meant to bring it up but we forgot. From my point of view, from the infrastructure side, I have no problem keeping that up forever; it's really up to you guys in terms of how long you want to keep your endpoints up. I have this general sense that, at least for a short period of time, people may want to use this demo in some presentation.
A: For example, I know [Oracle?] did in the past, so I'm inclined to ask people to keep it up at least for a while longer. But I wanted to bring it up on the call here and see what the outlook is: is it going to be a challenge for people if we ask you guys to keep this up? I know, Jim, you said it's not a big deal for you, but what about other people on the call, like William from Red Hat, or anybody else?
A: Right, cool, and we approve this one. All right: KubeCon sessions. Yeah, I guess I misspoke on the previous one; the previous one was about our demo, this one is about the KubeCon sessions themselves. There are pointers to the PDF files. This is technically under the serverless working group, because we didn't really have... well, one or two reasons.
A: One is, we don't really have a spot for it; I don't think presentations fit so much under CloudEvents. And because these presentations were about the serverless working group in general, with a subsection on CloudEvents, I thought it was more appropriate to put it under the CNCF serverless working group readme. The presentation readme is much less visible, but it's there: just pointers to the PDF files, and then they uploaded the two PDF files here. All right, any questions or comments on there? All right: any objection to adopting?
A: Okay... excuse me. All right, so Dan runs our website for us, and I believe, with Sonia, he has actually created a blog post basically announcing version 0.2. I saw that Dan added it to the website, but then he had a good question: well, do we want to actually create a blog section or not? And it sounded like a wonderful idea to me, but rather than just unilaterally doing it, I wanted to bring it to the group for discussion. So, you know: how do you guys want to handle blogs going forward?
K: Yeah, I'm worried about that, that we won't have very much content. And there's other, you know, other people who run publication systems; I write for opensource.com and I'm a moderator there, that's a good avenue. A lot of companies that are on these calls have their own blog, and so they're more likely, I would assume, to post there.
A: I should mention that Sonia did try to get this added to the CNCF website, but because we're just a sandbox project, I think they're not allowed to advertise our stuff, so they wouldn't, they wouldn't accept our blog. That's why we couldn't put it there. Gem, I think your hand was up; please, do you want to say something?
G: I did, I did reach out to the CNCF marketing folks and spoke to Taylor and Caitlin, both, about this, and the response was the same. And I think some of that sandbox-versus-incubation stuff will change once the governing board stuff is finalized and the dust settles a bit, so it just kind of is what it is right now. So...
M: Yeah, I still believe it's super worth it, even if it's just for announcements. And something else: just today I wanted to check out the SDK documentation, and that doesn't seem to be published anywhere. So both of them would be better; more documentation is always better. So: both the blog, even if just for announcements, and also the SDK doesn't seem to have any documentation published anywhere, yeah.
A: Let's get to that a little bit later; let's focus on this blog thing first. So, you actually used the word "announcements" for these sort of blog posts, and I think that's interesting; I thought about that. Because, you know, calling it a blog raises questions: first of all, who's gonna manage the website, and then who's going to create new content going forward? Calling it a blog almost implies we want a whole bunch of additional content, and so we may have to sort of, you know, poke people to create content.
A: But if we rename it "announcements", that's a little less formal, and it may happen less often, and it may be easier for us to write up a short little blurb about what's going on in terms of, you know, our releases. And that may be something that's more manageable to keep on our website. But then, as Dan was suggesting, maybe let people write blogs and put those blogs other places, whether it's Medium or their own company website. So I guess that's what I'm suggesting.
J: What if we created... I'm not sure what this thing is called on Medium, but I think it's like a syndicate or something, where you can basically just redistribute blog posts from other authors and other entities. And, I mean, I guess it depends on whether they're writing on Medium or not, but if people write stuff about CloudEvents, we could just republish it through our Medium syndicate and inherit a lot of content that way, and kind of centralize content for people. The complexity there, I think, is the challenge.
J: Thinking it through, I do like the announcements, because we do have a newsletter, I believe, and you know, we're gonna be sending out some newsletters, hopefully. And ideally, if we just have to write one announcement, one piece of content, to send out to the newsletter audience as well as to post on our website, that would simplify the maintenance a lot. Well, hey, we are gonna have a newsletter.
J: Yeah, well, we have a list, we have a list of people who opted in, I think. And if we just want to agree to draft kind of short announcements, sending them to the mailing list and publishing them on an announcements page or something... I think, yeah, that sounds like it has low overhead and could still keep people informed. I would second that, yeah.
N: [?] from CNCF here; I'd just like to reiterate what was done. There were a few Google Groups, and we decided, well, the community, the CloudEvents community, decided that they have to be migrated to the new mailing-list service. So that has been done for the project, so now you may use it as you previously used the Groups.
A: Right, thank you. So I guess what I'd like to do is sort of summarize what I think you suggested, Austin, which is: leverage our mailing list, you know, create an announcements page on our website as part of it, and then potentially look at leveraging Medium in the future when there is more content out there, because you seemed concerned that there wasn't a whole bunch of CloudEvents blogs out there today to sort of syndicate. Is that about right?
A: And what we can also do, as a work item that's part of the release process: we could find a volunteer to write up the announcement. That way you don't forget it, right, and you just need to find a person to volunteer, every release, to write at least, you know, a paragraph or two. Because we should say something per release, at least, you know, the highlights of what's coming and going on, or something. So I can add that to the release process.
A: Okay, okay. So I guess the biggest thing is, I'd like to consider these three concrete action items as the next steps for this particular issue: create a release-announcements page; add creating a release announcement to our release-process steps; and then send out the 0.2 announcement that Sonia wrote to our mailing lists.
K: I'm not really a front-end person; this is kind of a thing that I could do with Hugo, but I will take a look at some options. And if we ever wanted to do something that made our newsletter-type thing, or announcements, look a little bit nicer: I have reviewed MJML in the past, so I may try to fit something in. And if we have any front-end people, feel free to jump in; I would imagine we don't have a lot of front-end people, so I will trudge along and try to get this done. Okay.
A: Okay. I'm not actually trying to resolve this today, 'cause I'm not sure people have had a lot of time to think about it, and it is a property-name change, which is kind of serious, excuse me. But I did want to at least bring up the topic for people to consider. While I was reviewing one of the binding specifications that's out there, I can't remember which one it was, maybe one that has a PR open right now, in my opinion they were using the content-type field incorrectly.
A: I think they were getting confused between the Content-Type field that, for example, HTTP has, versus the content-type field that we have as one of our properties, and it got me kind of worried that people will use it incorrectly going forward. And because the data for the event itself is kept under a property called "data", it seems to me that it would make a whole lot of sense, to avoid this possible confusion going forward, if we just renamed our content-type field to be "data type". It's not going to change the semantics at all.
A: I'm just looking at a simple syntactic change, just to make it pretty clear that this is the type for the data field, not a content type to be confused with, say, HTTP's Content-Type header. So anyway, that's the purpose behind it. The PR itself is strictly a syntactic change, content type to data type, all the way through the entire set of documents we have. But the big question for you guys is whether you would be in favor of this change at a conceptual level. I'll totally open the floor up for comments or questions.
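A sketch of the distinction being discussed, for a structured-mode HTTP request: the transport's Content-Type header describes the whole envelope, while the event's own attribute describes only the data payload. The attribute is shown here under the proposed name "datatype" (hypothetical), and the event contents are illustrative assumptions.

```python
import json

# Transport-level type: describes the entire request body (the envelope).
http_headers = {"Content-Type": "application/cloudevents+json"}

event = {
    "specversion": "0.2",
    "type": "com.example.someevent",
    "source": "/example/source",
    "id": "abc-123",
    "datatype": "application/xml",  # proposed rename: type of "data" only
    "data": "<note>hello</note>",
}

body = json.dumps(event)

# The transport-level type and the payload-level type are independent values.
print(http_headers["Content-Type"])  # → application/cloudevents+json
print(json.loads(body)["datatype"])  # → application/xml
```

The rename would keep these two values exactly as they are; only the attribute's name changes, to stop readers from conflating it with the HTTP header.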
A: Okay, I'm not going to push it, so I'll bring it up next week. Hopefully I'll have some kind of resolution next week, either to accept it or reject it, but please give it some thought. I don't think it's a huge coding change, but I do think it will alleviate any potential confusion that might be out there. All right, thanks.
A: Thank you for coming. All right, before we move forward into the next one: are there any other comments on this one? All right, moving forward then. Christoph, I believe you opened this one yesterday, so I'm not gonna push for a vote right away, but I thought I'd give you an opportunity to at least discuss it.
D: So maybe I'll start a step back. For a lot of serverless technologies, let's put it this way, there are some limitations. One is functions-as-a-service: usually, for example, AWS Lambda will only accept events up to a size of 128 kilobytes, and then there are other limits for other functions-as-a-service providers. Similarly for message queues and, basically, transport layers: there are some limitations depending on the product. Sometimes they are configurable; sometimes, usually if it's a software-as-a-service, they are hard-coded. Same for HTTP servers.
D: They usually come with some protections, that is, a limit on, basically, whatever. So I think, for interoperability, we should specify somehow when it's okay to refuse something and when it's sort of guaranteed that you can send an event. So if I'm sending out an event, I want to know: will it go through, or will it not go through?
D: Obviously one consideration for that is size. So, commercially, I implemented, or I integrated with, a couple of different message queues, and what we do is, we basically have a lot of data, and people always ask for most of the data. So for message queues that supported it, we send a lot of data; for message queues that support less, we send less. Basically, the only thing for me that I want to know is: when do I need to cut it off so that it still goes through?
D: So this is kind of my first proposal for how to approach this problem. I don't know; we can do it one way or another. I'm also not super picky about the limits themselves, whatever they would be; I just want there to be a limit, yeah. So I basically made two proposals here. The first one is really a hard limit that basically says an event should never exceed this size. And then, well, the first question is: how do you measure size at all?
D: This is kind of difficult because, obviously, depending on which format and so on, it will be different. So I think, because everybody knows JSON and it's the default serialization, more or less, I think that's a good way to measure the size of the event. So I picked this one, but I'm open to better suggestions. So basically what I'm saying here is: if you serialize it as minified JSON, then it must not be larger than 128 kilobytes. Again, the limit itself is negotiable.
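The proposed measurement rule above can be sketched mechanically: serialize the event as minified JSON and compare the UTF-8 byte length against the limit. The 128 KiB value is the one under discussion; the event shapes and the helper name are illustrative assumptions, not spec language.

```python
import json

LIMIT = 128 * 1024  # proposed limit, in bytes

def event_size_ok(event: dict) -> bool:
    # separators=(",", ":") produces minified JSON (no whitespace)
    body = json.dumps(event, separators=(",", ":"))
    return len(body.encode("utf-8")) <= LIMIT

small = {"specversion": "0.2", "id": "1", "type": "t", "source": "/s", "data": "x"}
big = dict(small, data="x" * (LIMIT + 1))  # data alone exceeds the limit

print(event_size_ok(small), event_size_ok(big))  # → True False
```

Measuring against one canonical serialization sidesteps the "which format?" problem: the same event might be smaller or larger on a given wire format, but the compliance check stays deterministic.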
D: The second option: if you go above it, it may work, but then you're sort of... it's up to you. So we recommend that producers stay within these limits, and then consumers may reject messages that violate these limits; but if everything works, that's also fine. So these are basically the two options that I have. And, yeah, a further idea that I didn't write down: there's this claim-check pattern, which you may have heard of, in which case middleware could also shrink messages, but that's maybe too advanced for now.
H: I mean, I, I think obviously we need to make statements like this, but I do see it as more of a transport concern. Especially if you look at HTTP: are you looking at the whole payload, or just the data payload, or does that include the headers? I mean, I think you get into very sort of murky territory a little bit. And also, from an end-to-end perspective, with potentially multi-hop environments, I mean, I'm not quite sure whether we could do anything more than, say...
D: So, well, I think the second option sort of tries to say that there's a guarantee, and for interoperability every transport layer has to support events up to this size. So if I'm sending one, I know that everyone will accept it. So if I'm setting up an HTTP server and it doesn't accept a hundred headers, then it's not compliant. Or then I'm wrong, yeah.
B: 128K is not a limit which puts you into shaky territory, I would hope. All right. So, first, you can't make this a, you can't really make it a transport concern, because we're going to have plenty of scenarios, especially when middleware is involved, where an event goes into the middleware using HTTP and pops out of the middleware using AMQP, which means you can't make it transport-specific.
B: 256K is, for most mainstream transport options and brokers, kind of within the ballpark; 128 certainly is. And in terms of that, most applications, even in messaging and eventing, can typically deal with that size. And then you have some outliers which do maybe a meg, and beyond that you do file transfers. So I don't, I don't think 128K is a problem, and if we wanted to make this work for, like, LoRaWAN or something like that, then we're having a different discussion.
A: Right, hands up. It was interesting, Christoph, that you did approach this from both angles, right: one from the producer side and one from the consumer side. And what's interesting, as you're talking about it, is that I kind of like the idea, if we're going to talk about limits at all, of potentially looking at it more from the consumer side, to say: you must support incoming messages of this size, or something like that. Mainly because specifying a limit on outbound messages sounds like it's a bit restrictive, right? Because what if...
A
If
we
in
some
particular
environment,
they
want
to
be
spec
compliant,
but
they
want
to
turn
the
messages
that
are
greater
than
128
K.
Something
like
that
right
are
we
now
going
to
say
they're
non-compliant,
even
though
they
know
that
the
receiver
can
accept
it
and
man,
passive
is
just
fine,
so
I
think
it's
I,
think
I
promote,
expect
I'd
rather
allow
people
to
go
beyond
the
limits.
If
they
know
the
other
side,
can't
accept
it
and
still
be
expect
compliance.
A: So let me ask a higher-order question of the people, because I'm not hearing anybody speak up against the idea of adding limits, in some way, to the specification, whether it's the main spec or the transport specs; that's something to figure out. But does anybody have any concerns with continuing ahead down the path of adding limits, in some fashion, to our documentation?
A: Okay, not hearing objections, so it sounds like, Christoph, there is, there's agreement to head down this path; so that's good. So how do you guys want to move forward, then, in terms of taking the next step, in terms of modifying this pull request to get everybody on the same page? Do you want to just work through comments in the PR, or some other mechanism? The only... you know, there's part of me that wonders whether we first need to have a higher-order discussion about where the size limits should apply.
A
For
example,
in
this
discussion
we've
had
people
talk
about
size
of
limits
of
the
entire
message
itself
mean
transport
concern,
and
then
other
people
said
no,
it's
you
don't
you
really
only
should
focus
on
the
size
of
the
crowd
event
itself,
because
it
rather
because
of
issues
related
to
that.
So
do
we
need
to
have
a
our
discussion
around
what
should
limit
supply
to
first
right,
entire
message,
just
a
crowd
event,
just
individual
properties.
Do
we
need
aside
order,
or
can
we
do
or
do
anything
on
between
all
three.
D: They receive the whole thing as JSON, or whatever, and that's one size they have to keep in memory. And then I also picked the top-level attributes, because I know that this is part of the HTTP binary mode, and I think that will be commonly used and there are commonly limits on it; but I'm also fine with removing that. Or we could also go and figure it out for each of the attributes and the data individually, but that seems more fragile, let's put it this way, yeah.
A
It
seems
like
I,
just
it's
funny.
Cuz
least
I
was
gonna
mention
the
100
top-level
attribute
was
researching
to
put
in
there
because
it's
20,
when
you
serialize
it
as
Jason.
The
hundred
limit
may
not
necessarily
be
necessary
right,
but
then
it
BC
realized
that
and
the
binary
HTTP
format.
As
you
said,
the
number
of
headers
AC
Sanders
may
matter
at
that
point.
So
that's
when
timber
was
oak,
linens
McClement
said
it
can't
be
a
transport
level
issue.
Well,
I'm,
not
sure
how
percent
agree,
I
think
it's
depending
on
your
civilization.
O: Yeah, my two cents are that I don't see the value of adding this type of limitation using the word "must". As a provider, I don't, I really don't care about the size: if I want to send a picture and it's one meg, why not? I don't, I don't think the spec should be specifying size limits. Maybe, I think somebody mentioned it...
O: Maybe you can go at a specific attribute and say: this attribute is type string and it should not be more than X, or if it's an integer, not more than X, just to avoid abuse. But other than that, I think, I don't know, it's doing a disservice by saying you cannot send a picture that is big, or a piece of data that is big, or: if it's binary, do it this way; if it's not binary, do it that way. It's going to be convoluted; nobody will care.
H: My closing comment: maybe, maybe... I think the proposal was talking about the claim-check pattern. So maybe this is an example of where you refer to sort of a best practice, and make people aware that transports, or implementations, do have limitations, and these are patterns for how you get around them. Yeah, I'd say, I, I would sort of, I understand we should have statements around this stuff; I'm just not quite sure how you draw a line, and where you draw it.
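For reference, the claim-check pattern mentioned twice above works like this: when a payload is too large for the transport, park it in external storage and send an event carrying only a reference (the "claim check"), which the consumer redeems. The store, the attribute names, the helper names, and the limit below are illustrative assumptions, not anything from the spec.

```python
import json
import uuid

LIMIT = 128 * 1024
blob_store = {}  # stand-in for real object storage (e.g. a blob service)

def check_in(event):
    """Replace an oversized payload with a claim-check reference."""
    body = json.dumps(event, separators=(",", ":"))
    if len(body.encode("utf-8")) <= LIMIT:
        return event  # small enough: send as-is
    key = str(uuid.uuid4())
    blob_store[key] = event["data"]               # park the large payload
    return dict(event, data={"claimcheck": key})  # send the reference instead

def redeem(event):
    """Swap a claim-check reference back for the stored payload."""
    data = event.get("data")
    if isinstance(data, dict) and "claimcheck" in data:
        return dict(event, data=blob_store[data["claimcheck"]])
    return event

big_event = {"id": "1", "type": "t", "source": "/s", "data": "x" * (LIMIT + 1)}
sent = check_in(big_event)       # what actually crosses the wire is small
received = redeem(sent)          # consumer fetches the payload back
print(received["data"] == big_event["data"])  # → True
```

The event that crosses the wire always fits within the limit, which is why the pattern is a common escape hatch when a spec or broker imposes hard size caps.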
A: Okay, thank you guys. All right, with that, I don't think we have time to dive deep into anything else, and I think I've lost my momentum anyway. So on this particular issue, or PR I should say: please comment on the PR, on any particular line if you want to, but I think it's more important at this point in time to have sort of a higher-level discussion. And so thank you, Christoph, for starting the discussion; that's really good.
A: So let's have a discussion in the PR itself, go back and forth, and see if we can maybe land on a general direction of where we want to head on this stuff going forward, and then we can modify this PR, or additional ones, as necessary from that point forward, right? And with that, let me switch back over and go back through the attendance. Carlos, I heard you. I don't know who "W" is, but your microphone... W, if you're there, you may need to come off mute.
A: Actually, one thing, since you weren't on earlier: we decided we're going to have SDK calls every other week, 30 minutes before this call. Okay? Just so you know, since you're part of the SDK stuff. All right, and I guess with that we are done, so you get back four minutes of your day. All right, thanks guys; we'll talk next week.