From YouTube: CNCF Serverless WG Meeting - 2019-09-05
Description
A: Alright, it's a few minutes past the hour, so let's get started; I'll skip the preliminaries. Alright, anything from the community people would like to bring up? All right. SDKs: I think you had a call last week, but I can't remember if there was anything worth mentioning. Clemens or Scott, can you guys think of anything worth mentioning?
A: That's all. I wasn't gonna do it today, but just in general, let me know when you're ready and we can then schedule time for you to show the group. All right, so the incubator proposal for the project is still set for September 17th. Just a reminder: if you have any questions or concerns with the slide deck, the link to the PowerPoint is here and the link to the Google Doc is here; just let me know if you want to make any changes and I can get those in there. Otherwise I think we're pretty much ready to go.
A: Oh, and of course, we're looking for more end users. We have three listed right now, so if you have more you can add, let me know; I'd like to get this list to be longer than three, if possible. We do have an outline of an agenda for the two sessions at KubeCon. If you want to add any additional items to that list, or want your name to be associated with one particular topic, feel free to go ahead and edit that as you see fit; we still have plenty of time.
A: So do be thinking about that. And now we get down to PRs. Before we jump into that, any other topics you want to bring up before we get into PRs? All right, cool. In that case, Clemens, I believe you're up first with this data encoding one. Do you want to refresh our memory of where we left off last time?
D: So there was a long-winded story around having structured data inside of the data field. We had ended up with what was effectively this map as a structure, and that was independent of encodings. We gave up on that, and we now allow structured data in the data field, but we only allow it as the encoding defines it; the chosen encoding defines what we make of it. And with that, specifically in JSON, what we have here:
D: There is no difference between a string and a JSON object, at least from the encoding perspective, because a string is just another JSON expression, and so is an object, and so is an array, and so on. That's all permitted. The only thing for which we didn't have a good way to express it was a pure binary, because pure binaries are not representable in JSON itself, certainly not if we are encoding a structured event where everything is in JSON.
D: And if you find in your in-memory structure a binary, meaning there's some binary byte array there, then you go and take that, you encode it in base64 and put it into a field that is called data_base64, and those two fields are mutually exclusive. And so there were then some suggestions on how to consolidate this.
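The rule Clemens describes (structured values go into `data`, binaries get base64-encoded into `data_base64`, and the two members are mutually exclusive) can be sketched like this. This is a minimal illustration, not the SDKs' actual code; the helper name `encode_data` is made up.

```python
import base64
import json

def encode_data(event: dict, data) -> dict:
    """Hypothetical helper: serialize a payload into a JSON-format event.

    Binary payloads go into `data_base64`; everything else (strings,
    objects, arrays, numbers) goes into `data`. The two members are
    mutually exclusive, so exactly one of them is set.
    """
    out = dict(event)
    if isinstance(data, (bytes, bytearray)):
        out["data_base64"] = base64.b64encode(bytes(data)).decode("ascii")
    else:
        out["data"] = data  # any JSON value is permitted here
    return out

evt = {"specversion": "1.0", "id": "42", "type": "example", "source": "/demo"}
print(json.dumps(encode_data(evt, b"\x00\x01")))
print(json.dumps(encode_data(evt, {"temp": 21})))
```

Because the encoding is chosen by the member name rather than by content sniffing, a receiver never has to guess whether `data` holds text or a base64 blob.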
D: So, first of all, there was a comment that we may talk about those having to be mutually exclusive, so I added that; that's in line 135 down there. And then, as you see, there are a lot of other deletions, because all of these rules around, you know, detecting whether it's JSON text or whether it's +json and all those things basically completely go away, because we're no longer making a difference between our previous
D: transcodable way of doing structured data and this JSON model. We're now effectively trusting that if you have a runtime with good serialization support, like, you know, a C# app or a Go app or a Java or JavaScript app, then our job is fairly direct: it can produce a good, well-formed document out of the in-memory structure. And then you read that back into an in-memory structure, and from that in-memory structure that you created you can probably also produce a reasonable Avro document without having too much loss.
D: If we make that change next, we're effectively symmetric here between AMQP and JSON, because there we have a place where all the payloads that are pure binary go, which is AMQP data, and if you want to have structured data inside an AMQP message, we'll put that into an amqp-value, and the functionality is exactly the same. This is exactly equivalent between the two of them.
D: Yeah, and I made some similar changes to consolidate. Basically what Evan said: you should go and take this and consolidate it with section three, because it's said in section 2.3, I think. And so I consolidated that, and as I did this consolidated text I just ran into all these lines, 140, 151, that then no longer made any sense, and I just threw those out, right.
A: So I definitely think it's a valid concern; I think that's a separate discussion, though. I think that goes into when we want to take the vote for 1.0, and so I'd rather focus on that later on in the call, if that's okay, Scott. Okay. Related to this particular PR, though, unless I'm misinterpreting what you said, Scott: are you comfortable with this one, or would you want more time to review this one as well?
A: Okay, well, yeah, like I said, I'd like to focus, you know, on the small little pieces, do baby steps, and if people are okay with this general direction, I'd like to get that behind us, then deal with the more abstract question of whether we need more time to review 1.0 before we actually call it 1.0; that's a separate discussion. So, any other questions or comments on this one? Mark, your hand was up there for a sec. Did you change your mind? Well...
B: I don't know, because I haven't used it in a live system yet. I don't know the interop story, because a fair amount of pieces have changed, and though I agree with the data_base64 data piece, the change is gonna make version upgrades very difficult. Like going from 0.1 to 0.2 to 0.3 to version 1, there's a lot of API breaking changes we've made in the last few weeks that are going to be difficult to understand how to migrate in currently running systems.
D: This is really the reason why... well, this is the JSON encoding PR, and there's another one over in the AMQP encoding. Those are really changes that only have to do with the wire representation, and they don't affect the model per se, right. The way this would surface to an application is still: there is data, right, and there's the in-memory
D: structure. So this is just taking the marker that we have in data encoding with base64; instead of hitting that marker, it's effectively the member name, if you will. From an information-content perspective it is, I would argue, exactly the same, so it's less complicated than it was, and we're eliminating a bunch of extra rules as well.
A: I'm just worried that we're delaying it for no real reason. I mean, if you're saying you personally need more time to think about it and, you know, actually implement it, that'd be one thing; but if you're doing it because you want to get other people to do stuff, then realistically I just don't see what's gonna happen. It's been out there for two weeks now, basically. Okay.
A
So
I'd
like
to,
unless
there's
an
objection
and
someone
you
know
if
it's
I
don't
know,
if
I
don't
want
to
force
it
on
you
Scott,
but
I,
but
if
it
all
it
is,
is
trying
to
get
up
to
do
stuff
I.
Just
don't
we
haven't
had
much
luck
with
that.
So
that's
why
I
don't
want
to
push
it
until
a
don't
necessarily.
A: Just to remind everybody: even if we do approve this right now, that doesn't mean it's a done deal, right. During this whole review period that we have lined up for ourselves, if somebody comes up with a reason why this is a bad move, we could always revert it and go a different path. You know, we're not 1.0 yet, so we still have that flexibility. Yeah.
A: On the call, so let me ask a quick question here. Okay, so I don't think he has any drastic changes since last time. There are a couple of RFC wording changes here, but I don't think that actually changes anything from the intention of this one. Scott, do you know enough about this one to talk to it, or do you want me to try to mumble my way through it? I can't wait for you to mumble. Mumble, mumble, fumble. I think most of this is, at the beginning, probably just some syntactical changes, I believe.
A: I don't know what he did now that we've removed the other changes. I think he clarified that there's a mapping between datacontenttype and the Content-Type header for HTTP; I think that was one change, it just made that a little clearer. And oh, I think one of the reasons we went back there was the percent-encoding stuff, so I think this section down here...
D: Because what we did is we effectively took datacontenttype and made that the content type of the proper body of the message, because we're putting the data into the HTTP entity body, so those fields are directly corresponding. Which means we're no longer wrapping that, and we're not replicating it into the attributes bucket. So that doesn't change. Okay.
A
Yeah
I
think
I
think
ticularly
wording,
changes
and
the
word
the
I
think
everything
else
is
just
a
word
wrap
or
a
line.
Wrapping
thing
for
the
most
part
and
of
course
we
yeah,
of
course,
even
with
the
data,
so
I
think,
but
this
that
I
think
that
here's,
the
bulk
of
the
change
so
Clements
and
Heinz,
because
I
think
that
Heinz
Klaus
Klaus
I
think
you
may
have
looked
at
this
one
as
well
as
as
well
as
Clements.
What
are
your
guys
opinion
on
this
ones?
I
think
I
know
you
to
review
this.
D: And I believe that is correct. So, if you find a string... we assume that all strings are Unicode, but if you find a string that is not valid as an HTTP header, you run the percent-encoding on it, I mean, you encode it; and then when you take it out, you percent-decode it, and that then gives you back a Unicode string again.
D: The way he says it, it's a string, which is Unicode, on one side, and on the HTTP side, I'm saying, even if you look at that string and it is all within the ASCII range, it may also contain the percent character, which needs to be escaped, and which then is not correct with that rule. You have to go and use percent-encoding in all cases. So, basically...
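Clemens's point, that decoding is only unambiguous if encoding is applied unconditionally (a literal `%` in an otherwise plain ASCII value would otherwise be misread), can be illustrated with a simple round-trip. This is a simplified sketch of the idea, not the spec's exact header-escaping algorithm; the helper names are made up.

```python
from urllib.parse import quote, unquote

def to_header_value(s: str) -> str:
    # Percent-encode unconditionally: even an all-ASCII string may
    # contain a literal '%', so the encoding step must always run
    # for the decoding step to be unambiguous.
    return quote(s, safe="")

def from_header_value(s: str) -> str:
    return unquote(s)

for sample in ["hello", "50% off", "gr\u00fc\u00df"]:
    assert from_header_value(to_header_value(sample)) == sample
```

If encoding were applied only to non-ASCII strings, a receiver could not tell whether `"50%25"` was an encoded `"50%"` or a literal `"50%25"`.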
A: Cool, thank you guys. Okay, let's go back to this "fix up JSON mapping" thing. So, Clemens, you briefly talked about this one; since I know that implies you've actually reviewed it, do you want to say... yeah, we already sort of talked about it in the context of the other PR, but is there anything else you want to add about the changes? I think this is the bulk of the changes right here. These...
D: These are effectively just editorial changes that are necessary because we didn't track them all. So he's effectively removing the mapping of map. It doesn't preclude that you can use the AMQP map, because you're free to use whatever you like for data, but there's no... so we only have data now; we don't have the any type and map type anymore. So that wording goes, and the only thing that needs changing here, I believe, is the...
D: What I am asking for: it says in the data section how the data is actually stored, since the data is stored in a data field; that's what the PR says. Okay, effectively what this needs to do is the same, so that the data payload shall be mapped to a single data section.
D: The data section is per se binary only, which means if you now have structured data inside of the in-memory data field, you can't stash it in here, because there's no serialization model for this; but an amqp-value is a serialization model for that. So effectively you would take the text that I have in the JSON PR, where we say: if this is a binary, you store it into data... sorry, it's data_base64 there. If it's anything else, you store it into the data member.
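The AMQP-side split Clemens describes mirrors the JSON one: pure binaries go into a data section, anything structured travels in an amqp-value section. A minimal sketch of that dispatch decision, with made-up names; a real AMQP stack would encode a native AMQP value rather than the JSON string used here purely for illustration:

```python
import json

def map_to_amqp(data):
    """Hypothetical helper: choose the AMQP body section for a payload.

    Binary payloads go into an AMQP data section; structured values
    are serialized (JSON here, just as a stand-in) so they can travel
    in an amqp-value section instead, mirroring the JSON-format
    data / data_base64 split.
    """
    if isinstance(data, (bytes, bytearray)):
        return ("data-section", bytes(data))
    return ("amqp-value", json.dumps(data))

print(map_to_amqp(b"\xff"))
print(map_to_amqp({"a": 1}))
```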
A: I was gonna say, I think we'd feel comfortable approving that right now. Oh, can you do me a favor, though? Can you put the exact text that you'd like to see as a comment inside the PR right here, so that way it's really easy for Evan to just copy and paste? Yes, sir. Thank you very much; I think that'll speed things along. Okay, any other questions or comments on this, or concerns with the direction we're headed here?
D: So what he does now... so this schema exists not to perfectly describe an event in Avro, but it exists so that we can even serialize an event, because Avro is a format that requires a schema to do serialization at all. And so it has this recursion so that you can go and effectively serialize structured data inside of the data field. That's why the recursion existed.
D: So now we changed the rules around data and we took the map thing away, but now he made a change such that data is always one type, is always bytes. So again, that's not symmetric with what we do in JSON, because in JSON we can have structured data inside of data. And so the recursion that the old schema has, where, you know, data can contain any other fields: is that something that we need to preserve? Otherwise we can't take structured data into that field.
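The contrast Clemens is drawing could look roughly like the following in the schema's `data` field. This is an illustrative sketch only, not the spec's actual Avro schema; the branches shown are made up to convey the shape of the union:

```json
{
  "name": "data",
  "type": [
    "bytes",
    "string",
    { "type": "map", "values": ["null", "boolean", "double", "string", "bytes"] }
  ]
}
```

Collapsing that union to just `"type": "bytes"` is the proposed change being objected to: the structured branches, and with them the serializer's ability to carry structured data, disappear.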
D
Schema
we're
not
so
the
schema
is
a
little
different
in
that
it
doesn't
it's
on
the
validation
thing,
as
you
would
have
that
for
JSON
schema.
This
is
this
is
the
thing
that
actually
drives
to
serializer.
So
with
with
that
change,
we
would
take
the
sterilizers
ability
away
to
deal
with
structured
data
and
structured
information,
set
the
data
field,
and
that's
why
I
don't
like
it.
Okay,.
A: Well, I know Evan did mention it somewhere; I am aware that he was gonna take a look at the protobuf one, but I don't think he's had a chance to yet. But Christophe, if you want to take a look... Christophe? No, who is that, Chris? No, yeah? It was me, yeah. Okay, Christophe, if you want to take a first pass at it, go for it, since Evan is busy as well.
B: There's an issue with HTTP binary, where the extensions that you might add don't flow through middleware and then turn back into what you expected on the HTTP side again, because the spec says you have to prefix all extensions with the ce- in the header. This spec allows you to send other things that are not prefixed, but there's no guarantee that that extra extension that's unknown to the middleware will actually make it to the other side. So CloudEvents becomes a very lossy protocol if you switch transports.
D: So what this does here is: for HTTP, this extension has a special rule, which says you don't call this ce-traceparent and you don't call this ce-tracestate; you really use the W3C default headers and use those. So not the ce- ones; you don't send the ce- prefix. Of course, middleware, as he rightly stated, is not aware of that.
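The special rule being discussed, ordinary attributes get the ce- prefix in binary mode while the tracing extension's two attributes use the W3C header names directly, can be sketched as follows. The helper name is made up; this is an illustration of the mapping, not an SDK's actual code.

```python
def to_http_headers(event: dict) -> dict:
    """Hypothetical binary-mode header mapping.

    Ordinary context attributes get the ce- prefix; the distributed
    tracing extension overrides this for its two attributes and uses
    the W3C default header names directly, which is the special rule
    described above.
    """
    W3C_TRACE_HEADERS = {"traceparent", "tracestate"}
    headers = {}
    for name, value in event.items():
        if name == "data":
            continue  # the payload travels in the HTTP body, not in headers
        if name in W3C_TRACE_HEADERS:
            headers[name] = value  # special rule: no ce- prefix
        else:
            headers["ce-" + name] = value
    return headers

print(to_http_headers({"id": "1", "traceparent": "00-abc-def-01"}))
```

This is exactly why unaware middleware treats `traceparent` as its own tracing header but sees `ce-*` headers as opaque extras.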
D: Middleware will then basically pass that event through, but it will pass it through with the HTTP headers. If there is an intermediary that is unaware of it, it will not send the ce- version along, but it will send the HTTP trace headers along; it might then strip the information at the ce- level. That's correct. However, that is a decision that the extension makes. So this is not a general problem; it's a decision that this extension made here, because it decided that it wants to have a different mapping.
D: So, but it does... this is the rule here, right, that's causing this. So it's nothing that's in the core spec or even our HTTP binding; it's really that this extension chooses to do this. It chooses to do the override, and we permitted the override specifically for that purpose. Now, here's why that's right: if you use distributed tracing and if you use HTTP, then... so these mappings go both ways.
D: It is legitimate for you to send the CloudEvent to a proxy, and the proxy, let's say Envoy, decides that there's a configuration by which it now wants to go and add W3C trace context into that message, because the application doesn't care about it and doesn't have tracing. So you want to go and start doing tracing via intermediaries; effectively you want to do that with an interceptor. Now, that is just standard tracing capability.
D: That is what the Envoy proxy adds: it adds traceparent and tracestate. Now, if you deserialize a CloudEvent and you have that extension activated, so to speak, you get that injected context back into your CloudEvent, because we have this mapping. And that's a function of HTTP and a function of this whole trace context story; that is intentionally so. And because there's also a mapping for... so in this trace context world,
D: they have a mapping to HTTP, they have a mapping to AMQP and they have a mapping to MQTT, and they actually make rules: if you get an HTTP request and you then make a downstream request, you have to go and pick up the traceparent and tracestate, and you have to use them in any downstream request, which means you are supposed to propagate them. Which means, specifically for trace context,
D: the propagation is guaranteed, more or less, by the trace context specification and not by us. That's why that's legit. But that's a problem that exists specifically because this HTTP mapping here chooses to do that; we're handing off control. It's not a problem we have in principle: if we define an extension and we don't make an extra rule, then all the extension attributes will obviously be ce- prefixed, and then they will also be propagated.
I: You should also use the ce- headers for the other libraries to work. I have a solution, maybe it's not a good one, which would be to duplicate them: use those W3C headers and then also have it, in the structured or in the binary mode, in the ce- headers, twice. I know it's more data, but maybe then both ways are being served.
D: And now, so that's something that would then also make sense to me, to do the duplication. So basically, to solve this particular problem, we can make a clarification in this spec that talks about that, has a sentence or two about what I just said, and then clarifies that the data should be duplicated into the HTTP headers and should also be carried in the ce- headers. That sits on something that we have in trace context:
D: I myself proposed in the W3C an AMQP binding; there's a draft out there, and I don't know when and why that hasn't shipped, because the W3C is even slower than we are. But in that proposal, since an AMQP message is not changeable once you send it, but this is about propagating state through intermediaries, the thing we're proposing there is that any changes unique to the trace context
D: go... so the original trace context goes into the properties of the message, and then any changes need to go into the annotations of the AMQP message, which means there's also duplication there. The advantage of that is that you now effectively have two trace context paths. One is that you get the trace context as it originated in the application; that one would end up in the ce- header, because the HTTP infrastructure doesn't know about it and won't touch it.
D: One is end-to-end, where it's literally just, call it, the CloudEvent path, without the transport infrastructure in the middle; and then at the HTTP level you get all the infrastructure traces and all that complexity along with it. So it basically sets up two paths, if you will, where you have one layer which is end-to-end tracing and one layer which is tracing with all the transports in the middle, if we duplicated that.
A: Hold on a sec. I'll let it go just because it's enjoyable listening to you talk sometimes, Clemens, but we do have a speaker queue; hold on just a second, I'll come back to you in a second, Clemens. Let me hit the people in the queue first. So, I raised my hand to re-raise the excellent point that I think Christoph made about how this only applies to... I'm sorry, it applies to both binary and HTTP. But I also wanted to comment that I
A
Don't
believe
this
is
a
club
event,
specific
issue,
I
think
if
you
have
a
piece
of
middleware
today
that
suddenly
starts
to
process
caught
events,
it
doesn't
say
even
know,
is
doing
cloud
events
per
necessarily
and
it's
going
to
either
drop
or
pass
along
these
HTTP
headers
perched
current
rules
and
processing.
So
I,
don't
think
introducing
cloud
events
necessarily
changes.
Anything
right.
It's
either
gonna,
take
all
unknown
headers
and
pass
them
along
or
is
gonna
drop
them
and
I.
A
Don't
think
cloud
events
changes
that
at
all
and
so
I
personally,
don't
see
it
as
a
problem
because
as
kind
of
what
coming
through
the
same
thing
at
earlier
is
this
extension
has
chosen
to
live
on
the
edge
by
not
pre
fixing
things
with
Cee
and
living
by
the
other
rules.
So
I
don't
Leslie
view
this
as
a
problem,
because
I
think
that
probably
exists
today
without
cloud
events.
So
that's
all
I
wanted
to
say,
but
I
think
Jam
your
hands
up
next.
So
then
we'll
go
to
Scott
and
Clemens.
G: I think my comments were along similar lines. It really sounds like what this CloudEvents extension is saying is: we want to use W3C tracing, and so in reality it should be that spec that's dictating how the transport encodings work, because realistically the trace context is not... is it really an attribute of the CloudEvent?
D: So HTTP binary mode forces ce- prefixing for all the CloudEvents context attributes, which is also true for extensions. And only if the extension chooses to override, which the tracing extension does, and that's the only one that does that, do the rules change for that particular header. And then, yes, it gets lost if the middleware doesn't propagate it.
B: So if you have an existing webhook endpoint that invokes you, and you want to turn that webhook into a CloudEvent, it should be as simple as adding a couple of attributes into the header: keep the payload the same, keep the headers the same, and then middleware shouldn't drop it no matter what transport it goes over.
D: The only way this works is if you stay at the CloudEvent level. You can't start at the transport layer; you have to start with the CloudEvents definition of the cloud event, and then you map that to a transport layer. You can't start with an HTTP message and expect that everything that is in that HTTP message will then end up in that CloudEvent. That doesn't work.
H: Well, you've got to put some kind of code in to do that, because even the CloudEvents headers between HTTP and AMQP do not use the exact same names for the name-value pairs in the header. So you have to have some mandatory processing in between, and I would assume (then again, maybe that's a bad assumption) that that is then under the control of whoever's doing the transformation from an HTTP transport to an AMQP middleware transport. So if there's got to be code, there's going to be some developer who can make that decision.
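The renaming step this implies, since HTTP binary mode spells attributes as `ce-*` headers while the AMQP binding uses differently named application properties, can be sketched like this. Treat the `cloudEvents:` prefix as an assumption about the AMQP binding's convention, and the helper name as made up; the point is only that a bridge must rename every pair.

```python
def http_to_amqp_property(header: str) -> str:
    # Hypothetical bridge-side renaming from an HTTP binary-mode header
    # to an AMQP application-property name. The "cloudEvents:" prefix is
    # an assumption about the AMQP binding, not a quoted rule.
    if not header.startswith("ce-"):
        raise ValueError("not a CloudEvents binary-mode header")
    return "cloudEvents:" + header[len("ce-"):]

print(http_to_amqp_property("ce-id"))
```

Whoever writes that bridge code is the developer who, as the speaker says, ends up making the forwarding decisions.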
I: It works if you only use the HTTP body to transport your message, but it doesn't work if you also use the headers. So I wouldn't, you know, I wouldn't take it too literally; I would take it as a marketing claim to get people interested and listening, but maybe it's not ideal to strive for that if it's not also represented in our specification.
D: Say you have a custom header that's called my-api-key, and then you put the API key in there. If that header is also blindly forwarded, you're now leaking the API key, because the intermediary doesn't know... oh, so you can't write that. There's all kinds of stuff that is literally just hop-to-hop that you can't blindly forward without having a rule for it, and the rule that we have for forwarding things is by making those CloudEvents attributes.
A: And with that, I think we've got to call it time, because there are two things I want to do. First, it sounds to me, based upon some of the changes we proposed to some of the open PRs, that we are actually not ready to claim, even with the current PRs we just approved today, that we have an RC1, because, for example, I think we have Evan's two PRs that need to get resolved.
A: Okay, hearing no objection, we'll push this date out to the 12th, so I'll try again for next week. So please work on your PRs and get your comments in there; try to get them out this week, because if you wait till next week, Thursday will come up really, really fast, and people won't have time to review it and think about it, since Tuesday's the deadline for normative changes. And finally, Doug and Fabio, are you guys there? Let me get you in the middle here, probably.
A: I think it's taking us over time. So, last reminder: please add all your comments to Evan's issue. We were just discussing why it's a good or a bad idea to do what he's suggesting, or, if you are not sure, there was the idea, I think someone said, to duplicate the data between ce- and normal headers. So please put comments in there, and we'll try to resolve that one during the next week or so. Alright, and with that we're over time; I apologize for running late.