From YouTube: CNCF Serverless WG Meeting - 2019-09-12
A: All right, three after the hour, let's get the fun started. We're at fourteen people, all right. Let's see... I'll skip ahead. Okay, anything from the community? Anybody want to bring anything up? All right, SDK stuff. We didn't have a phone call last week, so I don't think there's anything to talk about. However, we do have a call scheduled for today; I don't know if we have any topics to bring up, but we do have one scheduled, if you guys want to join that.

Incubator: next week, I believe Tuesday, we'll be doing the presentation for incubator status for the project. We're still looking for more end users, so if you guys want to add any to the list, let me know; otherwise I believe we're ready to go. If you haven't looked at it yet, the proposal is here; if you have any edits, just let me know and I'll try to make them.

KubeCon: we have the two sessions. We did get two big sessions, one for CloudEvents, one for serverless, and I have an outline of what we're going to talk about. Feel free to make any edits there. You will have to stick your name somewhere around there if you want to join in on the presentation side of things; I don't believe anybody's done so yet, but we do split up the time, so be thinking about that.

Also, Christiana Check was on last week. We didn't really say anything, but he added this to the agenda, and it looks like we will have a serverless practitioners summit the same way we did in Barcelona, I believe, earlier this year.
A: It is planned as a one-day, all-day co-located event, as you can see, on Monday, November 18. The CFP for that is supposed to go out at some point this week; I don't know for sure when, but that's the rumor, that it'll go out this week. I believe they're planning on making it just one long track, so they could have breakouts, but I've also heard conflicting stories, so be prepared and keep an eye out for the CFP if you guys want to submit something. All right, anything else you want to bring up before we jump into PRs?
A: All right, let's get right to it then. Extensions here... no, first let's talk about this one. Okay, this one I thought might be easy. Klaus is not on. So basically, in this one, if I remember correctly, he moved a little bit of text... where is it... the part about properties. He removed this bit, because that implies...
A: For sure. Excellent, thank you. All right, fix up JSON mappings next, try number two. Evan is still not on. All right, let's see what changed since last time. Does anyone remember what changed in this one? I think the last set of changes he made in here related to datacontentencoding being removed; I think that was the biggest stuff. So here we have this section here.
A: Okay, anybody else want to comment, any questions? All right, any objection to approving it? Where's my mouse... any objection? Easy peasy, cool. All right, Avro. Let's see, let's get to some good stuff in here. All right, so does someone else want to talk to this one? Maybe Scott or Clemens, since I know zero about this... and then we'll see if I have to change this. Go for it.
C: So this is doing effectively the same updates, and the reason why we had a discussion about this is that Evan went and made data binary-only. So this schema that we have here, this original schema, was effectively just driving this Avro serializer, so it can go and take an arbitrary nested set of records and serialize them. So it wasn't really meant to represent the CloudEvents schema, but really just drive the serializer. And so Evan has updated that to include the new types that we have, and then the discussion was whether data should be binary-only or should still be able to contain structured data. Meanwhile, he's made that amendment that data can now, just like with JSON, contain structured information, and so with that we're now even, okay. This is good; I mean, I haven't tested it, so...
A
That's
the
biggest
thing
I
want
to
make
sure
the
eggs
I
know
you
have
some
concerns
about.
The
previous
version
of
this
and
I
want
make
sure
that
those
your
concerns
were
addressed.
Mm-Hmm,
okay,
there's
a
seed
I.
Think
most
of
this
stuff
is
minor.
Is
this
an
example?
No
okay
yeah?
So
here's
the
actual
schema
I
think
this
is
a
duplicate
of
what
he
had
in
this
spec
and
just
to
be
clear.
This
text
in
here
looks
right
to
everybody.
C: So the special problem with JSON is that it can't natively represent binary, while Avro can, okay. So that's true for AMQP, where... so in AMQP we have that split into an encoding as well that's similar to JSON, but we have that because of the typical usage pattern in AMQP. Typically in AMQP you transfer pure binary data in data, and then, if you have structured information, you put that into a value. So AMQP can distinguish between the two. For JSON...
C: We have that split because JSON has no way of representing binary data at all, so you have to go and put that into some string encoding, so we're signaling that here. And if you look at that text that we have here, it basically says: if it's binary, it's represented as binary, and otherwise, you know, they construct a union, which in effect gets used for the native data. The way this turns out, specifically for Avro, is that whatever you have in your hands as data... it can be... no, no, it can be... so with the change that we have here, it can be anything, effectively. You can stick any object graph that your Avro serializer understands in here, and that will just be serialized as a data structure. That's what that recursive schema does. Okay, cool.
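The JSON limitation discussed above can be sketched in a few lines. This is a minimal illustration, assuming the JSON format's base64 string member (here called `data_base64`, as in the CloudEvents JSON format); the attribute values are made up for the example:

```python
import base64
import json

# Minimal sketch: JSON has no native binary type, so binary payloads are
# base64-encoded into a string member before the event is serialized.
payload = bytes([0x00, 0x01, 0xFF])

event = {
    "specversion": "1.0",
    "id": "1234",                  # illustrative values, not from the meeting
    "type": "com.example.binary",
    "source": "/example/source",
    "data_base64": base64.b64encode(payload).decode("ascii"),
}

wire = json.dumps(event)  # what actually travels as JSON text

# A receiver reverses the string encoding to recover the original bytes.
received = json.loads(wire)
assert base64.b64decode(received["data_base64"]) == payload
```

This is the same signaling the speaker describes: the presence of the base64 member tells the receiver the data is binary, so no extra encoding flag is needed.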
A: Cool, easy. All right, next. I wasn't quite sure which one to tackle next; since this one may take a while, let's first bring up Evan's PR, and maybe, Scott, you can talk to this one and explain why he's suggesting they remove it, at least temporarily.
B: Yeah, we've changed how data works, and I think the current proto only understands extensions as they were, and we've changed a couple of those formats, and nobody on our side has bandwidth to fix up the proto definition. So rather than lock in a v1 version of proto that we would have to support grudgingly for the rest of time, we're opting to just delete this and then add it back when somebody has bandwidth.
C: If we don't have bandwidth to fix it, that's the right way to do it, yeah, okay. And since we have a modular structure, and we can effectively add bindings to both transports and encodings practically at any time, I don't think that's too terrible. We can basically add a protobuf and a CBOR and whatever encoding as a 1.0 version later.
A: Okay, not hearing any, so: ship it, yeah. Okay... sorry, did you say "you're right"? Yes, yes, ship it. So, technically, you opened this up yesterday, so per our rules we can't technically approve it. However, what I'd like to do is suggest this, because this is a very easy change. It may be kind of controversial to somebody, but relative to approval it's either a very binary, yes-or-no kind of decision.
A: What I'd like to do is suggest that we conditionally approve the concept of removing it, and I'll work with Evan to get the PR fixed up, because it's not actually correct the way it is now. For example, he didn't update the TOC and the main README, stuff like that, but obviously those are minor typographical things.
A: Okay, any objection to that? All right, cool, thank you guys. Okay, quick question here. This issue was opened up by Alan. I don't really want to discuss it right now; let's save it for later. I just have a question for you guys who have actually looked at this one: is this a 1.0 issue? That's my only question.
C: Yeah, we effectively chose to care about the base64 problem in the first place, and we have basically punted on the rest, so we didn't really have a good source of reference for, you know, compression and all the extra features. So I think we punted on that in the discussion. We can certainly...
C: Exactly. So it ultimately becomes a question of how you... so there's binary data that we can cleanly support, and now the question is: how do you describe that binary data, what that binary data is? And I think with the content type we already have a way to go and very clearly declare what's in there, without having to resort to an extra encoding flag. Okay.
A: Again, before we get back to Evan's, because that might be kind of big: this one I just opened yesterday, so we can't officially merge it. I might ask for this one to be merged, or put into the other category waiting till tomorrow, but these are strictly syntactical fixes, so it's not actually major. I just noticed that we got rid of structures, and we had a couple of examples that still used structures, so I just converted those to, basically, straight-out integers. And per Vlad's suggestion, instead of making it an extension, I made it "othervalue", just to make it clear that you can put other, basically, words in there without dashes or anything like that. Then the other thing I did, for a totally unrelated reason: I was looking for the text that we had around naming, in particular the valid characters. So I was looking for this section right here, and it was really odd to me that it appeared sort of next to the terminology section. All I did was move it down into the context attributes section, right before the type system. It just seemed like a more appropriate spot, since we're actually describing the names of the context attributes. Were there any other changes in here?
A: Oh, because... here's that change out there, "event data". Then, in that section itself, I got rid of the nested "data" tag, or "data" header. It seemed like it was kind of pointless having it there, because all we had was a small little section afterwards, and in terms of what it was actually called, we actually have it mentioned right here.
A: If someone can come up with a better word than just "data" for the header there, I don't mind putting it back in; I just couldn't think of one last night, so I decided to remove it. Anyway, nothing normative in terms of changes, strictly syntactical, just moving things around slightly. Okay, any questions, comments, or concerns?
B: The way the spec is currently written, if a producer adds an extension, they don't have the ability for downstream converters, like middleware pieces, to turn it back into the header that's required for HTTP if it's been translated into a different transport and then back. So, potentially, worst case you drop them, and best case they become ce-prefixed, which doesn't work.
B: So, yes, we have one extension that's supposed to be followed by all receivers of CloudEvents. But what do we do in the case of just some random extension, or the next version of the tracing header, or if the tracing extension adds a new header? We would have to change our specification. So we think this is a problem. Okay.
D: I think, you know... I believe it was Clemens that touched on this last week. I think this is unfortunately an edge case for this example, isn't it? Because in a pure CE model, extensions will flow, yeah, without any problem, because of the way they get prefixed in the transport bindings. It's only distributed tracing where we have a problem, because it's really trying to say: don't treat this the way you do everything else, do it as a special case. And so I agree.
A: I'll jump in then. I would comment... that last bit that Jim said there, I think, is the key thing for me. The fact that we allow extensions to not follow the normal rules is actually the problem, and I'd like to tackle that problem itself, because I think if we can make extensions follow the rules like everybody else, then this problem goes away, because it's very clear where the context attributes live, right?
A: They're all either in the JSON body, or they're HTTP headers prefixed with "ce". And so what I'd like to do is attack this problem this way, and this was actually talked about on an offline phone call with Evan, Scott and Clemens that I had earlier in the week to try to hash through this: basically say that all extensions must be serialized with the "ce" prefix, just like any other attribute. From that perspective, they're no different whatsoever. However, they can have what I call a secondary serialization.
A: For example, on the tracing example, you could still use the W3C trace headers, you know, however you want; that's fine. However, they now become sort of secondary bits of information, meaning that when the receiver gets the CloudEvent, they're only responsible for, or required to, actually look at the prefixed headers, the "ce" ones.
A: If they want to look at the other ones, they're free to pick those up and pass them along as sort of extra attributes, but those are not CloudEvent attributes; they're just extra bits of metadata that happen to be passed along. And the reason I say that is because, technically, that other bit of metadata could have been changed somewhere along the process; we have no control over that whatsoever, and in particular in the tracing stuff, as has been explained. So you actually might want the two different bits of information: one being what the original sender meant to include as, quote, CloudEvent data, versus transport-level data that might get munged along the way, right? And so this allows for both cases to happen, but in particular, for a receiver who has no clue about this extension whatsoever, they still have a very clear rule to follow: they pick up the ce-prefixed ones, and that's all they need to worry about. Anything else can technically be dropped, from a content perspective.
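The rule being proposed here, every attribute serialized under the "ce" prefix and receivers only required to scan for that prefix, might be sketched as follows. The helper names and header values are illustrative assumptions, not any SDK's actual API:

```python
# Illustrative sketch of the proposal: every attribute, extension or not,
# is serialized with the "ce-" prefix; a receiver only has to scan for
# that prefix. Any other header is transport metadata it may ignore.

def to_http_headers(attributes):
    """Render context attributes (core or extension) as ce- prefixed headers."""
    return {"ce-" + name: str(value) for name, value in attributes.items()}

def from_http_headers(headers):
    """Pick up only the ce- prefixed headers; everything else is
    secondary metadata the receiver is free to drop."""
    return {name[3:]: value for name, value in headers.items()
            if name.lower().startswith("ce-")}

headers = to_http_headers({"id": "42", "myextension": "abc"})
headers["traceparent"] = "00-..."  # a secondary, non-ce serialization

# A receiver with no knowledge of the extension still recovers it intact,
# because it follows the same rule as for any core attribute.
assert from_http_headers(headers) == {"id": "42", "myextension": "abc"}
```

The point of the sketch is the asymmetry: the ce-prefixed copy survives any compliant hop, while the secondary (here `traceparent`) copy may be rewritten or dropped by middleware.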
A: That'd be really good, because I do think, in general, SDKs should sort of try to present as much data as they can to the receiving application, even if it's not part of the CloudEvents stuff. Obviously, they usually have discretion, you know, to pick and choose what data they maybe want to exclude, but in general I do think that'd be good guidance. But that's just me. Scott, your hand's up, yeah.
C: Yeah, and for... like, for the C# one... I think it's also fairly easy to take the context from where the CloudEvent came and link that into the CloudEvent object. So you basically need to have a property that's called "context", and then that has, you know, either an HTTP request in it, or it has an AMQP message in it, where you can poke around and get at the transport details. So that's probably what it's gonna be.
A: Well... I guess there's two ways to look at that. One is, if it's an unknown extension: unless we try to do some really complicated thing and encode the serialization rules into the message itself, something that says, oh, if the next hop is HTTP, do this for this header, if it's AMQP, do this for this header, which sounds like a really bad idea, trying to encode that into the message, I don't see how you can solve this for unknown headers at a receiver.
A: But if this middleware wants to act as almost like a proxy kind of thing, it seems to me it's gonna have to know how to deal with these headers and what this information is, irrespective of CloudEvents, right? Take this trace header as an example: even if CloudEvents wasn't in the picture, the trace headers are gonna get propagated properly, or they're gonna get dropped, regardless of what we do in our spec. So it sounds like it's not necessarily our problem to solve, but...
C: But if we're talking about the tracing problem, which is one that is important in this context, that's something that's really up to the trace context spec to solve, and not necessarily for us to solve. And then I think it gets clearer, and that's something we also discussed briefly towards the end of that call: I think when you think about event delivery as push, where the message, the event, travels along an HTTP route through proxies, forward...
C: Then you clearly have this parallelism between, you know, the trace context as originated in the sender, in the publisher, and how it travels along that route and then shows up in the receiver. For HTTP, there it's pretty clear. But as soon as you have a push-pull translation, where you push into a queue, or you push into... effectively, an intermediary pops up...
C: So when I pull a message out of somewhere, no matter where it's stored... so I have an event, I store that event in a queue, or store the event somewhere on disk, and then sometime later I go and pick that event up. The trace context origin for that operation, for the HTTP request, is not that of the event. The trace context for that is me picking up some events, but I don't even know which event that will be, no?
C: But you also need to have a trace path for before you even got that event, for when you start the operation. What starts that context is you saying: I'm gonna issue an HTTP GET now, or I'm gonna issue an HTTP DELETE now, or I'm gonna go and issue an AMQP receive operation. That's where your context starts, for that operation.
A: I think it makes it clear that, for these non-special cases, requiring everybody to use the "ce" prefix works, because you know exactly what header it's gonna appear in, or how it's gonna appear, and there's no place else to look; I don't have to worry about anything. I think, as Jim said, it's these edge cases, where there's a bit of data that wants to sort of live someplace else because there's an existing spec that we're trying to adhere to, right?
A: And, well, relative to CloudEvents processing, that is definitely true, and the reason I was okay with that, because I was thinking about this last night, is that technically, for any CloudEvent attribute that we have, whether it's source or any other field, that data may be created based upon other data in the message someplace. So, for example, let's say source was extracted from the body of the message, or some other HTTP header, or something like that, and then they put it in a ce- attribute, so they get that interop behavior...
A: Looking forward, right: if some piece of middleware modifies that other metadata, there's no requirement on them to modify the CloudEvent attribute. In fact, as you said, they may not even know about the CloudEvent attribute in order to modify it. So it's technically possible that the receiver will have a source attribute that doesn't match the other bit of metadata that it was originally created from, and that's fine; that's the way every other CloudEvent attribute behaves, and now we're making extensions work the same way.
F: Evan... I think that's an accurate statement of the problem. I think there's a second way to solve it, which is to say that any of these encodings are transport-specific concerns, and you use a transport-specific mechanism. So you say, you know: hey, when you're encoding HTTP, you need to know how to undo HTTP, and if you don't link in the AMQP stuff, you don't need to know how AMQP gets mapped to and from CloudEvents.
F: In 498, I suggested adding a header, something like a CloudEvents mapping header for HTTP, that would tell you what mappings the sender did on attributes. That would be something specific to the HTTP transport, all right? The thing that is bridging between HTTP and AMQP is going to be a CloudEvents bridge, I think; I don't think there's a general-purpose HTTP-to-AMQP bridge.
C: So, an objection: there is a generic mechanism for this. Well, there is, yeah. We have a spec that's in flight to become a committee standard, which is HTTP over AMQP, and that actually has that mode. So it effectively allows you to take HTTP semantics and bridge them over AMQP.
A: However, even if that's the case, I still don't think what you described there works, Evan, because what if the W3C spec says that for HTTP the trace header is serialized as it is in our document, but for AMQP it's serialized with a "z" in front, I don't know, right, some other serialization? That information is not available to anybody unless that middleware happens to understand the tracing extension, right?
F: No, I'm suggesting that there would be... so, on the first HTTP message, there would be a header that said "cloudevents-mapped", you know, trace ID to whatever the W3C header is. The recipient would look at that cloudevents-mapped header and say: oh, was there a header named, you know, W3C tracing, or whatever? Oh, I should translate that into a CloudEvents trace ID.
F: So for the first hop, all of those traces line up. Then, when it goes to send with AMQP, it either knows about the AMQP tracing extension, at which point it puts it in the right place for AMQP, or, if it doesn't, it sends it as a regular CloudEvents header with, you know, whatever prefixes there are, and you don't get traces for that path along the way, but you get the tracing information from the first part.
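The mapping-header idea described above could look roughly like the following sketch. The header name and the `attribute=header` syntax are assumptions for illustration, not what issue 498 actually specifies:

```python
# Purely illustrative: the sender records, in one extra header, which
# attributes it serialized under transport-native headers, so a recipient
# can translate them back into CloudEvents attributes on the first hop.

def parse_mapping(header_value):
    """Parse e.g. 'traceid=traceparent' into {attribute: http_header_name}."""
    pairs = (item.split("=", 1) for item in header_value.split(","))
    return {attr.strip(): hdr.strip() for attr, hdr in pairs}

received = {
    "CloudEvents-Mapping": "traceid=traceparent",  # hypothetical header name
    "traceparent": "00-abc-def-01",                # made-up trace value
}

mapping = parse_mapping(received["CloudEvents-Mapping"])
attributes = {attr: received[hdr]
              for attr, hdr in mapping.items() if hdr in received}
assert attributes == {"traceid": "00-abc-def-01"}
```

As discussed in the meeting, this only covers the hop where the mapping header is present; once the event crosses into a transport whose bridge doesn't understand the mapping, the attribute falls back to the regular prefixed form.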
D: Just one thing: when the tracing stuff was originally added, was the intention, and maybe this is what Clemens was alluding to, is this meant to trace the context of the event, or all the infrastructure that it flows through in between? Because it sounds like, if somebody publishes an event and it bounces through lots of infrastructure along the way, I could end up with a received trace context which has loads of stuff in it that I have absolutely no interest in. Is that true?
D: It flows through AWS infrastructure, maybe, I don't... sticking my neck out here, yeah... before it leaps into our infrastructure. I may not have access to all the gory details of what's gone on under the covers in, you know, AWS as well.
F: I think that you could either send the trace to your... you know, if you're seeing an SLO violation, you could send the trace to AWS as part of your proof, or AWS could choose to cut all the traces that are in AWS's infrastructure, so it looks like you go into AWS, you wormhole, and you come out the other side.
C: So the reason why we're publishing the event is captured effectively in the trace context, and now we're doing two things. We're putting it into "ce" attributes, which is an end-to-end flow, which gives visibility to the reason, as the publisher had it, to the consumer, without the intermediate infrastructure in play. And then, by propagating it into the W3C context header, that now lets the HTTP infrastructure flow make that visible, even if that then gets propagated by W3C context rules without CloudEvents being involved at all.
C: That's action number one. And then I'm publishing it, and then I'm also sending it over the transport, which is action two, and I think of those as distinct activities. The send operation gets traced, and then, effectively, me having sent that event and then causing things with it at the application level is a second thing.
A: I don't know that we're gonna get this resolved in the next six minutes, so obviously we've got to have more discussion around this, offline, in the issue, and probably next week. So let me ask this question, because this doesn't impact all extensions; it's just, as Jim said, those edge-case extensions that have their own special serialization, which hopefully will be minimal, but we at least have one we know of. Do people think this is a requirement to solve before we approve release candidate one for 1.0?
D: Assuming we just don't close the issue, assume we want to do something with it: I think any PR is gonna not only require changes to the spec, but also to that extension spec.

A: Yes, I think that's true. So the question is... I'm not saying do we need it for 1.0, though I think we do need it for 1.0. I'm asking whether we need it for 1.0 release candidate one.
A: Okay, yeah, to answer your question: I agree it is a little bit odd, but I was hoping it would be sort of an outlier thing that wouldn't affect the core. With you bringing up content type, that's a good example of why it isn't just this outlier thing. Okay, so is there any objection, then, to tagging this as required for release candidate 1? Which means... whoops, screen... which means our current plan here is pushed out at least another week.
A: Okay, in that case, three minutes left. Oh, I closed this issue, "consider removing the webhook spec", because I couldn't think of any place else to put it. Clemens and I have not had any chance to talk about it, but, Clemens, I think you've done some thinking about this and couldn't think of a good home either, so I think, for right now, it's okay to close the issue. We can always reopen it later.
A: Okay, while we're waiting... okay, anything else you want to bring up on today's call?