From YouTube: CNCF Serverless WG Meeting - 2018-08-09
A
Usual administrative details, and let's jump right into the extension stuff. Now, last week's call, I think we left it where there were a couple of people that wanted to speak to certain aspects of the issue, mainly from the perspective of why they believe a bag is needed. I believe Rachel or someone from Google was going to speak for the gRPC side, or protobufs, and carry this.
D
Yeah, I just came from another CNCF meeting. Okay, yeah, hi everyone.
D
You have the slides up? Then we can walk through it. So, basically, I wanted to follow up from the last meeting, where we took the two possible approaches for extensions that were actually discussed, and we ran a little bit through how this would look in protocol buffers, and I wanted to share what we found. But before we get to that, for the benefit of everyone in the room...
D
I just wanted to give a couple of slides' overview of what a protocol buffer is. So, to recap, or for some of you seeing it for the first time: protocol buffers is basically the interface definition language that we use to define strongly typed messages that can be compiled to a binary format across the wire, and it can also be used to define service definitions as well.
D
That's why RPC systems, not just Google and not just gRPC but in many different places, actually use such a format to define both the messages and the RPC services. Thrift is another example that works very similarly to this. Yeah, so, in protocol buffers: in the previous slide we saw how to define messages; in this slide...
D
You can see how to define services, and how these services can then accept a message as an argument and return a message as their return value as well. So this is the basic way in which we define services and messages across the board, and our intent, at least the gRPC and protocol buffers teams' intent here, is to define CloudEvents in a similar way.
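The slides themselves aren't visible in the transcript, but a message plus service definition of the kind being described would look roughly like this in proto3 (the names here are illustrative, not the actual CloudEvents proposal):

```protobuf
syntax = "proto3";

// A strongly typed message: each field has a type, a name, and a field number.
message CloudEvent {
  string id = 1;
  string source = 2;
  string type = 3;
  bytes data = 4;
}

message PublishResponse {
  bool accepted = 1;
}

// A service definition: a method accepts a message as its argument
// and returns a message as its return value.
service EventService {
  rpc Publish(CloudEvent) returns (PublishResponse);
}
```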
D
That's basically our goal in trying to work in this group and come up with a conforming format for this stuff. So, we have been on the topic of extensions, so I wanted to dive right in. There are actually two language versions of protocol buffers: protocol buffers version 2 and protocol buffers version 3. Starting with protocol buffers version 2, on the right side...
D
You can see that the definition of a protocol buffer in version 2 is actually quite a bit more verbose than the messages that you saw defined in the previous two slides, which used protocol buffers version 3. But the reason I wanted to talk about protocol buffers version 2 is that it has a particular feature called extensions, which allows people to actually define extensions, and I'm presenting this to this group because this is where there's a lot of internal experience in building these kinds of things.
D
So, in protocol buffers, basically, you have a field number; on the right side you can see it: 0, 1, 2 and so on and so forth. These field numbers cannot be changed, because if you change the field numbers, you are actually changing how the binary format looks on the wire and how it's going to be deserialized.
D
What extensions actually are in protocol buffers v2 is that they reserve a few field numbers for third-party extensions of your proto, so you can say that this particular piece is actually defined somewhere else. Basically, what you would do is take the original proto, call it proto A, and then you import it into proto B and extend proto A with it. When I say import, that's actually an import statement that you can do there, and then, basically, that determines what proto A's wire format will look like.
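As a concrete sketch of what's being described (file and field names are made up for illustration): proto2's `extensions` keyword reserves a field-number range in proto A, and proto B imports A and claims numbers from that range.

```protobuf
// proto_a.proto (proto2 syntax): the original message.
syntax = "proto2";

message EventA {
  required string id = 1;
  // Reserve field numbers 100 to 199 for third-party extensions.
  extensions 100 to 199;
}
```

```protobuf
// proto_b.proto: imports proto A and extends it.
syntax = "proto2";
import "proto_a.proto";

extend EventA {
  // Claims field number 100; nothing stops another party from independently
  // claiming 100 too, which is the collision problem discussed later in the call.
  optional string my_extension = 100;
}
```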
D
So, basically, how does this work for older clients? Say a newer version of a proto comes in, and you have an older client library running, or older service libraries running. When that service library actually marshals, sorry, unmarshals the proto, the unknown fields are actually not discarded, so that if the same message is later serialized, those unknown fields can be passed along.
D
This kind of allows services to update at different paces through the proto definitions, and it's one of the big points of protocol buffers, in the sense that when you upgrade a proto, not everybody has to upgrade their implementations immediately; they can upgrade as and when they want to take advantage of the new feature. So that's kind of how extensions work. Now, extensions have a lot of problems. Extensions are not future-proof, because proto3 does not support extensions. So this is...
D
Proto3 doesn't have that, so things break when you do try to do that. The other thing is that, as you saw with the extensions, there's a certain number range that you need to reserve for a particular extension. Because there are field numbers in protocol buffers, and also in things like Thrift and Cap'n Proto and so on, there's a growth problem.
D
Any approach like this would require global coordination. Yeah, if you could go to the next slide. So, the other thing that we took as an action item is how you actually promote extensions up. Just to conclude the thought exercise now...
D
Yes, sorry, yeah. So this slide talks about CloudEvents extensions: what happens when you actually have to make sure that something that was implemented as a proto extension is now an official part of the proto, because we've decided to make that CloudEvents extension part of the official spec.
D
You have to actually redefine a known extension into the official protocol buffer. That's okay, but then the other problem is that any difference between the initial and the final message becomes completely incompatible. So if you decide that, okay, this works kind of well, but let's promote a slightly different variation of it, that becomes a big problem, because you are essentially defining a different message.
D
For all intents and purposes, you are defining a different type in that case. So, in protocol buffers v3, we got rid of extensions and we replaced them with the Any type. Protocol buffers have a series of well-known types, that is, types that implement a particular functionality and are encapsulated in a proto definition. The Any is a well-known type; these are the sort of types that let you build upon the basic types in protocol buffers.
D
It lets you use messages as embedded types, and then it lets you specify a URL for where to actually decode the particular byte string that the Any wraps around. So this has a JSON mapping, but the JSON mapping actually persists a URL which contains the definition of how to actually unwrap this, which makes sense if you're coming from a proto-first world into JSON.
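For reference, the `Any` well-known type being described wraps a serialized message together with a type URL (a sketch; the message name and URL are illustrative):

```protobuf
syntax = "proto3";
import "google/protobuf/any.proto";

message Envelope {
  string id = 1;
  // Any = arbitrary serialized message bytes plus a type URL
  // that tells the decoder how to unpack them.
  google.protobuf.Any payload = 2;
}
```

In the JSON mapping this serializes along the lines of `{"id": "1", "payload": {"@type": "type.googleapis.com/example.Thing", ...}}`; the `@type` URL is what carries the information about how to unwrap the embedded value.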
B
Like, this is a pretty important point, and it might be worth explaining in a little bit more detail. So, say I create an extension called Rachel's extension and someone starts using it; then it gets assigned a field number, like 100. And then say you create an extension, and you could create the same thing starting with the tag number 100: we would collide, and my data would become the value for your extension, right?
A
Following on from that: the mapping to numbers. I'm trying to figure out if that's just an interesting implementation detail or something we actually need to think about, because it sounds like you're saying that if two different people define an extension, they're both going to get assigned a number. But then how do you make sure they're both not assigned the same number?
D
I think that's a good question, though. The thing is: yes, this is a problem because of an implementation detail in the protocol buffer format, but it is a problem that we would run into with a lot of binary formats, because what binary formats do is use field numbers to tell people how to structure that particular message. So what we're trying to do here, basically, from a protocol buffers perspective...
B
Just to totally explain why we're bringing this up: if Google's were the only binary format that was going to implement this, this would not be a problem; we could just do it on our own and it would be fine. But we're trying to make something that's compatible with anyone else who wants to have a binary format as well, and that's why this is a problem.
D
That's a fair point, and I think the last slide of this presentation directly addresses that. So, basically, what has happened is that when we started out trying to design this particular format, we got caught in the middle of: okay, how do we handle extensions? Because that becomes an interesting point as to how to actually define this particular spec for the proto. When we were doing this, we jumped into this particular discussion at the same time as that other discussion.
D
So, basically, I think that is sort of the state we're in. All we're trying to do here is to say: hey, we are trying to actually implement this spec; we want to get this out as a pull request; but at the same time, I think there are some fundamental decisions that are also being voted on at this particular point that are going to drastically affect how that looks, and we think there are a few different issues here that pertain to a lot of binary formats.
F
So, if you're talking about a lot of binary formats, it would be interesting to go and see a survey of that. Second, there is prior art for how to do extensions in a broader number of serializers. I pasted two links: there are .NET and Java serializers which effectively all have the same strategy, and that is to stash everything that's not known by the schema into an extension dictionary, in either format. Let me spell it out.
F
The wire format is flat. And this is something that comes out of very good practice in XML, where you have XML structures that you make with an open schema: you put in an "any" element at the end, so you are effectively defining your version 1.0 and appending, in your complex type, an any element at the bottom. Then, when you define your version 1.1, you are effectively adding elements in the place of the any, and you shift the any down. The nice thing about...
F
That is that now the 1.1 serializer and the 1.0 serializer will be able to round-trip that document, because the serializer supports these extension bags if you have an open schema. So I pasted two examples of this: one is for Json.NET, which actually does that sort of round-tripping through a serializer that maps JSON to an object. The other one is doing that same thing with the XML serializer of the .NET framework, and that also round-trips the data; and there's another bag, since it's XML, to do that for attributes.
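The round-tripping pattern being described can be sketched in a few lines of Python (an illustration of the strategy, not taken from the pasted links; the field names are invented): a deserializer that only knows the 1.0 schema stashes anything unrecognized into an extension dictionary, so a 1.1-era attribute survives a decode/encode cycle.

```python
import json

KNOWN_FIELDS = {"specversion", "type", "source", "id"}  # the 1.0-era schema

def decode(wire: str):
    """Split a flat wire document into known fields plus an extension bag."""
    doc = json.loads(wire)
    known = {k: v for k, v in doc.items() if k in KNOWN_FIELDS}
    bag = {k: v for k, v in doc.items() if k not in KNOWN_FIELDS}
    return known, bag

def encode(known: dict, bag: dict) -> str:
    """Re-flatten on the way out: the bag exists only in memory, not on the wire."""
    return json.dumps({**known, **bag}, sort_keys=True)

wire = json.dumps({"specversion": "0.1", "type": "com.example.ping",
                   "source": "/demo", "id": "42", "newattr": "added-in-1.1"})
known, bag = decode(wire)
assert bag == {"newattr": "added-in-1.1"}                  # unknown field went to the bag
assert json.loads(encode(known, bag)) == json.loads(wire)  # lossless round-trip
```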
D
Yeah, so I just wanted to let you know that there's one more example of exactly what you described; that's in the next slide. So, okay.
D
So, basically, what I'm trying to say is: so, Clemens, basically, what happened in the last meeting was that there was a discussion on whether we should have these extensions directly at the top level, or whether we should have them in a bag, as you described. So I took an action item to actually explain, in the protocol buffers world, what putting extensions at the top level would look like and what putting extensions in a bag would look like, and the pros and cons of each.
F
What I said: I'm not saying the extensions go into a bag; I said no to a bag in your specific format that you defined for protobuf. So the point here is that the current CloudEvents abstract data model is flat, and at least we're advocating for it being flat; I think Doug and I and a few other folks agree on that being flat, and that the bag may appear in a particular format mapping.
F
You need it in JSON; you don't in XML; in protobuf, because you have your schema bound, you probably need it too. But okay, that doesn't mean that the specification per se needs to even have that bag. Effectively, the bag constitutes itself out of everything that is not explicitly known, everything that you can't explicitly put in the schema because you don't know it; so you have an any, you have an any-extension, effectively.
B
Wait, so how do the wire formats interoperate? Like, protobuf doesn't interoperate with Thrift, does it?
D
Why is there not a pull request yet? Because we want to actually solve these problems. These are the sort of four concerns that we're trying to address here. One is that we want to follow the messaging format's best practices, and I deliberately put "messaging format" here, because I want to define a CloudEvents spec based on protocol buffers that follows protocol buffers' best practices.
D
I think all of us want to do that: we want the JSON to look like idiomatic JSON, and when the next person defines it for Thrift, it should look like idiomatic Thrift. The other thing is: is it important to maintain some sort of automatic mapping from one message format to another? It sounds like your opinion at this particular point is: not necessarily; we can leave it to whoever wants to do that mapping to actually take that information and write the custom logic to map back and forth.
D
That's good for us to know, because that opens up, that makes us a lot more free to define the implementation in the way that we want to. But that was a question that I wanted to ask this group. The other thing is that, as we do this spec, one of the things we're trying to do with protocol buffers is to make sure that what we do over here works for other formats that are similar as well. Yeah.
D
Thank you. I think there are a couple of slides before this that actually propose that we put the protocol buffers extensions in a flat namespace, but in a map that contains all the extensions in a key-value store. So that's something that's missed in this discussion, but that's okay, because we can share the slides out and folks can look at the details if they need to.
A
So, Kalish, can you share that last slide again? So, yes: I'm struggling here, because if I hadn't had the history of the previous conversation, meaning previous phone calls and stuff, and all I heard was your presentation here, I've got to be honest, it wouldn't be particularly clear to me what you're advocating. Yeah.
A
It was very difficult for me to follow all of it, because there's a lot of information there, and it wasn't clear to me which parts were sort of background material versus directly related to bag-or-no-bag. So I'm trying to figure out how we can make forward progress here, because people are itching to do a vote, right? Yeah.
A
So we can make a decision today, and then next week, based upon your work in progress with us, we can completely change our mind; that's fine. But people want to have something, you know, to go with for right now. I'm trying to figure out the best way forward here, and I'm not quite sure what to do with your information. Does that make any sense?
D
This is the set of information that we were missing, that sort of augments all the discussion that happened in the last meeting. I'm not sure I can summarize the last meeting adequately for you at this point; I do get what you're trying to say, though. But in summary, here's my question: if what we want to do is have a reasonable mapping of the protocol buffer implementation to, say, the JSON implementation and so on, do we need to maintain such a mapping?
A
Just from a moderator point of view: we have lots of people in the chat saying they want to speak up, so I know, I know we haven't done this in the past, but for this meeting, if you would like to speak up, put a plus-hand into the chat and I'll do my best to keep track of who's in the speaker queue. We want to make sure everybody gets a chance to talk.
H
Thanks, Doug, I appreciate it, and I apologize to the group: I was on some of the earlier calls and I haven't been on some of the more recent ones, so Kalish and Sarah, I might be missing some things; this is simply from an old guy who's been doing this since the early 1990s. Most of the time I've seen, when a producer is producing a format that wants to be generally consumed, which I believe is kind of the purpose of CloudEvents, there needs to be a single definition of what that looks like on the wire.
H
I don't believe that it should be transport-bound myself, but I know that there are some conversations around that as well. However, there are consumers that might want to consume that data in a different format, and again, at least from stuff done on Wall Street and some others, they always want to consume it the way they want to see it: for example, in this case, a protobuf. And usually what happens is that someone then steps up and says: okay, I know the standard is this, but I'm going to define a mapping over to this protocol format...
H
On the wire, and then they actually own and define that mapping and keep it up to date. And of course, I know you guys were talking about how we do future revisions and stay compatible as the standard changes, but it feels like Google would step up and say: hey, we're part of the CloudEvents group, and here's the mapping from the format that's defined by the standard, which I believe...
H
If I'm not incorrect, is supposed to be JSON today, and here's Google's mapping into protobufs for anyone who wants to consume these events as protobufs. Now, again, I think the producer should not have to worry about that, so there's going to be an intermediate layer that says: oh, I know there are interested consumers for this type of cloud event, but in a protobuf format, and I'll do the conversion for them.
G
Yeah, thank you for that. Again, I've not participated earlier, but I've been tracking what's going on in the space. There are a couple of points I want to mention, but I'll be very brief about it. One is the data that is being passed: it's being enforced that it has to be JSON. I think everyone seems to agree, at least at the protocol level, that there should be JSON support, or an implementation available, when you put it together. So that's one aspect.
G
The second aspect is the structure of the message. As I think Derek rightfully mentioned, it should be understood by the producer and the consumer, so it makes more sense to have some format that producers and consumers can agree on, right? Putting something in the bag, expecting that at some point a producer or a consumer would want to interpret it, seems like we're just putting in something that eventually someone would be able to understand.
G
It enables you to represent linked data in JSON format, so that's kind of taken care of, and linked data enables producers to describe what event they are creating. They can also create new events, saying: okay, this event is a subtype of some other event, and these are the extra attributes I'm adding to it. That way, it makes it easier for everyone to understand what is being passed through.
F
So, if we assume the CloudEvents base spec is an abstract data model, and I think that's what we've defined it as, you can go and make a data structure inside of your application that reflects exactly that state, and that then has, effectively, a way to store anything you read, from whatever the serialization is, into a dictionary.
F
JSON is just the one format that we have. You can take such an event and serialize it out into JSON; you can serialize it out into XML; you can serialize it out to protobuf; you can serialize it out to Thrift; and it pops out on the other side, and you can now go and deserialize that in another runtime into a similar structure.
F
The question that we're really discussing here is whether we want to have a closed schema, where the top level is all defined and extensions always go into a bag, even in the abstract data model; or whether we want to have an open schema, where we simply don't have that bag in the abstract data model, in the document, but rather basically allow for arbitrary extensibility at that level. On the wire, the data flows flat, but then in deserialized state the data basically goes into...
F
You know, well-known properties versus a dictionary. If you go and deserialize the event in a scripting language, so, basically, you just take a JSON parser and take the event in, you're simply going to look at a dictionary, and everything's going to be flat, which is going to be much easier, for me at least, to handle than, you know, a more structured way.
F
So, basically, for anything that happens in memory, the reference specification is kind of the core spec; and then, for everything that happens on the wire, the reference specification is the format spec that depends on it. And I think the format specs can have bags, and they sometimes need bags, because it's not expressible in any other way.
F
For instance, in protobuf. But the JSON binding, for me, has a very clear set of rules for how it can go and serialize all that stuff out without needing a bag, which keeps it easier. And JSON serializers which are formatting into objects, like the one example that I pasted, will still go and put the stuff into a bag, because that's the construct that they have. So that's why I prefer to have it flat.
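The two candidate shapes under discussion can be written down concretely (the attribute names are illustrative, not the spec's): an open schema keeps extensions flat at the top level, while a closed schema pushes them into a named bag, costing scripting clients one extra lookup.

```python
# One logical event, two candidate serialized shapes.
core = {"specversion": "0.1", "type": "com.example.ping", "id": "42"}
ext = {"myextension": "value"}

flat = {**core, **ext}                # open schema: extension sits at the top level
bagged = {**core, "extensions": ext}  # closed schema: extension lives in a bag

# A scripting client reads the flat form with a single lookup...
assert flat["myextension"] == "value"
# ...and needs an extra hop for the bagged form.
assert bagged["extensions"]["myextension"] == "value"
assert "myextension" not in bagged
```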
I
So, all of this discussion is about how to map the information in the CloudEvents to a transport, right? What I see is that we're concentrating on how to map the extension information to the transport. Shouldn't we treat them the same? You know, whether the information is in the main spec or it is an extension, usually you should use the same way to map them to a transport.
F
Yeah, that's actually a point that I just brought up, that I just mentioned: if you have a field that you start using because you want to promote it, effectively, as a standard element, then the scripting clients, which are not relying on serialization, will be able to just continue to use it without it moving from one place to the next place.
I
I'm just, you know: no matter which transport or which serialization mechanism we are going to use for the transport, I think it's better to treat the information fields in the main spec and in extensions the same way when we serialize them, instead of treating them differently.
J
I think I like the way that a couple of people have been attempting to frame this as: what are our first principles? I think Kalish did that in the final slide, which I actually transcribed into the notes so that we would have them here. And I think there's some dissent, at least amongst, you know, somebody who said that we have collectively decided that it's not important to maintain that...
J
What that means is that I know the things in this bag might collide and the things at the top level won't. It's a way of basically sorting that list: instead of every implementer keeping a list of the spec things, what's in spec, and tracking that, you have this clear sorting mechanism. So we're just saying it's implicit in the written documentation of the spec which attributes are commonly accepted or not, versus having it actually be in the structure. But...
F
But it's clear, though: we have a CloudEvents version in there, and once you have the CloudEvents version, you know exactly which fields are in spec, which means you interpret them by spec. The only collision that can happen in that scenario is extensions stepping on top of each other, and they would step on top of each other at the top level as well as in the extension bag.
J
Yes, but then if somebody implemented an extension: so you have two different implementers who, unbeknownst to each other, both implement the same extension, because you're sort of encouraging people to use general terms, right? So somebody says: okay, I'm going to have a logging extension. I'm not proposing that we do this, but two different people say: okay, well, I have a great idea for generalized logging, and they name it the same thing, but they're...
K
You still have a collision. But if the collisions happen inside the extensions bag, we know that those are not official; we can choose to ignore them, as long as we honor everything that's defined in the top-level spec, where we expect a base level of functionality. But now we're saying people can put anything at the top level, and at any time we might decide to interpret it one way for everyone who conforms to the spec.
H
This is Derek. I think I understand the problem. It's been solved, or at least there's been an attempt to solve it, through what JWTs have done, where they say: these are the required fields, and this is the spec; if you want to add extensions, you have to make sure you don't collide. So, for example, in this case, I would imagine Google, if they want to introduce an extension that they want to propose to be eventually uplifted to the spec...
J
I think, and I'm really speaking as somebody, not from Google's perspective: because Google, Microsoft, Amazon, we've got some heavyweights in this space who could, without working with our working group, just decide: hey, we're going to determine a top-level thing. But then this group would, it would be very hard to say "actually, we're going to define it differently" when, in practice, some cloud provider has turned it into the de facto standard, and then there's no process of promotion.
A
Okay, guys, I raised my hand, not as an IBMer, but as moderator. We have 15 minutes left here, and I'm not a hundred percent sure people are bringing up new information. Am I wrong in that? Did someone have something they feel is brand-new information? Because I'd rather not rehash things that have been said before; I'd like to see us come to some sort of resolution today, because I'm hearing from people that they want this behind us, one way or another.
H
I could, and again, maybe I'm speaking to first principles: for me, from an eventing system, where it's kind of like a publish/subscribe type of paradigm, the first principle is that you decide the schema, and that's required, and the wire format, for all producers. That doesn't mean that all consumers can't consume in a different format, but producers have to all be the same.
H
So, if it's an event coming out of Google, Amazon, IBM, whoever, it's by the spec: these fields are required, these are optional, and it's JSON on the wire, or whatever, but they all have to be the same. I cannot see a world where I have an event producer in Google that now has to produce five different versions so that everyone can consume it. And again, when you produce something, you probably know the first consumer, but you have no clue who else is going to want to view that data later on.
L
In the SDK working group, sorry, subgroup, we determined that extensions were something that we wanted to approach initially, in the initial design, because we feel they're very important. Until we know where those extensions are going to end up, it's hard to move forward on the SDKs in general. Additionally, I'm speaking as a stakeholder with a middleware product.
J
Certainly, if this is going to change, then changing it will help everybody; if it's not going to change, you know, this is the tension, right? That's why there's an urgency for a subgroup to do it. But the SDK thing, I think that was something that I missed: that this is where this originated from. And so my question to the SDK group is just: we've talked about how the people implementing wire protocols could track the list of known attributes and create their own bag concept, right?
L
Yeah, and for clarity, I don't have a super strong opinion here. I'm leaning towards having it at the top level, because it seems to actually graduate an extension into the specification; there's a good graduation story there, and I favor that. It's just that some resolution, some resolution is what we're looking for, right?
A
But the problem here is that people do think that they'd like to see if we can come to some formal decision about extensions. In the past, it may have been more just sort of implicit, because that's just sort of the way it was, but we never had much of a formal discussion about it, and that's what we're trying to do here. Okay, so...
I
Okay, you can have the proposal and I can present mine; it's the last one. It's the last, so I don't present the first one. I said: is it the worded proposal? No, no, no, it's not; it's, you know, how we do these property bags, or these extension things. So, a minute, yes, present; if you like, do it.
I
Okay, so, here: I think, you know, we should have no restriction. We should not put a restriction in the spec saying, you know, there's only a single format of key-value pair. We should allow the metadata in the format of either a key-value pair or in the format of, you know, a classification bag, a property bag. There are some situations where, you know, we have to have that bag at the root level, in both the main spec and the extension spec.
I
So, no matter whether it's, you know, in the main spec or in some extensions bag, I think we should allow, you know, these two different formats. And the reason, maybe I'll just explain: I think one of my reasons is that if we give this flexibility, we'll have more people use this working group's spec, because there are so many different event producer types out there; we cannot predict that, you know, every piece of metadata information can only be modeled in a key-value pair format.
I
That would be good, because it avoids, you know, the backward-compatibility problem when an attribute is promoted from an extension to the main spec. And the third, and I think this might address, you know, Sarah's question, I'm not sure whether it will: if a property bag will be used by a large number of use cases, no matter whether it's a classification bag in the main spec or it is a key...
I
A key-value pair, I think the working group should define a consistent name for that bag in the main spec. Defining this name will prevent different event producers from giving different, you know, random names for the same information. Also, it simplifies the event consumers' implementation: instead of having to parse all...
I
Instead, okay, I'm going to continue, you know, three minutes: instead of having to parse different property names from different event producers for the same type of information, the consumer can just parse that one name. Also, there are existing protocols that define the names for property bags, so I don't think there is anything wrong there. If there's a need to define that, you know, we don't have to put a restriction saying, you know, no one can define any, you know, property bags, period.
I
You know, for a bag of information. So what I propose is: if we see there's a large use case for that bag, we should define the name, but, you know, the information inside it does not need to be defined, and the bag itself can be optional. So we can define that in the main spec. So I'll give an example, maybe let's go down, yeah: for example, something we need, I think "identity labels", and we define this.
I
Instead of, you know, some people calling it "identification label", some people calling it "identification property", or some people calling it a bag or other names, we just say "identity labels"; or whatever name, it doesn't have to be this name, we can just choose a name, and then people can put that identity information inside it. For a specific event producer, I don't think there will be a lot of, you know, kinds of identity labels, maybe just a few, and then it's up to the...
I
So the event producer, you know, puts the information there, and the event consumer, you know, how the consumer is to use it, that's not something we need to define in the event spec, because we only need to define what's needed between the event producer and the event consumer. Sorry, that's it.
A
So, tell you what: I feel very uncomfortable doing a vote in 30 seconds. I apologize we were not able to come to a resolution, but what I want to do is, within the first 15 minutes of next week's call, have a vote. I think we've gone back and forth around this too many times, and I don't want to do a vote in 30 seconds; I just think that's just not appropriate from a process perspective.
A
Okay, tell you what, let's do this: I hear people actually do want to start a vote now, and that's fine, and we're good, because anybody not on the call, we're going to let them do an email vote anyway. So, if you're on the call and you want to vote now, I'm going to go through the names quickly, and you can just say yes or no to the PR in question. If you don't want to vote right now, you can do it through email. And so, how does that end the day?