From YouTube: CNCF Serverless WG Meeting - 2018-08-23
A: Okay, all right, we're going to get started. Skipping the AIs: the only thing I wanted to point out there involves Kathy, and since I don't see her on the call it doesn't do any good to go through it. So let's skip that one.

Okay. So, first of all, before we get to the meaty discussions: we had a doodle poll for the face-to-face meeting at the OSS summit. Only four people said they can make it, so we are not going to have an official meeting there. We may choose to have an informal get-together, but it will definitely not be an official meeting if we do decide to meet, so don't worry about that.

I did start a doodle poll, however, for KubeCon Shanghai. I think we've had quite a few people sign up for that one, but I want to draw your attention to it, because we actually do have an intro session and a face-to-face session signed up for us. In the coming weeks I will start a Google Doc or something to try to gather ideas for what people would like to discuss there and who will be there to help present, so forget the logistics for now; for right now there is a doodle poll for people to sign up, so we at least know who's going to be there. Let's try to get people signed up by next week's phone call if possible, please, just so we can get an accurate count of people. All right. Moving on, then, let's talk about PRs. Rachel?
B: So I'm going to start by acknowledging that we are slowing down progress that everyone is very eager to make, and I'm sorry about that. The thing that I do want to say is that we're not doing it to be capricious, or because we're acting in bad faith and just want to stall the spec.
B: Spencer has opened a PR proposing a proto implementation; proto is the open-source format that is used extensively inside and outside Google. Thomas has created a sample repo that we're going to walk through as part of this demo, and we also open-sourced it, so if anyone wants to look at it afterwards, in more detail, or read through how it works, you can totally do that. We are slowing things down, but we're doing it because we think that if we make some changes now, the spec will be more broadly useful.
B: So that's what we're doing here. If you can hold your questions to the end, we have lots of people on the call who are happy to answer all of them; we just want to get through this presentation as fast as possible.
B: So, to start, "proto" means roughly two things. There's a human-readable .proto file, written in the protobuf interface definition language, which is used to generate multiple consumer event libraries and encodings, that kind of thing; there's an example of what this looks like on the right side. One of those encodings is the proto binary format, and that is the other thing we care about. So, to avoid confusion, I'm going to use "proto-lang" to refer to the language; that's the part that is human-readable.
B
They
can
write
that
also
generates
a
JSON
encoding
and
we
are
bending
over
backwards
right
now
lets
you
define
a
cloud
events
message
and
put
a
link
that
will
generate
JSON
the
spec
compliant,
and
you
can
read
more
about
how
it
converts
to
JSON
and
I'm,
going
to
refer
to
proto,
binary
or
by
anything.
So.
B
We
can
define
a
cloud
events
message
and
proto
Ling
and
it
will
generate
serialization
for
Pro
JSON
and
proto
jason
serialization.
I've
linked
to
it's
not
incredibly
restrictive,
but
there
are
some
restrictions,
a
source
of
confusion
in
previous
conversations
that
we've
had
has
been
that
the
proto,
binary
and
adjacency
relations
are
totally
independent.
That's
not
true!
B
If
they're
both
generated
by
a
common
protein
definition,
only
a
subset
of
a
JSON
can
be
generated
by
Pro
tolling,
it's
pretty
permissive,
but
it
enforces
some
guardrails
and
moving
extensions
to
the
top
level
proper
top
level
properties
does
make
it
impossible
for
us
to
define
a
spec
compliant
cloud
of
internal
link.
So
we
acknowledge
up
front
that
this
extensions
bag
is
just
a
slightly
clunkier
API.
B: Proto is used at companies that have lots of popular APIs, and we would love for them to be able to support CloudEvents. I'll just add that I wasn't sure how many people are using it, so I looked at the public mailing list: there are about 4,000 users, and just scrolling through, lots of them are big companies. Publishing a CloudEvents definition in proto-lang would let all those companies quickly start using CloudEvents within their existing proto systems, and if the CloudEvents JSON format cannot be expressed in proto-lang, then every single company that's using proto internally is going to have a higher cost to start sending or receiving events. So that's what I'll say about that; it's not proprietary. I'm going to stop sharing now and let Thomas share to walk through his demo. The demo uses Doug's JSON ext tool, and it's going to show that we get non-deterministic behavior.
D: Thank you, Rachel. One of the things that I wanted to focus on is that, to a large extent, the discussions we've been having, where we use proto as an example, aren't just about proto; they're about those guardrails, which I think are there for a reason. So I published a git repo yesterday called "versioning is hard". In it I try to act as three different actors: our working group, a smart library vendor, and an application developer, and I try to do all of this against the JSON spec as proposed.
D: Each of these personas is poking a little bit of fun at our working group; every commit is signed off by, and acts as, one of them. Anyway, we can go ahead and look at the final product. We have some fake CloudEvents, and we run the application at version 1.0.0 of our spec, which currently says that anything with an eventID property is a CloudEvent, and anything else that appears is parsed as just an extension.
D
So
this
scope
of
the
spec
and
then
this
is
the
simple
jason
library.
Unfortunately,
the
jason
that
is
being
proposed
is
incompatible
with
the
default
go
compiler.
There
has
been
aid
out.
Sending
bug
that
I
linked
to
that
has
been
around
for
many
years.
Go
still
cannot
with
the
default
compiler
hand,
unstructured
properties.
D: So thankfully Doug tried to work around that. He has his special parser, which I use, and then my application basically just loads the event in and prints out the event ID. We're also pretending that this uses the sampledrate extension that we ratified last week, and it prints that the event was either sampled, at a rate of one in something, or was not sampled. So if you run it at 1.0, events 1 through 3 were sampled at a rate of one in ten.
D: So then spec 1.1 got announced. It said: hey, we also have an eventTime property, and by the way, we've noticed that everyone likes to use sampledrate, so we're going to formally define it now. The library vendor's response is just to add those fields to the struct in our library, and then suddenly, when I try to run that code...
[inaudible crosstalk]

B: So the thing we want you to take away from this is that we need to balance forward compatibility and extensibility in structured data, and we have a proposal for how to do that which would work for all formats. Diving in for a second: forward compatibility is about adding new attributes to the spec in future iterations without breaking existing event consumers that are still using the older version. In JSON this is really straightforward, because JSON keys are strings.
B: There's a list of JSON primitives defined in the spec, but that's okay; I can send you a link to it later. Forward compatibility in proto binary is a little bit different, and we've gotten a question about it from a few people, so I want to address it directly.
B: Why shouldn't event consumers using proto binary just use unknown fields for extensions? There are a few reasons. Unknown fields don't provide forward compatibility, because the top-level keys are integers: to avoid collisions you have to park an extension at a high-numbered integer, away from the normal low-index integers, and then, when the extension is promoted into the top level of the spec, we have to flip it to a low-index integer.
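The mechanics behind that can be sketched in Go. A protobuf wire-format tag really is (field_number << 3) | wire_type, so the same attribute encodes to different bytes at a high extension slot and at a low promoted slot; the field numbers below are illustrative, not from any proposed CloudEvents proto.

```go
package main

import "fmt"

// protoKey computes a protobuf wire-format tag: the key written to
// the wire for a field is (field_number << 3) | wire_type.
func protoKey(fieldNumber, wireType int) int {
	return fieldNumber<<3 | wireType
}

func main() {
	const lenDelimited = 2 // wire type for strings and messages
	// An extension parked at a high field number and the same
	// attribute promoted to a low one produce different keys, so
	// the bytes on the wire change when it is promoted.
	fmt.Println(protoKey(50000, lenDelimited)) // extension slot
	fmt.Println(protoKey(7, lenDelimited))     // promoted slot
}
```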
B: That is a major change, not a minor change, because consumers were checking for the field in one spot and now they have to check for it in another. So even though adding a new property is a minor change for JSON, it's going to be a major change for binary formats, and this is a question I really want to draw people's attention to.
B
Will
the
working
group
be
incrementing
their
semantic
versioning
only
when
a
change
breaks,
JSON
only
event,
consumers
or
when
it
breaks
anybody
so
diving
into
extensibility,
and
that's
about
allowing
arbitrary
attributes
either
the
top
level
or
in
a
property
bags.
B: So, extensibility via a property bag: I'm going to talk about the pros and then the cons. The pros first. For event consumers using a binary format, extensions can be used without special handling: a vendor-specific extension, for example samplerate, is simply added to the extensions property bag.
B: Event consumers can declare extensions to be a Struct, and there's a link to how Struct is defined. A lot of work has gone into it to make sure it can handle arbitrary values and that the conversion between JSON and proto binary is smooth, so we can take advantage of that. And for event consumers that are using a JSON-only format, extensions can likewise be used without special handling. Now, the cons of the property bag.
B: This is the part I think you're all very familiar with; I'm going to tell you what you already believe anyway. Promoting an extension to a top-level attribute is a breaking change for both the binary and the JSON formats. For example, if samplerate is widely used and then promoted to be a top-level attribute, then to be backwards compatible consumers need to accept 1.0 events where samplerate is still in the extensions bag, and then, to support 2.0 events, consumers have to start looking for samplerate as a top-level property.
B: So if we use extensibility via top-level properties, then for JSON-only implementations this is fine as long as extensions are uniquely named, and the promotion path is absolutely seamless: event consumers will see no change between the attribute being an extension and being a known attribute. But proto-lang implementations can't easily handle arbitrary top-level attributes, so there are two workarounds. The first is to handcraft the proto binary, adding the known attributes as integer-keyed top-level fields and putting unknown properties into a property bag.
B: That requires abandoning the built-in conversion between JSON and proto binary that proto-lang provides, and special-casing it for CloudEvents is going to be an uphill fight for every system where we want to support this, both inside Google and for everything else that's using proto-lang. The second workaround is that event consumers could drop unknown attributes, but that is effectively dropping extensibility, so we don't like that either. So, our conclusions and our requests. Our conclusion: if the extensions bag is removed, the JSON format cannot be expressed in proto-lang, and Google will be unable to avoid fracturing the spec, because we would have to duplicate the JSON format and it would differ slightly; we would break into a format that is compatible with proto-lang and a format that's compatible with the spec. We know that this is a sacrifice for everyone on a JSON-only system: they are giving up the cleaner JSON and the seamless property-promotion path that JSON-only systems would have had. But we think this change will be a good one.
B: If the working group wants to sacrifice support for structured formats, or if the working group has a plan to only make breaking changes, or to increment the major version, when a change breaks JSON-only systems, that's something we would like to know sooner rather than later. So that's it; that's all I have, and we're happy to answer your questions. Okay.
A: So, like we did last week, I think it's important to have an ordered discussion here. If you want to raise your hand to say something, put a "+hand" into the chat and I'll do my best to notice it. The reason I put my hand up first is because you guys mentioned my tool, so if you can stop sharing for a sec, I want to show one thing. Okay.
H: [inaudible]

A: Hang on a second, let me finish what I was going to say first, then you can go next. Let me just find my... I think that's it; you guys can see that, right? Yeah, okay. So, since Thomas was using my tool, and, if I heard him correctly, making a claim that when you upgrade from one version of the spec to the next and add a new optional property that was once an extension, it's a breaking change with no way around that: that's technically not true.
A: In fact, in that repo you can actually see I have an example with a person version 1, and a person version 2 with name and address, with the assumption that, for version 1, address was in "extra", meaning just that extra place where extensions go. The example shows that when you take this JSON and parse it into both a version-1 person and a version-2 person, you can access address from both structures with the exact same line of code, because of the toolkit I'm using, which is the same thing Thomas was using, my little ext thing. And when you run it, you actually get the exact same value coming out of both. So my point here is not to say that there won't ever be any problems.
A: My point was just that it depends on what tooling you're using going forward. I wanted to point that out because I didn't want people to be misled, at least regarding my tooling, into thinking there's this problem that can't be solved, when it can be solved; there's an example in that repo that shows it. So with that, Kathy, I believe you're next in the queue.
H
So
you
mentioned,
you
know
this
extension
and
back
and
do
mean
the
you
know
the
extension
keyword
we
we
used
to
have.
Are
you
referring
to
that?
Yes,
that
is
very
future,
so
so,
okay,
so
yeah
I,
just
want
to
make
sure
you
are
you
opposing
top-level
partner
backs,
is
not
that
extension,
but
you
know
we
proposed
some
attribute,
which
is
of
the
map.
Format
is
yeah.
B
H
D: Cool, and I think I'm next; I'm not sure if I'm supposed to raise my hand to respond to these comments or not. In response to that: if you check the GitHub repo, I do try to be as intellectually honest as possible, and I did actually disclaim this in the footnote Sarah pointed out. The point of the discussion was really about structured formats of any kind, not just proto, and there is a workaround with certain usage patterns that are okay.
D: I personally would not recommend the approach that was just shown; I'd actually recommend using two structs, because with the example in the futures section, anybody who relies on the event's "extras" will invariably be broken. You can escape out into a string-only system; this is how some libraries handle HTTP headers, where you access everything the same way and there are no "unknown" headers, everything is just generically addressable. But I would argue that at that point we're no longer talking about the trade-offs of a structured system, because you've gone back to a map-based system.
I: Then the question is what the tooling is, right? I don't think there's any need for that. What I see is that you're defining a proto document, and you seem to be insisting on using that particular document to also do all of the JSON serialization work that you're doing, and I find that choice rather odd. Since we have a normative specification for how JSON CloudEvents works, it's unclear why that choice would even be made.
I: Basically, if you were not doing positional encoding, but were instead putting everything into a dictionary, which would make it completely equivalent to JSON, what I came up with is effectively an overhead of four bytes per field; and if you squish it down and really make it positional, then you can use, effectively, the capabilities of the wire format that protobuf gives you.
I
You
could
get
to
two
bytes
overhead
per
field,
which
adds
up
to
between
eight
and
sixteen
bytes
and
that's
still
within
the
AAS
16
bytes
padding
range,
which
means,
if
you
sent
this
all
over
TLS
overall,
the
game
that
you
have
by
using
positional
encoding
using
protobuf
might
be
completely
going
away
just
because
you're
using
encryption.
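The arithmetic above can be written down directly; the per-field byte counts (four for a map-style encoding, two for positional proto keys) are the speaker's estimates, not measured values.

```go
package main

import "fmt"

// overhead is the total per-message encoding overhead for a given
// number of fields at a given per-field cost in bytes.
func overhead(fields, bytesPerField int) int {
	return fields * bytesPerField
}

func main() {
	const aesBlock = 16 // AES block size in bytes
	// Over eight fields, positional keys save 8*4 - 8*2 = 16 bytes,
	// which fits inside a single cipher block, so TLS padding can
	// absorb the entire difference.
	saved := overhead(8, 4) - overhead(8, 2)
	fmt.Println(saved, saved <= aesBlock)
}
```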
I
So
for
me
the
question
is:
why
are
we
going
through
all
these
all
these
exercises
if
we
can
take
the
key
value
pair
model,
naps,
arrays
and
values
model
that
jason
has,
and
it's
also
used
by
several
other
binary
encoding
message:
pack
XE.
If
you
take
the
the
XML
version
or
beasts
on
and
and
rather
for
the
rather
particular
case
of
a
constrained
type
model
that
we
have,
and
you
know
hardwired
type
model
as
it
has
as
proto.
Has
it
and
also
has
surfed
avid
has
it?
G: Yes, so, you know, the presentation talked a lot about versioning, but my main question is that it didn't really address the difference between optional and required fields. We had felt that optional fields would only require a minor release, as opposed to required fields, which would require a major release, and it seemed like the presentation was really just talking about major releases and required fields.
D: Sorry, I missed it; you had asked it twice. Just to say: we're not changing our stance at all about adding brand-new, from-whole-cloth optional fields; those should be acceptable as a minor revision. What I would say is that if, for example, we added something that is already in the well-known extensions list, we are quite aware that this thing exists and that there is usage; that's ratifying an extension, not just adding a new property we've never seen before, and ratifying an extension is a breaking change. That was my suggestion.
A: Doug here. Okay, speaking now strictly from my own point of view, not as someone who's trying to run the meeting: I did want to respond to what Thomas was saying about my tool. I think you were implying that my tool flattens everything into key-value pairs and that you're not able to use structured processing. That's actually not true: it only does that for properties that are not well-known to the structure itself.
A
Those
you
can
still
always
access
by
direct
reference
to
the
structured
elements,
but
I
guess
my
question
I've
to
have
two
questions
for
you
guys
one
is
and
the
stuff
that
you
guys
posted
yesterday.
It
made
it
sound
like
you
guys
were
saying.
The
existing
Jason
that
we
have
in
our
spec
is
is
not
compatible
with
proto
and
that
you
guys
are
going
to
be
requesting
changes
to
our
existing
Jason
and
I'd.
Like
clarification,
whether
that's
true
or
not-
and
my
second
question
is
a
little
bit
of
question
a
little
bit
a
statement.
A: This notion of ratifying extensions is something I think we need to get clarity on, because the spec, by definition, does not know about extensions, period. Nor do we know about all the possible extensions that are out there. So the notion that creating a brand-new optional top-level property is an act different from ratifying an extension is just false: they're both the exact same thing, because the spec doesn't know about extensions to the spec.
A
Everything
is
a
brand
new
thing
and
we
can
never
know
for
sure
what
is
being
used
as
an
extension
someplace,
so
to
say
that
we're
going
to
add
something
brand-new
to
the
spec
and
therefore
would
have
to
worry
about
an
existing.
That
existing
thing
to
use.
Etics
is
an
extension.
Just
is
not
true.
Everything
could
technically
be
used
as
an
extension,
so
I'm
not
sure
we
can
differentiate
between
those
two.
So
I
like
to
get
an
answer
to
my
your
little
question,
though.
Well,
our
Jason
need
to
change
according
to
your
guys,
request.
Okay,.
D: We currently got around that by saying: if it's JSON-like, we call it "data", and then we had to add something like "bytes data" for raw bytes. Those are the two places I've seen it. Effectively, the things we can't do in a proto-compatible JSON are to distinguish between a zero value and an empty value, and to have multiple indeterminate top-level fields outside a named hierarchy. And if it's just the working group that releases new versions, then version 1.0 and version 1.1 are forwards and backwards compatible, as Rachel covered.
D: Yeah, I have been trying to take the JSON spec that the committee has come up with, treat it as gospel, and reverse-engineer it, reporting back issues. Obviously my life would be easier if we stayed within those constraints; I don't know that doing that just because it makes people's lives easier is appropriate for this group, but I think to some extent it may be appropriate to ask why those decisions were made by another group, and whether there are lessons we can learn from them. Okay.
I: No, let me be very clear, right: we have principles for what's eligible and what's not in this working group. Protobuf, your entire stack, might be used fairly widely, but your stack is proprietary. It is not under the umbrella of an open-source foundation, and you can change it; you control it as much as you like, and Google does change it as it likes: you went from proto2 to proto3, and you basically control the destiny of that stack without anybody else influencing it. So, as such, it is entirely proprietary.
I: There's no moral argument here; I'm making an argument about the principles of how open standards are created. What you're trying to do right now is to say: we're Google, we're great and big, we have this stack, and now this group needs to bend its formats to the will of the particular artifact that we're using, without even looking at the reality of the wire and focusing on what you really need to bring to the table, which is your binary format.
I: The JSON format that we have here is 100% orthogonal to your tooling, because you can use, like everybody else, just another library for doing JSON parsing; nobody is forcing you to use your own library. Your library is really good at creating the binary format, but if your library is not good at creating the JSON format, well, then, that's that. I don't see why the output of this working group should bend to the needs of a particular library like protobuf. Okay.
A: If it came across that way, that's definitely unintended. My point was that a distinction was being made between us adding a new property in version 1.1 that is completely brand-new, versus it being a known extension that's being promoted; and my point was that, from a spec perspective, there is no differentiation. The spec knows about no extensions, period. All we can do is add new optional properties in version 1.1; whether something is already being used or not is unknown to us, and it doesn't influence anything, in my mind.
F
I
mean
in
terms
of
I
mean
you
might
use
a
implementation
in
order
to
make
a
case
for
it
being
promoted
in
the
spec
right
may
be
ratified
in
the
spec.
But
yes,
it
shouldn't
influence
our
decision.
However,
from
a
technical
perspective,
I
mean
I.
Think
this
whole
conversation
is
to
talk
about.
Well,
we
expect
that
people
will
be
using
the
extensions
in
order
to
try
stuff
out
and,
and
then
we
expect
that
it
will
be
common
that
stuff
that
people
have
used
and
have
implementations
in
the
wild.
A: Okay, thank you; I believe I'm next. The net effect, though, of trying to make the distinction that was being proposed here, that no extension is ever being promoted, is that you basically have to treat every single new property that gets added to our spec (can someone go on mute? I hear typing) as a known extension, because obviously someone has probably been using it, otherwise we wouldn't be adding it; and then that's going to be a major bump according to what you're asking for. So any change to our spec that adds new properties will result in a major version bump, regardless of this PR and what you're suggesting now. The other thing, and the real reason I raised my hand, is that I'm a little confused about why this specification is so problematic, because there are a ton of other specs out there with JSON serializations that allow properties to appear at the top level, and I've never heard of one that tries to ban new properties, or forces people to put things in certain buckets, or says you can't use a binary format with it. I've never come across another spec like that, though maybe my scope is limited, so I'd really like to understand it.
D: I believe I'm after you... okay, go for it. Oh, and to be clear: I am NOT trying to put my foot down and say that our JSON must be dictated by proto. I'm trying to raise awareness of what the tooling can do. I think we should have a goal of considering the practicality of implementing certain things; certain features of a language or a spec are easier to implement than others, and JSON is no exception.
D: The other thing is that this effectively puts a choice about Google's roadmap into the committee's hands. When we eventually try to add support for gRPC, having published a proto file in open source will de facto imply a JSON spec, and we are stuck between a rock and a hard place: we cannot prohibit someone from taking a proto and using the generated JSON version, and at the same time we can't, without help, reconcile the two. So we're trying to put it in the committee's hands: which do you prefer? And either answer is absolutely okay.
I: I would prefer... so, this basically makes a point similar to the one in your POC: we can follow the words of the wise Sam Ruby, expect more, and have a specification that allows extra stuff. Also, as Tim just pointed out, JSON and XML have been super successful with that, and it's very possible to make binary formats use that model too. My prototype, and I don't want to spend much time on it, I pasted the link, basically illustrates that you can have an extensions bag in a binary format if you need it, and still make that compatible with having no extensions bag in the core spec, letting the text formats, including the binary ones, work without an extensions bag. Not having an extensions bag actually allows for better extensibility, as proven in HTTP, as proven in JSON, as proven in XML.
B: I can take that. So, if we define a message format in proto, then it will let us take an incoming JSON-format CloudEvent, handle it easily, and then convert it to proto binary. And this is not just an internal need; this is something we think is broadly useful. Does that capture what you're looking for?

E: Yes, thank you.
D: So the problem is not that it is impossible to create a correct library; it's that the intuitive thing many reasonable developers would have done puts them in a trap. If you look at the repo, there's a git commit from every author; look at the persona labeled in each one, and it's really hard to say who was the bad actor, or who's the junior developer who made the mistake. I'm trying, as I say, to raise awareness.
D: Awareness of the fact that a good tool should be easy to use and hard to misuse. A good spec should be easy to implement and hard to implement badly, and I think we can do better as far as making sure we have a spec with fewer rough edges around library implementation.
A: Okay, I think we're basically done, and we're almost out of time, so here's what I'd like to do: we're going to kick off the vote right now. I'm going to go through the list of companies with voting rights and ask if they want to vote right now. Basically, you're voting yes on the PR or no on the PR; this is the PR that removes the extensions bag.
A: You can choose to abstain. If you are either not on the call, obviously, or you would choose to defer your vote until later, just say so; that's fine. People who cannot or don't want to vote now have until next week's phone call, noon Eastern time, when the vote ends. But for people who want to vote right now on this call, I'd like to go through the roll call. Okay, so Adobe: is anybody here from Adobe?
E: Yes, I'm here, Doug, and I vote yes.

A: Okay.
F: You're also saying that people could vote asynchronously if they want to? What happens then?
A: Yes; this is just PR 277. I suggest you don't vote now unless you've read 277 and decided, because I don't want you to feel rushed, so I would suggest that you take time to review the PR. Is that okay? You have until noon next week to vote. Okay. IBM is going to vote yes. All right. Chris from the JS Foundation? Okay. Microsoft?
A: Okay, so just to be clear: you have until noon Eastern next Thursday to vote. What I'm going to do is take the voting that's here right now and put it into a comment inside the PR for these, what is it, five companies, and then you can vote by putting an LGTM or a not-LGTM into the PR itself, and we will close it down next Thursday at noon Eastern. Does that sound okay to everybody?
A: You don't need a link to this; it's in the minutes already, near the top, in the attendance tracker. But what I'm going to do is take the current voting and stick it into the PR itself. Yes, okay. So with that, we're almost out of time; we obviously won't have time to hit another issue. Let me go ahead and run through the roll call again; we actually had quite a few people join, larger than normal.
A: Johnboll, are you there? I see you're typing; thank you. John Paul Auster, are you there? John Mitchell? Okay. John Chiang? Yes, thank you. David Lyle? Yes, I'm here. Josh Sherman, are you there? Okay. Matt Rakowski? Here. Thank you. Klaus, are you there? Okay. Kathy I heard. Erica? What about Rocky?
A: Renato, are you there? Hey. What about Yang Jung Lee? Thank you. What about Luciano? Luciano? And there's someone on the call as "Chris's iPhone", I think; Chris from Huawei, are you there? Okay. Is there anybody on the call that I missed from the attendance? Oh, I'm sorry, Doug; there you go, thank you, Doug, I've got you. Okay, anybody else missing from attendance? Okay. We have a whole two minutes left; is there any really quick topic people would like to bring up?
A: Okay, in that case, I have one thing. Kathy, just a reminder: you have an AI to open a PR to upload the workflow doc to this working group's repo. "Oh yes, I'll do that." Okay, cool, thank you; just wondering when you might do that. All right, in that case we're done. Thank you guys very much; a very exciting call. Thanks, guys. Okay, bye.