From YouTube: CNCF Serverless Working Group 2020-11-12
B: Although today, depending on whether you like storms or not, we're getting a really, really good storm out here, tons of rain and stuff.
B: Hey Ginger. Hey Christian, hello. Hey Klaus. Hey Doug, hello. Doo, doo, doo.
B: That is true. Yes, okay, let's see. Fabian, are you there? Hello, hello. Somebody else? I'm flying by. Kristoff, are you there? Yes, hello, hello. And Mate, are you there?
B: We actually have a relatively light agenda today, so if you guys have anything you want to talk about for the SDK call, I know there's at least one item that was added recently, but go ahead and add some more. If you want, maybe we'll talk about the discovery stuff too, if we have time. All right. Let's see, did I get everybody? Ginger?
B: All right, moving forward. So, KubeCon next week. I believe, last time I checked, none of the serverless stuff overlapped with our call next week. However, if people want to attend KubeCon and there are sessions, you know, during this time, we can obviously cancel next week. So, a question for the folks, for everybody on the call: should we cancel next week or not?
B: Oh no, this is good, although that is an invitation for me to pick on you. So there you go.
B: Okay, so why don't we keep it on for next week, and if for some reason we only get five people showing up because everybody's busy with KubeCon, then we'll not do anything official in terms of voting or anything; we'll just talk about other stuff. So we'll base it upon how many people we get. Okay.
B: Okay, we'll worry about that later. All right, cool. Okay, so, office hours. Thank you to Clemens, Scott, and Klaus for agreeing to do office hours. I know that not everybody agreed to do both times, but I didn't notice anything on the form that I filled out to say who's going to do which time, so you may get an invite for both sessions, which is fine.
B: If you don't, if you only show up for one, that's okay; we do have some people who signed up for both, so just sign up for, or show up for, whichever one you agreed to. Anything else related to KubeCon that people think we need to talk about? I don't think there is; I think we're all set up. But can anybody think of anything?
B: All right. In that case, on the discovery interop: since we didn't have the SDK call, we skipped that one. But for the discovery interop, remember what we talked about last week; I think most people are still just trying to find time to actually do the coding. I know Remy has his endpoint up, so maybe we can pick on him to just do a demo later, when we get to that section of the agenda. But are there any topics you want to bring up with the broader group?
B: Okay. In that case, just a reminder again: we'll have the SDK call right after this one. Tihomir, anything related to workflow you want to bring up?
L: Hey, hi everybody. Yeah, yesterday we released the 0.5 version. It was a big release, about a year's worth of work, so that was a big thing. We wanted to release it before KubeCon, and we also released the Java and the Go SDKs and the VS Code plugin. So we kind of did a big-bang thing. It's very exciting! Congratulations!
L: I wanted to ask you guys: I was looking at our website, and a lot, more than 50%, of the traffic comes from cloudevents.io. Thank you for putting those links there. And I was wondering if it's possible, maybe we could put just some text anywhere saying, hey, Serverless Workflow released a new version. That would bring a lot more views.
B: I don't personally have a big problem with it. I mean, it's not directly related to CloudEvents, but because we don't actually put things up very often on our web page in terms of announcements, I think if we can phrase it as: hey, the Serverless Workflow spec was released, and hey, by the way, they use CloudEvents; you know, here's a perfect example.
B: Okay. If you and I want to work offline, or if you just want to go for it and open up a PR against the web repo, you know, we can work on it later.
B: All right, cool. In that case, any PRs or issues people want to add to the list before we jump into them?
H: I just noticed there was a new API operation added, and there's some logic described for the consumers to fetch the schema, either with the new operation or using the dataschema attribute, and I just wanted to propose some ideas, some options, for trying to simplify that and just use one of the operations.
D: Let me try to explain. That's the one thing I would want to explain on the call, if possible: the rationale behind this. Because, you know, this is written in formal language, and so there's some extra explanation needed.
D: So I hope this helps. The scenario I'm trying to chase here is one that is effectively peer-to-peer replication along a flow path of messages, where you have multiple... think of the following.
D: If you think of the following scenario, to make this clearer: you have several Kafka clusters. I'll just use Kafka as an example because we have a binding for it, but you can think of any message brokers; and really I'm not thinking about the brokers, I'm really talking about the topics. You have multiple of these in various regions of the world, and there you do local processing of those local events.
D: And now you want to go and consolidate all those events into a single location, because you want to analyze a global view. Which means, say you have three locations, and you have local Kafka clusters there, and you're pushing the data and then doing local analytics. But then you also have one global Kafka cluster that you replicate all the messages, all the events, into for global analytics. What you will do, then, is, along that flow path...
D: As you're setting up that replication, the replication for the data, you will also set up replication for the schemas. So now you have three different areas of your application, which might also differ because of local differences, etc., and which are effectively three different domains of authority, if you will, where you might also have different publishers. And if you go and consolidate those into a single location, into a single schema registry, then you obviously need to have a way to disambiguate those schemas.
D: But the events that you publish in the original topics will obviously have to have a unique identifier for that schema, one which is, you know, ideally unique in the world.
D: The idea is that you have a unique identifier that you can go and put into the dataschema metadata element of your CloudEvent, and then that lookup function is really meant so that you can walk up to whatever your local schema registry is, the one that is configured for your consumer, which may be at the end of this chain.
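What D describes can be sketched roughly as follows. This is a hypothetical illustration in Python, not code from any CloudEvents SDK; the registry URI, cache contents, and event are all invented.

```python
# Hypothetical sketch of the lookup described above: the consumer treats the
# event's `dataschema` URI as an opaque, globally unique key into whatever
# local schema store it is configured with, with no need to know which
# schema group the schema was replicated into.

# A local replica of schemas, keyed by schema URI (contents invented).
LOCAL_SCHEMA_CACHE = {
    "https://registry.example.com/s/42": {
        "type": "object",
        "properties": {"id": {"type": "string"}},
    },
}

def resolve_schema(event: dict) -> dict:
    """Look up the schema document for a CloudEvent via its dataschema URI."""
    uri = event["dataschema"]
    try:
        return LOCAL_SCHEMA_CACHE[uri]
    except KeyError:
        raise LookupError(f"schema {uri!r} not replicated locally") from None

event = {
    "specversion": "1.0",
    "type": "com.example.reading",
    "source": "/sensors/eu-1",
    "id": "a1",
    "dataschema": "https://registry.example.com/s/42",
    "data": {"id": "sensor-7"},
}

schema = resolve_schema(event)
```

The point of the shortcut is visible here: the consumer never names a schema group; the URI alone is enough.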
D: They have this event, and that event has this URI as an identifier for the schema, and then you can go and grab into your local cache of schemas and obtain that schema directly, without having to know what schema group it belongs in, because it might have been replicated into a particular schema group for reasons of access control, and without necessarily knowing what the history of that schema is, because none of those things are interesting at that point. What's really interesting...
D: ...is that you get hold of that document so that you can go and serialize your data. So what I'm trying to do is effectively create a shortcut, if you will. There's this well-organized way, schema group and schema and schema version, for how you manage those schemas and how you can go and organize them.
D: But then we kind of need to have a shortcut, if you will, into that structure, to quickly grab that one schema URI, and then we also need to have a way to make those URIs effectively unique so that we can go and replicate them across these flow paths. I hope that makes it a little bit clearer what I'm trying to do here.
D: This is one of those things where I'm using a URI as a global identifier, and where, arguably, this http prefix is confusing. But I need to use a URI scheme that is well defined, and unfortunately we don't have one that we can use that isn't tied down to a particular wire protocol. So that's why I'm using those.
D: It's literally the ID of that schema that is global, and that also has a scoping function, so that within your local schema registry, if you don't ever participate in any of these complex replication scenarios, your world should be simple. But as soon as you participate in one of those scenarios, then you should be able to just pick a name, which is your authority, and then participate in all those replication schemes. That's what the goal is.
H: Yeah, yeah, okay, I understood. Actually, it makes more sense to me now. What I proposed is actually splitting that URI into the concepts it defines and using the existing API we had available, maybe modifying the get-schema-version API operation a bit. As I said, splitting the URI of the dataschema into its concepts and using the existing API. But yeah, probably, as you said...
H: It's a shortcut; the main proposal from you is like a shortcut. But maybe one of the most important things, in my opinion, that you should say there is: you may have a replication scenario where you don't replicate the full schema group, but you still have that concrete schema replicated, and because of that you use the shortcut, the URI, to look it up, to fetch it.
D: One of the things you obviously do with schemas, and this is kind of the protobuf and the Avro use case, is that you want to save space. I mean, there are reasons: I want to have structured data, and I want to validate all the structured data. So that's one motivation.
D: The other motivation is simply that you like the fact that protobuf is very compact on the wire. Then what we should try to avoid is having these dataschema URIs which are three miles long; while schema URIs that are three miles long are great, because they're wonderfully legible, they're probably not the ideal thing to include with every single event. So I'm also trying to create an avenue here where you can have an extremely terse URI...
D: ...that then just has a unique identifier. That's why I'm mandating this.
D: This schema version identifier is effectively something that is unique and can be really terse. If you're running a registry, I'm imagining, because of the way we've created this, that you can just have a counter as your identifier seed, and then you can go and prefix that with a URI prefix, and then you can end up with, you know, a URI that's probably 10 characters long, and still have this structure that I have here.
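As a rough sketch of that counter-as-seed idea: the base URI and the tiny registry API below are made up for illustration, not taken from any schema registry spec.

```python
# Hypothetical registry that mints terse, globally unique schema URIs from a
# per-registry counter, as sketched in the discussion: prefix + counter gives
# a short, stable identifier to stamp into `dataschema`.
import itertools

class SchemaRegistry:
    def __init__(self, base_uri: str):
        self.base_uri = base_uri.rstrip("/")
        self._counter = itertools.count(1)  # the "identifier seed"
        self.schemas = {}

    def register(self, schema: dict) -> str:
        """Store a schema and return its terse URI."""
        uri = f"{self.base_uri}/{next(self._counter)}"
        self.schemas[uri] = schema
        return uri

reg = SchemaRegistry("https://r.example/s")
uri_a = reg.register({"type": "string"})
uri_b = reg.register({"type": "integer"})
```

Because the prefix names the authority, two registries with different prefixes can replicate each other's schemas without their counters ever colliding.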
B: Okay. In that case, why don't we do this: Klaus did want to talk about this issue right here, so, if you guys don't mind, what I'd like to do is switch the order slightly, because I think Jem wants to talk about this one as well. So, Klaus, why don't you go ahead.
J: Yes. So, it was also brought up by some of my colleagues and me. There was this change about the null values in the JSON format, and, strictly speaking, it is not totally compatible: if you had an SDK that was reacting with an error on null values before, as it wasn't clearly specified, now it would be against the specification to raise an error if there is a null value in the JSON format. But that's just one example.
J: Yeah, so that's why I'm asking here for ideas regarding this issue.
B: Hold that one for a second. Okay, okay. I think the more interesting question at this point is: how are we going to handle, as you said, different specs being at different versions? And you kind of implied that maybe that means we should have different repos for each. I know when we talked about that in the past, people said we're not quite ready yet for a separate repo for each spec, and so that's why we kept everything in one repo.
B: Would this problem be solved if we just make it perfectly clear that each spec can have its own version number, and that just because something is related to CloudEvents 1.0 does not mean that that spec itself automatically gets 1.0? So, for example, for the spec that Jem was working on, the protobuf one: I personally think it was a mistake to label it as 1.0, because I don't think it's had time to gel and prove itself worthy of 1.0.
M: It's interesting, yeah, because I think when I was doing that spec, in my mind I was thinking: oh, this is just another representation of 1.0. I didn't perceive that we would sort of have what were potentially breaking changes in a major version. So I guess the question really is, you know, those clarifications or other changes that were made around nulls: did we consider those to be breaking changes?
M: Because I think if we did, then there's an argument that we went wrong somewhere.
B: No, I think, at least from my point of view, the concern I had with the protobuf spec was that it's a brand-new spec, and, granted, I'm sure you guys did wonderful work, but I'm not sure we necessarily have proof that it's been tested thoroughly, right? And so that made me uncomfortable that we labeled that particular spec 1.0, because what if tomorrow we find a major change we have to make, right? Does that mean we have to introduce a 2.0, because it's a breaking change for that one spec?
B: Yeah, I think the other specs, like discovery and subscription, might be a little bit easier to argue that they can have separate version numbers, because they're less linked. But to have protobuf be a 2.0 while CloudEvents is at 1.0? That might look really weird.
M: But I mean, you've got the same issue with SDKs. Your SDKs are going to evolve in some way, and those evolve independently of the spec; yeah, they support a particular version of the spec.
M: I guess now the problem becomes: if you version the transports and the formats independently, you've got another level of complexity in the SDK.
M: Excuse me, in the SDK versioning, because now those authors have to say: well, I know version five of my SDK supports these versions of the CloudEvents binding specs and these versions of the CloudEvents event formats. You've got another level of complexity there. I'm not saying it's insurmountable, but I think it may become difficult for the population outside this group to understand what's going on.
M: I don't know if we went through that with Avro, yeah. But as you do more, and Clemens has always threatened to do CBOR, or to try and persuade somebody else to do it, I should say, or XML, even...
B: Well, let me ask you this, at least for the protobuf spec: would we have avoided this issue if we didn't label it 1.0 immediately, and we said, okay, we think it's done, but because it needs time to be tested and to gel, we're going to call it 0.9 and wait six months or whatever, pick some period of time, and then, if there are no issues found with it, we can raise it to 1.0?
M: And that's a fair point. I mean, again, did we do that with the other formats?
M: You know, I couldn't put my hand on my heart and say: yeah, we have enough experience, well, me personally, to sort of say, oh, the protobuf stuff works. And I'm not sure what the mechanism is to garner that sort of feedback, you know, to elevate this stuff from a suggestion to a specification.
M: Well, I mean, maybe this group, you know, if there are people in the wider CloudEvents area that have, you know, picked up that spec and can speak to whether it's working for them. I don't know if any of the SDK teams have tried to support it as a format, so I think it'd be good to get a bit of feedback. The interesting thing is, you know, with protobuf, people will generally say: well, we always try and make stuff backwards compatible when we change. So what does that...
M: What does that mean, yeah? So I would have to think about that a bit more; I'm not sure what the implication is. Maybe that's the bottom line, yeah.
B: Yeah, so to me the implication here is actually very minor, in the sense that I don't think it changes what people can code to. I think what it does is give us the freedom to acknowledge that we made a mistake, and it gives us the freedom to change it, because once we give it the official 1.0 label, we can't change anything.
K: We have to think about, one, how we synchronize version numbers across different specs, or aspects of the spec, and, two, the version numbers having some implicit meaning, like 1.0 meaning that it's ready. And I wonder if there's another thing that we can add: so there's the main spec, which I think should iterate over the major version numbers, and the subspecs, like the protocol specifications, could follow that version number, the main spec version number, but have some annotation, like whether it's experimental or deprecated or, you know, solid.
B: We used to do things like have the version number with a "-wip" suffix to imply that it's not quite ready yet, or, you know, release-candidate kinds of things. Maybe that's what we need to do with protobuf: to make it, as you said, really clear that this is for 1.0 but it's not quite ready yet. Whether we call it "work in progress", which may not be official enough, or maybe "release candidate" or something like that, and let it gel for a while. Do you think that would help?
B: Okay, yeah, I like that, because it's almost the same as calling it a 0.9, but it gives it a little more formality, and, as you said, it links it with the specific version of the main spec, because if we did have, for example, a 2.0 and a 1.0 of CloudEvents, you'd need to know which one it's for. So I like that idea.
B: Just taking some notes here, okay. What do other people think about that idea for dealing with one of the three problems, meaning the problem of a spec being linked to a particular version of CloudEvents but needing to go through a testing period before we can actually fully claim that it's 1.0-ready? We're talking about giving it some sort of postfix or suffix, like "work in progress" or "release candidate", something along those lines; we can figure out the exact word or acronym later.
J: Going back over it: it's just that, technically, we have created that 1.0 branch, but we have never cherry-picked or merged anything we did on master into that branch.
D: I'm worried about backward wire compatibility with the existing code. So the question for me is: if we change the version numbers, do we actually have to change the spec version?
D: If that is so, then, if we have mostly inconsequential changes, from a wire-compatibility perspective, for the core spec, I would prefer that we find a different way to version those things and call them, kind of, you know, errata or whatever. I mean, HTTP/1.1 has gone through even two completely new RFCs while keeping its on-the-wire version number stable, and that's something that I would certainly prefer.
J: That's a good point. I mean, just for those edge cases, making that big step now is probably a bit too much, yeah.
J: To give one specific example: we have an implementation of this REST API that accepts CloudEvents, and so far it actually reacted with an error message when you add a null value in the JSON format. With this change, that wouldn't be spec-compliant anymore.
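The compatibility problem being described can be shown with a minimal sketch. Neither function below is from any SDK; they just contrast the old strict behavior with the clarified lenient reading, on an invented event.

```python
# Two toy parsers for a CloudEvent in JSON format: the strict one rejects
# null-valued attributes (the implementation's old behavior), while the
# lenient one treats a null attribute as if it were absent, which is what
# the clarified JSON-format text requires.
import json

def parse_strict(raw: str) -> dict:
    event = json.loads(raw)
    for key, value in event.items():
        if value is None:
            raise ValueError(f"null value for attribute {key!r}")
    return event

def parse_lenient(raw: str) -> dict:
    event = json.loads(raw)
    # Null is equivalent to omitting the attribute entirely.
    return {k: v for k, v in event.items() if v is not None}

raw = '{"specversion": "1.0", "type": "t", "source": "/s", "id": "1", "subject": null}'
lenient = parse_lenient(raw)
```

The same payload is an error under one reading and a valid event under the other, which is exactly why the change feels breaking to an SDK that shipped the strict behavior.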
L: You could support both. In JSON Schema you have the oneOf thing that you can define. To say "this is a breaking change", you could for a while support both, kind of like in Java, where you have a deprecated annotation, saying yeah...
B: I'm not sure that helps us with respect to interoperability, though, right? Because if you say you can do either one, then you run the risk of half the world not supporting it, and therefore they're not interoperable. Whereas if we make a concrete statement that says, look, we've made a mistake, you're going to have to change your code, at least then people understand that, by doing so, they will at least be guaranteed interoperability.
O: I think it's fine to make that statement, honestly. I think it's fine to make this kind of statement saying we were wrong and we fixed it. I don't see anything wrong with that, honestly. While, on the other end, I think there is a lot of overhead for everybody in bumping a new version only for that.
J: Yeah, so I already agreed that changing the spec version here would really not be adequate, as it would cause a lot of work. So yeah, maybe, if we count it as a bug, maybe we do some bug-fix release then, a 1.0.1; I mean, then we wouldn't change the spec version, as you suggested.
B: Well, okay, what if we did this in addition to that: what if we also send out an email to the mailing list saying what we're going to do here, and see what kind of feedback we get? If we don't hear anything, then that's great; but maybe, you know, maybe we're going to really piss off our community if we do this, and I'd like to hear about it in advance before we do it.
B: One, I'll talk to Jem to see about doing this for the protobuf spec, using that as a guinea pig. We'll look at adding some text somewhere that talks about how the version string in the spec will only be the major.minor version, not the patch version number, and how we're never going to increment the minor version number, so it's always going to be zero. And I'll send an email to the mailing list saying that 713 is a bug.
O: I think I can add something about the fact that we have a 1.0 branch, so the 1.0 spec points to that branch, but we never really cherry-pick bug fixes. So when, for example, in the SDKs we say we implement 1.0, in fact we are implementing master. That's something that came to my mind just now. Like, for example, all the fixes we did a bunch of months ago for the Kafka binding: we call it 1.0, but in fact we are implementing master. So does that make sense?
O: So when we implement a feature in the SDKs, at least that's what I do in all the SDKs I work on, I look at master; I don't look at the 1.0 branch. So when we fix something in master, we are kind of implementing master; we are not implementing the 1.0 branch.
O: So, for example, you probably recall that I found a bug, something that was not really clear in the Kafka spec, a bunch of months ago, and when we fixed it in the spec, we fixed it in the CloudEvents SDKs too, for the 1.0 version. But in fact we fixed it against the master spec, not against the 1.0 spec, because the 1.0 spec was never changing.
R: Should we even call them 1.0, though? Because officially we are past 1.0, so it should even be called, let's say, 1.1 release candidate, because 1.0 has been out for a year now, right? And the changes which go into master are technically not the stable version; they are development versions, still under development, so we should probably give them a different name in the specification, or even in the SDKs, like "experimental feature", I don't know.
R: A 1.0.1 still goes as an official patch release off the branch; it's still not a development release. So if we are following semantic versioning, that means 1.0.1 is a patch on top of 1.0; it's still not a development version, so it doesn't give you the flexibility to introduce breaking changes into your master branch anymore.
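In semantic-versioning terms, the distinction being drawn looks like this. The helper is a toy illustration, not anything from the spec repo.

```python
# Toy semver bump: only a major bump licenses breaking changes; a patch
# release (1.0 -> 1.0.1) is still an official, wire-compatible fix, not a
# development sandbox.
def bump(version: str, part: str) -> str:
    # Pad "1.0" out to major.minor.patch before bumping.
    major, minor, patch = (list(map(int, version.split("."))) + [0, 0])[:3]
    if part == "patch":
        patch += 1                              # bug fix, stays compatible
    elif part == "minor":
        minor, patch = minor + 1, 0             # backwards-compatible feature
    elif part == "major":
        major, minor, patch = major + 1, 0, 0   # breaking change allowed
    else:
        raise ValueError(f"unknown part {part!r}")
    return f"{major}.{minor}.{patch}"
```

So under this reading, shipping clarifications as 1.0.1 asserts compatibility with 1.0, which is exactly why it cannot double as an "experimental" track.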
B: I guess I'm not following, because my assumption is we have not introduced any breaking changes yet; ignore 713 for a second, right? My assumption is that all the changes we've made are either bug fixes or clarifications or something like that, and there aren't any breaking changes. So, therefore, 2.0 should not really be an option at this point.
R: So my proposal here is basically: should we introduce something like an experimental version, or something like a feature version, where we start getting our hands dirty, and then we release these patch versions, and once we see that these feature versions are stable, then we upstream them to, let's say, I don't know, 1.0, 2.0, 3.0 and so on? So should we have a provision to play around in some specifications like that?
S: If you can get past your audio problems... yes. So, I brought it up in the chat again, sorry for that, yeah. So I agree, basically, with Yoshi, and...
B: Think about it; we don't have to decide on the call here. I think we have at least a little bit of a path forward relative to this issue that you opened, Klaus. Klaus, do you want to send out that email, or do you want me to?
B: Tell you what: I'll take all the actions related to this, okay? Because a lot of it is possibly just some additional text someplace, or reaching out to Jem to see if he's willing to do this on the protobuf one. Okay, Klaus, is there anything else you think we're forgetting to talk about relative to this issue?
B: I think we're going to probably revisit it, right, but for today, do you think there's anything else? No, but I like the discussion; it's good that we talked about it. Yes, it is, and thank you for bringing it up. So, okay. Okay, I don't think we have time to jump onto any of the other topics, but I don't think any of them are major, actually. Anish, that is your stuff, right? Anish? Or was that somebody else?
R: I think it's too late for the primer pull request; I think we can ignore that for now, because it might start a discussion. Okay, but we can definitely talk about the issue that I raised, the second one.
R: Yeah, sorry for being abrupt. I just wanted to bring this point to the forum: I see a property called subscription config in the discovery API which looks somewhat similar to the protocol settings in the subscription API. So it would make sense for us to start talking about the differences between the two, if there are any. And if there are no differences, then it's probably best to take one of them out and stay consistent across the APIs.
T: Yeah, I don't think that's the case, because what happens is: the subscription config is telling you what you are supposed to post in the protocol settings. When you call the subscription API, you need to have that information from the discovery API beforehand. So that's why they are similar: basically, one is describing the other. But I don't think we should drop any.
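A made-up illustration of that relationship, with invented field names rather than the actual discovery or subscription spec schemas: the discovered service's subscription config declares which settings a subscriber must supply, and the subscription request fills them in.

```python
# Hypothetical sketch: the discovery record describes what to post, the
# subscription request actually posts it. All names and values are invented.
discovery_service = {
    "id": "orders",
    "protocol": "HTTP",
    # Each key names a protocol setting; the value says whether the
    # subscriber has to provide it.
    "subscriptionconfig": {"sink": "REQUIRED", "rate": "OPTIONAL"},
}

def build_subscription(service: dict, settings: dict) -> dict:
    """Validate settings against the discovered config, then build the request."""
    cfg = service["subscriptionconfig"]
    missing = [k for k, req in cfg.items() if req == "REQUIRED" and k not in settings]
    unknown = [k for k in settings if k not in cfg]
    if missing or unknown:
        raise ValueError(f"missing={missing} unknown={unknown}")
    return {"protocol": service["protocol"], "protocolsettings": settings}

sub = build_subscription(discovery_service, {"sink": "https://consumer.example/events"})
```

This is the "one describes the other" shape: the two dictionaries look alike precisely because one is the template for the other, which is an argument for keeping both.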
D: That is what we... that is why I said this bucket might make sense when Doug presented the PR.
R: So, from my point of view, there are a lot of things present in this dictionary of subscription config which might be propagated into the protocol settings dictionary down the line, right? So should we segregate these things somehow? Because I'm pretty sure there would be some things in the subscription config which would be propagated to the protocol settings. I just don't see some sort of consistency here, but that's just me, so, probably...
T: I think there is still a lot to define, like, at least from when I did the implementation, or one type of implementation.
B: So, technically, we're out of time, but personally I'd like to work on the PR related to this one, whether it's to remove config or at least explain the difference between config and protocol settings. I'd like to work on a PR related to that, and then, Anish, you can, you know, jump in; or, if you want to take the first pass at it, go for it, and then we can work on it together. I don't care, but I do think we need to... I do agree with your issue.
R: Yeah, I mean, I think we should then discuss it again next week, so that I have more points and can probably find a way, probably find a place, where we can put these differences between the two into the specification. So, yeah.
B: Okay, cool, next... let me just do the quick agenda check, or, I'm sorry, attendee check, and then we'll jump over to the SDK call. So, Manuel, are you still there?
B: Okay. Ludang, are you there? I think... no, I don't see him. Grant, you there? Yeah. All right, cool, okay. Anybody else that I missed?
A: Nice comment for Doug: the SDK call's on the same recording; it's right here, right now. Okay, I'll come back and watch it Saturday. Cool. Thank you, yep, okay.
O: Yeah, so, two announcements. Last week I made the release of SDK Rust 0.3, and by the way, I need to add the links, with a lot of breaking changes. But, you know, we are still in a very, very premature phase, though things are starting to get a little bit more stable, and there are some contributors working on it.
O: So I'm not the only one, fortunately. And in the next release we will get no_std support, so CloudEvents on microcontrollers, and MQTT. As for SDK Java, I did a release this week and I hope to do another one next week, and we finally managed to solve the most outstanding issues, the most important questions in particular.
O: There was this question that took quite some time to solve, which was how we deal with the CloudEvent payload, how we represent the CloudEvent payload inside the interfaces; and that's done, we solved it.
O: Maybe it's not the best solution, but you guys check it out and let me know how it looks. I will try to rush for the final release, for the 2.0 GA, before Christmas, if I manage to, but I definitely need some help with reviews. In particular, I would really, really love to see at least the implementation of protobuf or Avro, or both, because I think they are really, really important. I heard that Jay... the name is Jane, right?
R: Well, thank you, yeah. So, basically, I got a chance to play around with the Java SDK, and that got me thinking that the API models for the SDKs are really different, especially if you're coming from a Golang world, where you use these CloudEvents receivers; you start the cloud receiver. So, basically, the general developer interface for the SDKs is really, really different.
R: When you come from Go and suddenly you start playing around with Java... so I wanted to propose: should we think about a consistent API model for the end users, in order to give a consistent experience when they want to invoke these SDK APIs?
R: So should we even think about it, or, yeah, am I probably thinking too much? Slinky, your hand's up.
O: The languages themselves are so different that even thinking about a model that can work for everybody is just a complete waste of time. So, I contribute to three SDKs: I contribute to sdk-go, to sdk-java, and to sdk-rust, and I can tell you that I never, ever managed to find, even in some basic interfaces, a common pattern that we can use. So, for example, the whole sender/receiver thing that we have in sdk-go works pretty well with Go, because in Go...
O
For
example,
you
have
a
single
way
to
manage
blocking
and
not
blocking
okay,
so
the
semantic
of
blocking
and
non-blocking
it's
straight
inside
the
language
itself.
This
is,
for
example,
not
doable
at
all
with
java,
because
in
java
you
have
10
different
ways
to
manage
sync
and
the
sync
to
manage
streams
to
manage
blocking
and
blocking
so
for
the
kind
of
interface
that
we
have
in
golden
sdk
cannot
work
at
all
in
sdk
in
sdk
java
and
another
example.
Sdk
rust.
O
The receiver model that we have in sdk-go doesn't make sense in some use cases, because maybe I'm integrating with a library like Actix Web, where you already have a very well defined pattern to handle requests and to handle events. So you just need to integrate with it, rather than trying to create your own interface that is consistent across programming languages and across SDKs.
R
Okay, I just wanted to bring up this discussion because it was really, really a world of difference when I switched to the Java SDK. Yeah, but probably, see, there's a reason.
O
The Java SDK has to be improved, for sure, and if you have any concrete topics on improvements to the APIs, I'll be really, really happy to discuss that, that's for sure. I mean, the Golang SDK is just more developed. Okay, that said, again, a consistent API model across all the SDKs just doesn't sound right to me.
T
Yeah, I just agree with Slinky. I think, like, even when you look at TypeScript, it's too different from, like, Go to be able to have something that looks the same.
T
B
It still makes sense; then we can get consistency for those things sort of indirectly, but only because it makes sense for all the languages to do the same thing, not because we're trying to push for consistency. I think we had a conversation like that, but I don't know, maybe I remember it correctly, so that...
R
O
We solve it in four different ways in the four major SDKs we have. In sdk-rust we have a union type, and in this union type we are tied to the JSON library, because it's fine in Rust to tie to that particular library that everybody uses. In Java we didn't want to do that, because everybody wants to use their own library, so we had to create an interface that abstracts over data, and then everybody implements it.
O
Then in sdk-go we just have a byte buffer, but the interface allows you to map directly to the data structure. In C# we just return an object, whatever, an untyped object. In JavaScript, I don't know how we do that. But just to tell you that even the simplest things are hard to make consistent.
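[For context, the data abstractions being compared here could be sketched in a few lines of JavaScript. This is an invented illustration, not the API of any of the SDKs mentioned: the payload is kept as raw bytes, in the spirit of sdk-go's byte buffer, while decoding is left to the caller, in the spirit of the Java SDK's interface that abstracts over data.]

```javascript
class EventData {
  constructor(bytes) {
    // Raw payload bytes, as in the Go SDK's byte-buffer representation.
    this.bytes = bytes;
  }
  toBytes() {
    return this.bytes;
  }
  // Callers that know the payload is JSON decode it themselves, keeping
  // the core type untied to any particular JSON library.
  toJSON() {
    return JSON.parse(Buffer.from(this.bytes).toString("utf8"));
  }
}

const data = new EventData(Buffer.from(JSON.stringify({ hello: "world" })));
console.log(data.toJSON().hello); // -> world
```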
H
L
R
This is something... I would just spawn a parallel conversation with you. Sure.
L
M
B
Nish... Remi, you're up.
T
Yeah, so it's just because I did implement, like, part of the implementation of subscription discovery, and while doing that I noticed that the way we emit messages in the JavaScript SDK was not super pluggable, so I did a PR that I can present.
T
B
T
T
Well, in my opinion, when I do development, I would prefer not to know; as a developer, I could not know at all where I want to send, and just say: okay, I'm gonna emit one event. So basically I see it more like that in my code, and then anyone who wants to do something with this event can just do something like that. So he will say: when there is a new event, I will emit it to the emitter.
T
I don't know who is going to subscribe to my event; I don't know any of those. So I need a way to emit with abstraction of what the other people are doing. So, to implement that, what I came up with: instead of deprecating the full Emitter class as it was, I just use it as a singleton, to be able to listen, with a listener, on those specific events, or with the event emitter, to be able to abstract who I'm sending to.
T
...listen to my events. So that's why I came up with that idea. So if you look, that's probably, like, just mostly a formatting issue, but that's the thing. Let me demonstrate how I envision it.
M
K
Yeah, I mean, I haven't had a chance to look at it yet. I'm wondering, are you using the... oh yeah, there you are. You're extending EventEmitter; you're using the built-in Node stuff. So, in the code sample that you showed, you have, you know, the singleton dot on. Can you show that again? Yeah, yeah.
K
So that's using the Node built-in EventEmitter stuff, right? So any piece of code within that same process could call that same function and provide a different emitter, so a cloud event could be emitted multiple times for a single event?
T
Yeah, and, like, the only question that I raised, then, when I finished my implementation, was: I might remove, like, the official EventEmitter and do the same kind of implementation, but internally, just to be able to... because those functions are probably gonna be asynchronous.
T
And so maybe, when you emit, you want to make sure that it's completely emitted, because otherwise it's going to be in the Node queue and you cannot be 100% sure that the event has been received on the other side. So, depending, we might want to change it to have that ability, that capability.
T
T
T
T
So let's say I add Garfield as well. So now I have, like, a cat service and a pink service, so I can subscribe to both if I want, and then I'll see, like, the events arriving there. So yeah. And what I did, and that's a little bit more out of scope, is like a gateway that is able to, when you subscribe... So by default it has no service, and if I connect the gateway, it will start.
T
So
if
we
agree
on
that,
I
think
I
will
probably
change
it
to
to
be
able
to
have
a
weight
on
the
emit
event
to
make
it
a
promise.
So
you
can
decide
if
you
want
to
wait
for
the
promise
or
not,
and
by
doing
that
I
have
to
implement
my
own
type
of
event.
Emitter,
but
I'll
take
exactly
the
same
methods
name,
if
that's
fine
with
you
and
I
have
a
the
other
pr
is
basically
just
it's
more
simple.
It
was
just
to
add
some
types.
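[The awaitable emitter being proposed could look roughly like this. It is a sketch under the assumption that emit resolves only once every listener has finished; the names mirror the Node EventEmitter method names as discussed, but the class itself is invented for illustration.]

```javascript
class AsyncEmitter {
  constructor() {
    this.listeners = new Map();
  }
  on(name, fn) {
    if (!this.listeners.has(name)) this.listeners.set(name, []);
    this.listeners.get(name).push(fn);
    return this;
  }
  // Unlike the built-in EventEmitter, emit() returns a Promise that
  // resolves after all listeners, including async ones, have completed,
  // so a caller can await delivery instead of leaving work queued.
  async emit(name, payload) {
    const fns = this.listeners.get(name) || [];
    await Promise.all(fns.map((fn) => fn(payload)));
  }
}

async function demo() {
  const emitter = new AsyncEmitter();
  const delivered = [];
  emitter.on("cloudevent", async (e) => delivered.push(e.id));
  await emitter.emit("cloudevent", { id: "abc-123" });
  return delivered;
}

demo().then((d) => console.log(d)); // -> [ 'abc-123' ]
```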
T
V
I guess I had a comment with, like, just sort of with the naming, like with getSingleton.
V
I feel like... I guess I don't really see that naming pattern in Node, in JavaScript, the singleton pattern. Maybe we could have it like...
T
Is it, like, a static? Yeah.
T
Like... so, it's good that you raise that, because, if I remove... like, I had to use the singleton because I was not able to use static, because I wanted to, like, extend the EventEmitter. But if we...
T
...remove the EventEmitter part and we just go with static with promises, which I think is more efficient, we can do emitter.on, remove basically the this, and declare on as a static, and then we will have something cleaner, I think. So I... I want to move, like... I can update the PR to reflect that, if we all agree that it's better, if I...
V
Yeah, I think so. And I guess another thing is: so, with the on event, are there other on... like...
T
T
T
K
One concern I have is having all of the, like, subscription stuff and discovery stuff in the current module. Yeah, one of the things...
T
So, if I may, that's my next point, and basically my goal is not to have them inside. It's just... that's why, like, for me, that's the only modification we need to do in the SDK. And then, after discussing with Grant and you, like, two weeks ago...
T
I think I do agree that we should probably split, and we should have what I would call cloudevents-service, which is basically the discovery implementation, the full discovery implementation and the full subscription implementation, but as a side module that you can either choose to pick or not; it's up to you. But to be able to implement this module efficiently, if we don't have those kinds of things, it's not going to be possible.
K
Right, I guess what I was talking about is less the emitter stuff and more... let me just... the other interfaces that you added in 365. I'm just wondering if... well, I guess those are needed, but...
T
T
K
What you're thinking is that some, you know, secondary module, or some other module that is, like, the, you know, cloudevents-discovery module in JavaScript, has a dependency on this module and just uses these interfaces from this module.
T
Like, if you do the discovery, it's gonna... so the SDK definition. So that's why, in fact, I don't really care if we put them in the SDK or not. For now I put them on the side, here in the definition, but I think it should be in the SDK, so that instead they would come from cloudevents, because then we know the SDK is about exposing the right interface based on the specification.
T
The implementation is always something... that's why you wanted to keep the implementation out of the SDK, which I agree with, and then the service can be opinionated on how to implement. But I don't think we should have... because then the service will be the implementation. So I don't think the service should have the definition of those extra classes.
K
V
I mean, I think there'd be a lot of value in extracting especially, like, some of the discovery implementation from the SDK module, because, for example, receivers don't want to keep updating the SDK if the SDK has a new major version for some discovery feature. Yeah.
T
Yeah, and potentially for security reasons also: you might have more issues with security in discovery, specifically more in the subscription, I guess, but yeah, I do agree with you. So that's why I think I'm gonna split it and just contribute, like, the discovery. What I did here is just... for now it's not clean enough; it was just to make, like, the example. But I want to keep it part of this code, to have cloudevents-services, unless you have another name.
K
K
V
T
K
I mean, I guess the only reason why I feel like it should be that way... well, I mean, I guess if there was a new repo we could have it be, you know, discovery-javascript or something like that, so that we don't take up the namespace of discovery or subscription.
V
I think there's, yeah, a lot of value in having multiple modules in the same repo. I mean, for example, I could even imagine folks that don't want to... yeah, folks that don't want to send cloud events, they just want to receive cloud events: they can just use the receiving module. Folks that only want to send events and never want to receive events: maybe they can use a sender pattern. And then we can split up dependencies, so that dependencies that are used just for one part don't need to be required by everything.
V
O
Okay, so I think this was somehow already discussed in the past, this thing of supporting discovery and subscription in our SDKs. Now, I have mixed feelings about that, because... let's take, for example, sdk-javascript, sdk-go: we already have a lot of modules in the repo and we have all aligned versions, which means that when we ship, for example, sdk-java core, we ship together sdk-java api, we ship together sdk-java jackson, vertx, restful web services, kafka, and all the modules, all together.
O
If we think that there is this different pace... if there isn't any different pace, and we think they will proceed the same, then I don't see an issue. But, for example, if I go GA with sdk-java, like we did with sdk-java v2, then I commit to not breaking APIs, and if we decide that we want to add a new module for discovery, for example, then, when this module is released, I have to commit for this module too, to not break the APIs of this module.
O
T
Like, I think in TypeScript it's easier, because it's pretty well tooled for monorepos, so you can even have separate versioning. That's what I use on most of my projects, so they depend on each other, but you can still have several paces of release.
T
O
Well, maybe we can do it differently for every language, as we always do, and go back to our previous... I mean, we can do that, I have no problems. I mean, for us... I'm aligning versions now, but it's not that much of a problem for the tooling to keep versions unaligned. But, for example, it's a problem for the SDK and it's a problem for Java.
T
Yeah, and I really think, as Grant mentioned, that the discovery and the subscription API will evolve almost, like, at a product pace, whereas the SDK is supposed to evolve more at the spec's pace, which is usually, I think, slower. Because with the discovery you basically have the real world: you really need to implement some stuff, and that means you'll create some security issues, maybe, and you have to fix them, like, more quickly. That's the way I see it.
O
R
And yeah, I mean, for me, like, I think we need to answer that: is it, first of all, even the time that we should incorporate the discovery and the subscription API as part of our SDKs, at all, and all together? Right? Because currently the specification is, like, really, really at its nascent stage, and probably making it part of the SDK would not be a nice idea; otherwise we would end up with, like, these consecutive SDK releases.
R
You know, we would probably overwhelm our SDK releases with every update to the specifications, right? And another concern, which just came out of this discussion, was, like...
R
And it's my personal opinion, of course, but I really believe that the subscription and discovery API should definitely be part of the core CloudEvents SDK, because now, when we ship an SDK, we would kind of be telling people that, okay, this is the implementation for all of our specifications.
R
So I kind of don't like this different-repository thingy; for sure a different module, though.
T
It's just the way... the way you build applications in TypeScript is just to have different modules. So this way, if you don't want to implement the service, like the discovery and the subscription, you can just take the SDK and, like, build with that, and then you can just, like, add the other module if you really want to inherit, like, those new features. But, like, yeah, I think it makes...
T
It makes sense to me to separate, and it's because the implementation of the API can be done through Express, it can be done through a framework. Like, basically, my demo is using a serverless framework I did develop, so it's not exactly the same type of implementation; you're not using the same stuff. And when you implement an OpenAPI, it's basically, like... it's the same in Java: you can do it with Spring, you can do it, like, old school.
T
R
Yeah, but today, if we implement... let's say we introduce a module for the discovery and the subscription API in, let's say, the JavaScript SDK. Do we give a message to the community that, okay, now officially we started supporting these APIs as part of the SDKs? And then they would start expecting it in all the other ones as well, right? So when we release them in one, then, I think, ideally we should release them in all. I don't know if that's the standard process.
K
K
R
I think it's fine, yeah. I was just just hoping that we don't raise expectations in that area: that, okay, suddenly JavaScript comes up with implementations, and then Go and Java don't have one, right? So either that, or we start a workflow where we also introduce these specs into the Go as well as the Java SDKs.
B
R
What I meant was, like: now we have a PR which has one of the implementations for the subscription and the discovery API spec as part of the SDK, so there's a PR which can go in. So officially we might start having support for the subscription and discovery API implementation in the JavaScript SDK, whereas, on the other hand, the Java and Go SDKs have not even thought about it.
Q
B
Keep in mind, I am not a maintainer of any SDK; this is just my personal opinion, which means nothing. I personally thought it was a little bit weird to try to incorporate a CloudEvents SDK with the discovery and subscription stuff, because, to me, while they may use CloudEvents to some extent, I view them as separate projects. But I know in the past I've heard other people say no, it makes perfect sense to merge it.
K
Can I clarify something? Yeah. So I think... if I understand the conversation correctly so far, what we're talking about doing is having implementations of the discovery and subscription APIs in the GitHub repository called sdk-javascript, but each is published as a separate npm module: subscription is probably published as, you know, cloudevents-subscriptions, and discovery is published as cloudevents-discovery, and then the main bit...
K
...that is, the one that implements the CloudEvents specification as well as the HTTP protocol binding and, you know, things like that, is published as its own npm module as well. So they are distinct; they do version independently.
K
And ideally the top-level one, the cloudevents module, the one that really is just the implementation of the spec, would provide interfaces that the subscription and the discovery APIs...
K
...our implementations use, but there's not a dependency in that direction, so that the cloudevents SDK does not depend on the subscription API and does not depend on the discovery API; the discovery API and the subscription API depend on the interfaces defined in the cloudevents SDK.
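[The dependency direction described here can be sketched in a few lines. The module shapes below are stand-ins invented purely for illustration: a core module exposes only spec-level types, and a discovery module consumes them, never the reverse. The real packages and their APIs may differ.]

```javascript
// Stand-in for a core "cloudevents" module: just the spec-level event shape.
const core = {
  CloudEvent: class CloudEvent {
    constructor({ id, type, source }) {
      Object.assign(this, { id, type, source, specversion: "1.0" });
    }
  },
};

// Stand-in for a separate "cloudevents-discovery" module: it depends on
// the core's types, while the core knows nothing about discovery.
const discovery = {
  // Turns a discovered service record into events announcing its types.
  eventsForService(service) {
    return service.types.map(
      (type) => new core.CloudEvent({ id: "1", type, source: service.url })
    );
  },
};

const events = discovery.eventsForService({
  url: "https://example.test/cat-service",
  types: ["com.example.cat.created"],
});
console.log(events[0].type); // -> com.example.cat.created
```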
K
Is that a correct summarization of the discussion so far? Maybe I just embellished a little bit. And, if that's the case, does that seem as weird to you, Doug, or not?
A
B
I mean, if in fact there are separate npms, I think, sure, that lessens the weirdness a little. Not enough for me to not call it weird, but, like I said, my opinion doesn't matter, right? I mean, if you guys think it makes sense to be part of the SDK, go for it. I mean, as long as the code is out there, I think that's the biggest thing. Where it sits, in which repo or stuff like that, that's secondary.
T
T
So that's why I'm pushing more, and I'm happy to see the discovery and the subscription being a little bit more real, and I think it's logical, as a group, to try to push, like, the full solution, to explain how it works and how it can interact together. Because the interaction works only, not only but nicely, I would say, if you have description and subscriptions... discovery and subscription, sorry.
T
I know, like, I didn't get any resistance from anyone for now. I was just trying to... and I still think we need to work on the subscription API a little bit more, so I'm, like, not, like...
R
We discussed this some weeks back: that first we would try to implement it as a part of the interop call, and, if the specification seemed tangible at that moment, we'd start defining the specification in the Golang SDK as well as in the JavaScript one. That's what I can remember.
B
R
Yeah, exactly. So my major concern over here was, like: should we synchronize the timing at which these implementations step up into the corresponding SDKs? Should it be all together, or is it fine that it comes in one of the SDKs first, and then later on for the other ones?
B
Q
O
R
I mean, yeah, at some point of time, if not this SDK, then in a different repository. So we would probably formulate the tasks into the sdk-go issues, I think.
R
Yeah, but it has a dependency, for us, as I mentioned, on the interop. So first we will figure out, as a part of our interop workgroup, whether the specification currently even makes sense as it is right now; so either we have to probably evolve it a bit, or we can use it as it is. We would decide that as part of the interop, and then we would start discussing how we want to incorporate it into the SDKs. So that's the discussion I could remember.
T
For me, it needs to evolve, that's for sure, so it's a work in progress, but I just wanted to put down, like, the base and the implementation, so we can iterate on that and make sure that we are all on the same page. So at no point did I think that this implementation is the final one; I'm sure we're gonna iterate on that.
B
Okay, anything else for the agenda, for the call?
T
I think we could merge this one and then do the Lerna split, because the two PRs are not about discovery; it's just some tweaks in the SDK. And then, yes, we can do the Lerna split, because we'll have to synchronize for that.
T
C
K
Yeah, I mean, I think that they deserve an actual, you know, longer GitHub review, but conceptually, and all that, it makes sense to me to land these as they are and then move towards a monorepo with the discovery and subscription APIs as separate modules.
V
Yep, sounds good, yeah. I think the main concern I have is just, like: for an SDK receiver, having any discovery functionality is not really wanted. So as long as, like, things are split soon, or...
V
Correct, okay, yeah, not yet the emitter. And, I mean, yeah, I mean, interfaces don't really...
T
T
Yeah, that was the discussion. Like, for me it makes sense to be in the main module; if we decide to split it and put it in another module, that's possible also. It's currently the case in my implementation, so it's not going to be the end of the world. The really super important one is the emitter one. I think the emitter one is something that, if we don't do it, basically, I cannot implement subscription.
K
Well, I really like the emitter one, and I think that it brings some real Node-specific kinds of idioms to the SDK, like actually using the Node EventEmitter. That's nice.
T
Don't... don't remember... I'll move to, like, the same interface as the EventEmitter, but asynchronous. So this way we have...
M
K
Right. Do you want to put this on... put that, sorry, yeah, put that PR into WIP, work-in-progress, status until you make those changes?
T
K
T
I'll just update it, but I want to do it, like, today. Same thing: I don't want it to be sitting too long; I don't like it.
V
Yeah, I'll go review it; it looks fine. I think we should definitely try to split the SDK sooner than later, yeah, but this is fine. Yep. Harvey?