From YouTube: CNCF Serverless WG Meeting - 2019-04-18
A: All right, have I missed anybody? I think I might have everybody. All right, let's go ahead and get this show on the road. Let's get started. Any community-related topics people would like to bring up?

A: On that note, okay, yeah. From my point of view, I was gonna spend a good portion of Friday, tomorrow, and this weekend working with the intern on getting the infrastructure for the demo set up, so that hopefully, by our Monday phone call, we should be in a state where everybody who wants to participate can follow simple instructions to figure out what they actually need to code up. We've made some really good progress on the demo flow itself.
A: On the templates: I put together some Google Docs templates for the slides. I know no one's had any time to actually work on the slides themselves, but if you are working on either the intro and deep dive or the serverless working group session, the templates are there, with the links here in the agenda. It's just basically the intro title pages, more than anything else. So Kathy, you and I need to work on the practitioner summit slide deck as well.

A: Let's see, what else? Okay, nothing new on KubeCon China since last time, I don't believe. I still haven't started the slide deck for our yearly review; that's on May 7th. I did ask for some time from the TOC, I believe on next week's phone call, to get some clarity on what they meant by three independent end users. Hopefully that will help us decide whether we want to, or can, go forward with possibly moving to incubation status out of sandbox.

A: If we want to. But I figure we need to at least understand what the requirements are before we actually have that discussion, so hopefully we'll get that next week. All right, before we jump into PRs, any other high-level topics people would like to bring up that I may have forgotten about? All right, cool, moving forward then. Scott, yours is up first; hopefully a relatively easy one.
F: Right, I'd like to talk to this one. People were just finding it difficult to search the text at master, and I noticed there's some conflicting data happening. So I changed the version to 0.3 so that it's clear that the version at master is the future version, not a backported thing.
A: All right, any objection to approving this one? All right, cool. Thank you guys very much, and thank you, Scott, for that; I know that's actually been a confusion point for some people in the past. All right, next one. This one has been out there for a little while now. Kathy, on this one we made some edits based upon some of your suggestions. In particular, I believe I proposed some text, and you were okay with a fair portion of it, but then you wanted that last little bit removed, about ordering, I believe.

A: So I know not everybody's been paying attention to this, so let me give you guys a second. Let me see, yeah, the main parts right here are these two sections. I'll give you guys just a minute to look this over, but I think we have talked about this in the past and we're almost ready to go, and then Kathy had some last-minute questions and concerns, and this is the paragraph that tried to address her concerns more than anything else.

A: I think Clemens had some really good examples he was gonna write up. Am I correct in assuming, though, that we could do those as a follow-on PR? Oh sure, okay. The reason I'm pushing a little is, one, this one's been out there for a while, and I feel a little guilty that we kept them on hold. There were good discussions that had to happen, but I know that this is also a blocker for the Kafka transport.
H: Yeah, sure. So basically in Kafka the client decides the partition, but, unless you're doing something really weird, you use murmur hashing on the key of the message, and the CloudEvents spec has no place for the key of the message, which is an important primitive in Kafka. So you have the key and the message, and the hash is always on the whole key, the binary data of the key.
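Heinz's point, that Kafka derives the partition by hashing the whole binary key, can be sketched roughly like this. This is a minimal illustration only: Kafka's default partitioner actually uses murmur2, for which a different stable hash is substituted here.

```python
import hashlib

def pick_partition(key: bytes, num_partitions: int) -> int:
    # Kafka's default partitioner hashes the raw key bytes (murmur2 in
    # the real client) and takes the result modulo the partition count.
    # SHA-256 stands in here purely for illustration; the shape is the same.
    bucket = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return bucket % num_partitions

# The hash is deterministic: the same key always lands in the same partition.
same = pick_partition(b"order-42", 8) == pick_partition(b"order-42", 8)
```

The important property for the rest of this discussion is the determinism: two messages with byte-identical keys always go to the same partition.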
C: No, but I don't think we need to go into it this early. But again, the hash is created by Kafka under the covers. So, you know, if you send a cloud event and you basically pass that to Kafka, you would still have to manually put a key into the data. That's what they're talking about. That's fine, it's a separate key, but it wouldn't be the key in the key hashing that Kafka uses to put it in a partition. So you could actually use any key; it doesn't have to be specifically a partition key.
C: Well, I think it's redundant, because if I were doing a Kafka binding, which doesn't exist yet, I would use something like a correlation ID in the cloud event message, and then, when I push that into a Kafka binding, I would use the correlation ID as the Kafka key, which would automatically get hashed and put into the correct partition. So this is just a redundant version of perhaps another key, like correlation IDs, that already exists. Well, yes.
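The approach C describes, deriving the Kafka record key from an attribute already present in the event, could look roughly like this. This is a hypothetical sketch: the attribute name `correlationid` and the event shape are assumptions for illustration, not part of any spec discussed here.

```python
import json

def to_kafka_record(event: dict) -> tuple[bytes, bytes]:
    # Hypothetical binding step: use the event's correlation ID as the
    # Kafka record key, so Kafka's partitioner hashes it automatically;
    # the whole event is serialized as the record value.
    key = str(event["correlationid"]).encode("utf-8")
    value = json.dumps(event).encode("utf-8")
    return key, value

key, value = to_kafka_record({"id": "1", "correlationid": "abc-123", "data": {}})
```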
H: What you're talking about is a semantic ID that would be used for partitioning, but the problem is that partitioning is process-defined. So you might have different hops that need to partition based on different correlation IDs. Correlation ID is a very generic field name; it doesn't really imply anything.

H: You could have many of those for an event. The point of the partition key extension here, as I've understood it, is to create the canonical place for that key. So if you knew that you wanted to use correlation ID X as your Kafka key, you would put that here for your hop. But when you pass the event forward, some other consumer or some other process might want to partition it differently, based on another correlation ID.

H: Then they would modify this field, as the extension spec says, for their use case. It is redundant, but it's redundant intentionally, because it makes it unambiguous what you actually want to be the partition key. Otherwise the binding can't know, or it would need to be a specific parameter of the binding or something. All of those options were discussed beforehand.

H: For example, Clemens had the idea that you would use a path to get the Kafka message key from the cloud event, but instead it was decided to use something that already exists, that is, extension attributes. And since extension fields are not set in stone as immutable, this seemed like the proper way to do the partition key: you will most likely want to change it between hops.
H: So Clemens had a great example here. If you take in a huge number of messages, partition them, and give each partition to a downstream consumer, and that downstream consumer then again needs to partition them, and your partition key is immutable, the downstream consumer actually cannot partition them, because the records were already in the same partition, which means their keys hashed to the same bucket.

H: So the downstream consumer has no way of partitioning the set of data that they read from one upstream partition, because all of their keys hash to the same bucket. That is why it's not possible in something like Kafka to have an immutable key, if you actually want to build complex pipelines.
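The failure mode Heinz describes can be shown with a toy sketch (Python, with a stand-in hash instead of Kafka's murmur2): every record that arrived in one partition shares the same key-hash bucket, so a downstream stage that keeps the same key cannot spread the data, while re-keying can.

```python
import hashlib

def bucket(key: str, n: int) -> int:
    # Stand-in for Kafka's deterministic key hashing (murmur2 in reality).
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big") % n

# Upstream: all these events were keyed "customer-1", so they all sat in
# one partition; hashing the same key again yields the same single bucket.
events = [{"key": "customer-1", "order": i} for i in range(4)]
upstream_buckets = {bucket(e["key"], 4) for e in events}

# Downstream: re-keying (here, hypothetically, by order number) lets the
# same events map to more than one bucket again.
downstream_buckets = {bucket(f"order-{e['order']}", 4) for e in events}
```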
H: Not immutable between you and the next hop. They can modify their message when they put it into Kafka again, after they've processed it. The point here is that you wouldn't modify this in place. You would process your stuff and put it into Kafka; the next consumer would process their stuff, and their results would have a different partition key, which would then repartition the resulting data.

H: If you use the same key, and you have the same number of partitions in both topics, your second topic will have the exact same partitioning as your first one. The murmur2 hashing is intentionally deterministic, so what you're describing doesn't actually repartition the data; it just puts it again, in a different topic, in the same partitions.
J: This is a problem we have all the time in event processing. When you want to send events somewhere and they reach very high volume, you have to use sharding, because that's the only known way to handle arbitrarily high volumes. And so the conventional technique we use is not to specify a partition key literally, but to specify a JSON path, or equivalent, saying, for each event, here's how you pull out the partition key. That seems to work really well in practice.
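J's technique, configuring a path into the event instead of a literal key, can be sketched as follows. This is a minimal dotted-path version rather than full JSONPath, and the example event shape is invented for illustration.

```python
from functools import reduce

def extract_partition_key(event: dict, path: str) -> str:
    # "data.customer.id" -> event["data"]["customer"]["id"]
    # The path is configuration; the key is pulled out per event.
    return str(reduce(lambda obj, part: obj[part], path.split("."), event))

event = {"data": {"customer": {"id": "cust-7"}, "amount": 12}}
key = extract_partition_key(event, "data.customer.id")
```

The appeal is that the producer configuration names a field, and the middleware evaluates it per event, so no separate key attribute has to travel with the message.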
C: I know in our messaging products we do the partitioning keys, when we need to partition, completely transparently to the payloads, and that works fine as well; we partition on other parameters, which are matched against the payload and a bunch of other things. But this is very Kafka-specific; I'm saying that with Kafka it would be redundant.

H: And correlation, as I said, that was an idea by Clemens which was disregarded in favor of this extension, which is the same thing in a different form. You would set whatever you wanted as your correlation ID. If you never want to change it, that's fine, or if you want to change it, as you just said, that's fine as well, but this extension is where you would set that key. I don't understand the problem, maybe.

C: It's not a problem; I'm just saying it's redundant, and do you want to keep adding redundant fields, especially if it's targeted towards only Kafka? I believe it can be done without adding this field as well. So I'm just trying to cut back on adding extensions that may be redundant, or serve only one binding.

A: Right, but I think, if I remember correctly, that spec does exist, in the sense that it's a PR, and I think, rather than wanting to create their own extension just for them, they thought this was a generic enough thing that maybe it should be a standalone extension.

C: Well again, it depends on how you spec it out. Even if I wrote the Kafka binding, since I could use the keys as full objects rather than single strings or fields, I would actually do the binding where I would allow them to specify attributes in the header, or it could even be attributes right from the payload, that I would use to create the key before I actually wrote the record to Kafka. So adding a separate field for the partition, I think, is redundant.
B: So what I gathered from the discussion that the other people had was that, for them to be able to put it into a Kafka partition, they have to have a way of letting an event producer decide in which partition it should go. So this is a way for the event producer to tell whatever Kafka middleware there is how it should be partitioned.

B: I'm not sure, because the Kafka binding should work for all cloud events, also for cloud events that don't have a partition key, so they probably have to have a fallback in the end; I don't know what that will look like. What I think we should get at is that this is an optional thing that an event producer can add, so that Kafka, or another messaging protocol that wants to partition, can work with it. We should see it as an optional thing.

F: I don't think you want the producer to choose the partition key, because they're gonna choose a bad value. You want the middleware to be able to specify what the key is. And I don't think the consumer really cares at the end, because it shouldn't need to know where the event came from, from which partition. Is that right or is that wrong? It depends.

B: I made the same point, actually. Really, if you write a software service, then you don't really know who your consumers will be. But if you write a new event producer in your own ecosystem, then it's very likely that you have the knowledge of your consumers. So again, I think it's optional. And if it's a way for them to move forward with the Kafka binding, then that's kind of okay for me.

H: Let me highlight the part of the text, he's highlighting it again, which exactly tries to convey that message. I don't understand why you would not spec this as an extension, and why you would leave it to the individual transports. Because if you have a Kinesis transport and a Kafka transport, they would work exactly the same way with this extension. I don't see a reason why you would leave it to the individual bindings to each define their own spelling of this, when you can just do it generically.
A: So my assumption, for the sake of argument, was that the producer would use whatever values they need to decide what the partitioning key is gonna be, and then they would stick it into this field, so that the consumer could then use it and wouldn't have to do, in essence, the exact same logic the producer just did.

H: Probably yes, but not necessarily. The producer might not even define this key at all, if they don't know it's going into Kafka, and then some middleware that's actually putting it into Kafka would define it, because they know how it should be partitioned. I now understand what you're going after: the fact that Kafka already has a key that is used. The problem is that cloud events are singular objects.

H: They are not split into a key and a message, as in Kafka, so you need to somehow define that key. I don't believe it's a disservice to generalize how that key is defined. You can still put any data you want in here; you can encode your objects, your headers, whatever you want. It's still binary data to Kafka; Kafka handles everything as binary data. So this is just a way to set the key in a cloud event, which doesn't naturally have a separate key to partition on.
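The binding behavior Heinz describes, plus the fallback discussed earlier for events that carry no partition key, could be sketched like this. This is hypothetical binding code; `partitionkey` is the extension attribute under discussion, and the event is modeled as a plain dict for illustration.

```python
from typing import Optional

def kafka_key_for(event: dict) -> Optional[bytes]:
    # If the 'partitionkey' extension attribute is present, its string
    # value becomes the Kafka record key; it is just binary data as far
    # as Kafka is concerned. Returning None lets the producer fall back
    # to keyless (round-robin style) placement.
    pk = event.get("partitionkey")
    return str(pk).encode("utf-8") if pk is not None else None
```

Because the attribute is optional, a generic binding needs no Kafka-specific configuration: events that set it get deterministic placement, events that don't still flow.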
C: So that would be against data that I've already received, either header data or payload data, and I would then determine how I'm going to build the keys for writing to Kafka, which will automatically put it into the correct partition. You're limiting yourself with an optional field to build that binding.

H: I think we just simply disagree then, because I don't think this limits you in any way. You have to encode your data as a string, which, okay, is not nice if you're used to normal Kafka clients, but it's in the name of generalization. I don't think I will be able to convince you that you don't lose something here, if you do think you lose something, because to me these are just two different ways of defining the key. It'll still be the key in Kafka; it'll be partitioned.
C: I'm just saying it has two issues. One, it's either redundant, because, just like with a connector, for example, which would be similar to building the protocol binding, you would be using data that already exists in the cloud event, either in the payload or as part of the header, to determine the key, which is the same thing you do when you do a connector into Kafka. Other messaging systems don't have the concept of the key, so when you're writing a source connector for Kafka you do the exact same thing.

C: You look at the payload and header data that you just received, and you then determine how you're going to create the key, which is generally an object; it's not one field, it's usually a combination. You build an object as the key, which I then pass to Kafka, which will automatically figure out the hash and put it into the correct partition. So it either is going to be restrictive, if you expect them to use it, or it's redundant, because you're going to use other data anyway, just like with a connector.

C: With a Kafka connector, you would look at that payload, and you would look at header data, and determine what unique attributes you want to group all the data by, to make sure it's all ordered against that grouped data. It's either a single field, like a correlation ID, or it's a combination of payload and header data, from which I would create an object. And when I actually create the Kafka records to write to Kafka, I would add the payload data.

H: I'm sorry to keep repeating myself, but there is no difference between you specifying a group of attributes as an object and you specifying a group of attributes encoded into a string, because they are both binary data that Kafka will use for the hash. There is no technical difference between those two things: between you choosing, in a Kafka connector, a group of attributes and encoding them as an object into the key, and you encoding those same attributes into this field as a string. Kafka will not care; it handles it all as binary data.
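Heinz's argument, that an object key and a string key are interchangeable once serialized, can be shown with a stable encoding. This is a sketch; the attribute names are invented for illustration.

```python
import json

def composite_key(attrs: dict) -> bytes:
    # A stable, sorted serialization so that equal attribute sets always
    # produce the same bytes, and therefore the same Kafka key hash,
    # regardless of how the "object" was assembled.
    return json.dumps(attrs, sort_keys=True, separators=(",", ":")).encode("utf-8")

# Insertion order doesn't matter; the bytes (and hence the hash) are identical.
a = composite_key({"region": "eu", "account": "42"})
b = composite_key({"account": "42", "region": "eu"})
```

The one real requirement is that the encoding be canonical: if two hops serialize the same attribute set differently, the bytes differ and Kafka will hash them to different partitions.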
H: What I do understand from your objections is the fact that this will indeed duplicate the key into the message, if I understand correctly how this extension would work, which is unfortunate, in my opinion. But some people gave examples where that would be useful, because on the consumer side you wouldn't get the key from Kafka; you would only get the cloud event, and this attribute would then specify what was used as the partition key. So you could even reuse it if you want. So I seem to disagree with your problems with this.

C: It's unnecessary information, because everything you send to Kafka, either through a stream processor or a connector, because really what you're doing is some kind of integration, which is really what we're discussing, the integration point: either the stream processor, the connector, or in this case the Kafka binding, which does not exist yet, does not assume that you have created a key for Kafka outside of it.

C: Do you do it before you pass it to the Kafka environment, or after? And I'm saying that you would always do this after, just like when you write a connector: you're not going to tell the upstream third-party system, I need you to calculate this partition key for me, so that when I pass it to Kafka it can use that partition key. You would do that in the binding layer, where you have all the cloud event data.
B: I think we're getting to the point that the authors of this tried to make, and I think their point is: I want to write a binding that is just standard code, that you don't modify at all, and that works generically across events that I have never seen. Which, in the other case, means I always have to go in and understand that I get events of this shape.

C: Totally. That's again the question: do I want to create the key before I put it into the event, to make sure it works and to make it easier to write a Kafka connector or binding, or do I keep it completely separate, where the producer has no knowledge that Kafka is going to be involved with the cloud event, and then have some kind of configuration, or some kind of interface, to allow this to work in a Kafka binding? Exactly; sorry, I'm trying to make...

B: Sure, good. I tried to make exactly the same point to the authors: that there will definitely be events where they cannot assume that the producer made or added a partition key, and then you will always have to at least have some fallback. My personal opinion is that we should look at this together with the binding that they propose, and then decide if we as a group think it doesn't make sense to have a generic binding.
H: Those two things are not exclusive. This extension specifically states that you can redefine the partition key. So that is the point where you define it, after you get the event. It's not producer-only; it's not only producer-defined. This says specifically that you can change it: in cases where the cloud event is delivered to an event consumer via multiple hops, it is possible that the value of this attribute might change.

H: That is the specific part that says you are not tied to the partition key set by the producer. That's one problem that came up. Christoph, for your point about what happens if one does not exist: I would assume most producers wouldn't set this key, and that's fine. If you don't have a key, it will go to a random partition, in round-robin fashion. You set it if you know you're going to put your messages into Kafka.

H: I think Scott has a good point in the chat, that maybe this shouldn't be a top-level extension but something else; we just don't have that something else, which is why I think this was defined as an extension, so the Kafka binding could get moving. We also didn't want to do a Kafka-binding-specific way of specifying the key. That's what Clemens actually suggested weeks ago, but I wasn't part of that conversation; it was disregarded, and I don't remember why. Okay.
A
So
I
think
we
need
to
wrap
this
one
up,
at
least
for
this
phone
call.
So
Heinz
can
I
get
you
to
take
an
action
item
here
and
put
down
your
concerns
into
a
comment
into
this
PR,
because
it
seems
very
odd
to
me
that
the
authors
of
the
Kafka
spec
feel
like
they're
blocked
by
not
having
this,
and
so
they
the
fact
that
they
consider
it
is
sort
of
required
bit
of
information
in
order
to
make
a
forward
progress.
It
says
that
something
something
is
serious
or
needs
to
be
discussed.
Well,.
H
A
H
H
It's
needed
to
actually
put
that
message
into
the
correct
partition
in
Kafka,
so
when
you
are
putting
it
into
coffee,
you
need
to
know
the
key
there's
currently
no
way
to
set
that
key
without
the
Kafka
binding,
creating
its
own
special
way
of
doing
it
and
I,
don't
think,
that's
the
correct
way
to
do
it,
because
this
is
not
a
Kafka
specific
thing,
even
though
that
is
the
major
use
case.
We're
talking
about
now.
Kinesis
is
a
good
second
example.
They
have
the
same
model,
I
think.
J
C
That's
what
I
would
prefer
before
I
start
making
comments,
because
when
we
throw
things
like
well,
it's
for
routing
well,
even
in
Kafka,
if
you're
routing,
it
is
a
mule
to
put
it
into
the
first
topic.
Routing
is
all
done
in
Kafka,
using
stream
processors,
and
you
can
create
whatever
key
you
want.
It
has
nothing
to
do
with
the
original
publisher
of
the
original
cloud
event.
Okay,.
A
So
I
think
I'm
hearing
couple
things.
One
is
some
examples,
I
think
Scott.
You
call
the
use
cases,
that's
fine.
We
can
ask
for
those,
but
Heinz.
Can
you
also
add
some
comments
to
the
PR
itself,
explaining
your
your
concerns,
because
I'd
want
to
see
if
we
can
wrap
up
this
conversation
in
relatively
short
order
over
the
next
week
or
two
yeah.
C: I'm just gonna say it's very simple: I believe it's redundant, and I believe that, like every other way of injecting published data from anything into Kafka, the key is always generated at the binding, or the connector, or at the stream level. It is not necessarily done at the external source, which is the entire architecture for things like Kafka connectors, which are a standard way to get data into Kafka. Agreed.

A: So two things. One is to ask the authors to give some concrete examples and the use case for this, pointing them at the recording of this, and then they can comment on that back in the PR itself. Because I think the author of this is technically in Australia, so it's gonna be really difficult for him to join this phone call, I believe. So let's see if we can make progress down those two avenues. Okay, excellent discussion, I think; very lively, which is the best kind for our phone calls, so that was good. Okay.
F: You can adapt that into a cloud event, and the source and the ID will be exactly the same for two different implementations of an adapter to a cloud event. But the type should be different, because the type comes from the adapter's implementation. So the only way you know that that event is a duplicate is if you also include the adapter's identity, which is the type.
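The idea of folding the adapter's type into the ID, so two adapters surfacing the same upstream identifier stay distinguishable, might look like the following. This is a hypothetical helper; the type strings and separator are invented for illustration.

```python
def cloudevent_id(adapter_type: str, upstream_id: str) -> str:
    # Hypothetical scheme: concatenate the adapter's type with the
    # upstream identifier, so two adapter implementations that receive
    # the same upstream ID still emit distinct CloudEvent IDs.
    return f"{adapter_type}/{upstream_id}"

a = cloudevent_id("com.example.git.commit", "deadbeef")
b = cloudevent_id("com.example.ci.build", "deadbeef")
```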
A: So, my hand's up next. It seems to me, Scott, if the adapter is going to blindly take some ID that it got from the upstream system that actually produced or generated the event, it's that adapter's job to manipulate that ID in some way before it uses it as a cloud event ID. And if that means adding some additional unique thing to it, like maybe the type, maybe some other identifiers, concatenated, whatever, it seems like that's fine.

F: The ID is unique for that individual producer, so it still conforms to the spec, but you can still have two producers that are producing the exact same event according to this definition of unique, and you won't be able to tell if it's from producer A or producer B, because their upstream source of truth is giving them the same data.

A: I guess I'm still confused, Scott. It seems to me that you're trying to assume that the ID that's given to this adapter can't be changed in some way to be made unique unto itself, and that this ID has some semantic meaning beyond providing uniqueness from a cloud event perspective. Am I correct in that assumption?

F: How would you know, as a consumer of that event, how to un-mangle the ID back to the database commit ID if it goes through a producer?

A: My point is, they shouldn't. This ID is not meant to be anything other than uniquely identifying this cloud event among other cloud events; it's not supposed to have a semantic meaning like a database commit ID. You can use that ID if it happens to be globally unique, but it's not meant to convey: oh, consumer, use this field for the commit ID. That's not what it's there for; there's some other field for that.
A: Well, no, I don't think that's true. It depends on whether you want to de-dupe from a cloud event perspective, or de-dupe from some sort of semantic meaning of the event itself. From a cloud event perspective, of re-sending the exact same cloud event, I think you can do it with this ID. But if you want to have some semantic smarts and somehow do de-duping from a data perspective, that's out of scope for us.
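De-duping "from a cloud event perspective", as A puts it, rests on the spec's rule that `id` is unique within the scope of a given `source`, so the pair identifies a resend. A consumer-side sketch, with an in-memory set standing in for whatever store a real consumer would use:

```python
def make_deduper():
    # CloudEvents defines 'id' as unique within the scope of 'source',
    # so the (source, id) pair identifies a resend of the same event.
    seen: set[tuple[str, str]] = set()

    def is_duplicate(event: dict) -> bool:
        key = (event["source"], event["id"])
        if key in seen:
            return True
        seen.add(key)
        return False

    return is_duplicate

dedupe = make_deduper()
first = dedupe({"source": "/db", "id": "1"})   # first sight
second = dedupe({"source": "/db", "id": "1"})  # resend of the same event
```

Semantic de-duplication, e.g. recognizing two distinct cloud events that wrap the same database commit, would need payload knowledge and is exactly what A calls out of scope.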
F: Right, you're sending the same event multiple times, because they have different IDs for the cloud event ID but exactly the same content in the payload, and now you've pushed the exercise of de-duping to the consumer instead of the transport. Why would they have the same content? Because I just replayed all the database commits.

A: We're short on time today, so where is the exact definition you'd like to see? We can talk offline. Okay, I'd suggest you take us through this topic then, because I think that's probably the biggest difference of opinion: what is the ID meant to be used for, and what is its semantic definition? We should get some clarity on what different people want it to be.
A: Okay, only three minutes left; I don't think we have time to jump into another issue, especially since the next one's a big one, about the code issue. What I will say is this, though: we technically have a phone call scheduled right after this, for the KubeCon EU session, and maybe a very short call would help. Those of you who are planning on doing one of the three different, or four, I guess, now three different sessions at KubeCon EU, can you please join that call?

A: Good discussion. It's just, you know, unfortunate, because I know everybody's really busy, and this kind of discussion, you'd like to have it through the PR and such; of course that's not the best way to have a conversation, and we tried to have phone calls, but then the timing didn't work out. So, unfortunately, this is the way it has to play out sometimes, but it's okay; we'll eventually get there.

A: Doesn't sound that bad, to be honest; I don't hear too much background noise, so that's not too bad. Okay. I don't actually think we all need to talk about this, because I don't think anybody's gone through the process of actually starting to fill out the charts, but I did want to ask a couple of questions.

A: Okay, if not, I'll sign up for that one; I don't mind doing it. So then, Scott, I'll take care of the intro stuff, and then you can lead on with the more advanced, cooler stuff, like the SDKs and everything else, and then, if we have time, the demo and things like that. Sound good, Scott? Sounds great. Okay, perfect. Okay, the deep dive session: that is Vlad and Clemens, I believe.
M: It's very, very close; it's basically messaging. That's what I've been looking at this last week, and I'm not sure where to go from here; it could be mostly eventing too. In my initial plan, the idea I had was: you would have a list of commits or versions you want to deploy, and send the corresponding cloud event to SNS, in this case, saying what they were, and then that message would be parsed. But it's kind of...

M: I'm imagining I could easily tie this up. I could use events to do everything on a per-commit or per-operation basis, but that feels like it's gonna be a bit too long. So I don't know exactly where to go from here, but, again, I haven't spent that much time on it; I'm planning to spend this weekend on it. So, any pointers...

M: ...and it would be an absolutely perfect use case for serverless, but right now they're doing containers, and that works. Also, being a startup that is only now starting to mature, there aren't a lot of software best practices: no feature flagging, no observability, no A/B testing, none of that. And the way I've seen serverless being used, for most of the use cases it comes down to two, the first one being new startups; others disable it because, they say, we optimize our cost.

M: ...to send tiny messages between different pipelines of functions, or things like that, and we used Honeycomb for tracing and for eventing observability and all that. And I basically started designing a JSON format that heavily resembled CloudEvents, and realized: okay, CloudEvents is here, it's boring, and if you have to send something over the wire you can use it. That was the use case and where the idea came from, but it's still kind of messaging, so I don't...

M: So, 15 minutes, okay, yeah. And you're all fine with me saying: CloudEvents are here, they're boring, use them everywhere? I don't want my particular pipeline to be the focus. If you need to get data from one point to another, you might as well use CloudEvents, because it's supported, it's growing, it's being used, and it's starting to be native to different stacks; look at how much stuff has direct CloudEvents integration.
M: Yeah, okay. Versioning was the main thing. As soon as I had to do tracing, I had to do HTTP header deserializing and serializing, and all of that's already done by CloudEvents and by the CloudEvents SDKs. I would have just cobbled together a badly designed JSON with some version tag; it would have been a lot of internal effort, when I can just reuse this, right?
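The header serialization Vlad is referring to is what the CloudEvents HTTP protocol binding does in binary content mode: the context attributes travel as `ce-`-prefixed headers while the payload stays in the body. A rough sketch, modeling the event as a plain dict:

```python
def to_binary_http(event: dict) -> tuple[dict, bytes]:
    # Binary content mode of the CloudEvents HTTP protocol binding:
    # required context attributes map to ce-* headers, the event data
    # travels as the HTTP body.
    headers = {
        "ce-specversion": event["specversion"],
        "ce-id": event["id"],
        "ce-source": event["source"],
        "ce-type": event["type"],
        "content-type": "application/json",
    }
    body = event.get("data", b"")
    return headers, body

headers, _ = to_binary_http(
    {"specversion": "0.3", "id": "1", "source": "/demo", "type": "demo.event"}
)
```

The SDKs handle this mapping, plus the reverse deserialization, which is the effort Vlad says he would otherwise have reimplemented by hand.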
A
A
E
A
D
M
A
One thing we do need to figure out is where we actually do the demo, because we have it, with question marks, in both the intro and the deep dive. So at some point we need to figure out which session will actually talk about it or do the demo. If you think 15 minutes is gonna be pushing it for you, Vlad, and I know Clemens can easily fill fifteen minutes, we may want to put that into the intro session then. So, Scott?
A
M
I'm thinking of having like a super short demo, a couple of minutes going through it, and then going into how it's implemented and stuff. So I'd rather have the demo in the intro, not in the deep dive, so more people see it, and in the deep dive I would assume previous knowledge and go directly into "this is the event, this is the extension", since people already know what the extension is or what CloudEvents is. It's not really... yeah.
A
E
Remember, one thing we had talked about was potentially having the lead of the airport consortium, ACRIS, across ACI, participate in some way. They can't physically be there, but they could be there virtually. I don't know if that's what you would need to be able to support, but the key guy is in Canada and he wouldn't be able to fly all the way out there for that, but he could participate remotely in some way. Interesting.
E
G
A
C
A
All right. Finally, the serverless session, 85 minutes. Chad told me he will not be able to make it for that; I believe he was gonna give a very brief state-of-serverless overview. Is there somebody who would like to volunteer to cover his spot on that? To refresh your memory, I believe the intent here was we're gonna have a couple of quick little presentations: Chad was going to say, you know, what's changing in serverless; I believe it was Christophe who was going to give a quick little demo; Jude was going to talk a little bit as well; and then after that we're gonna lead into more of a birds-of-a-feather type of session to interact with the audience, to see where their pain points are and where they might want us to go from a serverless working group perspective, in terms of, you know, what to work on next kind of thing. So these are all pretty much very short little intro things. Is there somebody who would like to volunteer to do this state of serverless, since Chad can't make it? I could do that.
A
D
A
A
Okay. As I mentioned on the previous call, I did put the very, very rough draft presentations out there, so you can see them; I put a link into each section here. There's nothing in there other than the title slide. Feel free to start adding your data or your content in there, so people can start reviewing and commenting as we go forward. Do we want to pick a date for when people should have at least the initial rough draft of their stuff ready, just to put some timeline on it?
M
A
Typically, in my experience, people wait until the last minute to get this stuff done, and I'm okay with that, because people always make changes at the last minute. But I want our working group to be able to review our content, at least in rough draft form, at least a week or two before the conference, if possible. So I was thinking of maybe throwing out a date of, say, the end of April for having the very first rough draft from everybody. Is that doable, or are people too busy?
M
M
A
A
A
So you guys are okay with starting out with a date of, what is it, April 30th? All right, so I guess let's push it up till, yeah, April 30th. What if we did say April 30th for the first rough draft? It'd be very, very rough, and that's fine, but at least sort of an outline of what you want to talk about, so that we can tell the group on that Thursday, which is May 2nd, to at least look at it and see if the high-level topics that people want to talk about are consistent.
M
A
I'll tell you what: since it's only two days apart, what I'll do is I'll say May 2nd is the day for the rough draft, but with the assumption that on that phone call that we have on May 2nd, you know, our regular call, I will say the rough drafts are there, please review them, to the whole group.
A
F
A
A
All right, cool. In that case, I think we're done. And Doug, for you, for the demo stuff: like I mentioned on the previous call, I'm hoping by the end of this weekend we'll have something that's a little more concrete in terms of actually running code for us to start playing with, and then on Monday's call we can see where we are and see where I went wrong. Sounds fair.