From YouTube: CNCF Serverless Working Group 2020-04-23
B: You know what's funny is, I was looking at the agenda earlier in the week, trying to get it ready and stuff, and I realized we don't have a whole lot to talk about, especially when it comes to PRs. I'm trying to figure out whether people are just really, really busy with other things, or whether, with all the stuff going on, they're just exhausted.
B: Yeah, so one of the things I was going to do, and I apologize, I had actually meant to do this to try to force some discussions, is to really go back and read the latest specs and identify all the holes that I think are there. That will force people to either sign up to do work or, you know, give me the work of opening up PRs that put whole draft proposals out there to fill those holes and force some discussions.
B: Right, let's get this show on the road. Just a reminder: Clemens, you took an action item to write up your proposal for a schema registry and such. All right, community time: anything from the community people would like to bring up? All right, not hearing any. So, in terms of us becoming a real full-fledged SIG, I did put the formal proposal out there to not actually do that, and rather have us just move as a working group under SIG App Delivery.
B: I'm still waiting to hear back from the TOC to see what their official position is on that. I think I only put the request out there yesterday, so it might take a while for the TOC to notice it, but I'll let you guys know how that goes. I don't see any real problem with it. I think some people may feel serverless is worthy of its own SIG, but no one seems to be able to come up with a really good definition that wouldn't be too contrived.
F: Cool, all right, just maybe a quick one. Okay, there's the big one, Kafka, but now there are also Pulsar, Pravega, and a few other systems. These do share the common aspects of having records, keys, partitions, and so on and so forth, and headers as well. So perhaps it could be changed to a generic specification for Kafka-like systems that includes Kafka, Pulsar, Pravega, and the others.
C: I would love for all of those folks to adopt AMQP, and then that problem would go away; that's actually a discussion I'm about to open with the Pravega folks. But yeah, I think as long as those products are different, and if they are under the umbrella of an open-source organization, which I think is part of the criteria that we have, then yeah, they should be included. Whether Pravega would actually meet those criteria, I'm not sure, but Pulsar clearly does.
B: Right, so the next one. I opened up a PR last night, so obviously it's too soon to even think about approving it; however, I did want to draw people's attention to it. There are two main things there, actually I guess three. One is that I added the new specs to the checking tool that we have, so now it will check for things like whether the RFC keywords are capitalized and stuff like that. As a result, I did find some cases.
B
We
were
using
the
keywords
in
non
uppercase
and
I
believe
all
the
cases
were
not
meant
to
be
using
that
the
RFC
keyword,
so
I
changed
those.
But
please
look
at
my
changes
to
make
sure
I
did
not
change
the
semantic
meaning
if
I
did
it
was
a
mistake
and
let
me
know,
and
then
I
think
there
was
a
one
bad
HOF
in
there.
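[Editor's note: the recording doesn't show the checker itself; as a rough, hypothetical illustration of the kind of lint being described, the Java sketch below flags RFC 2119 keywords that appear in lowercase in a spec file. The class name and the exact keyword list are assumptions.]

```java
// Hypothetical sketch of the kind of check described above: flag RFC 2119
// keywords ("must", "should", ...) that appear in lowercase in a spec file,
// so a human can decide whether they were meant to be normative (uppercase).
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Rfc2119Lint {
    // Word-boundary match on the lowercase forms of the RFC 2119 keywords.
    private static final Pattern LOWERCASE_KEYWORD = Pattern.compile(
        "\\b(must( not)?|shall( not)?|should( not)?|may|required|recommended|optional)\\b");

    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Path.of(args[0]));
        for (int i = 0; i < lines.size(); i++) {
            Matcher m = LOWERCASE_KEYWORD.matcher(lines.get(i));
            while (m.find()) {
                System.out.printf("%s:%d: lowercase RFC 2119 keyword '%s'%n",
                        args[0], i + 1, m.group());
            }
        }
    }
}
```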
B: Fine, okay, I just wanted to put a little nagging reminder out there. Thank you, and of course that applies to anybody else; Clemens was just the only one I remember speaking up. Okay, so that's it in terms of actual PRs and stuff. So, last night I did find some issues that I thought might be worthy of discussion.
I: ...the W3C traceparent, and the sentence after that is not really clear, because it says that I should use the CloudEvents tracing extension if both are in the same envelope and they don't match. But this doesn't really make sense, because if you go through a middleware that is not CloudEvents-aware but is tracing-aware, it will use the trace context header.
C: So the CloudEvent is effectively giving you the original context information. But then, if you're just routing, let's say you're routing a CloudEvent through a chain of multiple HTTP lambdas, right, and then the CloudEvent shows up on the other end, the application might actually not care about any of those HTTP hops; it only cares about the end-to-end scenario. So it takes the original context that sits in the CloudEvent.
C: But then, if you look at the HTTP side, the HTTP hops will actually have used those same routes to trace, and will have used the HTTP context information. So the tracing information basically just forms a tree, and it has a shared root, but we're effectively carrying both of those contexts with the CloudEvents transport mappings.
I: I have to be honest, I don't have deep knowledge of tracing, but my question here, what I don't really get, is why in the end-to-end case I should care about the original traceparent when the tracing infrastructure already does the linking of the little trees, with the middle spans, right? So if the sender has already emitted a span with the original traceparent, the other end of the flow is able to link back to the original trace, to the original span. Okay, yeah.
C: So let's say you're running in a cloud PaaS infrastructure, right? In that cloud PaaS infrastructure there is a ton of stuff that you don't see, that you don't care about, and that you also don't have any insight into. For the support folks in that cloud infrastructure to be able to help you with a particular case, they need to have an anchor for the trace, and that is your original event.
C: However, for your own application-level tracing you don't care about any of that stuff that just happens inside the cloud infrastructure; you effectively just care about your own publisher and your receiver, and those are linked through the information that sits in the CloudEvent. Does that make sense?
B: I know Clemens you had an opinion, and I want to say Jim might have an opinion, but let's give it at least another week for people to review and add commentary to the issue, and then we'll circle back around next week and say okay. I'll make a note to remind everybody; I'll send out a note to remind people to talk about it or to think about it. Yeah.
C: Is it that we currently only have JSON as our canonical structured format? But yeah, if we add other formats, so if we're coming back to protobuf, or if we're adding CBOR, then of course a self-contained structured event in CBOR would be just as applicable here.
C: So technically, yes, it's a breaking change because of that, but I'm not sure how much it really matters to actual implementations, if they are in the spirit of the rest of the spec. You know, today almost all implementations will probably have a JSON bias anyways, even though they shouldn't.
F: I wanted to add that in Kafka 0.10 there was content-type support already, which means that some level of content negotiation can be done; JSON would come with application/json, and CBOR and others would come with their own content types. So even if it's a breaking change due to the schema change, implementations that handle the content type already would just continue working.
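[Editor's note: a minimal sketch of the content-type mechanism being referenced, using the plain Kafka producer API with record headers; broker address, topic, and payload are illustrative assumptions.]

```java
// Sketch: carrying a content type alongside a Kafka record via record
// headers, so consumers can dispatch on the declared format instead of
// guessing. Topic name and payload here are illustrative only.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class ContentTypeHeaderExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");

        try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
            byte[] payload = "{\"hello\":\"world\"}".getBytes(StandardCharsets.UTF_8);
            ProducerRecord<String, byte[]> record =
                    new ProducerRecord<>("events", "key-1", payload);
            // Declare the payload format explicitly on the record.
            record.headers().add("content-type",
                    "application/cloudevents+json".getBytes(StandardCharsets.UTF_8));
            producer.send(record);
        }
    }
}
```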
C: We basically just wanted to mimic JSON in Avro, rather than pull the event structure down into Avro per se. So we actually didn't want to make it as strongly typed as it could be, because of extensions. We could have special-cased the fixed attributes and then created an extras bucket, but having a bucket would not have been in the spirit of the rest of it.
C: So we chose to make it all uniform, all in one place, and then data should also be able to carry structured information. That's how it comes about that you can basically go and take a JSON object, as we allow in JSON, and then try to encode that into the data element. And of course you can also just put a byte array in there if you want to do so.
C: It's not a validation schema that tells you whether the event is correct, or whether all the fields have all the right values. It merely allows you to take a structured CloudEvent and put it on the wire in an appropriate way, because the Avro serializer is driven by that schema, just like protobuf.
J: Interesting distinction. So, Clemens, if I'm understanding you correctly, this is really just the encoding, the implementation of the encoding for Avro, but it doesn't represent the canonical schema itself, is that correct? Yeah. And is that the way that we think about the other representations as well? I don't remember.
C: Well, so the goal was to keep this... things were all in motion, and we had, you know, this notion of extensions, where you can go and add whatever you would like, and the predefined attributes are also, to a large part, optional. So I think the argument, and this is not me defending the structure, I'm just trying to recall how we got there, is that you should be able to go and take an event, similar to JSON, and then just encode that into it.
N: Yeah, just a quick question. Having done some recent work with Avro serialization, with both Apache and Confluent, it makes sense on the serialization part. On the deserialization, though, what I found it spits back after you get the consumer record is the actual JSON data from the deserialized Avro message. So are you saying this will spit out the correct JSON for the consumer after it deserializes? No?
N: With the consumer records, if you're doing it, for example, with the Spring Cloud Stream libraries against Confluent, it gives you back a consumer record, and in the consumer record it gives you back the value of the object, and the value is always represented as the JSON object that was deserialized from the Avro message. So, you know, as long as it will spit out, through the standard libraries, the actual JSON representation that you expect for the CloudEvent, if it's doing that, it should be fine, right? If it's not, that should be a concern. Yeah.
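[Editor's note: the "it always gives you back JSON" behavior described here is consistent with how Avro's generic records print; a minimal sketch, assuming the plain Apache Avro library, with an illustrative schema and values.]

```java
// Sketch: why a deserialized Avro value often "looks like JSON". Generic
// deserializers hand back an Avro GenericRecord, and GenericRecord.toString()
// renders the record in a JSON-style form.
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public class GenericRecordToString {
    public static void main(String[] args) {
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
              + "{\"name\":\"id\",\"type\":\"string\"},"
              + "{\"name\":\"amount\",\"type\":\"double\"}]}");
        GenericRecord record = new GenericData.Record(schema);
        record.put("id", "abc-123");
        record.put("amount", 9.99);
        // Prints a JSON-style rendering: {"id": "abc-123", "amount": 9.99}
        System.out.println(record);
    }
}
```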
N: Because when you use both, whether you do custom serializers using the Avro libraries from Apache, or, when we did it, with the Confluent Avro serializers and deserializers, you're correct: when you do the codegen to create the POJO from the Avro schema, it does allow you to update the record automatically and have it stored as a serialized object. However, in both cases, when you consume it and deserialize it, it always gives you back a JSON representation of the object. Yeah.
N: I was surprised too, and it actually really bothered me, because then, to actually work with the raw result of the JSON, I had to use the Jackson libraries to deserialize it against the JSON schema just to be able to manipulate it, or I had to use some, you know, primitive JSON libraries when I pulled it up and retrieved the field. Just not a big deal, but when you've got an IDE it's kind of annoying.
N: Well, that's a good question to ask. Yeah, exactly. Again, I'm sure it's easy enough for whoever created this to test: you know, take a JSON message, or take the Avro, serialize it, and when you deserialize it, what JSON do you get back? If it's not the same JSON, sorry, if it's not the JSON you put in, then you're probably going to have to change the schema.
J: I mean, this feels like an implementation detail. Avro actually has two different representations, two different encodings: it has binary and it has JSON. And the JSON-based one is a representation in a specific encoding that you can use with Avro, but it's not JSON per se, it's Avro. So...
N: No, what I found is, if you don't represent the Avro serialization well in the Avro schema, when you deserialize the message... and again, you're supposed to be able, from the deserializer, to switch back and forth between the Avro and JSON by default. I literally did this last week, where I tried to do the deserialization using the Avro libraries from Apache against a compacted Kafka topic, and I tried it again with the implementation from Confluent, where they provide their own automatic serializers and deserializers.
C: So there are two problems here. One is that Confluent gets JSON into the picture, which is weird. The way you should do transcoding, as we have discussed, is: in every SDK we have this generic representation of what the CloudEvent is in memory, and the formatters that are using the format specification serialize into and from that generic representation.
N: Well, the design goal of Avro was just to compress the data when you serialize it, to make it highly efficient for transport or storage. But still, when you take Avro, you will then create a POJO from the Avro schema, if you're doing this with Java; you will then apply the fields that you want to update, just like you would with JSON POJOs from a schema, and store it, or send it in the case of Kafka, and when you receive it, it expects to have that schema available as data about the data.
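[Editor's note: a minimal sketch of the Avro round trip being described, using Apache Avro's generic API rather than code-generated POJOs; the schema and values are illustrative only.]

```java
// Sketch: serialize a record with Avro's binary encoding and read it back.
// The writer needs the schema to encode, and the reader needs it again to
// decode ("data about the data"); without it, the bytes are opaque.
import org.apache.avro.Schema;
import org.apache.avro.generic.*;
import org.apache.avro.io.*;
import java.io.ByteArrayOutputStream;

public class AvroRoundTrip {
    public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
              + "{\"name\":\"id\",\"type\":\"string\"}]}");

        GenericRecord order = new GenericData.Record(schema);
        order.put("id", "abc-123");

        // Encode to compact Avro binary (this is what would go onto Kafka).
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(order, encoder);
        encoder.flush();

        // Decode back into a generic record using the same schema.
        BinaryDecoder decoder =
                DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
        GenericRecord decoded =
                new GenericDatumReader<GenericRecord>(schema).read(null, decoder);
        System.out.println(decoded); // JSON-style rendering of the record
    }
}
```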
C: Deserialize that into a CloudEvent, into that sort of representation, and then you should be able to serialize that representation into JSON, and there should be no mismatch whatsoever. But there can't be an expectation that if you take the Avro schema, turn that into a serialization object, and do a raw deserialization... what you will get is the immediate representation of the structure that you have here, which we chose in particular to put Avro on the wire. But that is not the CloudEvent.
J: Yeah, I was just going to go back to the original point, just thinking about the trade-offs here. It feels like we're trading off type safety for extensibility of CloudEvents, because if there is going to be a new extension for CloudEvents, that means we would need to create a new Avro schema if we were going to do this in a type-safe way. I don't know how I feel about that off the top of my head, but it feels like that's the trade-off that we're making. Yeah.
B
I
didn't
and
and
the
thing
that
you
say
the
very
beginning,
comments
resonated
with
me,
which
was
all
cloud
event,
attributes
expect
to
find
or
extensions
were
meant
to
be
treated
it's
siblings
to
each
other
great
and
that
that
was
a
big
motivation
for
us
to
Scott
your
hands
up.
This
is
actually
the.
O: So one idea for protobuf was: what if, as the spec gets versioned and extensions get promoted up into the proper spec, the object gets migrated out of the extension bag and into the strongly-typed spec? I wonder if, for strongly-typed specifications like Avro and protobuf, it actually makes sense to keep bags for extensions and then have a little bit of a middleware shim to help you pop the extensions out and make them top-level.
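[Editor's note: a hypothetical sketch of the shim idea floated here, not an agreed design; the promoted attribute names and the builder usage are assumptions.]

```java
// Hypothetical middleware shim: pop well-known keys out of a generic
// extension bag and promote them to top-level extension attributes on the
// event. The promotion set here is made up for illustration.
import io.cloudevents.CloudEvent;
import io.cloudevents.core.builder.CloudEventBuilder;
import java.util.Map;
import java.util.Set;

public class ExtensionPromotionShim {
    // Extensions a newer spec version promoted to first-class attributes.
    private static final Set<String> PROMOTED = Set.of("traceparent", "tracestate");

    public static CloudEvent promote(CloudEvent event, Map<String, String> bag) {
        CloudEventBuilder builder = CloudEventBuilder.v1(event);
        for (Map.Entry<String, String> e : bag.entrySet()) {
            if (PROMOTED.contains(e.getKey())) {
                builder.withExtension(e.getKey(), e.getValue());
            }
        }
        return builder.build();
    }
}
```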
C: There are pros and cons. Just in terms of prior art, AMQP chose an approach like this, where there's a properties collection, which has all the commonly used fields, to, reply-to, from, timestamps, et cetera, and they use a schema where the metadata field names are omitted, so they're serialized by position. That's similar to what you would do with protobuf, or with Avro with the schema. And then there is an application-properties bag, which is where all the extensions go.
O: But doing that you lose the reason why you're choosing something like protobuf; you're basically then just using a JSON stream over protobuf instead of the actual proto. Well, you're still using the proto types. If you do support extensions, though, don't forget about extension promotion every year, please.
B: ...some kind of response, because based upon his issue it sounded like he's in the middle of actually implementing stuff, and I didn't want him to be blocked by us for much longer, so I'll take an initial stab at trying to answer them, and then you guys can correct me. Okay, but I did want to leave time for Sergej to talk about his SDK testing stuff. Did you want to take the screen? Yes, please. Okay.
F: Okay, I hope you can see my screen. So, we have a great specification and it answers many questions, but when I started working with CloudEvents, especially the SDK part, I was missing some guidance on how to test it, which cases to test, and so on and so forth. For example, let's take a look at the HTTP protocol binding specification, which is the most popular one, I guess. It answers many questions, and it also mentions some edge cases like content type.
F: There is support for many languages, but we'll talk about that later. So here, for example, I have the same example as in the HTTP protocol binding markdown file; let's just open it. Okay, yeah, it's here: it's binary content mode, and then we have things like headers prefixed with ce-, a content type with a charset, and so on and so forth.
F
So
I'm
just
doing
the
same
here
in
this
cucumber
specification
file
and
I'm,
also
adding
more
variations
with
content
type,
so
I'm,
just
testing
application
JSON,
but
also
mixed
content
or
not
mixed
Khan
died,
but
content
type
is
modifier
and
I
have
the
same
for
structured
content
mode,
as
you
can
see
here.
Basically,
it's
the
same
with
the
same
variation
of
content
types,
and
it
helps
the
same
for
Kafka
here
since
in
Kafka.
F: Since in Kafka there's no common definition of how a Kafka message should be represented in text form, I'm just using key-value pairs for the headers, and the payload is defined as a string. It could of course also be something like a base64 payload, some binary payload, or whatever you prefer. And for the assertions, I'm mostly checking equality against some JSON, not just some text or a string-based value but JSON, because some implementations may decide to parse the content, and some implementations may decide to return the content as it is. So, anyways.
F
We
just
define
not
protocol,
but
language
agnostic,
specifications
and
scenarios,
and
then
we
can
run
them
and
here
I'm
using
SDK
Java,
so
I'm
just
inside
the
SDK
Java
project
and
it
added
a
few
files.
I'll
show
you
later
and
I'm
running
the
specifications.
As
you
can
see,
we
have
some
failures
and
this
one
is
the
most
popular
SDK
issue.
I
would
say
so:
I
can
click
and
go
to
the
specification.
This
is
our
binary
content
mode
specification
and
we
can
see
that
char
set
is
specified.
F: Okay, so another failure is binary content mode in Kafka. If I open the scenario, it says that the content type is application/json, and the data content type should match application/json. I went to the implementation, and I won't be talking too much about the fixes, but I just fixed it here in the Kafka implementation. Anyways, if I run it again: I just added support for content type, because previously it was filtered out along with the ce- prefix. Now it's green.
F: I have standard infrastructure for cucumber-java here, and it defines steps like "attributes should match" something, or the map or table of attributes that you build should match, or "data should be equal to" some JSON. I can use any library I want in the Java ecosystem, because I'm running in Java, to do the JSON assertions, and I'm not limited to what cucumber provides, which is great, because it depends on the language how it prefers to compare things. And the same applies to, for example, the HTTP steps.
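[Editor's note: a hypothetical sketch of cucumber-java step definitions in the style described; the step wording, the shared holder, and the use of Jackson for JSON comparison are assumptions, not taken from the actual test suite.]

```java
// Hypothetical cucumber-java step definitions: a protocol-specific step
// parses the message into a CloudEvent once, then generic assertion steps
// run against it.
import com.fasterxml.jackson.databind.ObjectMapper;
import io.cloudevents.CloudEvent;
import io.cucumber.java.en.Then;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class CloudEventSteps {
    // Set by a protocol-specific parsing step (HTTP, Kafka, ...) before
    // the generic assertions below run.
    static CloudEvent current;

    @Then("the CloudEvent attribute {string} is {string}")
    public void attributeIs(String name, String expected) {
        assertEquals(expected, String.valueOf(current.getAttribute(name)));
    }

    @Then("the CloudEvent data equals JSON {string}")
    public void dataEqualsJson(String expectedJson) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Compare as parsed JSON trees, not raw strings, so formatting
        // differences between SDKs don't cause false failures.
        assertEquals(mapper.readTree(expectedJson),
                     mapper.readTree(current.getData().toBytes()));
    }
}
```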
F: The payload is just the HTTP message as it is on the wire, and I need to parse it. So here, in Java, I'm using the rawhttp library to parse it, and then, once it's parsed, I only need to use some of the SDK's internals, but also the public API structures, to build the event under test and then do some assertions later.
F: The second part, the CloudEvent steps, is a generic definition of steps where it doesn't matter whether you parsed the event from an HTTP message or maybe from a Kafka message. Here's an example of the steps for Kafka: you just parse it once, you set it as the current CloudEvent, and then you can run the standard assertions on the parsed CloudEvent. It works both ways, too: we can also do something like, given some CloudEvent, we expect it to be converted into a Kafka message, into an HTTP request, and so on and so forth.
F: So that gets, you know, close to running BDD-style testing of the SDKs, and I have some code here as well to glue the steps into the Go SDK. Here I'm just asserting the CloudEvent, the same thing; sorry, I'm not a Go developer, so I had to just unmarshal the JSON and compare it with the same cmp library that's used in the other tests. I'm pretty sure folks from the Go SDK will find better ways of writing these steps, and the same for Kafka, for example.
F: Now, if I run the same set of tests with go test, we'll get two errors. If we look at the errors, and I'm not on the latest master at the moment, so please bear with me, we get two failures: the first one is the HTTP protocol binding with structured content mode, the same semicolon-charset issue, and the same with Kafka. That's something I reported to Scott and he fixed it shortly after, so if I just do git pull and run the same spec again on the latest master: boom.
F: So, as you can see, structured content mode is failing, and what you can also see is that Kafka is not tested at all, because I didn't find out how to, you know, test Kafka with v1, so I just decided not to implement the step. If I open the step definitions, just a second, here we just say: okay, I don't know how to parse a Kafka message, I return "pending", and the result of running the specification, running cucumber, will be passing plus some pending features. So if your SDK does not support some of the features yet, you can just keep them pending, and it will be a to-do for you to implement them in the future. And as a last thing from me about cucumber and this approach: it's a very popular framework, and you find implementations for many languages; as you can see, there is Go, JVM, JavaScript, Ruby, PHP, and others, most of the popular languages.
F: They have cucumber support, and if some don't, then implementing cucumber support isn't hard, it's just a bunch of regexps; you know, how hard can it be? And the language itself is the Gherkin language, so cucumber is just an implementation of a framework for the Gherkin language; the specifications are written in Gherkin, and you can just use some other Gherkin-compatible framework. Thank you. Okay.
B: Okay, cool, I'll put you down for Adobe. Thank you. Did I miss anybody for the attendance? All right, in that case, excellent. Thank you very much for the demo; that looks really, really cool, and let's talk about it on the next SDK call next week. All right, and thank you everybody; have a good rest of the week. Bye.