From YouTube: CNCF Serverless Workflow Weekly Meeting 2021-05-10
A
Alex, I don't know if you've seen my message on the old Argo Slack. I'm sorry I messed up the schedule. Yeah, we canceled last week's because of KubeCon, and we had a reminder somewhere down in our agenda that vanished.
C
Yeah, I haven't really... I've actually logged out of the Argo Slack channel now.
A
Okay. Last week I was out of office when Theo mentioned that he was in contact with you about the presentation.
A
So three weeks ago we had it planned to be presented, and then we kind of removed it from the agenda because it wasn't happening. But just the week before KubeCon we decided, because of KubeCon, not to have the meeting, which dropped your agenda item.
F
Yeah, I can do a quick introduction of myself. Do you want me to do that?

A
Oh yeah, while I'm typing the other names, sure, go ahead, thanks.

F
Sure. So we are not doing anything specific with serverless functions today, but I'm interested in it. Part of it is because I work on this project called Pravega.
F
So Pravega is one of the projects that has recently been accepted as a sandbox project in CNCF, and I am getting familiar with the various working groups and topics. Again, serverless functions is an area that interests me, and we might invest in it in the future. So I decided, you know, just out of curiosity, to sit in and see what's going on.

A
Hey, great.
G
Mic check, I think. Can you hear me?

A
Yeah, hi. Since this is your first time on the call, do you want to be associated with any company?
A
Maybe not; let's try him later. Kevin, nice to see you on the call, and thanks for opening these issues. Can you hear me? Yeah, okay.
A
I think for you it's also the first time on our community meeting. Do you want to be associated with any company?

I
I've been here before, but yeah.
A
All right, I should have you in my logs then. Okay, sorry for that. Then Lucas, hi!

E
Hi. How are you?
A
Yeah, that's cool, yeah. There was a lot going on during KubeCon with the bug bash on the Java SDK. So let me try Dragan again... oh, Yanagan, can you hear me? Yeah.
J
Yeah, I can hear you now. I'm sorry, my audio was not working. So actually, I also joined the bug bash, and that's how we got kind of interested in the project. So that's why I joined the call.
A
Hey, and thanks, Akam, for posting it in the chat, yeah, the company. Okay, then let me do a quick check on the list. Yeah, I think that's it, Charles Young. Okay then, let's get started. So, welcome everybody. Since we have a lot of first-timers: this is our weekly community call. It always has the same recurring kind of agenda, so we start with community questions. Are there any questions you would like to ask the community, any general things?
A
No? Then let's get started. So, presentations: from time to time we do have presentations on our community call, and this week I'm happy to welcome Alex Collins, a maintainer of Argo, the, you should know it, CNCF workflow project that started with Argo CD and has now expanded to Argo Events, the more generic Argo Workflows, and is currently thinking about data pipelines, if I got that correctly, and this is what I think the presentation will be about. Okay Alex, do you want to start right away?
C
Sure, let me just... one second, I want to load up my presentation, but Google is not working today. That's not something I expected; give me one second. And Google Drive is actually broken today, so, not perfect timing. Wow, that's a big one!
C
So hopefully this presentation will be around 10 minutes. I'm going to talk a bit about a new project called Argo Dataflow, which is currently at a kind of POC stage. Just a little bit about me: my name is Alex, and I'm originally from the UK, from London. I moved to California two and a bit years ago. My interests are cycling, coffee, beer and music, so my kind of ideal day would start with coffee and a ride to a bar where a band is playing good music.
C
That's my girlfriend on the bottom left-hand side there. So, what is Argo Dataflow? Argo Dataflow is intended as a cloud-native (so, Kubernetes), language-agnostic platform for executing large data processing jobs: pipelines typically composed of a number of steps which process data, and those steps are typically small and homogeneous, so, similar. It kind of sits somewhere between a stream processing platform like Flink or Apache Beam and a batch data processing platform.
C
Like Argo Workflows. It's got to sit between the two of those. Its typical use cases are intended to be things like clickstream analytics. We're doing a lot of work on anomaly detection; for those who are not familiar with anomaly detection, that is analyzing metrics from application logs and so forth and determining if that application is behaving incorrectly. Fraud detection obviously fits within, you know, common real-time data processing, and operational/IoT analytics.
C
So a lot of these use cases are kind of real-time use cases. I'm looking at data that's coming in now, and I don't necessarily need accuracy; that's not the most important thing for me. But what I do need is responsiveness. Here is an example of a pipeline defined for Dataflow. So, we're cloud native.
C
So obviously the common denominator here is YAML; everything's defined in YAML or a structured document format, so JSON as well. And this, I'm trying to see if I can... yeah, can you see my pointer? I can't tell. No? I can only point to it, then. Basically, this defines a data pipeline called "example".
C
It's got two steps. The first step reads from a Kafka topic named "pets" (oops, I didn't press anything there), and then it filters out objects of type cat and writes them to a NATS subject. A subject in NATS is basically the same thing as a topic in Kafka; I find there's very little difference between the two, except that NATS subjects are a bit more lightweight and Kafka topics are pretty heavyweight. So it basically filters any message from that...
C
Pet's
topic
writes
it
to
a
to
a
gnats,
subject,
called
cats
and
then
the
second
step
in
this
effectively
pre-pens
maps
each
of
those
cat
cats
to
say
hello,
the
name
of
the
cat
and
then
writes
that
out
to
a
kafka
topic
called
hello
cats
and
that
scales
using
replicas.
So
we
run
two
replicas
of
that.
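For readers following along without the slides, a pipeline along these lines might look roughly like the sketch below. This is a hedged reconstruction, not the exact demo YAML; the expression syntax and field names (filter, map, sources, sinks, stan for NATS Streaming) are assumptions based on the talk and may differ from the actual Argo Dataflow CRD.

```yaml
apiVersion: dataflow.argoproj.io/v1alpha1
kind: Pipeline
metadata:
  name: example
spec:
  steps:
    - name: filter-cats
      filter: 'object(msg).type == "cat"'   # keep only cats
      sources:
        - kafka:
            topic: pets
      sinks:
        - stan:
            subject: cats                   # NATS Streaming subject
    - name: map-hello
      map: '"hello " + object(msg).name'    # prepend a greeting
      replicas: 2                           # run two pods for this step
      sources:
        - stan:
            subject: cats
      sinks:
        - kafka:
            topic: hello-cats
```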
C
This is a very basic data processing pipeline, but it shows two very basic data processing operations, which are obviously filter and map. There are other operations, and really these are just built-in ones for convenience; mainly, you can write your own data processor to process your data. And then there's the kind of graphical representation here, and this is a more complicated one.
C
So I'm reading a load of pets from some kind of data source and writing them out to a subject or a topic, then splitting that by filtering it into cats and dogs, then performing different kinds of processing based on whether it's a cat or a dog, and finally I output that to an output topic there.
C
So that's that, in summary. Let's have a little look at an example; I'm going to hope this is going to run. Here's a very basic hello pipeline, and in this one I'm reading from a cron schedule. So the source is actually a cron schedule and the output is a log sink, and the log one is kind of a convenience.
C
The final step is often to log it as well as doing something else with it, and the cron source is useful because it doesn't require you to use Kafka or all that, which is quite difficult. Then we just have a graphical representation of that, and you can see that the processor has processed, you know, 151 messages, 152 now, and there's a sample of what the message is there, so you can see what's recently been going on. You can see that's written to a log, and if I click on the log, then I can actually see in my logs information about the messages that are going through the system.
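The demo pipeline as described, a cron source feeding a log sink, might be sketched like this (again hedged; the cat step and the schedule format are assumptions for illustration):

```yaml
apiVersion: dataflow.argoproj.io/v1alpha1
kind: Pipeline
metadata:
  name: hello
spec:
  steps:
    - name: hello
      cat: {}                          # identity step: pass each message through
      sources:
        - cron:
            schedule: "*/3 * * * * *"  # emit a message every few seconds
      sinks:
        - log: {}                      # print each message to the step's log
```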
C
So this is a two-node pipeline. Look at this one: it reads from a Kafka topic, performs some processing by writing to a NATS Streaming subject, and then writes it back out to the topic here. Now, this pattern we use for encryption and decryption, so it allows us to read encrypted messages from a particular topic and write them out to another one there.
C
So that's very straightforward, but what if, you know, I have a very busy topic, or I need to scale up my topic? We can define a number of replicas for a particular step within the pipeline, and again we're looking at a very simple cat step here.
C
Cat just reads and writes the same message; it's basically an identity map operation. But with this one I can define, in the scale fields, a minimum and maximum number of replicas for that particular step, and a ratio. The ratio is used to define how many replicas we should run based on the number of pending messages: for each thousand pending messages I'm going to run a replica, and if there are fewer than a thousand, then I'm actually going to run zero. This allows me to scale my data processing pipelines down to zero.
C
This one's actually scaled to zero; typically they scale up at some point. I can't remember what the configuration is now, I may have changed this configuration, but it will scale up occasionally just to see how many messages there are on that topic, and then, if there aren't messages, it'll scale back down to zero. It basically allows you to have scale-to-zero data processing pipelines out of the box, with no special configuration required.
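A per-step scaling block along the lines described might look like this; the field names here are assumptions for illustration, so check the Argo Dataflow CRD for the real schema:

```yaml
spec:
  steps:
    - name: main
      cat: {}
      scale:
        minReplicas: 0      # permits scale-to-zero when the topic is idle
        maxReplicas: 4
        replicaRatio: 1000  # roughly one replica per 1000 pending messages
```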
C
Another thing you probably want to do, and this will seem... oh, here we go, I mentioned it would scale up, didn't I. What this allows is to write your processing in a particular language and actually embed the code in there. So this has a concept of a handler now. Let me show you the YAML, actually, rather than that.
This is an example for Go, and there are examples of other ones. It basically allows you to inline your code into the YAML here and perform some calculations, and it uses a runtime to execute those. A runtime is, effectively... every step is executed using a Docker image, and there's a Golang Docker image that you can run; there's also a Python and a Java one as well that you can use. And look, I'm going to show you mostly the YAMLs, because they tend to take a bit longer to start up.
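An inline-code step of the kind Alex describes might be sketched as below. The handler and runtime field names are assumptions based on the talk, not a verified schema; the idea is that the source is embedded in the YAML and executed by a language-specific runtime image.

```yaml
spec:
  steps:
    - name: greet
      code:
        runtime: go1-16    # executed inside the Golang runtime image
        source: |
          package main

          // Handler is called once per message and returns the
          // transformed message to be written to the sinks.
          func Handler(msg []byte) ([]byte, error) {
              return []byte("hello " + string(msg)), nil
          }
      sources:
        - kafka:
            topic: pets
      sinks:
        - log: {}
```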
C
Another one is a Git one. These are really aimed at developers who are working in pre-production environments. So this one here will check out the code from this repository on this particular branch, the main branch, then it will switch into this path and run the code that it finds in that path. That allows you to basically just check your code into GitHub; you don't have to build a Docker image, which is obviously quite expensive and time-consuming.
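The Git-based step might look roughly like this; the repository URL is hypothetical and the field names are assumptions for illustration:

```yaml
spec:
  steps:
    - name: from-git
      git:
        url: https://github.com/example/my-handlers  # hypothetical repo
        branch: main
        path: examples/greeter    # the step runs the code found here
        image: golang:1.16        # image used to build/run the checkout
      sources:
        - kafka:
            topic: pets
      sinks:
        - log: {}
```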
C
This also allows you to do run-to-completion, or terminating and non-terminating pipelines, I guess you could call them. So this example is a pipeline that runs to completion; I'll run this one.
A
Yes, yes, we can, actually. My head is exploding with questions right now. No, it's really nice. I know Argo for being able to embed the steps as container runs, so is that what these are? When you started showing the simple filter, is that also something that would be run in a container?
C
Yeah, the kind of design of this is: as a cloud-native citizen, the only thing that you have for doing processing is a pod. That's basically your function unit; you know, one function is one pod, so everything boils down to a pod, ultimately, including the built-in functions. They're really just wrappers around an existing image that runs those functions. This is a bit frustrating... let's get started.
C
No, that's quite right. So for the pod itself, or the container architecture: there are always three containers in a pod. There's an init container, which does some setup; there's a sidecar container, which is responsible for reading and writing from the sources and to the sinks, and that is the bit that understands about Kafka and about NATS and about other things; and then it uses either HTTP or a FIFO to communicate.
C
I'm going to switch to... I don't know, I think GitHub has, you know, maybe annoyed another nation state again recently by hosting software they don't like, so maybe it's under attack again. So this is the pipeline I wanted to show you. This is basically a pipeline that runs to completion, and this is just an image that exits zero.
A
Oh no, damn. We can't give him the feedback he deserves for putting up that presentation. So, I noticed it was really proof-of-concept, very early work, and I much appreciate that Alex shared this with us. I think it's still in the making, as an extension to the Argo Workflows, Argo Events, Argo CD toolkit.
A
Because we have a couple of PRs that I think we can finally close. So, the workflow/action version property: I actually checked the comments, and it seems that we have concluded on that.
A
So the final result of this is that we have specVersion defining the specification version of our workflow definition language, and no version on the workflow itself; the workflow version, like a dev version and so on, can easily be defined with the runtime or in metadata, the annotation data that one can always add to a workflow definition. But we don't have a specific version property, and that is the state, I believe, for now. So we have plus-ones, agreement from everybody.
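To illustrate the outcome: the language version lives in specVersion, while a workflow's own version, if wanted, can go into metadata. A hedged sketch, with a hypothetical metadata key:

```yaml
id: greeting
name: Greeting workflow
specVersion: "0.6"     # version of the workflow definition language
metadata:
  version: "1.2.0"     # hypothetical per-workflow version annotation
```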
B
Okay, the only thing I want to say here is that, for the SDK guys, this is a pain in the behind for us. Ricardo is missing, but now what we've established is that the spec version has to have a value. So it's not a default parameter, so with every release we're going to have to update everything.
B
Tests, JSONs and YAMLs, just FYI. Yeah, that's for the examples as well, right? Okay, yeah. That's for, like, Antonio and Charles and Ricardo, who's probably still on PTO, probably.
B
It's true; however, we probably have 20 examples in our examples specification, but each of the SDKs probably has 100 or even more JSONs and YAMLs for tests, right? And on those, we want to make sure that when we release, let's say the SDK goes to 0.7, we update all those as well. It's probably just a search-and-replace type of thing, but yeah, it's just in tests, for testing and edge cases: feature tests, validation testing, all kinds of things.
A
Okay, anyway, I put it up now, since it seemed to be resolved from the comments and the plus-ones. I put it up to merge; does anybody have, or want to raise, an objection to merging this?
A
Okay, thanks, it's approved. Sorry. Okay, the next one... I flipped the order here, sorry. Maybe we can talk about the event data filter first, if that's okay, because I think that has also come to a conclusion after all these comments. So I gave it an LGTM, from what I've seen of the changes that are in now. Last time, two weeks ago, we discussed that we wanted to have the unwrapPayload, or some such parameter...
A
That
says
whether
or
not
the
event
should
be
unwrapped
or
not,
and
that
the
default
should
be
that
it
is
unwrapped
so
that
we
are
so
somewhat
backwards
compatible
with
the
existing
specification.
But
now
it's
also
possible
to
specify
unweb
payload
to
false
and
then
have
access
to
the
entire
event,
including
cloud
event,
attributes
and
yeah
cloud
event,
attributes
basically.
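A hedged sketch of the behavior under discussion; "unwrapPayload" was the working name in this meeting, and the final spelling in the spec may differ:

```yaml
events:
  - name: PetAdopted
    type: org.pets.adopted
    source: petstore
    # true (default): states see only the CloudEvent's data/payload field,
    #   matching the existing behavior.
    # false: states see the whole JSON-encoded envelope, so expressions can
    #   reach id, source, subject, extensions, and so on.
    unwrapPayload: false
```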
A
Did we approve it? For myself, I think it has my approval. So what's the... or, if we need to talk about this more, please speak up.
A
Yeah, I mean, the default behavior right now: without any of these, all existing workflow definitions behave exactly as they should and as they did, right? The only change is in the naming of where we place the event payload, and there is toStateData instead of results, right?
D
...established that they can be transformed to JSON. We can accept any wire format that they describe for CloudEvents, and those can be, you know, translated to JSON.
A
And what the SDKs now have to do, in order to apply this jq expression on a cloud event that has not been unwrapped, is provide it as a JSON-encoded envelope. That's independent from whatever the payload is; the payload can still be binary, base64-encoded, which is a base64 string. And note, in case it says unwrap: it doesn't say decode payload, so it's not decoding the payload field with the... what is it, I think it's called dataContentType.
A
So
it's
not
using
the
it's
not
enforcing
this.
It
only
says
what
unweb
payload
means
it
provides
the
payload
field
of
a
cloud
events
envelope.
That
is
the
content
of
it,
or
is
it
payload
or
data?
I
think
it
should
be
data,
but
it's.
A
Payload, data: tomato, tomato. So when unwrapPayload is set to true, this is how it would behave, how it always has behaved. What Tihomir points to is that when you're using an SDK, let's say in a Go or .NET implementation using the Go or .NET SDK of CloudEvents, then I think you get a... sorry, a CloudEvent object, like a struct or a class instance, where you can access the payload and give it to the workflow.
A
Now, if you get the cloud event and the workflow definition says that you should not unwrap it, then you need to provide the attributes, like the timestamp, the topic/subject, the type field and so on: everything that is in the CloudEvents envelope.
A
You need to encode this as JSON so that you can apply such a jq expression on it, which is, for a CloudEvents SDK, not a problem, because there is a common format. It's in the standard, and I would think that a CloudEvents SDK needs to have encodings to transmit the cloud event via a JSON-encoded transport, for example the HTTP transport, which is one of the transports that all CloudEvents SDKs support.
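For reference, the CloudEvents spec defines a JSON event format, which is the common envelope encoding being referred to here. A jq expression applied to a non-unwrapped event would run against a document shaped like this (values are illustrative):

```yaml
{
  "specversion": "1.0",
  "id": "A234-1234-1234",
  "source": "/petstore",
  "type": "org.pets.adopted",
  "subject": "cats",
  "time": "2021-05-10T17:31:00Z",
  "datacontenttype": "application/json",
  "data": { "name": "felix", "type": "cat" }
}
```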
B
I... honestly, I could agree with this. I don't know why a DSL would need to tell the runtime how to treat events.
A
Yeah, I think it's... I mean, we've had it for a while. Well...
A
Yeah, same feeling, but I'm for including it, because otherwise, how would you access the subject of a cloud event when you subscribe to its type? And then you have attributes, maybe even attribute extensions, like the correlation ID that you need to do tracing, which you may want to pass on, maybe, to a function call. So, whichever it is, I see the need for it.
D
I'd also like to point out, I think, Charles, you might have had a misconception about something here. One thing I'd like to point out is that datacontenttype, which was mentioned earlier, is a thing in the CNCF CloudEvents spec that basically allows events to have payloads of other types. So cloud events can have YAML as their data; as far as I understand, they can have base64-encoded binary as their data, right? And so runtimes...
D
I mean, one thing I'd like to say is that I think there are use cases that keep getting written off here. The use case I brought up, that Manuel just mentioned, is, you know, having access to the ID and other fields on the envelope.
D
Every time I've brought it up, they've been written off as not being use cases, because, you know, I think people seem to presume that in all cases consumers of events own the source of the events as well, which I don't think is true. And I think that's the disconnect here: it seems like there are people involved in this conversation who write off any situation where... or are perhaps used to the case where they own both the source and the destination of events, which is not true for all users of this workflow language, I think.
B
I honestly, personally, don't think it's... I think, in the end, it will bring us more issues than it will help us out, as far as the DSL goes. Being more restrictive in this case, in my opinion, brings us value, in that we as a DSL don't have to manipulate the implementation on the runtime or enforce some extra checking.
B
But what does that have to do with the actual message? I mean, okay, I guess we can take this offline; I don't want to take up our time. But all right, fine, we can add it. For approval, I'll look over it again and ask the other guys who have commented on it to take another look and see; if not, we'll go ahead with it.
B
...format all the time. If we do this, maybe it's even simpler to just say, okay, JSON it is, so that way we don't have some confusing flag for people.
A
No, it's whether you get a pointer directly to the data field in your cloud event, or whether you get a pointer to the entire envelope, including the source, type, subject, timestamp and potential extension attributes of a cloud event.
A
Okay, then, just as a reminder, if we come to a formal vote, and in this case I still hope to avert it and to clarify any misunderstandings...
A
But
if
it,
if
it
actually
comes
to
a
vote,
then
I
think
what
we
have
accept
established
in
terms
of
governance
is
the
same
as
in
cloud
events,
so
participation
by
companies.
So
it
wouldn't
be
fair
if,
let's
say
one
company
had
10
regular
attendees
and
then
would
all
vote
in
favor
of
us
the
intentions
of
a
single
company.
So
what
we
do
instead
is
have
a
per
company
attendance.
I'm
not
that
strict
that
I
would
exclude
anybody
who
has
missed
a
meeting.
A
But
I
I'd
look
at
the
attendee
list
and
see
look
at
the
regulars
and
then
I
I'd
cast
the
formal
vote
among
those
that
regularly
are
attending
plus,
of
course,
people
involved
in
the
in
the
pi.
If
that
has
been
triggered
externally,
but
for
this
so
dm
can
we
can
we.
D
Sorry, one thing. I just noticed that it said, below the bullet point, that the default should be the entire event, but I think that was vetoed in the last meeting when this was approved. So the default is not the whole event; it's just the payload, like the current behavior. Just wanted to clear that up.
A
Oh yes, thank you. Sorry, thanks so much, yeah, that was a leftover from an earlier discussion state. So yes, thank you, thanks.
A
Okay, do we have Nason on the call? I guess not, but we can discuss it anyway. So, GraphQL support. Support we do have for function calls: REST and gRPC, "rest" being the OpenAPI one and "rpc" being gRPC, actually, and one more... could you please remind me what it was? So anyway, we get one more, and that is a GraphQL definition. Which adds... oh yeah, of course, "expression", that was the last one I forgot about: we can run just an expression.
A
Query interfaces are very convenient because you can actually design what the response should look like. For those who haven't played with GraphQL yet: you can select the fields of the resource that you are querying in the request, and this makes it also very convenient to use in a workflow language. Because in the query... so, this is an example GraphQL, and I think the example is a little bit lower, like this one. Okay, this one retrieves a single pet by its ID, and this is the selection set.
A
So
you
can
say
that
from
the
pet
that
you're
converting
with
the
id42,
you
only
want
the
id,
the
name
and
the
favorite
tweet,
with
its
favorite
tweet
id.
While
the
schema
defines
what
the
data
structure
is,
so
you
have
pets
and
they
kind
of
favorite
a
favorite
tweet
favorite
tweet
with
its
id.
I
think
this
is
easy
to
understand,
and
then
the
mutation
I
already
mentioned
is
where
you
can
modify
records
and
they
require
specific
input.
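A hedged reconstruction of the GraphQL example being reviewed (a pet fetched by ID with a selection set); the property names approximate the PR under discussion and may differ from the merged text:

```yaml
functions:
  - name: getOnePet
    operation: https://example.com/pets/graphql#query#pet
    type: graphql
states:
  - name: GetPet
    type: operation
    actions:
      - functionRef:
          refName: getOnePet
          arguments:
            id: 42
          # the selection set shapes the response to just these fields
          selectionSet: "{ id, name, favoriteTreat { id } }"
    end: true
```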
A
This looks pretty good, and it has already been in our repository long enough. Typically I wouldn't put up anything that was added on the day of the community call or the weekend before; it should be in for three days or longer, so people have time to look at it. So that's why I'm putting it up for a vote.
A
We don't have Ricardo on the call, so let me pick on you, Jorgen. What do you think: do you think GraphQL is a good addition, or do you know about any conflicts this would have?
D
It looks like a good change to me. I have not used GraphQL, this was my first introduction to it, so I don't know how I would use it with workflows personally, but it looks like it's well described and looks like it's a fit with the other types of functions that we support. Thanks; I didn't have any concerns.
A
Okay, anybody else? No? Then thank you, and of course we can always add to it, in case it's underspecified or there's text missing or people come up with improvements. But thanks for getting this approved, so we can merge it. That was also our PR triage; Alex hasn't come back, and we have 10 minutes until the top of the hour.
A
So we have a couple of new issues, and I wouldn't know which to start with, so I'll just go through them in the order in which they came in, by the numbers, if that's okay. But if anybody wants to urgently discuss their PR, please speak up. So, first one: improved timeouts. Could you quickly present it to everyone?
B
I mean, currently our timeout settings are pretty weak as far as things like the execution time of each action and stuff like that go. So this is just to add a bunch more different timeouts on different levels.
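The proposal, as summarized, is timeouts at multiple levels rather than a single setting. A hedged sketch of what that could look like; the property names approximate the PR and may not match what was finally merged:

```yaml
timeouts:
  workflowExecTimeout:
    duration: PT1H         # ISO 8601 duration for the whole workflow
  stateExecTimeout: PT10M  # per-state limit
  actionExecTimeout: PT30S # per-action execution limit
  branchExecTimeout: PT5M
  eventTimeout: PT2M       # how long to wait for a consumed event
```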
A
It already had a couple of discussion points. Yeah, nothing crazy? Okay, nothing controversial, yeah, okay, so that's cool. The next one is the point about translating... oh, we don't have him on the call, I'm afraid: Tao Jae from VMware, spelled "wmware", nice. He mentioned that he would like to have some support to translate our workflow description language into ASL and/or the Conductor DSL, and there has been a discussion going on; I mean, it has support also, as I've seen, from Evan.
A
I remember Evan being with Google, I'm not sure if he still is, and maybe people have an investment in Conductor; the same applies for Argo and any other workflow engine. So I'm not sure what to make of it.
A
But this is an interesting aspect. I think we had a discussion during our primer design discussion with Falco as well, whether it was possible to maybe translate it into different workflow languages, and of course Falco would have been a proponent of BPMN, which adds one more to the list. So far, we're only doing comparisons.
B
Yeah, yeah. I do think that this is important, because a lot of people see our specification as really a DSL that they can move around different runtimes, and yeah, it's not the first time we hear this, that people want this from us. You know, because a lot of the runtimes do have specific support for a specific DSL, and by using Serverless Workflow they would be able to move easily.
A
True. Okay, cool, then let's get to the next one, because we also have Kevin on the call and I wanted to get to his issues. Oh, this is continueAs... didn't we have this one already? Now, sorry, it's closed, yeah. Okay, that was...

I
That's good, that's good to know, I hope. Actually, I'm trying to remember: would we want to keep it open until the spec is modified with the clarifications?
A
Okay, then that's postponed. Kevin, since you're on the call, could you give a quick introduction to this issue?

I
Sure.
Yeah, while reading the specification, I spent some more time with it recently, and I had some open questions about what the expected behavior is for runtimes regarding how to handle passing arguments and how to validate inputs, specifically for these OpenAPI functions. I could imagine a few different ways of doing it, and I thought it would help the spec to specify that, because I would argue that it would improve portability, so that you could run with the same set of inputs between two different runtimes.

I
There are a few different ways of doing it. One, I think, is just at the top level of the arguments; another one, you could group them by where their location would be, like in the headers or body, et cetera. So that kind of describes what this issue is, and then we had some discussion later on about why it might be needed.
A
Okay, that's pretty cool. I wasn't aware, so: OpenAPI, is it ambivalent, or can it be overspecified, on where to...
I
...place a parameter? Can you put it... yeah, so OpenAPI is really nice in the sense that it tells you exactly where things will go. So a runtime, I mean, obviously it has access to the API spec, so it will know where these parameters go, basically how to match these parameters to their location, whether it be header, you know, path params, body. So my argument, or my proposal, I guess, was that you just have a list of arguments, and the runtime would associate those arguments to their proper location when making the API call.
I
I know that it's possible to do that now; I'm just asking that we put it in the spec, so that if another implementation says, well, we have arguments, and then we have headers, and we have, you know, the body, and then basically arguments inside of that, then it wouldn't be compatible with the runtime that treats them all as a flat list.
A
Yeah, I'm sorry, maybe I'm a bit tired, I'm still not sure. So the current spec uses "rest" as a type of the function, but it's really OpenAPI underneath? That's right, yeah. And so...
I
Here, the first one is the OpenAPI spec, and then below that is what you can do in the workflow spec. The way that I interpreted it was that, you know, we could have petId and name basically in that arguments list, and the runtime would be able to associate them with their proper locations.
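A hedged illustration of the proposal: a flat arguments list that the runtime maps onto the OpenAPI operation's parameter locations (path, query, header, body). The operation URL and argument names here are hypothetical:

```yaml
functions:
  - name: updatePet
    operation: https://example.com/petstore.yaml#updatePetById
    type: rest
states:
  - name: UpdatePet
    type: operation
    actions:
      - functionRef:
          refName: updatePet
          arguments:
            petId: 42      # placed in the URL path, per the OpenAPI document
            name: "felix"  # placed in the request body, per the OpenAPI document
    end: true
```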
A
Okay, Jeremy, could you help me understand: isn't that what we currently have, or is it just a specification problem, that we're missing text to describe that?
B
Yeah, it looks like it, yeah; we can describe it. Currently, the arguments is of type object, so you can do whatever you want inside of it, and you can support both these types and also complex types, as far as parameters go. Sometimes a parameter, I think, is just of type string from the parameters, but sometimes it could be a different type, like an object type that's also defined in the OpenAPI definition. So you can support both, in a way.
I
Right, yeah, and that they'd be a flat list, as opposed to... I mean, I think that's how most people are interpreting it, so I'm just asking for additional wording in the spec, maybe. But you might have, like, arguments, and inside of that would be headers and then, you know, query params. So this is more just to give some clarification to the runtime.
A
That sounds really good. Would you be willing to give it a try and start a PR on this, maybe, and then we can... Sure? Okay, wow, perfect. That would be very nice.
A
So, tell me... I'm afraid we really won't get to the continueAs one. Oh, that's...
B
And Charles is now a maintainer of the Serverless Workflow specification, so congratulations, dude. And we're looking for more maintainers, so if anybody wants to be involved, has been involved, please step in. There is no extra work; you just get a bunch of emails that you probably don't care about, but yeah, it does have some sort of status thing. So hopefully we can add some more of you, you know, contributors and everybody, in the near future. Hope so.
A
Yes, perfect, and congratulations as well from me, but I mean, it's a pretty obvious thing. So, anything else, any other business? No? Then please also let me remind you that you can always put stuff on the agenda. I typically sort it out the day before the meeting, so if you have anything that you want to present or discuss, either ping me or just put it in this document.
B
And sorry, Manuel; guys, for the new guys: I am annoying, I'm the guy that always jumps in and forgets things. Sorry, Manuel is always so prepared. But we also got approved for the crowdfunding through CNCF. It's not a big deal and we don't really need to talk about it, but we did get approved; a lot of different projects, almost all from CNCF that apply, can be approved, so it's nothing special for us as far as the project goes, but we did get approved for crowdfunding.
B
So hopefully that gets some traction. We put it on the website and also in our main README. It's simply for supporting us as we're growing, and this project has been around, but I really think it will help us with supporting the community and giving everything back to you guys that are helping with the project. So that's it; take a look if you want to read it. It's at the bottom of the website and also at the bottom of the main README of the specification. Or maybe, if you want to convince your...
A
Okay, thanks. So let me do a final roll-call check; I think I have everybody on the attendee list. Thanks, everybody, talk to you offline, and see you next week.