From YouTube: CNCF Serverless Working Group 2020-09-28
A
Hello, hello there. So, is this your first call? Do you want to be associated with some company?
D
Yes, yeah — so I was going through it with a few of my teammates, Jorgen and Olivier. So yeah. I would like... okay, so.
A
Okay, and it's almost five minutes in, so let's start community question time. Does anybody have a question?
A
No? Then let's get to our first agenda point: we have a new logo. The CNCF design team proposed several options to us, and we had a color discussion last meeting, two weeks ago. We finally converged on the Slack channel on this logo, which is our new project logo. And, if I understand correctly, we're waiting for the artwork team to come up with the different formats to upload it — do they upload it to the landscape, or would they deliver it to our team?
C
I think we will get a link where we will get this logo with text, without text, black and white, white only — I mean, all kinds of different options, so then we can pick and choose. But where the link is... I was just told "artwork repo". So where that is, I don't know. Yeah, sorry, I'm new to this as well.
A
Okay — and we have a few spec updates. Tihomir, do you want to say something about the updates?
C
On the subflow states — yeah, definitely. In the last two weeks we've had a couple of updates. The biggest one is the one I hope we will discuss today, and I have a little presentation for it as well, so we can all kind of look into it. But these updates are mentioned there.
C
We updated the subflow states specification document because, via the community, it wasn't very clear how function and event definitions get propagated to subflow states. Again, for people who are new: a subflow state is a state that allows you to have reusable workflows that can be used in several other workflows.
C
So there was a question: does the subflow inherit function and event definitions? There was an ongoing discussion in the PRs, and at the end we decided that no, each subflow state has to define its own functions and events — the services that it wants to invoke, and the events that need to be either consumed or produced during its own execution. That was mainly the decision, because as a specification base...
C
We
do
not
wish
to
allow
run
times,
not
that
we
don't
wish
to,
but
it
is
better
for
runtimes
for
us
to
be
very
clear
and
specific
what
we
want
and
also,
at
the
same
time,
each
workflow
regardless,
if
it
is
a
subflow
or
a
parent
flow,
should
be
able
to
be
validated
on
its
own
rather
than
depending
on
another
workflow's
definition.
So
we
decided
on
that.
The
second
one
is,
it
kind
of
goes
in
line
with
this,
because
we
now
force
that
each
workflow
defines
its
own
functions
in
the
definition.
C
There are definitely cases where we want to reuse those, and so we now allow functions and events not only to be defined inline, but also to reference an existing JSON or YAML file which includes them.
C
So basically, you can define a JSON or YAML file which includes your function definitions or your event definitions, and you can reference them in your workflow, and that gets embedded — so multiple workflows can reuse them rather than having to inline them in every single workflow definition that you have. So that was an update.
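The reuse mechanism described here might be sketched roughly like this. This is a hedged illustration only: the file names, the function fields, and the exact reference syntax are assumptions, not taken from the actual PR under discussion.

```yaml
# common/functions.yaml -- shared function definitions (hypothetical file)
functions:
  - name: getCurrentTime
    resource: https://example.com/api/time
    type: REST

# my-workflow.yaml -- references the shared file instead of inlining the array
id: demoWorkflow
name: Demo Workflow
version: "1.0"
functions: file://common/functions.yaml   # assumed reference syntax
states:
  - name: GetTime
    type: operation
    actions:
      - functionRef:
          refName: getCurrentTime
    end: true
```

The point being made in the meeting is only that the referenced file gets embedded, so several workflow definitions can point at the same shared functions file.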
The third one is the one that we're actually going to talk about today.
C
It's just in PR form currently, but it's something that we all really need to decide on. I'm just trying to make a case for it today and see what everybody thinks, and we'll talk it through in detail — so I don't want to waste any time now. As far as issues go, if you guys have time, please look at the two issues that are linked here. They have to do with retries, and one of our community members...
C
There
make
some
really
good
points
on
what
we
can
do
to
improve,
especially
our
retry
definitions
in
the
in
in
the
current
work
was
specification
so
having
more
people
look
at
it,
and
chime
in,
I
think,
would
really
be
helpful,
especially
because,
right
now
I
am
rewriting
or
trying
to
rewrite
the
error
handling
and
the
timeout
and
the
retry
sections
of
the
of
the
specifications,
and
this
has
to
do
with
that.
So
any
input
would
be
much
appreciated.
A
Okay, and yeah, I think that's it for the agenda today — I didn't add any other topics. So, Tihomir, do you want to do the deep dive right away, or should we first conclude and ask for any other questions?
B
Then jump into the deep dive, yeah.
A
Definitely — whichever way you think is best. Okay, then let me ask if there are more questions.
C
Yeah — let me see how I can share my screen. Just while I find it... oh, I think you have to stop sharing, Manuel. Just to set the context: last meeting we said that it would be beneficial for everybody if we started taking our time during these meetings and discussing certain parts of the specification. And honestly, I'd love for this to be an open discussion, not a monologue — and this one, I think, will be a little bit spicy.
C
So, last meeting... can you guys see my screen? Yep? Okay.
C
So last meeting we said that the first topic of discussion in our deep dives, or whatever we want to call these sessions, is function definitions. I just wanted to start off by saying what function definitions are in Serverless Workflow: they're used to describe what services need to be invoked and how to invoke them. They're typically external services.
C
They need to be invoked during workflow execution, as part of the orchestration of services and everything else that you're doing and defining with our Serverless Workflow markup — again, in order to solve business problems. Everything that we're defining has to solve a particular part of the business problems within your organization. Another thing about function definitions is that they should really provide the runtimes with all the information needed to invoke this particular external service.
C
They're part of actually invoking a service or a function, so we'll get to that as well. As far as our specification is concerned, since we're not doing an in-house project or a proprietary type of markup, we have to be aware of portability. We have to understand that, for whatever markup we define — or say users have to use if they choose to use the Serverless Workflow specification — portability should be a very... oh, sorry about...
C
...my dog. Portability should be a very important part of what we're doing. So, in order to see where we're proposing to take these function definitions, let's take a look at how this currently looks in our specification. I did a little example here.
C
This
is
a
whole
workflow
definition
in
yaml
and
basically,
if
you
look
on
top
after
id
name,
inversion,
you'll
see
functions
which
defines
the
function
definition
array
in
serverless
workload
instead
of
inlining
function,
definitions
instead
inside
of
states
or
steps
or
or
those
parts
they're
really
concerned
with
execution
or
logic.
C
We
actually
define
them
up
front.
So
we
have
our
functions
array.
Each
one
has
a
name
parameter,
which
is
a
unique
identifier
of
this
particular
function.
Definition.
This
is
workflow
unique
identifier,
not
unique
identifier
within
the
service
that
we're
trying
to
invoke.
This
is
just
a
domain
specific
to
the
workflow
markup
itself.
C
The second parameter is called "resource", and it defines the endpoint location of this particular service that is exposed publicly. Then we have a proprietary, string-based parameter called "type", which we thought, when we did this, would allow runtime implementations to give further information about the type of service. We kind of left it open-ended — currently it's a string — so users can give some more information that is, again, domain-specific to them, about this particular service.
C
So here you see two functions defined: one is the getCurrentTime and one is the readWikipedia function definition. Within the states, then, different states have actions — for example, the operation state, the callback state, and the event state can define actions — and within actions we can reference those functions. Referencing a function within an action means that at this point...
C
...the actual service should be executed during workflow execution. So we have a "functionRef", and the second parameter down here on line 19 is "refName": at this point we reference our function definition and say we want to execute the getCurrentTime function. The same thing starts on line 20 — you can see it's the same pattern, but we also allow "parameters", which are currently just JSON objects, to be passed as the payload for the service that needs to be executed.
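A sketch of the current format being described — the functions array with name, resource, and type, referenced later from an action via functionRef. The endpoints, parameter names, and query value are placeholder assumptions; this is not the slide's actual example.

```yaml
id: wikipediaDemo
name: Wikipedia Demo Workflow
version: "1.0"
functions:
  - name: getCurrentTime
    resource: https://example.com/api/time        # public endpoint location
    type: REST                                    # free-form, domain-specific string
  - name: readWikipedia
    resource: https://en.wikipedia.org/w/api.php
    type: REST
states:
  - name: GetTodaysArticles
    type: operation
    actions:
      - functionRef:
          refName: getCurrentTime                 # reference by workflow-unique name
      - functionRef:
          refName: readWikipedia
          parameters:                             # free-form JSON payload for the call
            query: history
    end: true
```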
E
I have a question. You've defined some of these parameters that, I understand, are getting passed through — for example, in the Wikipedia example. But why isn't there a definition of what parameters are available on the readWikipedia function, in this particular case?
C
Like other workflow markups out there, you don't have that. As a workflow developer, currently you not only have to know what you want to write as far as your orchestration — to solve your orchestration business problem — but you also, in a way, have to be an admin, and understand the API and all the operations of all the services that you want to invoke, which definitely makes it hard for modelers. All right.
C
So part of where I'm going with this is a step-by-step approach, to get to where I want to end up. All right, no problem. This is just, again, a little reiteration of how we currently do things: we have the name, resource, and type parameters. Function definitions, similar to states, also have a metadata definition. This is our free-form extension object, which modelers can use to add non-executable...
C
...parameters and information to their workflow model. So yeah, metadata is also available for function definitions. As far as functionRef goes, again, we have a "refName", which references the unique "name" parameter of the function definition, and, as we've seen in the example, "parameters" — a free-form JSON object where you can add the data that needs to be passed to the particular service we want to invoke during workflow execution.
C
Now, in order for us to see where we are and how to improve, I think it's a good idea to compare with other ones. I picked Google Cloud Workflows — not to pick on Google or anything like that; it's just kind of new, and I wanted to see what they're doing. This is from their documentation.
C
Instead of states they have steps, and — similar to AWS, which we'll see — they define their service execution inside their steps, where we do it a little differently: we define functions up front and reference them, and you can also reference JSON or YAML files for reusability, which we talked about earlier. Basically, on top is the definition: they have a "call" parameter, which can be an enumeration of the different types of HTTP calls you can make to services.
C
You can have arguments — URL, method — you can set headers, you can set the body, and so on; authentication information is right there as well. Then you have things like timeouts, and "result": how the data results of the service are placed within the workflow data as the state execution continues. Underneath is the same example that I showed on the earlier slide, using the Serverless Workflow specification. This is kind of what it looks like with their YAML.
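For reference, a Google Cloud Workflows step of the kind being described — a named step with a call type, args, headers, and a result variable — looks roughly like this. The URL and names are invented for illustration; the shape follows their public documentation.

```yaml
- getCurrentTime:
    call: http.get                       # enumerated HTTP call type
    args:
      url: https://example.com/api/time  # placeholder endpoint
      headers:
        Accept: application/json
    result: currentTime                  # where the response lands in workflow data
```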
C
They
their
workflows
are
only
ammo
based.
This
is
what
it
looks
like,
basically,
where
we
show
the
json
another.
We
also
show
the
ammo.
It
doesn't
matter,
so
that's
kind
of
what
it
would
look.
As
you
can
see.
That's
another
approach
to
to
that's
similar
to
ours
at
this
point,
another
of
course,
a
very
popular
workflow
language
out.
There
is
aws
and
I
don't
have
a
full
example.
C
I've
been
rushing
to
do
this,
but
this
is
how
this
is
the
very
similar
thing
they
define
a
resource
which
is,
in
this
case
an
arn,
but
in
a
way
that's
a
uri
at
the
end,
you
can
look
at
that
and
they
also
have
json
object,
type
called
parameters
where
you
can
basically
stick
anything
in
there.
You
wish,
in
order
to
provide
all
the
data,
all
the
information
for
this
particular
service
to
execute.
C
AWS also has the services that they have exposed in their system, and they allow you to call those. So it's a very closed-box type of approach, whereas we, as a specification, kind of have to look at a much broader picture at this time. No matter what we do, or what these different approaches are trying to do, they're never going to fit the requirements of everybody — and we can't do that either.
C
However,
they're
always
going
to
be
changes
and
updates
and
improvements
needed,
but
they're
all
based
on
again
the
proprietary
or
the
in-house
definition
of
how
you
invoke
a
function
now.
The
second
notes
I
put
on
this
slide
is
specific
to
our
function.
Definitions,
I
mean
what
I
mean
by
that
we're
in
the
same
boat,
so
we
we're
also
trying
to
evoke
services
during
or
or
define
how
services
are
evoked
during
workflow
execution,
but
we
are
a
specification,
so
we
have
to
look
at
this
differently
as
a
sophistication
we
have
to
distinguish
ourselves.
C
We
can
again
do
what
we're
doing
and
keep
what
we're
doing
in
refining
it
and
updating
as
new
customers
or
new
consumers
of
the
service
workload.
Marker
will
come
in
with
the
requirements,
but
again
it
will
be
some
sort
of
custom
based
definition
right
and
my
idea
or
or
the
idea
that
that
I
think,
as
going
forward
should
probably
look
at,
is
to
rely
on
other
specifications
same
thing.
We
our
specifications
said
that
we
rely
for
cloud
event.
C
Specification
for
cloud
format
well
functions
in
a
way
or
executing
a
service
is
a
very
similar
thing.
We
should
really
rely
on
existing
specifications.
Just
do
things
100
times
better
than
anything
that
we
can
specifically
create
and
do
for
function
or
service
definitions
and
at
the
same
time
we
need
to
focus
on
portability
and
the
more
information
such
as
authentication,
username,
passwords,
headers.
C
Things
like
that
that
we
stick
in
our
markup
itself
are
going
to
limit
our
portability
in
the
future
across
containers
and
cloud
platforms,
or
even,
if
we're
just
doing
this
normal
local
host
type
of
project.
You
know
that
that
our
specification
is
also
there
for
so
what
does
this
really
mean
going
long
term
for
function,
definitions?
We?
C
I think we should start relying on the OpenAPI specification. Why? Well, for many reasons, but let's first look at where that takes us. Services that workflows need to invoke during their execution would have to provide, or have available, an OpenAPI description. OpenAPI is a specification that is huge and widely used — you guys can look at the docs and read everything — and a description is basically a JSON or YAML document.
C
OpenAPI covers almost all use cases for invoking RESTful HTTP services, including authentication, callbacks for webhooks, etc. So what it can provide — and already does — is already there, and we don't have to duplicate it. We don't have to create a subset of it and keep improving it. It's there, and it's widely used.
C
It is very good for runtime implementations of our specification, because runtimes get all the information they need. Tooling already exists in multiple different languages that can read an OpenAPI definition and know exactly how to actually make the call. Remember also that a single call to a REST service can actually mean multiple calls — it can mean getting a JWT token, doing basic authentication, or even a lot more than that. OpenAPI can already describe all of this for us.
C
Another thing is tooling. This is where I come to the question that was asked before. With referenced OpenAPI definitions, if there is ever tooling for Serverless Workflow — which I hope there will be, beyond just the Visual Studio Code plugin that we provide — visual tooling, for example, would be able to read the referenced OpenAPI definition and help users with the actual services and operations provided by the service they need to execute.
A
For Amazon Web Services, you've given the example there using ARN resources — it's a URI format to describe their endpoints — but there you don't even get to choose the transport method: whether that is HTTP, or whether they're using Java RMI internally, you don't get to know. For the Google example, it's a very generic HTTP GET. So first, if we stick with HTTP with OpenAPI...
A
...do you happen to know if we might run into compatibility issues? Because if you want to call something that you could easily call with an HTTP request — you can specify the authentication bearer and whatever you need in the request — but you don't have an API schema for it, you'd have to build it yourself, right?
C
Yeah — I actually thought about that a lot, and the way I see it: OpenAPI — or Swagger, as it's also called — already has a lot of tooling, and it makes it very simple to build a definition if one doesn't exist. That's number one. Number two: I made sure before I did this — and you guys can really do your research yourselves, and I would love to get everybody's feedback — everybody is doing OpenAPI.
C
Now, if you look at AWS itself, they allow you to upload Swagger — OpenAPI — definitions, and they also allow you to build OpenAPI definitions from their existing services. Same thing with Google Cloud; same thing with OpenShift, for example — the stuff that [inaudible] are working on.
C
I just talked to Scott Nichols, for example, about Knative — would that fit within this? — and he said yes, you can also define Knative services using an OpenAPI definition. So I understand that there might be use cases where users say, "Hey, I don't want to do this," or "I cannot do this," but that is the trade-off we're going to have to deal with as a specification.
C
I also described in the PR that we still have the metadata section. So for users who simply do not want to use OpenAPI — as I will show in the next slide with the proposed change to the function definition — they can still use metadata to describe how to invoke their service, with the note that for those types of workflow definitions we cannot, as a specification, ensure that they're portable across multiple containers or cloud environments. But it's still possible to do, using the metadata. Okay.
C
Yeah — you'll see the "type" parameter is completely removed; you'll see in the next slide. And then, if you want to...
C
Yes, definitely. But we still have to understand that we cannot please everybody, no matter what we do. However, the amount of benefit this has for the bigger public has to outweigh that, if we're going to do this. So that's kind of why the discussion.
C
So I think this is the last slide — I hope... no, there is one more after this. But I did an example. It's the same example as we saw before, but with the proposed function definition. We still need a unique name — again, the name is domain-specific to the workflow definition itself, for referencing the function in actions — but you see that, rather than having resource, type, and everything, you have a single parameter called "operation".
C
The operation parameter is a string with two parts, divided by the little hashtag. The first part is a URI — so it doesn't have to be HTTP; it can be a classpath, a file path, whatever — as long as it is a URI to the actual JSON or YAML file where your OpenAPI service definition is present.
C
The part after the hashtag is the operationId. If you look at the OpenAPI specification, there is a widely used parameter called operationId — again a domain-specific parameter, which ties in really well with our domain-specific workflow language — and it is a unique mapping in the OpenAPI definition to a particular endpoint or operation that your service provides.
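Putting the two parts together, the proposed function definition might look like the sketch below. The URIs and operationIds are placeholder assumptions; the PR linked later in the meeting has the actual proposal.

```yaml
functions:
  - name: getCurrentTime
    # URI to the OpenAPI document, then '#', then the operationId inside it
    operation: https://example.com/apis/time.yaml#getCurrentTime
  - name: readWikipedia
    # the URI part need not be http -- a file or classpath URI works too
    operation: file://apis/wikipedia.yaml#readWikipedia
```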
A
Actually, luckily, the hashtag is the URI delimiter to specify a fragment, so that...
C
There are several ways we could identify uniqueness in an OpenAPI definition — how do we uniquely identify a certain operation of the service that it provides? You cannot use the path name itself; it's not unique, because you can use variables. You could maybe use name and path, but one of the standards that's really used across the board right now is to use the operationId, which is a string that has to be unique within the OpenAPI definition.
C
It maps one-to-one to a certain operation of the service. And — I'm just letting you know that we can change this format — but take a look at, for example, Apache Camel: recently they also added OpenAPI support, and they use the same type of string. I didn't steal it from them, but looking at their work, this approach is something that others are looking at too.
C
That's why I think it is useful — but if we find some better approach or idea, let's use that. We just need some sort of way to say: here is the YAML or JSON file which holds the OpenAPI definition, so the runtime can read it, and the tooling can read it and see all the different operations that the service provides; and then we need a unique identifier which represents the single operation of that service that we actually want to invoke. So that's kind of it.
C
That's a good question. However — take, for example, "day of the week": if you look at it, the OpenAPI specification itself does not know about, or get involved in, the actual execution of the workflow itself. What happens after the execution of the first function — in this case getCurrentTime — is that the results of the function are merged with the state data; in this case, the "get today's Wikipedia articles" operation state. And then the...
C
...second function gets invoked, right? So the workflow has to pass this data — the results of the first function invocation — as the body, or the parameter, of the second function invocation. We still have to define that. However, what OpenAPI makes a lot easier now is that, up front...
C
...at "compile" time, we can say: hey, as parameters you pass a "query" parameter, but the OpenAPI definition says it should really be called something else; or, you're passing in one parameter, but the OpenAPI definition of the service endpoint requires two parameters, for example. I mean, what...
E
A query parameter — like, it could be a POST body, it could be whatever, depending on what the OpenAPI spec says it should be. Yeah.
C
Yeah — that's a one-to-one mapping that OpenAPI even helps us with, to know exactly the structure of the parameter we have to pass in. OpenAPI even has a schema definition for the object type, so workflow developers will know exactly what the structure of the parameters needs to be in order to invoke the service.
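The kind of OpenAPI description being referred to — an operationId plus a parameter schema that runtimes and tooling could validate against up front — might look like this minimal invented sketch (it is not the slide's actual definition; the path and names are assumptions):

```yaml
openapi: "3.0.0"
info:
  title: Wikipedia reader (hypothetical)
  version: "1.0"
paths:
  /search:
    get:
      operationId: readWikipedia   # the unique id referenced after the '#'
      parameters:
        - name: query              # name and type are checkable before execution
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Matching articles
```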
E
All right — so do you agree that the "query" on line 22 is not required, or do you think it is required? I think you could just bump "action" and "search" up one level, yeah.
G
Hey — hi, Tiho, can you hear me? Yes — hi guys, I'm sorry I'm late. Yeah, regarding these parameters: this fits very well with the issue that I opened about giving meaning to the parameters — whether they are path parameters, query parameters, header parameters, or body parameters. We don't know — and actually, we don't need to know now, because of the OpenAPI thing. And yeah, I guess "query"...
G
...we won't need that — maybe just "parameters", or, I don't know, "name" and "data", and then we fetch the data from the variable name or from the body of the state, or anything like that, because we have this context information coming from the OpenAPI. And one other question I have regarding this — the way you have it there in "operation": for implementations, let's say they might have the OpenAPI JSON file within their own context. Like, if I'm writing a Java implementation...
G
...I can have that YAML or JSON file within my classpath; or, if I'm writing a Go application, I can have it packaged in my binary as well; or I can have it as an external reference, like a file or anything like that. So that is a valid URI name, right?
C
Certainly. ...No, no problem. So, just the last slide ahead — I'm sorry it takes so long. Here is the link to the PR; I think it's currently the only open PR right now. So the changes are: "name" is, again, still the unique identifier for the function...
C
...definition. "Operation" is now the new parameter, which, as we said, has two parts divided by the hashtag — the first part being a URI, as Ricardo said, and the second part being the operationId. In the image below, I created a little OpenAPI definition using the Swagger editor — it's not completely correct, but it shows just the most important part.
C
It shows how the operationId for /datetime maps into our definition of this string. And you can see that in OpenAPI you can define the response codes, what kind of return message it gets, the parameters, and things like that. So yeah — it does what the specification is intended to do, and it does it very well.
C
So that's all I had. If you guys have any questions or concerns, or want to know more about function definitions or anything, please speak up now.
B
Okay — in this case it's SNS publish. If we instead had an example where the state is making a call to a Lambda function, how would that work? What would that look like with this?
C
It would look the same. The operation string would basically be: the first part is, again, a URI to your OpenAPI JSON or YAML definition.
C
The second part would be your operationId — for example, "publish" — and you would need to either have one available or create yourself an OpenAPI definition, JSON or YAML, with a path (which can really be this path right here) and an operationId of "publish", so the runtime can map the operationId to the one defined in your OpenAPI specification.
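So, for the SNS question, the answer might be sketched like this. The file location, function name, and path are assumptions; only the operationId "publish" comes from the discussion.

```yaml
# function definition in the workflow
functions:
  - name: publishToTopic
    operation: file://apis/sns.yaml#publish

# matching fragment inside the referenced apis/sns.yaml (hypothetical)
# paths:
#   /publish:
#     post:
#       operationId: publish
```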
C
...which points to their Lambda function, yeah. But from what I've seen — and again, this is why I do feel fairly confident about this, and where I want your input, because you guys might be experts in certain domains far beyond what I could ever be — from what I've read, even AWS can create them for you, "them" being the OpenAPI definitions for the exposed services.
B
...struggling to figure out how it works with something like my own custom Lambda function that I've uploaded and want to invoke. I guess what it sounds like is...
A
...you build it yourself, and then you just refer to the ARN as the function to be executed. And in this scenario, I'm not even sure how the Lambda would produce the result. But that's the only way to make it RESTful — and once you have that, for your API Gateway on Amazon, you could come up with a Swagger definition.
A
This
example
uses
sns,
publish
and
it's
unidirectional,
it's
really
more
of
a
cloud
event.
It's
cloud
events,
of
course
not
exactly
supported
in
amazon,
but
it
is
the
transport
here
is
a
message
broker,
so
you
have
to
publish
to
a
topic
in
order
to
invoke
the
function.
I
think
the
the
lambda
here
is
bound
to
that
topic
ain
and
there
you
only
have
a
message
structure.
That's
one
way,
so
I
don't
know.
A
If
actually
I
assume
that
open
api
cannot
define
such
apis
because
it's
really
done
for
for
restful
apis
and
those
are
meant
to
be
http
https
based
request
response,
client
server
protocol.
A
If
there
is
something
other
that's
why
I
mentioned.
Maybe
we
wanted
to
retain
the
type
for
function.
Invocations,
I'm
not
assuming
anybody
wants
to
do
old-fashioned
corba
calls
or
do
some
asn
1
encoding,
but
if
there
are
other
invocation
protocols
not
based
on
http,
then
retaining
the
type
field
would
be
at
least
an
option
to
extend
the
specification
and
write
some
proprietary
extension.
C
That's a good point — Manuel, you mentioned this: we have two ways of invoking functions within the Serverless Workflow specification. One is via the actual function definition, which in our case is meant more for synchronous HTTP calls. We also have the ability to invoke functions in actions via events.
C
That is also already there in the specification itself. In a lot of cases we have functions that are not exposed via HTTP — or not exposed at all — but can be triggered via events, in different containers, for example. In that case we can also describe a trigger event and a result event in actions, to invoke those types of services as well.
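The event-based alternative mentioned here — an action with a trigger event and a result event instead of a direct function call — might be sketched as follows. The event names and types are invented, and the exact field names are assumptions that may differ from the spec text.

```yaml
events:
  - name: resizeRequested
    type: com.example.image.resize       # hypothetical CloudEvents type
    source: imageService
  - name: resizeCompleted
    type: com.example.image.resized
    source: imageService
states:
  - name: ResizeImage
    type: operation
    actions:
      - eventRef:
          triggerEventRef: resizeRequested  # event produced to trigger the function
          resultEventRef: resizeCompleted   # event consumed as the result
    end: true
```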
C
Those are more for an async type of scenario, where you fire and then you wait — there you would most likely use event-based invocation of your services anyway, or specifically the callback state, if you wanted to. But yeah — the function definition is more for a synchronous type of invocation scenario, in my opinion. Again...
F
So, one thing on this from me, thinking about it from the user-experience point of view. In Step Functions — like this example we're looking at; I know it's SNS — the goal when they launched Step Functions was to orchestrate functions, right? And I think we can look at another example where they have the ability to trigger functions directly.
F
So with our specification, we need some sort of similar way to trigger functions, right? But by asking the function author to also define a YAML spec and have it available somewhere, uploaded somewhere — I think that's going to add a lot of development cost, which might not be what the function author was looking for.
C
There is definitely a little bit of upfront development with this. The way I looked at it — and you can tell me if I'm wrong — is: yes, there is, but you have to understand that the people actually developing the workflow model may not understand the underlying service definitions they want to invoke at all. I know what I want to do as a business analyst, or as a user solving a particular business problem through the orchestration of services...
C
So yes, you can say this requirement increases development time. But at the same time, having an OpenAPI definition for your services will allow you to port them tomorrow to a different cloud provider, or to a different container, for example. So I don't see it as a particularly bad thing — you're doing this work for yourself anyway, and given the OpenAPI tooling, it seriously takes minimal time to do.
C
There's also some discovery, and if you already have defined services, for example inside a runtime container such as Quarkus or Spring Boot, it is basically just one line — in most cases one line of application properties, or some sort of property you have to set — and it will generate the JSON and YAML for you. So OpenAPI is very unique in that a lot of tooling already exists for it.
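As a hedged illustration of the "one line of application properties" point: assuming a Quarkus service with the quarkus-smallrye-openapi extension on the classpath, the OpenAPI document for the REST endpoints is generated automatically, and a single optional property also writes it to disk at build time so it can be published next to the workflow definitions:

```yaml
# application.yaml (Quarkus also accepts the equivalent application.properties
# line). The directory path here is just an example.
quarkus:
  smallrye-openapi:
    store-schema-directory: target/generated-openapi
```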
C
That will help you not have to spend a lot of development effort. But what it buys us as a specification is really this: what would you rather have, a workflow definition that is not portable, or a workflow definition that is? I think that's what we need to decide, because using OpenAPI allows us as a specification to say we are indeed portable for service definitions and their invocations, and that's the trade-off I want to point out.
F
We should think about, I don't know, from a market perspective, what the 80/20 on this is. As a function developer, is it that 80% of the time I'm going to require these?
F
OpenAPI seems to me like an advanced use case, right? I don't know if that's true, but it just seems to me — is that something we want to force upon everybody, or can we keep something like the current method as a quick start that can solve most of the use cases for function invocations? Or do we think that for every use case or function invocation we need to ask the function or workflow developer to write an OpenAPI spec as well?
C
Yeah, I don't know. My take — again, sorry, if anybody wants to speak up, please do — but my take is: again, we are working on a specification; we're not working on an in-house project. The problem we're going to run into is that tomorrow, let's say somebody wants to use this specification, but Google — or anybody out there, really — adds a parameter that they specifically need and that we can only add in the next version. Then they cannot use our markup; they will go to Google. Same thing again.
C
Somebody is always going to have something that we don't, and again it's all proprietary, in-house definitions if it's not OpenAPI. The point of my talk here is: let's use specification-based stuff, because we can never replicate, for example, what OpenAPI does. This is really why we use CloudEvents — we don't really want to create another event format, so we said let's use the specification. Same thing with function definitions.
C
For service definitions, let's use specifications for that, because that really allows us to grow a lot more than, for example, markups that focus on in-house or proprietary definitions for this.
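A sketch of what a specification-based function definition looks like under this proposal. The function name, document path, and operationId below are hypothetical, and the exact `operation` syntax was still under discussion at the time:

```yaml
# Functions declared against an OpenAPI document rather than a raw URL.
# Format sketched here: <path-or-URL of OpenAPI document>#<operationId>.
functions:
  - name: lookupCustomer
    operation: file://api/customers.json#getCustomerById
```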
E
I mean, it might be worthwhile having an inline version of the definition too, and there might be another use of this type field: to say, okay, well, yes — ideally you should have an existing OpenAPI document that you can get it from, or maybe you can host a copy of that yourself if one's not available, but...
C
That is fair game, and you can adjust your runtime — which I think we're actually going to do a little bit of within Red Hat as well. And also — Ricardo, tell me if I'm wrong — we're also looking at metadata to inject further information that is specific to our runtime. But at the specification level itself, we cannot consider those types of workflow definitions portable, right? I mean, am I wrong? You guys tell me.
B
I really like the OpenAPI spec and first-class support for it. I guess what I'm worried about is removing the type field, which seems like it requires the use of an OpenAPI spec for all functions. I'm worried that there are use cases where the interaction isn't RESTful, and so an OpenAPI spec being required would be confusing to users of the definition.
C
All right, so I think that's two people — I think Manuel also said the same thing. So maybe one of the things, to move towards this implementation, is: let's put back the type parameter, which — again, for everybody — is a string where runtimes can put in their own identification or further description of the type of service they want to invoke, so it makes sense for their runtime.
G
Yeah, I agree with this as well, because let's say they wish to invoke a SOAP service, for instance — how would they do that? They might use the WSDL; they're very old-school. I guess it makes sense having the type parameter, and we can even assume a default type being an OpenAPI definition, for instance.
G
I don't know, maybe something like that. And then, if you want to change it, you just type your own type parameter and interpret it in a way that lets you do your thing — but again, it won't be portable. We are looking for portability as well, so you can port your workflow from one runtime to another, and if you use a proprietary type, you won't be able to do that. That could be reinforced in the specification as well.
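One way the suggestion above could look in a workflow definition — an OpenAPI default alongside a runtime-specific type. The type string, paths, and metadata keys are all hypothetical, since at this point the shape of the field was still being debated:

```yaml
functions:
  # Omitting "type" would mean OpenAPI by default, per the suggestion above.
  - name: lookupCustomer
    operation: file://api/customers.json#getCustomerById

  # A runtime-specific type: the syntax stays valid markup, but the semantics
  # are only understood by runtimes that support this type, so it is not
  # portable across runtimes.
  - name: legacyQuote
    type: wsdl                                # hypothetical custom type string
    operation: file://api/quotes.wsdl#GetQuote
    metadata:
      soapAction: urn:getQuote                # hypothetical runtime metadata
```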
A
If there was some write-up — I don't know, I'm thinking about following the Google example — we could have a type http that would still allow a generic adaptation, right?
A
So, Ricardo, would you rather switch to OpenAPI completely and use the tooling to generate specs during function development? I know you guys work a little in Java. It's probably like back in the WSDL times: you wouldn't write the full specification by hand; eventually you end up annotating your code and having the whole specification created through tooling. You could do a similar thing with OpenAPI.
A
But for somebody who just steps in — I agree here — it would be really a lot of effort to describe the API in an OpenAPI spec first, just in order to be able to use any workflow engine to make some calls. So something that generically calls an HTTP endpoint still makes sense to me. I know a lot of people who still implement services that way, because they want to type up something quick.
A
So if we had a type http — probably with some predefined metadata for the method, headers, body and whatever else, like in the Google example — would you support adding it to the specification, or is that a no-go because there is not enough validation in it?
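For concreteness, a hypothetical sketch of what such a type might look like, loosely following the style of Google Workflows HTTP calls. Nothing like this existed in the spec at the time; every field below is an assumption:

```yaml
# Hypothetical "http" function type: the metadata carries the raw request
# details instead of pointing at an OpenAPI operation.
functions:
  - name: quickPing
    type: http
    metadata:
      url: https://example.com/ping    # example endpoint
      method: POST
      headers:
        Content-Type: application/json
```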
G
This is hard to answer. From a runtime implementation perspective, I'd say it doesn't matter whether it comes from an OpenAPI document or directly from the spec — the code base would be the same.
G
But I have mixed feelings about this, because we could have that using only OpenAPI, and leave every standard for defining how to call an HTTP RESTful service to OpenAPI, and that's it — because this is the standard today. Everyone's doing it, and there's a lot of tooling, like Jimmy said, to work with Swagger. For instance, for any Java application you can generate that quite quickly, and for Go as well.
G
It is basically the same thing. But at the same time, I understand that we can have lots of people coming to the specification just to call a simple service, and they do not have an OpenAPI document defined. For that, I'd say they should use their own metadata and do their stuff in there, and the runtime is just supposed to support that — because otherwise it would be hard to maintain the specification in a way that stays aligned with OpenAPI.
G
Like the issue I opened to classify the parameters when calling an HTTP service, REST or not — there we are basically doing the same job that OpenAPI is doing. So, in my opinion, it should be a no-go. I'd say we should only have the type openapi, defined by default: if you do not define a type, it will be openapi, because it is widely used by everyone in the industry.
G
And we can rely on the standards of OpenAPI for REST — for calling that kind of service — and for plain HTTP I'd say the implementers should decide whether they would support that or not.
A
Still — you mentioned once or twice that authorization in OpenAPI would also give us a specification of how to authenticate with the service — I was thinking about some messy proprietary authentication methods I've come across, and how they would be supported, or whether they even could be, when generating an OpenAPI spec.
A
So I get a definition — but how versatile is it to have, let's say, random headers added to the request?
C
From that perspective, more than anything else — yeah, I fully understand that doing this might actually limit users in what they're doing, because, like you guys all said, it might be much faster and easier just to hardcode an HTTP URL in there. But at the same time, then you run into the same problems that we're trying to solve as a specification, right?
C
So that's one thing. It's like the event definitions: nothing prevents users from using a proprietary event format on the runtime either, but we recommend using the CloudEvents format. I think that distinguishes us from other workflow markups out there — they're popping up every week, it seems — and I think that's a positive. But yeah.
C
And if there is a better one, I'd love to learn about it. The reason we picked it is because, like Ricardo said, it is the standard out there. If there are other ones — for example for non-HTTP-based services — let's use those. One of the things I think is important is for us to be specification-based, and then let's just pick the best one, I think.
C
Yeah, it also solves, for example, the cases we discussed with Argo recently about webhooks, and it deals with callbacks as well. So there are a lot of use cases that we really don't have to specify on our own and work out in order to make users happy. In a way it creates more work for our users, which might limit adoption, but at the same time I think it aids the inclusion of runtimes and users — especially the people writing the runtimes.
E
I kind of wonder — I know we're going a bit over time now — about the example you gave before, the Wikipedia API. You've got to pass this — I think it's an opensearch action parameter — every time. The way we previously described it, you'd have to put that inside every function invocation definition. It may make sense to have some kind of a mapping on top of the OpenAPI to say —
A
Yeah, to have pre-customized function calls here — pre-filled headers, oh yeah.
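One possible shape for these "pre-customized" calls, sketched against the draft markup: the action that references the function pins the constant parameter once, so individual states don't have to repeat it. The function name, document path, argument names, and the `arguments` field itself are all illustrative assumptions here:

```yaml
functions:
  - name: wikipediaSearch
    operation: file://api/wikipedia.json#opensearch   # hypothetical document
states:
  - name: Search
    type: operation
    actions:
      - functionRef:
          refName: wikipediaSearch
          arguments:
            action: opensearch     # fixed parameter, supplied once here
            search: "${ .query }"  # taken from the workflow data
    end: true
```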
C
So, I mean, what did you guys think overall? This was kind of a first deep dive, and it's a pretty hard one because we introduced a big change; in the next ones the intent is just to talk about what we have. Two things, if you guys don't mind: either in chat this week, or over the next two weeks in our team chat, please write what you would like to see discussed next.
C
So let's pick a topic for our next deep dive, or anything else you want to talk about. Also, along the same lines: would any of you be willing to lead this type of discussion next time?
C
Of course I can, if you guys want me to, but if anybody would like to take a section of the specification, present it to everybody and have a discussion, that would be great too. So yeah.
G
Yeah, I really enjoyed it as well. I left some comments in the PR about this definition; if we open it, I believe we can talk in there, in the PR itself, because we are at the top of the hour — I don't know if we have any more time to discuss anything.
C
Okay, thank you guys so much for your comments and your time to join and listen to all this, and yeah, hope to see you guys again next time.