From YouTube: CNCF Serverless WG 2020-05-11
A
The Argo project has, I think, most recently joined the CNCF. And we are the Serverless Workflow group, a subgroup of the Serverless Working Group, which set out to do standardization work for serverless, plus a couple of other tasks that came out of the initial serverless white paper work this working group has done. So, are we waiting for anyone? Have you invited Alex, or should we start now?
C
Yes, just to kind of introduce myself; Alex will introduce himself in a second. I've been at Intuit for about a year as a principal engineer. I previously worked on Argo CD, but I've been working on Argo Workflows since December. I typically index more on the community aspects, and I also index more strongly on internal delivery at Intuit.
C
We also run it for a set of apps' CI, and we run it for doing some performance testing, as well as being a platform to execute on; our users also have their automation within our developer platforms as well. That's about half the work that we're doing; a lot of the other work is related to the open-source community, and we leverage and utilize the open-source community to help drive the direction of Argo Workflows. It was first originally developed, I think, about three years ago.
C
So,
typically
a
workload
with
you
know,
several
thousand
steps
is
pretty
common
in
terms
of
its
you
know,
running
on
you
know,
different
platforms
I
had
a
very
interesting
conversation
with
some
guys
from
Cray
back
in
November
about
what
they
were
doing.
We
know
we
used
places
like
CERN
and
so
forth
as
well
for
their
for
their
platforms.
E
We have looked at Argo; I have nothing specific, but we created a kind of markup comparison documentation, and I've actually run HelloWorld locally on my machine as well. (Your audio is a bit thick when you're speaking, by the way.) So yeah, there is knowledge at some level; I cannot say I'm an expert in Argo by any means, but yes, I have played with it.
C
There are basically three ways to, no, correction: four ways, five ways even. Let me quickly go through the two common ways that people interact, and three other ways you can interact with Argo Workflows. The common way, of course, is through the user interface, and this is the user interface as of version 2.8. It's a relatively straightforward user interface, and once you've logged in you can submit a workflow; workflows are defined as Kubernetes YAML.
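A minimal sketch of such a manifest, based on Argo's standard hello-world example (the image name and fields here come from the upstream docs, not from this talk):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-   # Argo appends a random suffix on submission
spec:
  entrypoint: whalesay         # the template to run first
  templates:
  - name: whalesay
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["hello world"]
```

Submitting this (through the UI or the CLI) runs the container as a pod, and the workflow completes when the pod does.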
C
So you can see this is standard YAML: we've given it a name here, and an entrypoint pointing into a specification. The specification defines the workflow, and most workflows can be reduced to a directed acyclic graph; the most basic workflow is a directed acyclic graph which executes a pod as each node. Basically, this one is made up of a sequence of templates that are connected together by dependencies.
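A sketch of what such a graph of dependent templates looks like in the manifest (the template and task names here are illustrative, not taken from the demo):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-example-
spec:
  entrypoint: main
  templates:
  - name: main
    dag:
      tasks:
      - name: a
        template: echo
      - name: b
        dependencies: [a]      # b runs only after a completes
        template: echo
      - name: c
        dependencies: [a, b]   # c waits for both a and b
        template: echo
  - name: echo
    container:
      image: alpine
      command: [echo, hello]
```

Each task becomes a pod, and the `dependencies` lists are what connect the templates into a directed acyclic graph.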
C
So in this example I've got a container here which runs an image called argosay, which prints the words "hello argo" to the console, and that's the whole example. I submit that, and then what we do is figure out what the graph is for your workflow. We determine the first steps to execute, execute those steps, and when they complete, then your workflow is complete. That shows what a workflow could be; I'm going to show you a more advanced example by uploading one.
C
Each of these steps is an execution of a pod, and you can see there are ones on the right-hand side of the graph that are skipped, because they didn't get run; the workflow keeps going until we flip heads, and finally we get to the end and this workflow is complete. So I can use the user interface to do that. Some of the features of the user interface are things like, you know, templates, if I want reusable templates.
C
Second
I
can
use
those
and
a
cron
workflow,
which
is
have
worked
for
execute
on
a
schedule,
so
once
every
minute
once
every
hour
once
a
day
and
then
finally,
the
big
feature
here
is
a
thing
called
a
workflow
archive.
So
when
and
to
keep
the
number
of
workflows
in
your
and
you're
running,
set
which
costs
money
you
can
archive
them
to
a
database
for
kind
of
doing
data
analyst
exam
to
us,
not
sure.
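As a sketch, a scheduled workflow is written as a separate resource wrapping an ordinary workflow spec (field names per the upstream CronWorkflow docs; the schedule shown is illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: hello-every-minute
spec:
  schedule: "* * * * *"        # standard cron syntax: once every minute
  workflowSpec:                # an ordinary workflow spec, run on each tick
    entrypoint: main
    templates:
    - name: main
      container:
        image: alpine
        command: [echo, hello]
```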
C
There's,
none
in
here
not
only
make
it
automatically
archives,
I,
guess
I,
probably
couldn't
figure
it
off
and
then
obviously,
we've
got
some
searching.
That's
that
we
also
have
a
lot
of
that
in
a
CLI
talk
called
Argo
and
you
can
kind
of
brew
install
to
get
it
running,
and
that
has
just
it's
very
standard
kind
of
kubernetes.
C
Let me show the YAML for a workflow there as well. Now, those of you familiar with cloud native will recognize the kind of standard format of a Kubernetes manifest: you've got some metadata, such as the name and namespace; a specification block, which defines what the workflow is; and then finally a status block, which explains the current status of the workflow, what it's been doing and how it's been executing. And actually these can be executed using the kubectl commands.
C
Four ways or so; this is the fourth way: programmatic access. Argo Workflows is intended to be very embeddable, and one of the use cases is being embedded in other platforms to actually run their workflows; so Kubeflow Pipelines embeds Argo Workflows. And so this includes programmatic APIs: a Python API, which has been built by the open-source community, and also a Java API that we've done for our internal users, who are using Java as well. There's a fifth way, I guess; the fifth way is probably just using the API itself.
C
So we have an OpenAPI spec you can use, and there is just a standard Swagger JSON document that explains how to use that; it's actually not too dissimilar to the Kubernetes API. Okay, I'm going to pause there, just in case people want to ask any questions about any of the things that I've gone over, and then I'm going to give you a bit of an example of how to find out more information about Argo Workflows, and what you want to do if you want to ask questions about it.
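For the API route, a hedged sketch of what a programmatic submission might look like. The `POST /api/v1/workflows/{namespace}` path is taken from the Argo server's Swagger document as I understand it, so verify it against your server version; the manifest contents are hypothetical.

```python
import json

def submit_workflow_request(base_url, namespace, manifest):
    """Build the URL and JSON body for submitting a workflow to the
    Argo server's REST API (endpoint path assumed from its Swagger
    document; check it against the version you run)."""
    url = f"{base_url}/api/v1/workflows/{namespace}"
    body = json.dumps({"workflow": manifest})
    return url, body

# Hypothetical manifest; any HTTP client could then POST `body` to `url`.
url, body = submit_workflow_request(
    "http://localhost:2746", "argo",
    {"metadata": {"generateName": "hello-"},
     "spec": {"entrypoint": "main"}})
```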
A
Yes, actually I do have plenty of questions in my head; I wonder which are, for this meeting and this purpose, the most pressing. So, you mentioned Kubeflow Pipelines, and I'd heard about that before. I'm also aware of the Kubeflow DSL; how does your Python DSL relate to the Kubeflow DSL? Is that more or less the same codebase, or so?
C
I
mean
they
I
mean
they
use
our
go
workflows
as
the
component
within
their
software
executes.
Actually,
they
have
their
own
Python
language
that
they
transform
into
yeah,
and
we
also,
we
also
have
a
separate,
independent
SDK
that
people
use
as
I
mentioned,
we're
very
popular
in
the
machine
learning
community
and
we
look
at
the
machine
could
learn
because
you
like
to
use
like
to
use
Python,
so
that
kind
of
naturally
evolved
as
a
result
and.
A
I
also
learned
that
Arco
has
a
larger
ecosystem.
You
have
are
go
events
that
uses
the
gateways
and
sensors
also
speaks
cloud
events,
and
it
seems
that
only
the
extra
tooling
argo
city,
especially
where
you
have
experiments,
were
loads
and
so
on.
It
eventually
does
it
boil
down
to
a
workflow
and
the
workflow
of
resources.
C
Oopsey
tailplane
commands
and
obviously
it's
more
complicated
than
that
when
you
actually
get
into
the
details
there
are
go.
Events
is,
is
quite
closely
related
to
our
go
workflows
because
I
go
workflows,
only
provides
the
execution
component
of
workflows
and
some
very
basic
cron
shaking
trees,
requite
racing.
D
From
Intuit,
because
also
on
she's,
our
product
manager-
yes
just
to
expand
on
what
Alex
was
saying
as
his
saying
like
workflows
and
events
are
particularly
closely
tied
and
then
like
I
go
CD
and
rollouts
are
also
kind
of
closely
tied.
You
know
obviously,
I'll
go
CD
is
for
like
deploying
and
managing
applications,
whereas
workflows
and
events,
it's
more
about
triggering
asynchronous
processing
or
you
know
batch
jobs
on
kubernetes.
However,
they
are
related
in
the
sense
that
a
lot
of
people
will
create
pipelines
which
basically
consists
the
advanced,
triggering
workflows.
D
That
kind
of
thing
with
Argos
CD,
for
example.
So
one
way
to
view
it
is
that
Argo
CD
handle
is
kind
of
the
management
of
your
typical
services
based
computing
model.
You
have
a
service,
it's
running,
listening
for
requests,
doing
things,
whereas
workflows
is
your
more
like
your
batch
style
of
processing
model
and
events
is
your
kind
of
event-driven
or
event
based
processing
model
and
in
many
complex
applications,
you'll
actually
use
all
three
forms
of
processing.
You'll
have
like
us.
D
Where Events and Workflows overlap with CD today is only in the sense that, let me see... you could use Workflows for CI as well, of course; like, with Events you could just have a trigger on GitHub events, you know, and do CI-type stuff. But the CD part mainly just deploys things into the cluster.
D
The
dashboard
is
tailored
much
more
to
you,
know,
applications
and
so
on
that
are
that
are
running
on
the
system
rather
than
just
you
know,
arbitrary
resources,
although
you
can't
see
any
resource
deployed
in
kubernetes
through
that,
so
a
lot
of
our
harbor
city
users.
Actually
you
know
like
an
Operations
team.
You
know
they
want
to
provide
solution
to
their
developers
or
users
to
deploy,
manage
applications
kubernetes
and
instead
of
giving
each
of
them,
like
you
know,
kubernetes,
like
namespace
access,
you
know.
Basically,
a
cube
can
take
pop.
B
So we are targeting two kinds of users. One is ML and data-processing users: they use Argo Workflows with Argo Events to do any kind of data processing, moving from tools like Airflow, or doing MLOps. Then the other side is application developers: they use Argo CD to sync their clusters with whatever is defined in Git. Now, Argo CD can work with any pipeline. It can work with Jenkins. It can work with Argo Workflows.
B
It
can
work
with
Tecton
any
pipeline,
so
Argos
CD
does
the
sink
part
and
showing
the
applications
integrated
with
logs
troubleshooting.
All
of
that,
but
to
drive
the
change
across
different
environments
using
Argo
city
either
you
can
do
auto
Singh
or
you
can
drive
it
through
a
pipeline
and
that
pipeline
can
be
either
Argo
workflows
or
it
can
be
Jenkins
or
anyone.
A
Thank you very much. So Argo CD is the much more integrated system, tailored to continuous delivery. And then, yes, okay: the notion of workflows formed from events, and this making up the event-driven processing model, is actually something that the Serverless Working Group has concluded towards the end of the white paper.
A
So
having
looked
at
the
server
list,
landscape
I
think
one
conclusion
is
that,
yes,
events
would
trigger
functions,
and
that
is
also
the
main
term
that
is
used
in
service
working
group
and
then
functions
could
emit
other
events
to
trigger
other
functions
and
this
representing
or
making
up
a
workflow.
So
it's
a
kind
of
decomposing
the
application
workflows
into
events
and
functions
and
while
cloud
events
did
a
very
good
job
at
standardizing,
the
event
format
wondered
oh
and
it
has
reached
very
good
adoption
among
public
cloud
providers.
A
...if you'd be familiar with that, or if you know FunctionStage by Huawei. And it has evolved a little bit further, with mostly the work done by Tihomir, and we have recently also applied to become a sandbox project, so as to host the work done within the Serverless Workflow subgroup and to get a little bit more presence among the CNCF projects. So, to become a sandbox project ourselves; and this is where we had a review meeting with SIG Delivery. And maybe, Tihomir...
E
First of all, hi everybody. I'm really happy about this meeting, and yeah, the presentation right now of Argo was really nice; I really liked it. My name is Tihomir. I just wanted to introduce myself too, if you guys don't mind, for a minute. I work at Red Hat; I've been around workflows for years. So it's not like... even though we have a small community...
E
We
everything
that
I've
been
doing
for
the
last
decade
or
more
is
open
source
and
the
reason
why
we
especially
got
involved
with
CN
CF,
especially
the
workflow
group,
and
have
invested
tons,
live,
and
you
can
say
the
people
there
on
board
here
have
invested
a
lot
of
time
into
it.
Just
like
you
guys
have
from
home
community
perspective
is
the
reason
that
we
really
need
specifications.
This
is
like
it
right
here:
I've
been
using
bpm
into
DM
n
cm
MN.
E
It's
very
important
for
any
open
source
project,
in
my
opinion,
again,
is
to
really
utilize
specifications,
and
this
is
kind
of
like
where,
where
where
especially
now
that
that
we're
riding
on
to
actually
complete
a
runtime
implementation
on
our
end
for
the
serviceworker
specification.
So
it
is
implementable,
if
you
guys
have
any
questions
regarding
that
I'd
be
more
than
happy,
but
that
is
kind
of
like
the
point
where
we
come
in
and
I
think
this
is
kind
of
like
the
integration
points.
I
think
Argo
is
amazing.
E
No,
yes,
I
just
want
to
go
through
a
couple
of
slides
and
all
our
time
is
limited.
I
don't
want
to
bore
anybody
here,
but
I
think
two
of
our
slides
here
kind
of
represent
the
specification
overall
number
one
is
this
slide
which
you
guys
are
on
there
too,
of
course,
but
it's
the
state
of
kind
of
like
the
work
for
a
world.
E
Now
we
were
going,
we
are
going
seriously
from
this
BPM
and,
to
kind
of
you
know,
driven
world
they're
kind
of
really,
because
the
runtime
implementation
put
the
entire
workflow
kind
of
work
under
a
vendor
lock
because
of
specific
tooling,
as
well
as
runtimes
and
B
payment
is
is
is
is,
is
a
huge,
usually
specification.
However,
it
has
its
issues
and
it's
not
discussed
about
that.
But
the
point
is
the
world
is
moving
into
what
you
guys
are
also
doing.
Jason
yeah
mo
best
work
for
for
service,
and
why?
E
Because
do
we
want
to
simply
orchestrate
event
based
workflows
in
the
cloud?
And
these
number
of
different
types
of
Jason,
yellow
based
markups,
is
growing
everyday
and
there
is
a
need
for
standardizing
this
I
think
also
in
your
guys's
and
I
think
you
guys
have
just
said:
okay,
we
can
work
with
Tecton
pipe
ones,
but
can
you
really
take
your
word
politician
that
you
currently
have
imported
two
different
types
of
not
only
cloud
platforms
but
also
runtimes
that
exist?
E
There
is
that's
where
the
kind
of
specifications
are
important
and
the
second
slide
I
wanted
to
show
is
basically
and
I'll
make
it
bigger,
is
kind
of
what
the
several
specific
workflow
specification
is
trying
to
do
again.
We're
not
providing
runtimes
we're
providing
similar
to
your
Python
definition
of
your
work.
Well,
we're
providing
a
JSON
schema.
That
is
the
core
of
our
model
definition
out
of
this
JSON
schema.
E
Then
we
can
create
api's
SP
eyes
and
also
we
were
working
thinking
in
the
future,
providing
a
TCK,
because
every
specification
definitely
needs
that
and
what
implementations
then
would
provide
is
the
runtime.
So
the
goal
of
this
early
specification
is
provided
JSON
any
mo.
Both
formats
are
supported
by
the
specification
to
be
able
to
execute
on
different
runtimes
on
different
cloud
platforms.
E
You guys can tell me, but when you're starting to use a workflow solution currently, you're kind of in a vendor lock, especially on the workflow model and also the workflow notation; because if you see things like AWS, Amazon, Microsoft, things like that, everybody has a different model that they represent their workflows with. Now, our specification does not go into execution, maybe yet; we really focus just on the model.
E
So
from
the
integration
perspective
between
the
two
groups,
I
think
we
have
to
talk
about
the
notation
and
what
can
be
expressed
with
our
go
workflows
and
can
they're
the
same
thing
be
expressed
with
with
with
our
current
json
schema.
Now
again,
is
there
interest
in
that
on
your
end,
I,
don't
know
right,
but
at
least
you
know,
with
this
meeting
I
think
we
can
start,
maybe
some
discussion
on
dead
end.
What
do
you
guys
think.
E
Just like what you guys have: a runtime engine which takes your custom Kubernetes resource, the YAML that you produce, and converts it, probably, into an internal object structure. That's the type of thing that any type of runtime will probably do, right? You need to represent the JSON and YAML as some internal object model that then can be executed. Correct?
E
I
think
I
want
to
just
show
this
slide,
maybe
which
kind
of
what?
What
what
is
the
service
workflow?
We
focus
on
a
language
that
is
that
allows
workflows
to
orchestrate
micro
services.
Well,
as
the
amoeba
micro
services
event-based
triggered
workflows,
they
can
be,
of
course,
repeatable.
There
is
three
parts
of
the
service
workforce
specification
which
are
function-
definitions
in
our
case.
We
of
course
do
not
care.
E
These
functions
are
written,
they
can
be
polyglot
and
we're
not
kind
of
soap.
Where
did
you
define
that?
But
we
define
how
these
functions
can
be
executed?
Okay,
that's
under
the
function,
so
we
can
go
look
into
the
example.
Then
we
have
events.
Events
are
a
core
type
of
structure
in
the
in
the
in
what
that
means
is
events
can
start
workflow
execution
events
can
be
produced
when
the
workflow
execution
ends
and
also
events
can
be
produced
during
the
the
execution
of
workflows.
E
Now, as far as parameters go, the parameters passed to functions are JSON, so they can be events as well; the CloudEvents format, which is a format that represents events in JSON, can be used pretty much throughout the workflow. Now, the third part is what you guys call steps, and we currently call them states. States are the building blocks, or the control-flow logic blocks, that allow you to do things which you guys already can do, for example parallel execution, split-join types of situations, and stuff like that. So we have...
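A hedged sketch of what a definition in this style looks like, pieced together from public Serverless Workflow specification examples of around that time (exact field names vary between spec versions, so treat this as illustrative only):

```json
{
  "id": "greeting",
  "name": "Greeting workflow",
  "functions": [
    { "name": "greetFn", "operation": "https://example.com/api#greet" }
  ],
  "states": [
    {
      "name": "Greet",
      "type": "operation",
      "actions": [ { "functionRef": { "refName": "greetFn" } } ],
      "end": true
    }
  ]
}
```

The `functions` block declares how a function can be invoked without saying how it is implemented, and each entry in `states` is one control-flow building block (the spec's analogue of an Argo step).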
E
The idea is, just like Argo, which you just showed in your really nice demo: just like you, we allow users to enter their YAML code. It's the same type of aspect. Users are supposed to write their JSON or YAML code, provided that it conforms to the JSON schema. And just to show you that...
E
Within Red Hat, the Kogito project is adopting it right now; this is where it started. You know, the specification has been evolving for over a year and a half now; we just started, over six months ago, doing the implementation, which has been completed, and now we're adopting it. But as far as how far this will be pushed within Red Hat, I can only speak for myself.
E
As
far
as
an
open-source
project
for
which
used
to
be
JEP,
M
and
rules-
and
we
have
evolved
that
into
a
new
project
called
code
IDO,
which
now
also
includes
the
support
for
be
payment
to
DMS
seaman
and
now
also
the
service
workforce
specification
as
well.
So
we
see
it
as
one
of
the
many
formats
which
which
we're
also
targeting
kubernetes
and
stuff
like
that
on
our
and
but
there
has
nothing
to
do
with
this
specification.
C
Basically, all our workflows boil down to directed acyclic graphs. So, in the same way that if your programming language is Turing-complete you can do anything any other programming language could do: if you can boil your workflow down into a directed acyclic graph, where each node is a function invocation, then you can basically model every workflow you need; anything else you do on...
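The claim above, that any workflow reducible to a DAG of function invocations can be executed generically, can be sketched in a few lines. This is an illustrative toy scheduler, not Argo's actual controller logic; in Argo each ready node would launch a pod rather than call a local function:

```python
from collections import deque

def execute_dag(tasks, deps):
    """Run a workflow expressed as a DAG.

    tasks: task name -> zero-argument callable (stand-in for a pod).
    deps:  task name -> list of task names it depends on.
    Returns the task names in execution order (Kahn's algorithm).
    """
    indegree = {name: len(deps.get(name, [])) for name in tasks}
    dependents = {name: [] for name in tasks}
    for name, parents in deps.items():
        for parent in parents:
            dependents[parent].append(name)
    ready = deque(name for name, d in indegree.items() if d == 0)
    order = []
    while ready:
        node = ready.popleft()
        tasks[node]()              # here Argo would run the node's pod
        order.append(node)
        for child in dependents[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    if len(order) != len(tasks):
        raise ValueError("cycle detected: the graph is not a DAG")
    return order
```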
C
That is a bonus, but there can also be syntactic sugar, and we have plenty that is effectively syntactic sugar in ours; and that SDK aspect is really important for uptake by people. I can see that defining a workflow specification allows you to really kind of decouple those two aspects: have one organization build various different SDKs in different languages, and another organization build the actual workflow executors, quite separately. And I wouldn't...
C
I wouldn't underestimate the importance of that kind of tooling to users. Our users don't really like using YAML at all; you know, they tolerate it, but it doesn't have, you know, anything like the kind of tooling that they're used to when writing code, such as auto-completion, you know, sophisticated syntax highlighting. YAML doesn't give you any of that. You effectively have to code your YAML, submit it to be executed, and if it gets rejected by the executor, then you know it's syntactically invalid. An SDK is much better.
E
Maybe it's a collaboration between, you know, our teams together, or whoever community-wise might be interested; but that's a big task, you know. And especially since you already have something amazing: really, with BPMN (and I talk about it a lot), the notation part is really useful and nice. So whether we do something differently, or reuse something that already exists in that regard, is a decision that still has to be made at the end. But I did want to show, from our end, just real quickly (and you guys can go through it): we did create some examples. I took these from your guys' examples (I hope you guys don't mind), and we have a written, side-by-side comparison between Argo and the Serverless Workflow, and I think it really showed a lot of different things. Number one, I think, functionality-wise, so far, from what I've seen now...
E
Of
course,
your
examples
might
not
cover
all
the
functionality
at
Argo
does
so
maybe
we
can
work
on
this,
but
they
are
fairly
comparable.
So
far
what
I've
seen
right?
There
are
some
things
there
are
good.
Does
that
I
really
like
they're
different,
that
we
currently
do
not
support
in
the
serviceworker
specification,
but
we
are
not
set
in
stone
and
we
can
collaborate
on
how
we
can
possibly
match
up
functionality
wise.
E
Yeah
and
I
think
it's
really
important
to
have
this
in
order
to
actually
say
hey.
Can
we
even
integrate
this,
or
is
this
even
useful
and
I
hope
to
add
more
examples
and
hopefully
to
get
help
from
you
guys,
which
examples
would
be
good
to
add
and
and
that's
kind
of
like,
maybe
something
we
can
talk
together
about,
but
what
I've
seen
so
far?
My
understanding
is
that,
yes,
the
service
were
close
specification
is
more
verbose
than
most
of
the
examples.
E
E
Then
what
Argo
has
and
I
think
that
may
be
a
trade
off
that
we
can
kind
of
look
into
and
compare
and
contrast
and
work
on
together.
If
you
guys
are,
of
course
interested
in
that
I
think
Saros
workable
specification
defines
more
concrete
states
with
types
that
can
be
more
easily
translated
for
tooling
and
kind
of
more
readability.
E
If
you
can't
even
get
that
with
him,
what
we're
doing
more
or
less
without
tooling,
but
that's
kind
of
like
the
comparison,
there
is
some
things
that
are
go
does
specifically
on
the
functionality
and
that
we
do
differently.
For
example,
timeouts
III,
no
exonerol
the
examples
you
guys
have
a
workflow.
E
Why
time
out,
where
you
start
a
worked,
when
you
say
okay,
if
it
doesn't
complete
within
a
minute
just
exit
right
in
in
the
sir
service
specification
world,
it's
done
a
little
differently,
which
is,
for
example,
timeout
on
on
events
actually
occurring
to
start
the
workflow
instance
or
timeouts
on
actually
executing
a
function
or
for
you
guys
is
a
you
guys.
Example:
a
function
that's
defined
in
a
pod,
so
that's
kind
of
like
the
difference.
E
Another difference is that, for example... and you can see, in order for me to translate these examples, we had to use metadata, and I'm trying to figure out one example. For example, this: one thing that is different is that you guys, you know, have the container definitions, right, where you explicitly say my function, the function that is exposed in Kubernetes, is in a container; whereas the Serverless Workflow specification kind of abstracts that into functions.
E
To
kind
of
specify
your
runtime,
this
is
a
rest
api,
or
this
is
something
that
runs
in
the
container
or
this
is
a
Kafka
event,
or
this
is
a
Java
or
a
Python
interface
right,
and
also
we
have
metadata
extension
points
on
both
kind
of,
like
extension,
points
are
on
both
the
state
level
or
the
definition
level,
but
we
also
have
extension
points
on
the
whole
workflow
level,
so
you
can
implement
things
like
logging
or
tracing
or
entire
extension
points
that
you
can
implement.
If
you
want
to
write.
B
As I see it, it would be more like a layer on top of Argo, because somebody needs to generate the Kubernetes manifests. In Argo, these are all Kubernetes manifests which can be directly applied to Kubernetes; so somebody, either the Argo team or somebody else, has to write, like, a mapping layer on top of the Argo specs, to convert this to the Argo Kubernetes manifests.
A
Assuming
that
the
languages
which
feature
melons
so
that
we
could
represent
everything
in
service
workflow,
a
zogo
can
currently
express
it
with
the
workload
CRD.
Then
yes,
this
would
be
simply
lay
on
top.
If
there
was
translation
to
be
done
and
I
think
that's
manageable.
If
there
was,
however,
an
image
mismatch
like
the
workflow
timeout
that
to
me
has
already
identified,
then
there's
more
work
probably
to
be
done
on
the
language
specification.
A
The
thing
is
with
a
little
less
adopters
like
algo
is
a
big
project
that
has
of
uses
and
is
filled
proven
and
is
a
production
really
implementation.
So
at
the
service
workflow
subgroup,
it's
what
I
don't
attempt
as
you've
already
seen
we're
not
covering
the
function
binding.
So
the
binding
to
a
kubernetes
platform
or
to
container
environment
is
answer
remains
unspecified,
but
there
is
the
common
concept
of
having
pieces
of
work,
Express
or
modulized
in
functions
and
then
to
give
some
control
structure
to
the
the
execution
of
this
work.
A
So
I
think
at
this
level
the
work
flow
language
tries
to
express
a
control
logic
and
so
I'm
with
Nokia
burlaps,
and
we
are
looking
at
it
to
to
find
a
way
to
have
a
common
description.
Language
we're
executing
stuff
completely
differently.
I
think
also
cogito
does
so
kobito
as
I.
If
I
understand
correctly,
is
very
much
tailored
to
Java
functions
and
can
do
through
other
function
bindings
and
can
invoke
a
lot
of
more
different
workloads.
A
We,
for
example,
we
would
compile
the
entire
workflow
into
a
single
container
at
one
time.
So
it's
a
completely
different
execution
model
underneath
that,
but
we
what
we
want
would
like
to
have
is
commonly
adopted
or
accepted
workflow
language
and
maybe
also
reaching
consensus
on
the
terminology
with
several
projects,
because
eventually,
what
we've
already
figured
is
Amazon
States
language
calls
these
individual
stop
States
I,
don't
think
so.
In
state
language,
there
is
only
one
forward
pass,
but
a
state
machine
would
not
be
per
se
acyclic.
A
So
there
is
a
lot
of
alignment
necessary
to
get
into
which
eventually
benefits
the
user
right.
If
the
user
has
only
this
one
learning
curve
to
adopt
this
one
terminology
and
then
knows
how
to
operate
in
different
environments,
this
is
I
think
where
we
want
to
get
to
with
the
standardization
of
the
workflow,
which
it.
A
So
I
know
there
were
several
larger
parties
involved
in
cloud
events
specification
and
in
the
civilus
working
group.
Their
interest
in
this
sub
group
task
has
been
I
know,
maybe
sidetracked
a
lot
because
of
the
cloud
events
specification,
but
also
a
little
bit
hesitant
to
jump
to
workflow
specification.
We
had
ideas
that
maybe
it
is
too
early
to
talk
about
workflow
specification
that
rather
the
statefulness
of
service
executions
needs
to
be
discussed
first.
That
would
be
your
artifact
layer,
so
maybe
maybe
it
is
too
early
yet
for
them.
E
What we're trying to do, number one... the number-one thing is vendor neutrality, and we want to be portable. I think the situation currently is that there are all these different workflow YAML- and JSON-based notations; there has to be some standardization, and there will be: if it's not with this specification, it will be somebody else that creates one. And I think Argo is a project... and you guys can prove me wrong, but I've been working in open source so long...
E
Writing
your
own
markup
for
something
has
its
limits
and
it
has
its
kind
of
life
expectancy,
because
if
it's
not
this
specification
with
CNC
f,
like
I,
said
a
specification
for
this
will
be
created
and
it
really
depends
who
adopts
it.
Do
big
guns
adopt
it
like
Microsoft
when
AWS
and
those
guys
Amazon,
probably
not,
but
they
might
in
the
future,
but
as
far
as
open
source
type
of
smaller
project
type.
It's
very
important
for
survival
of
a
special
of
your
notation,
and
you
guys
know
this
better
than
me.
E
We
had
things
before
like
custom
markups,
even
with
our
BPM
and
before
bpmn,
and
he
always
fell
short
and
I.
Think
also
other
smaller
size,
open
source
companies
can
can
can
talk
about
that
specifications.
Help
you
in
many
ways
and
I
think
we're
not
perfect.
You
know
there's
a
lot
of
things
that
we
want
to
change
it.
We
would
like
to
get
and
a
community
order
to
make
our
stuff
better,
but
at
the
same
time
we
want
to
also
work
with
you
guys.
So
that's
the
type
of
thing.
E
This
is
kind
of
like
where
we
would
like
to
find
some
sort
of
community
within
other
projects,
some
sort
of
interest,
we're
not
saying
do
one
or
the
other,
but
we
would
also
be
willing,
of
course,
to
help
projects.
There
say:
okay,
we
have
some
interest
here
and
we
would
like
to
have
some
sort
of
adoption
for
the
future
to
help
there
out,
as
well
as
far
as
pull
a
request
and
help
there
as
well,
because
at
the
end,
we're
all
CN
CF
and
we're
all
open
sore
Apache
2
license
right.
E
The focus is very similar, I think, to what the majority of the current JSON- and YAML-based workflow specifications have, which is workflow orchestration. Currently, especially in the serverless community, you have many, many different microservices deployed; some might run on Kubernetes, and some might not run on it.
D
I believe there is; like, in the execution environment there is. But we also would like to distinguish between serverless in terms of a programming model (you know, basically how developers write their code) versus, you know, the execution model (like, is it using containers, or what). And on the programming model, you know, developers at Intuit, they're not really interested right now in writing applications using a FaaS style; they actually find it much more difficult to do it that way.
D
So, unifying very disparate workflow models, like FaaS, or the event-based model... I mean, you can think of it as workflows, or event-based processing, right, or, you know, FaaS even; but unifying very disparate models, like FaaS versus batch versus, you know, Lambda, that kind of thing: it may be very hard to do that and target particular communities.
B
So, since we are almost over time, I had one question, like: does Red Hat, or any of the members of this work group, have any resource who can map the two projects' specs? Like, do you guys have anybody who would be interested in doing it? Yeah, contributing, like, to the conversation. Because, as I said, it has to be a layer above, because Argo is very Kubernetes: it is Kubernetes CRDs and manifests.
D
I think we should definitely have some more conversations. I think, personally... yeah, you know, I think it would help us kind of map out the space, and figure out what part the Argo Workflows part fits into. I mean, we have our idea of where it fits into, but we're not so familiar with things outside of that domain; and, of course, you know, you're also representing communities, right: users either inside your company, or your customers. So, you know, I think that engagement would be very useful.
E
We're currently a very small team, right; we have a couple of people. We have, of course, Manuel with Nokia; we have a couple of others on board, and Huawei; and we have a small community of people who constantly, you know, join meetings and are involved. As far as that goes, we have some growth needs, of course, on the community side. We do kind of things... we have a roadmap where we add what we are planning to do, and basically we're doing it as time permits.
E
...what the entire community needs. And having the Argo community involved, just, you know, allowing us in, would be a huge thing for us. I don't think you guys understand how big this would be for us, to have such a community with you guys, that has not only an implementation, but also a much larger community, and exposure, and everything; basically, to kind of help us out. And at the same time, you know, of course, we would be involved in helping with the integration as well.
E
So
that
would
be
nice
as
far
you
know.
If
you
guys
see
interest
I
do
see
the
need
pause.
You
know,
Fargo
would
be
nice
for
us.
Of
course,
it's
it's
it's
within
CN
CF.
We
could
use
a
little
bit
of
help
to
kind
of
push
us
forward
as
far
as
infrastructure
goes.
Growth
within
the
CN
CF
ecosystem
goes
because
with
CN
CF
it
seems
very
difficult.
It's
it's
easy
to
do
projects,
but
it's
harder
to
do
specifications
without
some
sort
of
adoption.
So
so
you
know
we're
kind
of
here
saying:
okay,
we're
here.
A
Apparently, we have monthly community meetings, and the next one is, I think, the first Monday in June. And also, for next week Monday, we have a primer call scheduled, I think at about the same time as today's call, in which we discuss the base concepts of the language, and whether we took the right turns, and try to summarize this into sort of a conceptual, motivational primer paper. Personally, I can say I'd be very happy to welcome you to the calls; and, for me and Nokia Bell Labs...
A
We
I
have
the
similar
situation
of
whether
I
should
root
for
adopting
it
or
not,
because
so,
we've
recently
we've
launched,
connects
micro
functions,
some
edge
service
platform
and
I'm
in
the
same
boat.
Here
of
whether
we
should
do
the
effort
and
implement
this
workflow
language
specification
or
try
to
shape
it
a
little
bit
further
before
we
do
so
yeah.
E
I'm sorry... We have a website and all kinds of stuff, and I'll link you; personally, I'll send you and share all kinds of links, to examples and everything, okay, whatever you want. But, yeah, we're not promoting that over the serverless specification, or whatsoever, because, at the end, it is just a runtime implementation.
D
Out
there
like
three
different
communities
at
once,
so
I,
don't
know
if
you
have
broad
agreement
there
or
whether
some
people
introduce
that
right
now
or
more
like
to
ask
type
of
models.
Others
are
more
and
they
will
processing
type
of
things,
but
they
are
very
different
communities
and
and
and
if
it's
too
early,
then
it's
kind
of
hard
to
create
a
spec
that
it
will
be
adopted
by
multiple
communities.
So
those
are
some
of
our
concerns.
I'm
sure
you
have
some
yeah
and.
E
I think you're very correct in that. What we have within the specification, as far as people involved: our people have been around business-process modeling for many, many years. So, yes, currently the specification is kind of tending in that direction; but I think that's where, kind of, this talk, for example, is big; because then we can reach out to you guys for help on all the other environments, as...
D
And so we tried to pick an area that we felt didn't have too many existing versions targeting that particular use case, and one that we felt is likely to grow rapidly. So that's how we kind of got to the ML, or data-processing, area; but we'd love to, of course, understand better what the other members of the working group are thinking, in terms of what use cases, or communities, or what they personally want to use it for.