From YouTube: Argo Workflows and Events Community Meeting 22 Apr 2021
Description
02:45 CNCF Serverless Working Group Update - Tihomir Surdilovic
32:00 CNCF Slack Migration - Alex Collins
35:00 Template Defaults - Saravanan Balasubramanian
42:00 Argo Agent / HTTP Template - Saravanan Balasubramanian
A
Brilliant. Okay, thank you all for coming today. Welcome to another Argo Workflows and Events community meeting. We're going to do a slightly different format today, and I'll tell you a little bit about that shortly. What's on our agenda today? We're going to be talking a little bit about some CNCF stuff: we've got Tihomir from the Serverless Working Group, who's come here to talk a little bit about what he's doing, and I'll explain that shortly; just bring any questions.
A
Then a couple of relatively short discussions around some upcoming new features. One is called template defaults, which I'll let Bala tell you a little bit more about, and there's a large, rather interesting feature we're building out specifically to help improve our support for Kubeflow Pipelines; hopefully we'll get a discussion from KFP when they're able to come along to one of these community meetings soon.
A
Bala will also be covering that shortly. And of course, if there are any topics you want to bring up, or any discussions, you can bring those up during the course of today.
A
So do just add yourself to the attendees; let us know where you're from, and maybe even what you're doing with Argo Workflows or Argo Events. I'm going to assume that everybody here is familiar with Argo Workflows and Events, so I won't go over that today. Just a little bit about myself and a couple of the other engineers here today: myself, Bala and Jesse, and I think Simon and Derek are probably here, are all core engineers from the Argo teams, working on Argo CD, Argo Rollouts, Argo Events and so forth.
A
So if you want to ask any questions, we're here to talk about that. Now, if you do want to ask any questions, the best way is probably just to ask at the end of a presentation; when somebody's finished you can ask out loud, or you can drop a message into the chat of this Zoom meeting and somebody will read out your message.
A
If you want to go into more detail, you can come and ask us on the Slack channels later on. People always ask, and I always say: yes, we are recording this. Where does the recording end up? It typically gets uploaded to YouTube, maybe later today or tomorrow, depending on how long the processing takes, and then I'll share the link to the video in the Slack channels. Again, talking a lot about Slack today.
B
Can you see it all right? So, first of all, hello everybody. I'd really like to thank Alex and the whole Argo team for having me here to talk about our little CNCF project. Just as a little background: my name is Tihomir, and I'm one of the maintainers of the Serverless Workflow specification project.
B
It's kind of hard to talk about our project without talking about workflows, and I know Argo is also in this area as well. But the way I want to start off is this: there are a bunch of different definitions of what a workflow is, and there are just as many workflow technologies, currently, with different definitions.
B
Just from my personal perspective, when I look at workflows, I think of them as resilient programs, and I'll try to explain that. We want to really focus on the "what" and nothing else, so we typically write our workflows thinking of our code, or the way we describe workflows, as things that are relevant to our business: requirements, logic, whatever we think represents our business. That's how we want to write it, and we really only want to focus on that.
B
But the reality of things, as we all know, is quite different. Once you start creating your applications and writing your business requirements in code, or in a DSL, or whatever you have, you soon realize it's not ready for production. To have production-ready code, you have to think about failure, recovery, persistence and a whole bunch of different things.
B
So once you start mixing up your business requirements and logic with all these other non-functional requirements, which are required for a production application that you want to deploy and have your customers actually use, you create complexity. So this is the basic premise of why I think of workflows as resilient programs. When we start looking at workflow solutions, we have two things.
B
We have some sort of workflow definition, some representation of our logic and our requirements, that we have to define. Unfortunately, we can't do this in our natural language (if anybody here is from Google or Microsoft, you might already be working on that type of feature, but it doesn't exist yet that I know of), so we usually define our workflows in different ways. And on the other side we have a workflow runtime, which is the different workflow engines.
B
They actually know how to interpret the instructions that we define in our definitions, and know how to execute them. There are typically a number of things that workflow definitions and workflow runtimes are responsible for in workflow technologies. Looking at the definition itself: most of the time they're written in a deterministic manner, meaning that, given a set of inputs, whether that be data or some sort of signals or messages, we expect our workflows to produce an expected output.
B
The second thing is idempotency, meaning that if you run a workflow once or a thousand times, given the same input you would expect the same output. Now, in workflow definitions we usually put in a series of instructions or decisions that we want to execute.
B
We want to provide insight into our workflows, such as visibility at design time, during testing, and also runtime visibility; I think Argo does that very well. Things like isolation of workflow instances, fault tolerance, scalability, timers, persistence: all those things are typically the responsibilities of workflow runtimes, and they really bring the resilient part to our workflows.
B
Well, I forgot to say one thing: with our Serverless Workflow specification project, we really deal only with workflow definitions. We're not dealing with a workflow runtime; we leave that up to experts like the Argo guys and everybody else. So, dealing with the workflow definition: we can currently define our workflows in many different languages, and in my opinion they typically fall into four different buckets.
B
One is the flowchart-based type of language; then there are the form-based, declarative types of workflow languages; and there are also a lot of workflow languages that are defined as programming-language constructs. With our little project, I think we fall into the declarative workflow languages category.
B
So now let's take a look at some of the features of declarative workflow languages. Nowadays they're typically described in JSON or YAML format (that's kind of becoming the standard), and they don't depend on visualization; they're defined in some sort of machine- and human-readable format. Declarative workflow languages typically do use some type of expression language for different things, like manipulating the workflow data, or the state of things, during workflow execution.
B
Now, declarative workflow languages are typically focused on smaller-scale orchestrations, because, let's be honest, who wants to look at a 2000-line-long YAML or JSON file? Because of that, they typically focus on a specific domain, just like the flowchart-based and form-based languages.
B
These types of workflow languages are translated into some sort of executable code that can then be executed, of course, in different types of environments.
B
When we look at the domain for our Serverless Workflow project, it is specific to microservices and events orchestration. So our DSL, our workflow language, focuses on things like dealing with events and dealing with execution of functions; it focuses on microservice, distributed-systems types of architectures.
B
The DSL can be implemented on runtimes as a state machine or a DAG, and even though we do allow cycles within the DSL, you can break out of them using things like timers (timers on the execution side, and other kinds of timers as well). With the DSL you can describe both stateless and stateful orchestration, or execution, of your workflows.
B
Another thing our project focuses on is the workflow definition itself: we try to structure it into three distinct, separate parts. The first one, of course, is the control-flow logic, and that is typically the decisions that you want to make during your workflow execution. That includes a lot of different things, like looping, data-based decisions, async execution, waits, things like undoing work (compensation types of handling), and so on.
B
That allows us to translate our business requirements into some sort of decision flow that we want the runtime engines to actually execute. The other two parts deal with event definitions and function definitions. Event definitions are a reusable set of definitions of the types of events that you want either to consume or to produce during workflow execution, and of course we use CloudEvents for that.
B
I failed to mention that we're part of the Serverless Working Group at CNCF, so we are related to the CloudEvents project as well. Event definitions basically allow you to hook up to any sort of producers or consumers of events and then be able to interact with them. The third thing is our function definitions.
B
This section is also reusable, meaning it can be reused between multiple workflow definitions, and those definitions help us define our invocations of different services, whether that's REST services or any kind of service, containers or even images; kind of like, I think, what Argo is very good at as well. I'm not going to go much more into the language.
B
So what's the point of having this Serverless Workflow language? I like to be realistic, honestly. I've been working on this for over two years now, and the reality is that, at the bottom line, it's an open-source, vendor-neutral, community-driven alternative to AWS Step Functions and Google Cloud Workflows. This year has been really good for us.
B
We've received a lot of community interest, and I think it boils down to maybe this. We don't use this as a marketing strategy, by the way, just to let you know, but it does set realistic expectations for people who are interested in our language at this time. So what makes Serverless Workflow different from some of the other things that we looked at?
B
One of the things is that we focus on standards. Like I said, we have native integration with these standards, and we kind of enforce them as well: CloudEvents for event definitions, OpenAPI and gRPC for function invocations, and AsyncAPI is the thing we're currently working on. And even though we do allow multiple expression languages to be used, and they're user-defined, we currently default to jq, basically for everything that we add to the language.
B
So what about functionality? I usually use this type of table with some comparisons. Here you'll see Step Functions, Google Cloud, and BPMN, the latter being a huge standard, and then I tried to compare them with Serverless Workflow. This table, as you can see, really doesn't include everything; it tries to focus on some of the functionality.
B
It's relevant to our microservice orchestration: things like retries, compensation, parallel execution, sync and async invocations and so on. As for the way our DSL fits in there: of course, it's currently a superset of Step Functions and Google Cloud Workflows, but it is by far a small subset of the BPMN specification, so we kind of fit in the middle.
B
Why else do people look at our specification? Well, one of the things is that it's easy to build a kind of common experience for these types of workflows; we're going to look at that in the demo. We don't just specify the DSL, the definition of our workflow language; we have a little ecosystem around that as well. All right.
B
What about SDKs? We do have SDKs in Java and Go; those we've had since, I think, the last time I had the opportunity to talk with you guys. Since then we've also added .NET and TypeScript, which we're currently working on as well. Our SDKs are not really runtime implementations; they're more SDKs for parsing the workflow language structure into an object model and then, if needed, pushing it back into the DSL markup language.
B
So next, the question that we usually get: hey, who's using this thing? This slide right here is kind of an underrepresentation; it's kind of hard, even now, to get approval from everybody who's using it.
B
I think if you guys join the community, you will see that it's probably much bigger than what the slide says. But we do have collaborations with different projects: CloudEvents, of course; Argo, being one of the top ones for us; and we're also currently collaborating with the Tremor project at CNCF. As far as community collaboration, we do have a number of companies
B
looking at us and evaluating, and some are working on implementations already. But like I said, hopefully by KubeCon NA we will be able to expand this list and make it much bigger. As far as open-source implementations, we currently have two Java-based implementations (you can Google those), but there is a lot more in the works as we speak.
B
So, as far as a roadmap: last month we released version 0.6 of the specification, and each one of our releases goes alongside releases of our SDKs, our VS Code plug-in, and our diagram-generation tooling; basically, we try to release everything at the same time. From the specification and DSL perspective, we're at 0.6 and we're looking at a July release of 0.7.
B
I didn't know how much time I had to talk here, so I prepared a tiny little demo. It's very short. Alex, you let me know if we have time for this or not; if not, we can do this another time.
B
Sure. All right. So, now that we've talked about our project, I just wanted to show a super simple demo. If you want to run this yourself, you can go to where it is on GitHub.
B
The idea behind this is that I just wanted to show simple orchestration of a couple of services, and one of the things that's nice about declarative languages is that they're very language-agnostic.
B
Since we define our declarative language in some sort of markup definition, we can not only have different runtimes, in different languages, execute them; we also have the ability, especially through using some of the standards such as OpenAPI, to define our workflow's interactions with those services in a programming-language-agnostic way. So in this demo, what I really wanted to show you is that we have three services. One is a Node.js service, all right, and I'll try to ping it here.
B
This one lives on 8080; everything is on localhost, but of course you can deploy these functions on basically any environment you want. So you see, we ping this Node.js service and it replies "just invoked our node.js function". We also have a little Go service, which is of course written in Golang, and that one, to me, is going to run on port 8081, right? So it just gives us the same.
B
I think... well, we'll figure it out. No, I'll figure it out. Oh, 8082; yeah, sorry about that.
B
I had this wrong. All right, so the Java service is also up and responding, and now what we want to do is define a simple workflow that orchestrates these services in a very simple way. So I'm going to go ahead and open up VS Code, and in just a couple of minutes we're going to write our workflow together. This uses one of the open-source Java implementations of our specification.
B
As you see, we just have a little Java project and there is no code; this is just src/main/resources. We have a couple of files here, which are the OpenAPI definitions of our three services.
B
So we don't really care where these services are running (in this case, of course, localhost); the OpenAPI definitions define that, so there is no code involved or anything else. Right now we want to start writing our workflow: let's save a new file, a demo workflow, and define it in JSON format.
B
Before writing these types of workflows, we recommend you go and get our VS Code plugin. You can find it here; just go ahead and install it, and it will give you little things like code hints, code completion, and diagram generation as well. So let's go ahead and define our workflow. A workflow definition usually has a unique identifier; let's call it just "simple". You can define things like a version and so on; let's give it a name, and that's pretty much it.
B
The next thing we want to do is tell the workflow how to start, and for that we have a little start parameter. We can say the start is "Start Orchestration", which is the name of the first of what we call states (it can also be a step, or a task, or whatever it's called elsewhere): basically, the first decision in our control-flow logic that needs to be executed when an instance of this workflow is invoked.
B
So now the next thing we want to do is define our control-logic blocks, and for that we have an array called states, where each state has a name. In this case we want our "Start Orchestration" state. Each state in our DSL also has a type; there are a number of different types of states, which all correspond to some sort of control-flow logic functionality, or block, that they perform. In this case, we want something called an operation state.
B
An operation state is basically a simple state that just performs some actions, an action being an invocation of things like functions, sub-workflows, and so on.
B
So we then have to define the actions that our state wants to execute. Let's start defining that: each action has a name, let's say "invoke go function", and we give it a function reference of, let's say, "invoke-go", and that's it. As you can see, so far our state definitions are completely domain-specific, meaning that you can use your domain-specific language for most of this; we'll get to the part where that's a little bit different.
B
But for defining the control-flow logic, you can fully use your domain-specific language. Let's do a copy and paste: after this we will invoke, let's say, our Node function, and lastly we want to invoke our Java function; let's say in that order, but it doesn't really matter. Another thing that you can specify is the action mode: whether you want to execute these actions in parallel or sequentially.
B
For us, let's define this as sequential. We also have to tell our workflow runtimes when to end the workflow execution, and we can end it here as well. So that's pretty much it for the control-flow logic. The only (oops, sorry) other thing we have to do is tell our runtimes:
B
okay, we want to execute these functions, which might be living somewhere else, deployed anywhere really, and accessed via maybe HTTP or RPC; we don't really know, or really care, at this point, because we offload that to OpenAPI. So let's give our runtime a little more information on how we want to invoke these functions.
B
So here we have a little function definition. We have to match the name to the function reference: when the runtime comes to this function reference, it will try to look it up by name in the function definitions. The operation parameter is currently a path to your OpenAPI definition, which in this case is our Go service, plus the unique operation id within that particular service, in this case "go". And then we can do the same thing for... whoops.
B
I didn't copy it for our other two. So here we call it "invoke node", and here we call it "invoke java". In this case we want to use our node-service JSON and its unique operation id, called "node" (that's convenient), and here we want to use our java-service OpenAPI definition and call our operation id of "java". Now, how are these functions invoked: is that a POST or a GET, and what kind of response do we get or not?
B
We define all of that with OpenAPI in this case. And that's really it: we have our workflow defined now. Another thing that you can do is preview a diagram inside VS Code. You see, we just use PlantUML; it's really simple, and not nearly as pretty as what you guys do at Argo, I must say. But we have a starting state, the start of our workflow.
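Putting the pieces from the walkthrough together, the finished definition might look roughly like the sketch below. This is only an illustration: the OpenAPI file names and the exact state, action and function names are assumptions, following the general Serverless Workflow DSL shape (id, start, states, functions, with operation as an OpenAPI path plus operation id) described in the demo.

```json
{
  "id": "simple",
  "version": "1.0",
  "name": "Simple orchestration demo",
  "start": "Start Orchestration",
  "functions": [
    { "name": "invoke-go",   "operation": "goservice.json#go" },
    { "name": "invoke-node", "operation": "nodeservice.json#node" },
    { "name": "invoke-java", "operation": "javaservice.json#java" }
  ],
  "states": [
    {
      "name": "Start Orchestration",
      "type": "operation",
      "actionMode": "sequential",
      "actions": [
        { "name": "invoke go function",   "functionRef": "invoke-go" },
        { "name": "invoke node function", "functionRef": "invoke-node" },
        { "name": "invoke java function", "functionRef": "invoke-java" }
      ],
      "end": true
    }
  ]
}
```

A runtime implementing the specification would invoke the three functions sequentially and merge their responses into the workflow data, as shown in the demo.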
B
We have a single state, which is "Start Orchestration" in this case, and it gives you some information and a (currently pretty ugly) legend, which I think you will be able to turn off with 0.7. And that's that. So, once we have defined this, we can go back and just start our service. I'm just making sure that
B
it will parse it into some sort of executable code, in this case Java, and it will also expose this workflow as an API endpoint.
B
So this thing, I think, has already started, hopefully... no, it's still working; I guess it's got to generate a ton of code. All right, it's started, and now we have our workflow available as a service, and the endpoint is going to match the id of our workflow definition.
B
I have a little cheat sheet here so I don't mess up the curl command, and we can come here and try to invoke our service. Whoops, and I think this is /simple... oh no, this is the simple workflow, so in our case it has to be "simple", because that matches the id of the workflow. All right, and here we go. So what I did now is execute this curl command, a POST request, to the endpoint /simple.
B
That endpoint wraps the execution of the workflow we just defined. Our workflow executed the three functions in that particular order, sequentially, stored their responses into our workflow data, and here is the result that you get. So that's kind of how you can get started with Serverless Workflow and get up and running with a simple demo.
B
So that's all I had. Again: thank you, Argo, thanks for having me here and allowing me to talk about our project. If you have any questions, go ahead; if not, here's some information. We are on CNCF Slack under serverless workflow, and if you think of any questions, or want to just hang around and talk to us, you're more than welcome.
A
Just getting the right documents... okay. So, if anyone has been in the Slack channels, you're probably aware that we're planning to migrate from the Argo Project Slack that we have to the CNCF Slack, as part of our path to becoming a CNCF graduated project. In doing this, we're aware that we're probably going to lose people as a result.
A
Also, we want to take advantage of the fact that the CNCF Slack can keep a much longer history of messages, so we can keep all the information that we currently lose. It says 10,000 messages here, but I think it might actually be something like three months as well; so we have a number of channels that just don't have any history in them, because the last message was sent several months ago.
A
What's not really stated here is a bit of a rationalization of the number of Slack channels that we have. You can find more information, if you want to, in this particular issue here. There's also a schedule for rolling this out: we'll be archiving, I think, another four channels tomorrow, and ultimately a number of the channels will be archived next week.
A
Now, we don't currently plan to replace some of these channels. There's no plan to replace channels such as random, argo-helm and announcements, or argo-sig-ci, simply because I (maybe just me personally) prefer to have fewer Slack channels rather than more, and some of those conversations can probably be had within other channels. So, for example, if you want to talk about developing for Argo Workflows or Argo Events, you can jump into their respective channels to have that discussion.
A
Obviously, the CNCF Slack comes with its own channels for things like random and announcements, so we don't need to migrate those channels. That's really just an update; if you have any questions, come and ask them on the GitHub issue here, just drop in and ask about them. I think we do have some people asking about argo-helm, so maybe there's a discussion to be had about that in the future. Okay, so that's really all I wanted to say around the CNCF Slack migration.
A
Okay, no questions. So Bala is just going to be talking about a couple of features that are probably slated for version 3.2, maybe 3.1; he can clarify if I'm wrong about that. They're two new features, called template defaults and the Argo agent. Bala, are you ready to take over? Yes? Okay, you have the conn. Yep.
C
Okay, so today I'm going to talk about two things. One is a 3.1 feature called template defaults, which I'm going to demo. Then, in the second part, we are going to discuss the upcoming agent feature, for 3.2 or 3.1; we haven't targeted it for 3.1 yet, but if we have time we can include it there, otherwise it will go into 3.2.
C
So let me talk about the first part, template defaults. We introduced a new element at the workflow spec level, called templateDefaults, where the workflow user can define the common elements for templates, which will be applied to all the templates in the workflow: for example, retryStrategy, timeout, semaphore, mutex, activeDeadlineSeconds, or, under the container, imagePullPolicy; all the template-related elements.
C
All those elements can be defined in one place, templateDefaults, and will be applied to all the templates in that workflow at runtime, which mainly saves workflow object size. For example, if you want to put a retryStrategy on all the templates, it will occupy a considerable number of characters, and that will automatically increase our workflow object size. For example, at Intuit we have workflows with around 50 steps.
C
If we are adding a retryStrategy to all 50 steps, it will consume a considerable amount of object size; sometimes it will cross the Kubernetes size limit of 1 MB per object. That's why this feature is very valuable for users with big workflows: define the common template elements once, and they will be applied to all the templates.
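As a minimal sketch of what is described above (the image, timeout and retry limit are illustrative values; templateDefaults accepts the same fields as a regular template):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: template-defaults-
spec:
  entrypoint: main
  # Applied to every template in this workflow at runtime,
  # so each template does not need to repeat these fields.
  templateDefaults:
    timeout: 30s
    retryStrategy:
      limit: "2"
  templates:
    - name: main
      container:
        image: argoproj/argosay:v2
        args: ["echo", "hello"]
```

A retryStrategy set directly on an individual template would still apply to that template, as discussed in the Q&A below.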
C
Let me demo here. I have a retryStrategy with a limit of two, and I have a step that fails and is retried. So let me demo that; I don't have any retryStrategy on any of the steps themselves.
D
Yeah, thank you. I was just curious: if you were to use the template defaults in a workflow template or something like that, and then refer to templates from that template, you know, with a templateRef, would they continue to have the template defaults that are set in the original manifest?
C
A templateRef is like a small function: you are only referring to the function, not the whole object, right? So think about that concept; with a templateRef you are only referring to the function, not the whole object. So the workflow-spec-level defaults will not be supported for a templateRef, but the entire object will be supported when you are using a workflowTemplateRef, where you are referring to the entire object.
E
So if I have an interpolation, a template written into, say, the limit there, and it's inputs.parameters.limit: will that template value be resolved per template, or will it throw an error saying, hey, inputs.whatever doesn't exist at the workflow level?
C
So you cannot parameterize the template defaults. (Okay, cool.) But if you have a template-level retryStrategy, that will take precedence when it's applied. (Okay, yeah, that makes sense.)
C
workflow.parameters you can definitely use; just not template-level parameters. Yes, yes. The global parameters you can use, because there is another feature we recently merged into master which resolves the workflow-level parameters across all the workflow spec elements.
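Based on this discussion, global workflow parameters can be interpolated inside templateDefaults, while template-level inputs cannot. A hypothetical sketch (the parameter name and value are assumptions for illustration):

```yaml
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: retry-limit
        value: "3"
  templateDefaults:
    retryStrategy:
      # Workflow-level (global) parameters resolve here;
      # inputs.parameters.* (template-level) would not.
      limit: "{{workflow.parameters.retry-limit}}"
```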
C
Okay, let's move on to the next one. Our team started working on the agent and HTTP template feature. The main goal was to optimize the execution of lightweight templates, like an HTTP operation, or listing S3 buckets, or something like that; another goal is reducing resource utilization. Currently, in our workflow structure, for every step, whether it is heavy processing or light processing, we need to create a pod, and then the pod executes the step and eventually finishes.
C
So we were thinking about how to reduce that; that's why we introduced the agent concept. Mainly, whenever the workflow contains lightweight, HTTP kinds of templates, the controller will create the agent pod in the workflow namespace, and that single pod will execute all the lightweight operations in those templates and generate their outputs.
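The HTTP template syntax was still being designed at the time of this meeting, so the following is only a hypothetical sketch of what such a lightweight step, executed by the shared agent pod rather than a dedicated pod, might look like; the field names and URL are assumptions:

```yaml
- name: fetch-data
  # Run by the agent pod instead of creating a pod for this step.
  http:
    url: "http://example.com/api/v1/data"
    method: "GET"
    timeoutSeconds: 30
```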
C
So let me go into the real-world user experience of this feature.
C
You can create it in such a way that it will be executed in a single pod, which is the agent pod, and the output will be stored in the workflow with a different node status. So on the workflow side there won't be any difference; the difference is only on the operational side, where the controller will not create a pod for each individual request.
C
And I have a little more detail on the operation: what is the controller's responsibility, and what is the agent's responsibility?
F
Hi
bala
this
is
yuan,
and
so
I
see
that
you're.
It's
currently
only
supporting
http
template
we're
planning
to
support
other
types
of
template,
for
example
resource
templates,
so
that
we
can
reuse
a
shared
part
to
create
other
custom
resources,
and
so.
F
How do we handle, for example for the HTTP template, any exceptions? Because if it's a script template, we can handle this in a very customized Python script, but if it's just an HTTP URL, then how do we handle different types of exceptions? How do we map them to different fallback behaviors, and so on?
C
So basically, the task set CRD will have a status which is going to hold things like the HTTP response and the error message, and all of that will be stored in it; that will be passed back to the workflow status. And okay, I think we are going to implement a default retry or something like that, but we are still discussing what the retry strategy for the HTTP template should be and all of those things, currently.
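Since the retry strategy was still under discussion at this point, the following is only an illustrative sketch of one plausible default, not anything confirmed in the meeting: retry transient HTTP failures (5xx) with exponential backoff, and surface the final status code and attempt count, the way the task set CRD status would surface the HTTP response back to the controller. The function name `call_with_retry` and its parameters are invented for the example.

```python
# Hypothetical sketch of a default retry policy for an HTTP step:
# 5xx responses are retried with exponential backoff, anything else
# is reported back immediately for the workflow to act on.
import time

def call_with_retry(do_request, max_attempts=3, base_delay=0.01):
    for attempt in range(1, max_attempts + 1):
        status = do_request()
        if status < 500:  # 2xx-4xx: report back, don't retry
            return {"statusCode": status, "attempts": attempt}
        if attempt < max_attempts:
            # exponential backoff: base, 2x base, 4x base, ...
            time.sleep(base_delay * 2 ** (attempt - 1))
    return {"statusCode": status, "attempts": max_attempts}

# Simulated endpoint that fails twice, then succeeds.
responses = iter([500, 503, 200])
result = call_with_retry(lambda: next(responses))
```

Mapping specific status codes or error messages to different behaviors, as Yuan asks above, would then be a matter of the workflow inspecting the reported status rather than writing a custom script.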
A
One goal is to avoid doing any heavy processing inside the controller, which de-risks issues created by workflows. As in, you know, if you've got a multi-tenant system with two namespaces, you don't want to do heavy processing for one namespace in case that causes the other namespace an issue. So this moves the heavy processing off the controller into the user's namespace, but it avoids creating one pod for every piece of work that needs to be done.
A
So
it
kind
of
provides
this
kind
of
middle
ground
between
the
two
of
those
and
the
other
kind
of
benefit
that
we
don't
mention
is
that
it.
It
means
that
the
agent
pod
can
run
with
the
service
account
specified
by
the
user,
not
with
the
service
specif
account
specified
by
the
operator
for
the
controller,
and
that
means
that
the
user
can
protect
their
data
from
being
read
by
the
controller
to
improve
security.
A
At
the
same
time,
the
controller
typically
can't
look
at
things
like
you
know,
contents
of
secrets
and
so
forth
in
the
user's
name
space.
So
it's
actually
it's
actually
more
secure
and
we're
hoping
that
we're
going
to
build
it
out.
A
I
think,
as
well
says
for
doing
http
templates,
potentially
resource
templates
as
well,
which
are
quite
expensive
to
run
at
the
moment,
and
every
resource
template
comes
with
a
pod
that
runs
for
the
same
for
the
for
the
same
life
cycle
of
the
resource
as
well,
so
it's
quite
expensive
to
do,
and
finally,
multi-cluster
multi-name
space
workflows
where
you
have
a
task
within
your
workflow
that
runs
either
in
another
namespace
or
another
cluster
that
moves
all
the
authentication
or
the
security
aspects
into
the
user's
namespace.
A
So if you're like us and you're running, you know, hundreds of Argo Workflows instances, you don't necessarily want to have to configure every single one of those; you'd actually much rather have your users configure each one of those instances, and this will enable that in the future.
A
Our
back
is
the
thing
that
stops
us
getting
stuff
done.
I
find
a
great
way
to
stop
you
doing.
Some
stuff
is
our
back.
Actually,
people
really
struggle
with
our
back.
Actually,
it
says
non-trivial
and
kubernetes
concepts
that
you
don't
typically
want
to
learn
when
you
get
started,
but
actually
often
creates
a
massive
roadblock
very
early
on
in
your
development
process.
A
Okay,
so
thank
you.
Bala
for
talking
about
template
defaults,
inaudible
agent,
so
those
are
coming
up
in
three
two
and
we
haven't
even
put
three
one,
ga
so
that'll,
maybe
well.
I
guess
we'll
just
see
where
they
land
it
might
they
might
land
in
three
one
depending
what
the
time
time
frames
are,
and
if
you
wanna
learn
a
bit
more
about
three
zero
and
three
one:
we've
got
a
video
on
the
cncf
youtube
channel.
You
can
watch
okay.
I
think
that
brings
us
to
the
end
of
our
agenda
for
today.
A
If
you
have
any
more
information,
you
want
to
find
out,
obviously
come
and
find
us
in
the
slacher.
Ideally,
the
cncf
slack
child,
and
hopefully
the
video
of
this
will
be
uploaded
in
onto
youtube
later
on
today
or
maybe
tomorrow.
So
you
can
share
with
your
colleagues
if
they
are
interested
in
having
a
watch
of
this
okay,
any
anything
else.
Any
topics
people
want
to
discuss
before
we
close
out.