A: So before we get into the meat of 3.0, I wanted to give you a brief history of what Argo is and how we ended up where we are today.
A: Argo is a set of Kubernetes-native tools for running and managing your jobs and applications on Kubernetes. Argo consists of four different open source projects. We have Argo Workflows, which is the star of the show today; Argo Events, which is an event-based dependency manager; and Argo CD and Argo Rollouts, which are the declarative continuous delivery and progressive delivery tools you can use to manage applications and rollouts in various shapes and forms.
A
Argo
started
a
few
years
ago,
it
was
incubated
at
a
startup
called
atlantic,
which
was
then
fairly
quickly
acquired
by
intuit
intuit
had
a
need
to
build
a
self-service
developer
platform
and
modernize.
The
way
we
were
doing
our
software
development
and
manager
applications,
so
aplatix
was
required
and
tasked
to
to
build
that
and
without
platix
came
the
project,
argo
workflow
at
the
time
as
they
were
building
out.
A
As
we
were
building
out
this
this
new
developer
platform,
there
was
also
a
need
that
we
realized
to
have
a
continuous
delivery
platform
and,
and
hence
argo
cd
was
born.
So
our
gov
workflow
was
the
first
one
and
then
argo
cd
came
shortly
thereafter
and
then
one
of
the
argo
workflow
users
blackrock
later
contributed
argo
event
into
the
project
and
as
this
matured
we
also
realized
the
need
for
progressive
delivery
and
argo
rollouts
was
incubated
about
a
year
later
and
premiered
at
kubecon
about
two
years
ago.
A: Now, about a year ago, Argo was accepted as a CNCF incubating project, and just a few weeks ago we filed the PR to become a fully graduated project. The project has added a lot of features and a lot of contributors and gone through a lot of maturing just in the last couple of years.
A: So we're really excited about how far the project has come in a very short time, and that's also evidenced by the number of contributors and the growth we've had in our star count on GitHub: we're now up to over 15,000 stars. We have almost 4,000 contributors, of which over 900 are code contributors, either active right now or having contributed code in the past to one of these four projects, and we have a very active Slack channel, or several Slack channels.
A: We have over 5,700 members currently, and there's a really good spread in the community around this project: there are vendors stepping in and helping, offering commercial support options for Argo, and we have a lot of users and individuals using the regular open source versions, all contributing to the success of this project.
You can see, just in the last year, that Argo Workflows, the oldest and most mature of these projects, has added 50 new stars in less than a year. And of the 350 releases we've done across the Argo project over the last almost four years, 50 of those releases have been in the last year.
A
In
the
last
year
alone,
there's
been
a
lot
of
good
growth,
both
in
terms
of
features
in
terms
of
contribution
and
in
in
terms
of
stability
in
in
the
various
argo
projects,
and
one
thing
we're
also
really
excited
about
is
seeing
how
argo
argo
workflow
in
this
case
is
getting
picked
up
by
other
open
source
projects
and
then
building
an
ecosystem
of
of
related
projects
where
argo
is
using
a
component
and
argo
being
cloud
native
at
its
core
and
having
a
very
flexible
architecture,
makes
it
easy
to
integrate
with
makes
it
lightweight
it's
all
container
based.
A
Where
workflow
has
code,
you
can
use
yaml
to
describe
describing
workflows
to
describe
how
you
do
things
and
that
fits
in
nicely
with
how
these
other
projects
you
know
we
have
a
platform
and
framework
projects
like
cubeflow
or
kendro,
have
cooler
the
which
is
a
unified
interface
to
interact
with
various
workflow
engines,
underneath
that's
also
supporting
argos
all
these
kind
of
come
together
and
use
argo
and
support
argo
as
a
plug-in
orchestrator
underneath,
and
we're
really
excited
to
see
this,
this
ecosystem
growing-
it's
been
growing,
qui,
pretty
fast
here,
just
just
in
in
the
last
last
six
six
months
to
a
year
here.
A
I
also
just
want
to
mention
you
know
in
in
terms
of
the
stability
and
the
scale
that
argo
has
used,
that
at
intuit
as
an
example,
since
that's
where,
where
alex-
and
I
work
and
this
platform
that
I
mentioned
in
initially,
that
applics
was
tasked
with
building
we're
now
5000
service
developers
that
are
onboarded
to
this
to
this
platform
and
with
over
11
000
applications
deployed
over
350
plus
kubernetes
clusters,
all
running
in
aws,
with
with
well
over
15
000
nodes.
B: Thank you, Henrik. I'm just going to take over the screen recording from you now. There we go, good. You should be able to see the slide, is that right? Yep? I'll take your silence as a yes. So, Argo Workflows 3.0 is probably the largest release of Argo Workflows since, well, 2.0 I expect, and one of the main areas of focus has been user interface enhancements.
B: The first new feature coming in version 3.0 is support for Argo Events: we're providing both an API and a user interface for people who use Argo Events. For those of you who are not familiar with Argo Events, it is a cloud native and CloudEvents-compliant system for consuming events and triggering actions based on them.
B
Interface
is
filling
out
a
really
an
area
we
thought
was
really
lacking
for
argo
events,
because
it's
quite
it's
often
quite
difficult
to
diagnose
issues
in
a
cloud
native
landscape
without
a
user
interface
to
help
you
I
mean,
if
you're
ever
looking
at
logs,
I'm
sure
you
you
never
look
directly
at
the
raw
logs
from
coop
ctl.
You
go
into
some
kind
of
logging
facility
like
splunk
to
do
so.
So
let's
have
a
little
look
at
the
new
argo
events:
user
interface.
B: If you're familiar with Argo Workflows 2, you'll probably recognize the same color schemes and layouts here. But what will probably jump out at you immediately is that there are a number of new buttons in the left-hand navigation. I'm going to go through several of these in this presentation.
B
So
the
first
new
area
here
is
an
area
called
event
sources.
Now,
if
you're
using
argo
events
event
sources
is,
is
the
thing
that
results
in
some
kind
of
action
happening
all
right,
there's
a
number
of
different
types
of
event
sources,
but
this
user
interface
not
only
allows
you
to
create
them.
It
also
allows
you
to
update
and
list
them,
so
let's
create
a
new
one.
This
is
a
pretty
canonical
example
of
an
event
source.
B
This
one's
a
calendar
event
source
that
will
will
create
an
event
in
the
system
at
an
interval
of
every
10
seconds.
Let's
create
that
you
can
now
see,
we've
got
a
user
interface,
I
can
expect
the
calendar
and
I
can
go
up
and
have
a
look
at
all.
The
different
event
sources
will
also
be
listed
in
here.
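For reference, a minimal calendar event source like the one created in the demo might look like this sketch (the names are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: calendar              # illustrative name
spec:
  calendar:
    example:                  # name of this particular event
      interval: 10s           # emit an event every 10 seconds
```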
B: An event source describes what has happened, and a sensor describes what needs to happen, what needs to be triggered as a result of that first item. Let's create a sensor now. The one I'm going to create here is a little bit more sophisticated than the previous example: it contains two templates, and the templates are the triggers. The first template here is going to create an Argo Workflow.
B
So
the
result
of
this
is
every
15
every
10
seconds,
an
ongo
workflow
will
be
created
and
the
second
one
is
a
logging
trigger
diagnostic
trigger
I'll,
just
print
out
to
the
console.
The
contents
of
that
particular
event.
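A sensor along those lines might be sketched as follows; the names and the embedded workflow are illustrative, and the exact trigger fields may vary by Argo Events version:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: calendar-sensor            # illustrative name
spec:
  dependencies:
  - name: cal
    eventSourceName: calendar       # the event source created above
    eventName: example
  triggers:
  - template:
      name: workflow-trigger        # first trigger: create a workflow
      k8s:
        operation: create
        source:
          resource:
            apiVersion: argoproj.io/v1alpha1
            kind: Workflow
            metadata:
              generateName: from-calendar-
            spec:
              entrypoint: main
              templates:
              - name: main
                container:
                  image: argoproj/argosay:v2
  - template:
      name: log-trigger             # second trigger: log the event payload
      log: {}
```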
Let's create that. Again, I can edit this once I've created it, and I can see a list of these different sensors. But if I want to go and diagnose an issue, I can actually move into this new view called the event flow.
B
An
event
flow
shows
the
event
sources,
the
sensors
and
the
triggers
and
the
actions
that
result
on
them.
I
can
actually
enable
the
event
flow
view
here
by
clicking
on
this
button,
and
that
will
show
me
every
time
an
event
occurs.
An
animation
will
show
what's
going
on
in
the
system
and
the
specific
goal
behind
this.
The
specific
use
case
is
this:
just
to
make
it
easier
to
understand
and
debug
issues
that
are
going
on
with
the
system.
B
I
can
actually
have
a
look
at
these
individual
ones.
I
can
click
on
them
and
I'll
get
some
additional
options
here.
This
is
just
a
view
and
I
can
go
into
that
open
edit
one.
I
can
also
have
a
look
at
the
standard
kubernetes
events
that
may
have
occurred
related
to
that
resource,
and
I
can
have
a
look
at
the
log
files
here
where
you
can
see
that
I've
got
some
structured
login
going
on
here
from
the
calendar.
B: And finally, I've got some additional diagnostic information shown here around the number of messages that have occurred. So if I look away from the screen, you can see those numbers go up as a result. And then finally, I can even click through into those workflows, take a look at the workflow itself, and do the usual kind of investigation.
B
What
we
found
during
our
development
of
our
argo,
workflow,
3.0
user
interface,
is
we
found
ourselves
wanting
to
fix
a
number
of
kind
of
pre-existing
issues
and
banks
and
things
that
didn't
work
particularly
well,
and
what
we
found
when
we're
doing.
That
is
actually
it's
often
much
easier
to
rewrite
completely
rewrite
an
existing
class
style
component
as
a
functional
component.
B
Okay,
I
want
to
talk
about
another
little
example
of
a
feature
in
here.
I'm
going
to
talk
about
widgets.
Now,
a
widget
is
the
way
to
embed
a
bit
of
information
about
a
workflow
into
other
applications.
We
have
a
couple
of
different
kind
of
widgets.
I'll,
just
show
you
them
now.
If
I
go
into
the
view
here
of
my
workflow,
I
am
provided
with
slightly
different
options
here
at
the
top
of
the
screen.
You
can
see
that
there's
a
new
logs
viewer,
that's
really
used
for
anybody.
B: It allows you to look at the different types of logs from the different containers in the pod, giving you a bit more insight into what's going on, not just in your container but also in the other containers running within the pod. And there's also this new share link: if I click on it, I get an example of the widgets that exist for this particular workflow, and these widgets are animated.
B
So
when
a
workflow
starts
operating
it'll
it'll
start,
you
know
in
the
gray
state
if
it
goes,
becomes
fail,
it'll
advance
the
red
state
or
it'll
change
the
reset.
These
are
embeddable
and
you
can
just
have
a
look
at
the
url
to
embed
them
here
as
a
preview,
and
let's
just
open
that
up
in
a
new
window-
and
here
we
go-
that's
my
little
widget.
I
can
embed
that
if
I
click
on
it,
then
it'll
just
take
me
to
the
workflow
itself.
B: Now that's kind of fun. Another type of widget is related to the cron workflow. A cron workflow, just a bit of a recap for people who are not familiar with it, is a workflow that's triggered on a standard cron schedule. So this example here is going to trigger once every minute. If I click on that cron workflow, I'm given a share button here as well; I can click on that, and I've got status badges, and in this example I can see that it was already running.
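For reference, a cron workflow that triggers once every minute, like the one in the demo, might be sketched like this (the name and container are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: every-minute          # illustrative name
spec:
  schedule: "* * * * *"       # standard cron syntax: once every minute
  workflowSpec:
    entrypoint: main
    templates:
    - name: main
      container:
        image: argoproj/argosay:v2
```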
B
It
just
went
from
a
blue
state
to
a
green
state.
So
that's
a
good
example
of
this,
and
this
hopefully
it'll
make
it
easier
for
those
use
cases
where
you
probably
want
to
build
some
kind
of
framework
or
platform
around
argo
workflows.
But
you
also
want
to
include
some
of
the
more
sophisticated
and
useful
user
interface
elements
and
one
of
the
most
valuable
parts
of
the
argo
workflows.
User
interface.
Is
this
graph
stroke
dag
view
showing
the
workflow
as
it
changes
and
executes
okay?
So
that's
a
little
bit
of
an
overview
of
widgets.
B: The first graph is a duration graph that allows me to determine if my workflow has been taking longer over time, and the second one is a resource consumption graph. Resource consumption is a slightly older feature, dating back to version 2.9: each workflow provides a high-level, ballpark summary of the memory and CPU or GPU the workflow used, and that allows you to track cost over time.
B
This
is
this
is
this
is
specifically
about
the
amount
that
was
requested,
therefore
directly
correlates
to
cost,
and
so
you
can
see,
for
example,
because
this
one
is
pretty
much
stable.
Your
cpu
and
memory
have
have
remained
unchanged
over
time,
and
this
also
works
if
you've
got
archived
workflows,
if
you're
using
the
workflow
archive,
this
can
actually
look
back
at
historical
data
as
well.
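Behind that graph is the resource duration estimate each workflow records in its status; a sketch of what that looks like (the values are illustrative):

```yaml
status:
  resourcesDuration:
    cpu: 120       # approximate CPU-seconds requested over the run
    memory: 300    # approximate memory-seconds requested over the run
```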
B
A
longer
a
long
requested
feature
from
argo
workflows
is
controller,
high
availability.
The
goal
of
high
availability
is,
if
the
controller
that
operates
the
workflow
crashes.
For
any
reason,
it
can
take
some
time
for
that
controller.
To
start
back
up
again,
it
has
to
query
all
the
workflows
in
your
cluster
and
build
some
data
sets,
but
also
there
may
just
be
some
underlying
kubernetes
scheduling
issue.
That
means,
for
some
reason
that
controller
can't
start
up
for
some
time.
B
So
we've
introduced
a
feature,
that's
quite
common
in
other
work.
Other
kubernetes
controllers
is
a
thing
called
leader
election,
and
this
basically
allows
you
to
scale
your
controller's
deployment
to
two
or
three
and
when
one
of
those
controllers
goes
down,
another
one
can
be
spun
up.
I'm
not
going
to
show
you
this
it's!
You
know
very
much
a
terminal
level
operation,
but
it's
relatively
straightforward
to
do
so.
B
You
add
some
our
back
around
the
new
kubernetes
api
called
the
coordination
api,
which
is
how
we,
how
we
do
leader
election
and
then
you
just
scale
your
controller
to
two
or
three
replicas
and
typically
you'll,
probably
also
use
a
thing
called
az
anti-affinity
to
ensure
each
of
those
replicas
is
running
in
a
different
availability
zone
for
your
cloud
provider.
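A sketch of what that looks like on the workflow controller's deployment, assuming the standard Kubernetes zone topology key and an `app: workflow-controller` pod label:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workflow-controller
spec:
  replicas: 3                 # one active leader, two hot standbys
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: workflow-controller
            # spread the replicas across availability zones
            topologyKey: topology.kubernetes.io/zone
```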
This means that if one of those extremely unlikely, sorry, let me correct myself, extremely rare but absolutely guaranteed to happen, availability zone outages occurs (and it will happen regardless of which cloud provider you're using; it will happen to you one day), you'll already have two controllers in hot standby in the two other availability zones, and one of those will automatically take over executing your workflows. So even if you have an availability zone outage, you'll still be able to recover.
B
Another
really
under
the
hood,
but
I
think
very
impactful
enhancement
in
version
3.0
is
a
concept
called
a
key-only
artifact.
A
key
only
artifact
is
defined
is
the
same
as
a
normal
artifact,
but
rather
than
specifying
the
whole
set
of
configuration,
you
only
need
to
specify
the
configuration
related
to
the
key.
Let's
have
a
little
look
at
what
that
might
look
like.
B
And
okay
key
only
artifact
here
we
go
so.
This
is
an
example
of
the
key,
only
artifacts
and
the
main
difference
that
most
users
will
notice
is
here
under
the
outputs.
I
don't.
I
only
specify
the
key
for
my
s3
bucket.
I
don't
have
to
specify
the
the
entire
thing
and
the
rest
of
the
information
for
this
will
be
populated
from
either
the
artifact
repository
reference
or
the
defaulted
configurated
configuration
artifact
repository
and
they'll
be
able
to
read
to
that
explicit
key.
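A sketch of such an output (the path and key are illustrative; bucket, endpoint and credentials come from the configured artifact repository):

```yaml
outputs:
  artifacts:
  - name: result
    path: /tmp/result.txt
    s3:
      # only the key is specified; everything else is filled in
      # from the default artifact repository configuration
      key: my-workflow/{{workflow.name}}/result.txt
```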
B
So
previously,
if
you,
if
you
wanted
to
not
include
the
bucket
and
other
kinds
of
configuration,
you
actually
also
needed
to
you
know,
if
you
wanted
to
have
an
explicit
key,
you
would
have
needed
to
include
that
in
entirety,
and
this
is
very
useful
for
examples
where
I
want
to
use
some
formatting
here
in
in
the
workflow-
and
this
also
has,
like
a
nice
side,
benefit
for
large
workflows
that
have
many
artifacts
with
them.
Then
they
use
a
large
fan
out.
B: I'm going to talk a little bit about some upcoming features. Some users are probably going to skip version 3.0 and go straight to version 3.1. Version 3.1 is going to follow quite soon; I'm hoping we'll have a release candidate available in the next few weeks, and it's going to bring three or four particularly interesting new features. We'll talk a little bit about one called the data template.
B
So
one
thing,
that's
always
been
quite
difficult
to
do
in
argo.
Workflows
is
efficiently
create
a
fan
in
fan
out,
mapreduce
style,
job
and
data.
Template
is
a
new
abstraction
that
allows
you
to
easily
do
the
fan
out
part
of
a
fan
out
job,
and
I'm
now
going
to
show
you
an
example
in
our
user
interface
here.
B
For
the
example,
is
data
transform
here
we
go
this
data
transformation
lists
a
list
of
log
files
in
a
buckets
and
each
of
those
log
files
then
becomes
part
of
the
iteration
loop.
So
this
allows
me
to
list
a
bucket
and
then
start
up
one
other
step
for
each
item
in
those
buckets.
This
has
loads
of
really
interesting
use
cases,
for
example,
listing
your
python
files
in
a
pocket
and
then
potentially
running
those
python
files
separately.
B
So
let's
give
this
guy
an
execution
here.
So
let's
just
talk
a
little
bit
about
it.
First.
Here
I
have
the
data
template.
It's
marked
by
data
colon
and
under
that
I
have
a
source
and
then
artifact
past
this.
This
tells
me
which
bucket
to
list.
So
this
is
the
s3
bucket,
and
this
also
uses
key
only
artifacts
here
as
well.
So
I
don't
need
to
release
the
rest
of
it.
This
just
lists
any
artifact
in
that
bucket
with
main.log.
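A sketch of a data template along those lines (the key and the filter expression are illustrative):

```yaml
- name: list-log-files
  data:
    source:
      artifactPaths:          # list the contents of an artifact location
        s3:
          key: logs/          # key-only artifact: bucket comes from defaults
    transformation:
    - expression: "filter(data, {# endsWith 'main.log'})"
```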
B
Then
that's
passed
to
the
next
step
here,
so
I'll
print,
those
to
standard
outs,
they're
past
the
next
step
as
items
the
next
step,
I'm
going
to
perform
some
processing
on
those
ones.
So
let's
execute
this
workflow,
so
first
step
as
I
mentioned
list
log
files
in
the
bucket
that'll
go
into
the
bucket
and
use
whatever
api
method
is
used.
It's
very
typically
a
very
lightweight
method
that
allows
you
to
list
those
items
in
the
bucket
and
then
it'll
proceed
to
create
one
additional
step
for
each
item
that
was
listed
in
that
bucket.
B
There
we
go,
you
can
see
that
with
142
hidden
nodes,
there
was
a
large
number
of
log
files
listed
in
that
bucket
and
it'll
fan
those
out
and
do
one
operation
per
item
that
you
can
see
how
this
could
be
really
useful.
Your
first
step
could
produce
a
number
of
artifacts
that
you
want
to
iterate
over,
but
maybe
you
only
want
to
iterate
over
some
of
them.
You
know
you
can
then
use
that
filtering
operation
to
do
that.
B
Another
new
version
coming
up
is
expression.
Tag
templates
now
most
users
may
be
already
familiar
with
tag
templates.
It
allows
you
to
substitute
information
from,
for
example,
input,
parameters
or
workflow
parameters
into
a
particular
template
within
a
workflow
expression.
Tag
templates
builds
on
this
to
allow
those
templates
to
not
just
be
plain
substitution
of
variables,
but
actually
fully
formed
expressions
using
the
expression
syntax
that
many
users
will
already
be
familiar
with.
Let's
have
a
look
at
an.
B
Example
now
let
me
walk
you
through
this
example.
This
example
contains
one
dag
template:
that's
going
to
iterate
over
a
number
of
numbers
and
print
out
some
information
and
the
dates
the
way
to
differentiate
an
expression
tag
template
from
a
pre-existing
template
is
rather
than
starting
with
just
two
curly
braces.
It
starts
with
two
curly
braces
and
an
equal
symbol.
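A sketch of the kind of workflow being described, illustrating the `{{= }}` syntax (the image and parameter names are illustrative rather than a copy of the demo):

```yaml
templates:
- name: main
  dag:
    tasks:
    - name: pod-0
      template: pod-0
      # filter out numbers not greater than 1, then convert to JSON,
      # which withParam requires
      withParam: "{{= toJson(filter([1, 3], {# > 1})) }}"
      arguments:
        parameters:
        - name: foo
          value: "{{item}}"
- name: pod-0
  inputs:
    parameters:
    - name: foo
  container:
    image: argoproj/argosay:v2
    args:
    - echo
    # parameters are strings, so cast to int before multiplying by 10;
    # Sprig's date function with layout "2006" prints only the year
    - "hello {{= asInt(inputs.parameters.foo) * 10 }} {{= sprig.date('2006', workflow.creationTimestamp) }}"
```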
B
In
this
case
you
can
see,
we've
got
an
array
that
I'm
going
to
be
iterating
over
filtering
out
numbers
that
are
not
greater
than
1
and
finally,
converting
that
to
json,
which
is
necessary
for
with
param
and
then
going
to
pass
that
in
as
a
parameter
called
foo.
Let's
have
a
look
at
that
pod
0
template
pod.
Zero
template
is
going
to
print
out
some
information
to
the
standard
to
the
console.
B
It's
going
to
print
out
the
word
hello,
followed
by
the
evaluation
of
this
expression.
This
expression
is
based
on
that
input
foo.
It
takes
it
as
an
integer,
because
parameters
normally
are
strings
and
multiplies
it
by
10..
So
in
this
example,
we've
got
a
list
of
one
and
three
we're
going
to
skip
one
and
just
have
three
as
the
value
for
foo
and
it's
going
to
multiply
3
by
10.
To
give
you
30.,
then
it's
going
to
print
out
the
date,
but
I
didn't
want
to
include
the
entire
date.
B
I
just
wanted
to
include
the
year.
So
here
I'm
using
a
template
library
that
comes
built
into
argo,
workflows,
3.0
called
sprig
sprig
is
a
very
common
template
in
system
for
the
go
language
and
provides
a
whole
number
of
useful
functions
for
manipulating
data,
and
this
particular
one
is
for
formatting
dates
and
in
in
the
sprig
date
format.
Function.
B
2006
really
means
just
print
out
the
date
with
only
the
year
and
that'll
be
the
workflow
creation
timestamp.
Let's
execute
this
and
see
what
happens.
B: Now, these are two separate features, but they work extremely well together, and it's going to be rare to talk about one without the other. The emissary executor is a new executor, adding to the existing executors we already have, including Docker and PNS, that builds on the lessons learned from those, plus some additional lessons we learned from working with the team behind Tekton CD. The emissary executor uses shared volumes to coordinate inter-process communication, and as a result it allows us to run multiple containers within the pod.
B
Today
you
can
only
run
a
single
container
within
a
pod,
but
with
the
msu
executed,
you
can
run
multiple
containers
within
that
pod
and
actually
have
the
processes
within
each
of
those
containers.
Wait
for
the
process
in
a
previous
container
to
complete
this
allows
you
to
create
a
directed
acyclic
graph
of
containers
within
a
pod
to
do
processing
and
one
of
them.
One
of
the
great
benefits
of
this
is
because
they're
containers
within
a
pod.
B
They
can
share
things
like
volumes
and
also
communicate
together
with
each
other
over
localhost.
An
emissary
executor
on
its
own
is
not
enough
to
achieve
this.
We
also
introduced
a
new
template
type
alongside
his
resource
container,
and
the
other
types
of
templates,
such
as
dag
and
steps
called
container
set
and
a
container
set
simply
specifies
a
group
of
containers
that
run
and
the
dependencies
between
them.
So
let's
see
this
in
action.
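A sketch of a container set template with a simple dependency chain (the names and images are illustrative):

```yaml
- name: main
  containerSet:
    containers:
    - name: a
      image: argoproj/argosay:v2
    - name: b
      image: argoproj/argosay:v2
      dependencies: [a]       # b waits for a to complete
    - name: main              # outputs, if any, are taken from "main"
      image: argoproj/argosay:v2
      dependencies: [b]       # runs last, after b
```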
B
Now
there
are
several
examples
we
can
choose
from
here
and
the
first
one
I'm
going
to
choose
is
this
graph
workflow,
which
demonstrates
the
ability
to
connect
multiple
steps
within
a
workflow
now
one
thing
I'm
just
going
to
quickly
highlight
here
is
this
additional
annotation.
This
is
also
introduced
into
argo,
workflow
3.0
in
previous
of
what
our
versions
of
algorithms,
though
you
were
committed
to
a
specific
container,
runtime
executor
for
all
your
workflows
within
a
within
your
controller
version.
3.0
allows
you
to
use
a
label
to
use
different
executors.
B
The
different
executors
have
a
trade-off
between
power,
security,
performance
and
so
forth,
and
this
allows
you
to
experiment
with
new
ones
without
risking
the
other
ones
or
mix
and
match
the
ones
you
want
to
use
for
particular
workflows
depending
on
your
requirements.
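For example, picking the executor per workflow is done with a label; a sketch, assuming the 3.0 label name and the emissary executor:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: graph-
  labels:
    # choose the container runtime executor for this workflow only
    workflows.argoproj.io/container-runtime-executor: emissary
```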
B: Now, one of the things I love about doing the container set demo is that it's often extremely responsive, so responsive it didn't appear to execute; they all appear to execute at once, and I can assure you that's not the case. Because these are separate containers rather than separate pods, they don't have the overhead of waiting for another pod to complete, the communication with the Kubernetes API, and the synchronization; they all execute as fast as possible, and it's really good for co-locating tasks within a single place.
B
This
contains
workflow
uses
a
shared
volume
specified
here.
That's
mounted
onto
all
of
the
containers
on
the
path
specified
here
so
so
acts
as
a
shared
workspace
and
in
our
example,
we're
going
to
have
an
artifact,
that's
produced
by
container
a,
and
this
can
be
written
into
that
workspace,
but
then
collected
from
the
output.
There
we
go,
and
here
we
go.
You
should
see
that
it
executes
extremely
quickly
as
well.
Typically
within
a
few
moments
there
you
go
there's
another
example
of
the
consaina
set.
B: So, as I previously mentioned, Argo Workflows 3.0 introduces a number of new features; I'll just review them here for you. The Argo Events API and user interface, with the event flow view for diagnosing issues and the ability to create event sources and sensors through the user interface. A significant refactoring to improve the performance, reliability, maintainability and robustness of the user interface; many users will note that you don't get as many disconnection errors in version 3.0.
Controller high availability through leader election, which allows you to survive even availability zone outages. Key-only artifacts, which simplify the definition of a workflow. And reports, which allow you to look at the history of a workflow and understand how it's changing over time.
B
Then
in
3.1
data
templates
allow
you
to
fan
out
workflows
based
on
the
contents
of
a
bucket
with
inside
your
artifact
repository
and
conditional
parameter
logic,
which
I
haven't
talked
much
about
here.
That
allows
you
to
select
which
one
of
the
outputs
of
a
dag
or
steps
template
is
passed
downstream.
B
Now
we're
still
in
20,
oh
hang
on.
Are
we
20,
20
or
21
20
21?
Aren't
we
I'm?
You
know
you
spend
so
much
time
locked
down
at
the
moment.
It
can
be
hard
to
remember
we're
looking
forward
to
some
new
features
coming
up,
so
we
didn't
talk
much
about
this,
but
earlier
on
this
year
we
did
do
a
survey
to
understand
what
our
users
wanted.
We
got
some
really
interesting
and
useful
feedback
from
that,
and
one
of
the
ones
that
came
back
is
more
capabilities
for
working
with
python.
B
We're
also
looking
at
the
ability
to
to
run
workflows
that
span
multiple
clusters
and
multiple
namespaces.
So,
for
example,
you
might
have
a
build
job
that
builds
a
container
in
one
cluster.
You
want
to
run
that
in
another
cluster
then
deploy
some
additional
information,
another
cluster,
but
you
have
it
all
specified
within
inside
a
single
workflow
and
we
think
that'll
be
really
useful
for
both
machine
learning
and
infrastructure
automation,
use
cases.
B
We
also
want
to
look
at
building
a
plug-in
framework
that
allows
for
integration
extension
things,
like
lifecycle
hooks
or
the
ability
to
write
your
own
templates.
We
talked
a
bit
about
container
set
template
today
in
the
new
data
template.
The
plug-in
framework
will
actually
allow
users
to
build
their
own
templates
to
do
whatever
they
want
to
do
and
we're
looking
at
doing
that
with
a
new
thing
called
http
template
and,
finally,
we're
looking
to
improve
the
whole
developer
experience
for
making
it
make
it
easier
for
people
to
work
within
the
ecosystem.
A: I just want to thank you for stopping by and listening to this. I hope it gave a good overview and got you excited about all the new things that are coming in 3.0, as well as the sneak peek at 3.1. There's some additional information available in the release blog post on the CNCF blog, and there's also the Argo website, which has more information about the project itself and how to get it, and we also have a very active Slack channel.
A
So
there
is
a
link
here
to
help
you
join
this
live
channel
and
join
the
conversation.
There's
a
lot
of
good
information
being
passed
around
a
lot
of
good
help
in
in
the
slack
channel.
So
please
check
out
the
website
check
out
the
release
blog
if
you
want
more
information
about
what
we
just
presented
on
and
come
and
chat
with
us
on
the
slack
channel.
If
you
want
more
information,
thank
you.
Everyone.