From YouTube: Argo Workflows Community Meeting 19th Feb 2020
A: There we go. I've muted the microphone. Good morning, everybody, and welcome to the second Argo Workflows community meeting of 2020. In the last meeting we demoed some new 2.5 features, and today we're going to talk a bit about what's coming up soon. Thank you for joining us today. For those who are not familiar with Argo Workflows: it is a container-native workflow engine, which means it lets you run workflows on Kubernetes, and it's quite popular within the ML and AI community, so we'll end up talking about that today. In the meeting there'll be myself, Alex from Intuit. We've also got a couple of my colleagues here to demonstrate a couple of new features, and Sam Elder is also going to talk a bit about how he's been using Argo day to day, so I'm looking forward to that. We've got a document which I am sharing on the screen.
A: Yes, got it, yeah. Thank you very much. Okay. If you've got any questions during the meeting, just drop them into the chat, and somebody here will read your question out to whoever's presenting. Feel free to ask questions at any point. Just a final note: we are recording this, and we'll share the recording later on YouTube for everyone in the community, so we tend to get quite a few people who watch the video as well.
C: The new UI will be released in 2.6, and here on the workflow list page we have a new panel on the left. We used to have a namespace filter and a phase filter on the top right; now, since we're adding label filters, we've put all the filters together here. You can filter by namespace, name, or phase, or you can just search for a specific workflow. I think you're already familiar with the namespace and phase filters, so let me show the label filter.
If you click on the label field, a drop-down pops up that shows all the available labels for the existing workflows, and you can easily pick one. For example, I have some workflows with a label like sleep=60; if I click it, it selects the matching workflows here.
If I click the cross, it just clears the filter; and if I pick 120, it shows the workflows with the label sleep=120. Right now I'm going to create a new workflow. I have a simple workflow for testing with a sleep-seconds parameter whose default is 60 seconds, and this time I'll give it a parameter saying how long I want it to sleep.
All right, and then when I click the label field, the sleep=180 label shows up here, and when I click it, it shows the workflow I just created in its running state. I think the feature is quite straightforward, and you can try it out yourselves. We also added similar filters to the archive page.
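To make the label-filter semantics concrete: a label filter behaves like a Kubernetes label selector, keeping only the workflows whose labels contain every requested key=value pair. Here is a minimal sketch of that matching rule (pure illustration, not the UI's actual code; the workflow names are invented):

```python
def matches_selector(workflow_labels: dict, selector: dict) -> bool:
    """True if the workflow carries every key=value pair in the selector."""
    return all(workflow_labels.get(k) == v for k, v in selector.items())


workflows = [
    {"name": "wf-1", "labels": {"sleep": "60"}},
    {"name": "wf-2", "labels": {"sleep": "120"}},
]

# Filtering on sleep=120 keeps only the second workflow.
hits = [w["name"] for w in workflows
        if matches_selector(w["labels"], {"sleep": "120"})]
print(hits)  # ['wf-2']
```

Clearing the filter corresponds to an empty selector, which matches everything.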
E: You may know that we introduced the WorkflowTemplate in 2.4, which could only be referenced from within workflows. Now you can directly run a workflow template as an executable, which will automatically create a workflow and submit it to Argo. Here I have a sample workflow template.
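As a rough sketch of what "running a workflow template directly" amounts to: the server creates a Workflow that simply references the template. The field layout below follows Argo's resource style, but treat the exact field names as an assumption; they may differ between versions.

```python
def workflow_from_template(template_name: str, namespace: str = "default") -> dict:
    """Build a minimal Workflow manifest that runs an existing WorkflowTemplate.

    Illustrative sketch only; the precise spec fields depend on the Argo release.
    """
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Workflow",
        "metadata": {
            # generateName gives each submission a unique workflow name
            "generateName": f"{template_name}-",
            "namespace": namespace,
        },
        "spec": {"workflowTemplateRef": {"name": template_name}},
    }


manifest = workflow_from_template("sample-template")
print(manifest["metadata"]["generateName"])  # sample-template-
```

Submitting such a manifest is what the UI or CLI does on your behalf when you run a template directly.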
A: I think we're good, okay. Thank you. So those are the two feature previews we have coming up in Argo 2.6. We're going to do something slightly different now. One of the features we've got coming up is the ability to report custom, step-level metrics, and we want to get community feedback on it, so I'm going to pass over to Simon, because he's going to share some of that work with us.
F: Hello, everyone. My name is Simon, and I am working on custom, step-level metrics, which is a feature that has been requested by the community. First, I'd like to point out that we have an RFC design doc on this issue, so most of what I'm going to say today, you can actually read the reasoning behind in that doc. But to make a long story short:
currently, the metrics we report mirror what's already on your Kubernetes workflow resources, what you'd see with kubectl. That's not particularly useful, because you can just query those resources yourself, so it doesn't give you any new information. And not only is it not useful; on top of that, it's not how Prometheus is supposed to report metrics. You're supposed to give a snapshot of metrics at a certain time, as opposed to just the status of a state. So with that in mind, we decided to refactor metrics. The existing behavior will remain backwards compatible, but the new changes will be significantly different from what is currently being offered. To give you a quick overview, let me open the doc. Essentially, the idea of the change is that instead of having the source of truth be all of the workflow resources you have on your cluster, the source of truth for metrics will be the controller, which has several advantages.
Mainly, you are now able to report very granular information during workflow execution, as opposed to just the latent information that is stored in the resource after the fact; the other advantages you can read about in the doc. The way that you define metrics will also change. We will provide users with a very limited set of workflow-controller-level metrics, meaning metrics that the controller itself keeps, such as how many workflows have run, or how many workflows of each status have progressed. But if you want metrics specific to your own workflows, you will have to define them, and we have come up with a schema for how to define them. To show a small example of this, I'm going to open up this workflow that I have with some metrics on it.
This is the standard hello-world workflow. You can see that here's our hello world, but all of this extra information you see are our new metric definitions. As an example, we have a duration gauge metric.
A duration gauge is a gauge metric, which in Prometheus terms is a metric that either goes up or down. The canonical example of this would be a temperature: if you want to know what the temperature of your computer is, every time you check it, it's going to be different, so it goes up or down. In this case, it's just the duration of your workflow.
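A minimal sketch of the gauge semantics described here (not Argo's actual implementation): a gauge is a single value that is overwritten on every update, so repeatedly setting it to the elapsed time gives a real-time duration reading.

```python
class Gauge:
    """Minimal Prometheus-style gauge: one value that can go up or down."""

    def __init__(self, name):
        self.name = name
        self.value = 0.0

    def set(self, value):
        # Each set() replaces the previous reading entirely, like a thermometer.
        self.value = value


duration_gauge = Gauge("workflow_duration_seconds")
# Simulate the controller refreshing the gauge as the workflow runs.
for elapsed in (1.0, 2.0, 3.0):
    duration_gauge.set(elapsed)
print(duration_gauge.value)  # 3.0
```

Scraping the gauge at any moment therefore yields a snapshot of the current duration, which is exactly the "snapshot at a certain time" behavior Prometheus expects.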
The way we define this is we have a gauge whose value is the duration, and we want that duration to update in real time. Another example of this is a histogram. A histogram is essentially a count of how many observations have finished within certain buckets. We will also use the duration as an example for this, but instead of reporting in real time, it will report on completion, and we have a couple of bins set up for our histogram.
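The histogram behavior can be sketched the same way. Prometheus histograms are cumulative: each bucket counts all observations less than or equal to its upper bound, with a final +Inf bucket that catches everything, which is where a run longer than the top bound ends up. The bucket bounds below are invented for illustration.

```python
import math


class Histogram:
    """Minimal cumulative Prometheus-style histogram."""

    def __init__(self, buckets):
        # Finite upper bounds, plus the implicit +Inf bucket.
        self.bounds = sorted(buckets) + [math.inf]
        self.counts = {b: 0 for b in self.bounds}

    def observe(self, value):
        # Cumulative: increment every bucket whose bound is >= the observation.
        for b in self.bounds:
            if value <= b:
                self.counts[b] += 1


h = Histogram([1.0, 2.5, 6.51])
h.observe(8.2)   # slower than every finite bound: only +Inf catches it
h.observe(0.4)   # fast run: lands in every bucket
print(h.counts[math.inf])  # 2
```

Observations are only recorded once, on completion, matching the "report on completion" behavior described above.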
Just to give you a quick example, I'm going to run this workflow right here. It's running, and you can see that my metrics page gets updated with the duration of the workflow, and as soon as the workflow finishes, it gets added to our histogram. This particular workflow ran for more than the upper bound of 6.51 seconds, so it falls outside the top bucket and into the +Inf bucket.
If we run this again, you can see that the same metric is used for the duration. This one also ran a bit slower than I expected; it also took more than eight seconds. So the core idea here is that users will define their own metrics, and Argo will simply report them.
It is the developer's responsibility to understand that this duration gauge will get replaced on each run of a workflow. That essentially puts the burden of understanding the metrics on the developers, which is good, because it gives you more control for your own specific use cases. This is pretty much in development right now; this was meant to be more of a pattern demonstration than a feature demonstration.
F: I'll post the doc here in the Zoom chat, and you should feel free to reach out. I think that's it from my end. You may have some questions that I can see. Okay, yeah, I'll wait ten seconds in case you are typing questions; otherwise, we will move on.
A: Thank you very much, Simon. Okay, so community feedback is really important to us. It's very key in helping us determine which features we do and don't want to do, so if you are particularly interested in something, it's really important that you put your hand up and say that it's useful to you.
B: I'll talk about one thing that's more straightforward, and then one thing a little bit more off the beaten path with regard to testing. Before version 2.4, we were generating a lot of templates. Let's say you wanted to run this particular instrument with these settings: you'd put that into a template, and maybe the settings would be the input parameters. Or let's say you wanted to insert this kind of recording step to record something to a database.
All of these would be different little templates we were passing around, and previously we had to have this big, long footer, so to speak, of lots of these templates that would be, or might be, used in the workflow. We didn't want to keep track of which ones were used in each workflow, and every time we wanted to change one of those templates, we had to go and edit all the different files.
So workflow templates have been a lifesaver, and have really helped us to accelerate, by externalizing those templates so they don't have to be at the bottom of every workflow file. We build them separately, so the workflow can be a little less cluttered, and the chemists can actually understand what's going on without having to see that big block of templates. They also let us test these templates more easily.
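The externalizing described here boils down to each step referencing a template that lives in a shared WorkflowTemplate instead of being copied into every workflow file. A rough sketch of such a step reference, with all resource and template names invented for illustration:

```python
def template_ref_step(step_name, template_resource, template, parameters=None):
    """Build a workflow step that calls a template from a shared WorkflowTemplate.

    Field layout follows Argo's templateRef style; names are illustrative.
    """
    step = {
        "name": step_name,
        "templateRef": {
            # The shared WorkflowTemplate resource, maintained in one place...
            "name": template_resource,
            # ...and the specific template inside it to run.
            "template": template,
        },
    }
    if parameters:
        step["arguments"] = {
            "parameters": [{"name": k, "value": v} for k, v in parameters.items()]
        }
    return step


step = template_ref_step("record", "lab-common", "record-to-db", {"table": "runs"})
print(step["templateRef"]["name"])  # lab-common
```

Changing the shared template in one place then updates every workflow that references it, instead of requiring edits to every file.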
We can write test workflows that just reference those same common templates, and then those test runs can tell us whether the templates are working or not. And if you deal with physical instruments, they are sometimes working one day and not working the next day, and you have to figure out what's wrong, so we really need to be testing, and testing has become a key part of our development cycle in this capacity. So I think this part is fairly straightforward.
I think a lot of people are familiar with workflow templates and the ability to externalize in that fashion, but the slightly more advanced part is how we've done integration tests with all of this. Because when you're running some chemistry experiment, you don't necessarily want to actually run the experiment and then discover that your recording step at the end failed, and then you have to start all over again.
As long as there's some correspondence between the templates at the same level, these integration tests can exercise the whole thing without actually having to run our instruments. So yeah, that's something I wanted to share that I thought was pretty cool, and I thought others might get some use out of it.
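One way to picture that integration-test trick (a sketch of the pattern, not Sam's actual code; all names are invented): keep a mock WorkflowTemplate whose templates mirror the real ones, same names, same inputs and outputs, and choose which resource a test workflow points at.

```python
def resolve_template_source(use_mocks: bool) -> str:
    """Choose which WorkflowTemplate resource a test workflow should reference.

    Because the mock templates mirror the real ones (same template names and
    interfaces), the rest of the workflow is unchanged; only the instrument
    calls are swapped for no-op stand-ins.
    """
    return "instrument-templates-mock" if use_mocks else "instrument-templates"


# CI exercises the whole workflow structure without touching real hardware.
ci_source = resolve_template_source(use_mocks=True)
prod_source = resolve_template_source(use_mocks=False)
print(ci_source)    # instrument-templates-mock
print(prod_source)  # instrument-templates
```

The correspondence between the two template sets is what makes the swap safe: a workflow that runs against the mocks is structurally identical to one that runs against the instruments.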
Finally, I want to talk about some limitations, where there could be some room for improvement. The first is that there's a conflict between workflow templates and global parameters, and that might be resolved in a recent version; 2.5 just came out, but as far as I've seen, you can't use these global parameters in a workflow template. The template doesn't allow it, except through one key backdoor, which is the default values for input parameters to a template.
Say I want some slight change to a template; then I have to make a copy. This layering of templates allows me to do that, but then I have to copy the entire template, with all the other inputs and outputs, for just that one little change. So, thinking in a kind of functional-programming way, it would be really cool if there were some way to perform operations on templates: a way to define one that just says "insert this particular parameter, but otherwise leave the template unchanged". And actually, I don't know exactly the specifics on this, but this layered system, with all the different templates I've built, actually takes a little while to `argo template create`; I have to delete everything and re-create the templates any time I make a change to this process. So maybe there could be something more robust going on there. And I have a couple more points.
B: A lot of our instruments are connected through the cloud; we have some gRPC clients and servers that we're using to contact them. And eventually we also want to be in multiple locations, running some processes in one lab and some processes in another lab. So that's partly why we chose to put things on the cluster. Is that the question? I couldn't quite read all of it.
D: We've got some more questions, yeah. The second question is: can you run the tests under CI?
B: Yeah, some of them. Some types of tests are purely just checking to make sure it worked. Some tests have to test an instrument: did the instrument do what you wanted it to do? Right now, we don't have a video camera setup that can, you know, validate that the instrument did it, so we have to have a human checking that it actually ran. Those tests can sometimes take a little while if the instrument operation is complicated. But yeah, some of the tests I can run automatically as soon as I update things, and some tests I have to trigger myself.
A: Cool. I thought that was really good; I really enjoyed it. It's really interesting to hear about people using Argo in ways we don't expect; I hadn't really thought you could do that, so that's really interesting. There are quite a few nodding heads in this room, Sam, though you can't see them. Thank you very much. Okay, I'm just looking at the time. All right, we're okay for time, and I'm hoping this next item won't take very long.
One of the planned features for version 2.7, which I now think of as the March version, is to provide better support for running Argo Workflows programmatically from different programming languages. People typically call these things SDKs, or, to teach grandma to suck eggs, software development kits. I've been chatting to a couple of people who've already provided some SDKs in Java and Python about how we're going to go about doing this, and I basically just want to explain the current plan here and get feedback on a couple of interesting points that have come up in discussion. The plan is to produce some API clients using Argo's Swagger definitions. Prior to version 2.5, the only way to interact with Argo programmatically was actually through the Kubernetes API; now, with 2.5, we have our own Argo Server, which exposes its own gRPC-based API. The plan is to provide Swagger-based clients that use the HTTP API, rather than gRPC, as the way to interact. So my first question is: does anybody actually want gRPC? I feel like an HTTP-based API is probably going to meet most use cases, without all the complexity that gRPC involves. That's my first question for you.
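To give a flavor of what an HTTP-based client means in practice: listing workflows becomes a plain HTTP request, with no gRPC tooling required. The endpoint path below is assumed from the Argo Server's REST conventions, so treat it as illustrative and check the Swagger definitions for the version you run.

```python
import urllib.request


def list_workflows_request(base_url: str, namespace: str) -> urllib.request.Request:
    """Build (but do not send) a request against the Argo Server HTTP API.

    The /api/v1/workflows/{namespace} path is an assumption taken from the
    server's Swagger definitions; verify it against your server version.
    """
    url = f"{base_url}/api/v1/workflows/{namespace}"
    return urllib.request.Request(url, headers={"Accept": "application/json"})


req = list_workflows_request("http://localhost:2746", "argo")
print(req.full_url)  # http://localhost:2746/api/v1/workflows/argo
```

A generated Swagger client wraps exactly this kind of call, adding typed request and response models on top.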
The second question is about the clients themselves. Originally we talked about SDKs, but actually the plan at the moment is to provide clients. What's the difference between a client and an SDK? An SDK is typically more things; a client is just one component of an SDK. The clients will allow you to interact with the APIs, including the archive APIs and the offloaded-node-status APIs, which aren't accessible at all through the existing Kubernetes APIs, and we will provide those clients in Java and Python. The reason we're providing a Java SDK is that we use Java internally ourselves, so we need one; and the reason we're going to provide a Python one is that people in the community are asking for a Python SDK. Nobody's asking us to cater for languages such as C#, Golang, PHP, JavaScript, CORBA, or any other languages. So if you think another language is really important to you, really key to you, is that something you'd be interested in contributing to or helping with? Because we're Golang experts and Java experts, but we're actually not Python experts, so it would be very easy for us to produce SDKs in non-core languages that don't actually work that well. So my next question is: what languages do you need?
A: I should mention there is a Slack channel you can join if you're interested, if you want to join the conversation. So, "Scala will take Java at discounted rates": okay, fine! Obviously, if you use Scala, you can use the Java client and work with that. I'm looking at a good question: are you guys happy with the YAML? That's an interesting question. I'd definitely heard several people say they're not happy with YAML, I think prior to this meeting, but I think it's a good question.
Obviously the client is very closely tied to the API, but do people need higher levels of abstraction than just the HTTP client? The client does talk in terms of workflows and so forth, but you may require something a bit more sophisticated, something slightly different, slightly more bespoke to your use case. Well, thank you, guys.
I've got one more item, and then there'll be open Q&A after that if you want to ask any questions. The other item is that we currently have a survey running. The reason we're doing a survey is to take the temperature of the Argo Workflows community; we use it to focus our energies. If something is important to you, this is a good opportunity to bring it to the fore, and it's also about learning what people are doing, so we can appropriately address and prioritize those kinds of issues. It is really important that people complete the survey, because we don't really have any other way to find out what people are doing. We will be summarizing and publishing the results of that survey, probably in a few weeks' time, so you can find out what everybody else is up to; I don't know yet how we'll do that, a PDF document or something like that. Okay, so that is kind of the end of the formal agenda.
On the question about the t-shirts: they have been very desirable. We did take them to KubeCon San Diego back in November, and they were very popular. We actually had to restrict them specifically to committers, people who have actually contributed code changes, and people who have contributed a lot of code changes; there are a few people in our Argo community who have contributed multiple complicated pull requests. An example would be Daisuke's workflow templates; you know, that's a really big feature. If you want to get one shipped over to you, then that's the kind of thing you can do, and we also have t-shirts for anybody who's here as well. Another example is in Argo CD: a guy called Yarn completed a whole bunch of security fixes, so that's the kind of thing you have to do. That's fine; it's quite transactional, isn't it, but that's how it is. Pete's asking whether [inaudible] for workflow YAML would be helpful.
A: Okay! Well, thank you very much for attending, and thank you to Derek, Bara, Simon and Sam for doing their feature previews, the Q&As and their demonstrations; I found it all very interesting. This will be saved, then encoded and uploaded to YouTube for people to watch later. The video link will appear in the community meeting document, and it will also be dropped on Twitter and into the Slack channel, so you should be able to review it later on. Okay, thank you very much, Lucas!