A: So let me tell you a little bit about Argo Workflows and Argo Events. Argo Workflows and Argo Events are two of four products in the Argo community that focus on cloud native — and everything, you know, is kind of native. That means Kubernetes. Argo Workflows focuses on actually scheduling and executing workflows, and the tools are particularly popular inside the machine learning and data processing communities. We also see CI and CD use cases, as well as things like performance testing, which we've talked about in a previous meeting.

A: In this meeting I'm really pleased that we've got Eric Meadows and Peter Salanki; they're going to be doing some demos showing the stuff that they've been working on. That's going to be really interesting, because we love to see what people have been doing with Argo Workflows — it kind of validates us. We'll also have Bala, who's one of the core team members, doing a demo of a new feature coming in v2.10 — I think it's v2.10, as I think it's been merged already — around semaphores, which is a useful feature if you've got large workflows that use limited or shared resources.

A: And there's a discussion topic where I want to solicit feedback from people: a request for comment on the introduction of a webhook capability into the Argo Server, as well as a slightly smaller discussion around the potential of building out a template catalog — potentially similar to the Tekton CD catalog, or to Airflow operators, or to the GitHub Actions catalog.
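The semaphore feature mentioned above limits how many workflows (or templates) can run concurrently against a shared resource, with the limit held in a ConfigMap key. The demo itself isn't captured in this transcript, so here is a hedged sketch of what the manifests look like, expressed as Python dicts; the ConfigMap name, key, and limit are illustrative, not from the talk.

```python
# Sketch of Argo Workflows semaphore-based synchronization.
# The ConfigMap key caps how many workflows may hold the lock at once;
# the workflow references it under spec.synchronization.semaphore.

semaphore_configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "my-semaphore"},     # illustrative name
    "data": {"build": "2"},                   # at most 2 concurrent holders
}

workflow = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Workflow",
    "metadata": {"generateName": "synced-"},
    "spec": {
        "entrypoint": "main",
        "synchronization": {
            "semaphore": {
                "configMapKeyRef": {"name": "my-semaphore", "key": "build"}
            }
        },
        "templates": [
            {"name": "main",
             "container": {"image": "alpine:3", "command": ["echo", "hello"]}}
        ],
    },
}
```

A third workflow submitted while two others hold the lock would simply wait until one of them finishes.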
A: Okay, so if you want to ask any questions, you can obviously put your hand up, or just ask out loud when there's an appropriate point in the conversation. Or, if you prefer, you can drop a message into the Zoom chat; I'll monitor that and hopefully read back any messages that come in, and then we can bring those people in. We will also be recording this, and we typically publish the recording on YouTube afterwards, in case you want to review any particular sections, or share it with anybody, or, you know, just out of personal interest. We get quite a number of viewers on YouTube as well, which I'm always very pleased by.

A: Okay. So the first topic for today is going to be Eric Meadows, and he's going to be talking about CI/CD. Eric, are you ready for this?
B: So I'm going to go through CI/CD for machine learning at MLB. Quick background about myself: I joined Major League Baseball back in January of this year. I've been doing machine learning and data engineering over the past — boy — like ten years, and MLB has been great. Quick background on MLB: we use the Google Cloud Platform, and I'm on the machine learning and data engineering team. We particularly use Google Kubernetes Engine, Cloud SQL, Memorystore, and Cloud Storage.

B: So, real quick: what were our CI/CD goals when we were looking at tools? We wanted deployments to be one-click rollbacks. We wanted continuous deployment for non-production deployments, and we wanted approval when we deployed to actual production. We wanted to be able to handle all devs doing multiple builds, and the ability to deploy anything — so pods, deployments, or Cloud Run, and others. And so we looked at Jenkins.
B: We looked at Jenkins, Jenkins X, Google Cloud Build, Argo Workflows, and GoCD, and the one thing that hit on Argo was that it was blank-slate automation — so we had to build a lot ourselves. Which, now that we're talking about the hub, hopefully all the work that we've done, and that other people have done, will help make this really easy, especially on the data processing and CI/CD side.
B: So these were what we presented internally: it was blank-slate automation, Intuit dogfoods it, it was used by Kubeflow for data processing, and we can reuse the interfaces. Those are huge wins, and so that ended up winning out. We have these workflow processes that we follow — we have kind of three.
B
We
link
we
see
if
a
link
commit
exists,
then
we
clone,
and
then
we
discover
what
docker
builds
need
to
happen
within
that
repo
because
we're
more
of
a
mono
repo
based
in
the
ml
team.
Here
we
build
everything
and
then
we
report
back
and
when
we
build
the
repos
for
our
models
ends
up
triggering
a
commit
over
to
a
model's
deployments,
repo
which
handles
all
of
our
overlays
for
kubernetes
for
the
application
configs
and
it
handles
the
same
workflow.
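The auto-discovery step described above — working out which Docker builds a monorepo needs — can be sketched as a small helper. This is a toy stand-in, not MLB's actual code: the real pipeline also diffs against the linked commit so unchanged images are skipped, and all paths here are hypothetical.

```python
import tempfile
from pathlib import Path

def discover_docker_builds(repo_root):
    """Return the monorepo subdirectories that contain a Dockerfile,
    one build job per directory."""
    root = Path(repo_root)
    return sorted(p.parent.relative_to(root).as_posix()
                  for p in root.rglob("Dockerfile"))

# Tiny demo repo: two components, each with its own Dockerfile.
with tempfile.TemporaryDirectory() as repo:
    for sub in ("models/ranker", "services/api"):
        d = Path(repo, sub)
        d.mkdir(parents=True)
        (d / "Dockerfile").write_text("FROM alpine:3.12\n")
    builds = discover_docker_builds(repo)
```

Each discovered directory would then be fanned out as one build step in the workflow.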
B: So our non-prod build — I'm going to zoom in with the handy 2.9 zooming — is basically a full build of all of the changes that happen to our Dockerfiles. We do a non-prod build, which is everything on this side, and over there a prod build. And then, because we have that notion of the deployments being separate, our CI process is here, and our CD process is another workflow that I can show.
B
I
did
not
link
this
up,
but
this
basically
goes
in
and
we'll
go
down
and
we
will
do
our
deployments
and
apply
them
separately.
So
if
I
go
back
to
this,
every
one
of
our
docker
builds.
We
decide
between
one
of
our
docker
builders
and
then
we
basically
go
down
through
a
preset
path.
On
we
build
our
images
we
test.
B
Then
we
render
out
our
deployments
and
then
at
the
end
we
end
up
just
doing
as
most
people
would
reporting
our
status
out,
and
we
have
this
because
we're
making
a
commit
to
another
repo.
Sometimes
things
collide.
So
sometimes
we
have
the
notion
of
failures,
so
we
do
retries
when
we
do
make
those
commits
or
emerges
to
those
other
branches
and
then
real
quick.
The
other
part.
That's
so
that's
our
large
workflow,
our
small
workflow
here
is
we
do
our
auto
discovery
on
what
needs
to
be
built.
B
This
is
one
dev
branch.
We
see
that
we
need
to
build
two
docker
images.
We
run
our
unit
tests
and
then
we
have
different
application
configs.
That
can
happen
all
in
this
repo,
and
so
we
discover
what
we
need
to
render
out
from
an
application
side.
We
make
those
commits,
and
then
we
end
up
reporting
out
our
status
and
this
then
these
auto
merges
and
commits
trigger
other
builds,
and
so
hopefully
these
will
be
into
the
a
good
portion
of
this
will
be
over
in
the
hub
templates.
So
any.
E: — which workflow features do you leverage or rely on a lot that you find useful?
E: Okay — I think you mentioned you're going to contribute to the hub, so I guess you are leveraging things like templates?
B: Yeah, we're leveraging templates a lot. We have six major templates that are used across the org outside of ML, and a lot of our build processes are internal — we have those as templates as well.
E: And when you use the templates, are you using them more as a component, or do you use templates as kind of a predefined workflow?
B: We kind of split it out. We have individual pieces that are like, "hey, I want to build a Dockerfile," but then we have a larger set, which is like, "hey, I'm going to discover my Dockerfiles and build those," and some are like, "I'm going to report out" — like at the bottom here, our Git status. That's just a common shared one, but we include it in a larger workflow template.
E: Cool, okay. Oh, and then — sorry, last question — for the shadowing and canarying: is that Argo Rollouts, or something different?
D: I have a question: do you see a need to deploy to multiple clusters? Because right now, from Argo Workflows, are you deploying to a single cluster, or multiple?
F: I have a question: it seems like you built the full CI and the CD portion of it. I'm just curious — why did you build the CD portion in Argo Workflows versus using Argo CD for the CD?
B: So, for us, we looked into Argo CD, and our org does use Argo CD across the board. It was just a choice to have it all in one spot; there's no real rhyme or reason behind it. We were doing it in Jenkins before, in one tool, so we wanted to keep it in one tool.
F: I see. And just one more follow-up question: when are you looking to contribute these templates into the hub?
A: Yes — so we've got a couple of questions for you, Eric, in the chat. One from Matt C: does Eric work at MLB Advanced Media, out of curiosity? That's the first question.
A: Okay. And Derek is asking — and I'm really interested in this question as well — how are the workflows triggered? Do you do that from your GitHub webhook? Are you using Argo Events, I'm assuming?
B
Yes,
so
we
use
argo
events
to
handle
that
which
encompasses
our
workflow,
but
we
use
the
web
hook
version,
not
the
the
github
version
of
it,
the
github
version.
We
wanted
to
be
able
to
do
an
org,
wide
url,
and
so
that's
why
we
don't
use
the
github
version.
We
just
use
the
webhook
version.
Okay,.
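The setup described here — one org-wide GitHub webhook posting to a generic Argo Events endpoint, instead of the GitHub-specific event source that manages per-repo hooks — looks roughly like the following. This is a hedged sketch as a Python dict; the endpoint name and port are illustrative, not from the talk.

```python
# Sketch of an Argo Events generic webhook EventSource. A single org-wide
# GitHub webhook would be pointed at this endpoint; a Sensor (not shown)
# would then filter the payloads and submit the right workflow.

event_source = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "EventSource",
    "metadata": {"name": "org-webhook"},      # illustrative name
    "spec": {
        "webhook": {
            "push": {
                "endpoint": "/push",          # the org webhook posts here
                "method": "POST",
                "port": "12000",
            }
        }
    },
}
```

The trade-off versus the GitHub event source is that payload filtering (which repo, which branch) moves into the Sensor rather than being configured per repository.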
A
Okay,
that's
interesting
simon's
asking
for
me
why
I'm
sure
it's
the
most
important
question.
Definitely
the
question
I
wanted
to
ask
so
it
says:
what's
your.
G: That's not usually a very positive choice, return-wise — so right now, the Giants.
A: What are the differences? What are the — you know — I guess you and Peter have both got some interest here. What are you looking at in that exercise?
B: The problem that I find is that it's kind of more challenging to work with. Versus looking at Argo Workflows, where you know exactly what gets passed and what you can use between steps — when it's assembled in the pipelines version, you don't know if you can necessarily pass in a variable, because of the way it's defined. It doesn't follow typical Python.
A: Okay, interesting. We're always quite interested in people who are using other products collaboratively with Argo Workflows and Argo Events — if you are using them, we do like to know about it. It's not the first time, I think, we've heard of people moving for simplicity, particularly people moving off Airflow, I think.
D: So Argo Workflows doesn't have a lot of CI/CD-specific features. So if you are using it for those use cases, we would be really happy if you can contribute anything.
B
Yeah
and
that's
that's
where
we
I
pinged
alex-
and
I
was
like,
I
think,
maybe
the
hub,
which
would
be
good-
would
help
with
this,
and
so
that's
why
we're
hoping
to
contribute
a
lot
of
these
pieces
back.
Thank
you.
Hey
yeah,
you're
welcome.
A: Okay, thank you very much, Eric — that was great. I hope you can share your slides with us, in case people want to see them again in the future. Next up we've got Peter Salanki; he's going to be talking a bit about managing Argo, and I'm hoping to see a really cool demo doing some 3D rendering that he's promised. Peter, are you ready?
C: Perfect, yes. We're going to talk a bit about managing Argo as a managed service, and we're going to talk about two use cases: one is a CGI rendering use case, one is a molecular dynamics use case. But first — who are we? We are probably a bit of a different user of Argo than most people on this call. We are a startup public cloud, which is pretty rare these days, so we kind of try to compete with Amazon and Google and Azure.
C
Our
focus
is
on
accelerated,
compute
and
other
other
use
cases
that
you
know
take
a
lot
of
compute,
so
gpus
is
kind
of
what
we
built
our
business
around
and
with
that
we
tend
to
serve
large
sort
of
batch
style
customers.
So
this
is
people
in
the
rendering
space
and
molecular
dynamics,
neural
net
training,
those
type
of
people,
and
we
also
have
sort
of
other
other
business
segments.
This
is
real-time
type
customers.
C
So
this
is
video,
transcoding
and
and
neural
net
inference
where
we
do
do
quite
a
lot
of
work
with
the
kubeflow
guys
on
the
serving
piece
too,
to
sort
of
serve
neural
nets.
Of
cpus,
our
clients
usually
take
pay
quite
a
lot
of
resources,
so
on
the
gpu
side,
which
is
our
focus
to
take
anywhere
between
50
and
4000,
gpus
on
and
off
or
in
the
cpu
side,
it's
2,
000
plus
course.
C
The
way
we
built
our
stack
is
large
multi-tenant
kubernetes
clusters
running
on
bar
model,
and
this
allows
us
to
do
some
pretty
cool
things
with
really
fast
spin
up
and
spin
down
times.
We
can
move
nodes
around
between
clusters.
We
can
run
clusters
to
span
multiple
data
centers
and-
and
we
can
do
that-
pretty
pretty
flexibly
to
kind
of
meet
customers
needs.
C
And
how
do
we
get
into
our
goal?
So
when
we
started
this
business
only
two
years
ago,
we
started
out
by
acquiring
a
cgi
rendering
product
so
that
sort
of
user
and
platform
to
do
cgi,
rendering
there.
We
then
built
an
in-house
job
scheduler
to
schedule.
These
render
grander
jobs
over
all
of
our
gpu
nodes.
C
It
worked
fine
and
actually
worked
surprisingly
well
and
the
guy
who
who
built
it
brian,
is
also
on
this
call
and
is
going
to
do
our
demo
later
on,
but
we
clearly
quickly
realized
that
we
don't
really
want
to
be
in
the
business
of
maintaining
a
job
scheduler.
You
know
we
want
to
need
more
and
more
features.
We
wanted
to
move
to
something
that
was
kubernetes
native
and
we
didn't
want
to
sort
of
reinvent
the
wheel.
C
I
felt
like
there
should
be
products
out
for
this,
so
we
looked
into
a
number
of
products.
We
looked
into
a
peloton
by
uber
and
they
kind
of
felt
on
fell
on
just
lack
of
community
support.
I
don't
know
if
anyone
is
actually
using
it
outside
of
uber
seems
really
cool,
but
there
wasn't
a
lot
of
documentation
on
it
and
and
didn't
really
want
to
head
down
that
path.
We
looked
at
some
cdi
specific
tools
like
openq
from
from
sony
and
also
didn't
really
yeah.
Yeah
didn't
really
really
like
that.
C
Both
lack
of
actual
users
outside
of
sort
of
sony
and
google
and
just
you
know,
didn't
run,
wasn't
kubernetes
native
and
so
on,
and
then
we
kind
of
stumbled
upon
our
workflows
in
our
ocd
last
october
and
it
pretty
quickly
dawned
on
us
that
you
know
this
is
going
to
be
the
winner
and
going
to
help
us
with
a
with
a
lot
of
things
both
because
communities
native
and
the
flexibility
of
aggro
workflows,
kind
of
fit
our
our
vast
needs
of
scheduling,
scheduling,
stuff
on
all
different
type
of
nodes
and
whatnot.
C
We
need
to
do,
and
you
know
different
chain
workflows,
so
our
journey
then
started
sort
of
rebuilding
our
internal
cgi,
rendering
product
right
before
christmas
and
then
at
the
same
time,
we
got
some
clients
who
were
moving
off
of
amazon,
amazon
and
google,
who
had
different
use
cases
which
we
were
going
to
get
into
and
they
needed
ways
to.
You
know
batch
to
batch
process
workflows
and
since
we
already
knew
about
argo
dan,
we
kind
of
introduced
them
to
hey.
You
should
get
to
check
out
argo.
C
I
think
this
would
be
a
great
great
alternative
to
what
you're
doing
today
and
that
kind
of
worked
out.
You
know
it
helped
us
a
lot,
a
lot
making
making
us
able
to
support
these
clients
and
they
ended
up
liking,
argo
as
well.
So
then
we
kind
of
started,
building
out
kind
of
a
managed
argo
service
where
we
managed
our
control,
plane
and
and
the
clients
can
submit
their
jobs.
C
What
we
are
building
now
we're
starting
to
design
at
least
is
sort
of
simplified
ui
to
argo,
to
be
able
to
schedule,
as
or
one-off
pods
or
one
of
one
of
workflows,
where
you
just
input
a
docker
image,
and
you
don't
even
have
to
write
a
workflow
channel
and
we're
hoping
to
be
able
to
contribute
back
more
of
the
work
that
we're
going
to
do
to
core
argo
as
well,
we'll
get
back
to
that
a
bit
later
as
well.
C: They used to run on AWS using AWS Batch, and we moved them over to Argo. That was actually pretty smooth, and, you know, they had a pretty massive reduction in cost. Also, thanks both to Kubernetes, to Argo, and to how we build our bare-metal stack, they can spin up, you know, 4,000 GPUs in 10 seconds, which is pretty awesome.
C
The
client
runs
a
lot
of
workflows,
but
the
workflows
in
themselves
are
very
simple,
they're,
actually
just
one
or
two
steps,
but
they
churn
through
easily
10
000
workflows
in
a
day
with
each
workflow
being
two
minutes
to
an
hour
each,
and
we
saw
some
some
very
interesting
sort
of
artifacts
when,
when
putting
that
much
load,
both
on
kubernetes
and
on
argo
and
argo,
ui,
definitely
doesn't
like
more
than
a
thousand
workflows
and
sort
of
the
top-level
view,
and
we
also
found
some
interesting
memory,
leaks
and-
and
you
know,
metrics
issues
when
churning
through
that
many
workflows
and
yeah
and
kubernetes.
C
If
we
don't,
if
you
don't
don't
clean
up
your
pods,
if
you
don't
set
your
pub
gc,
then
you're.
Definitely,
your
epcd
is
going
to
be
pretty
unhappy
when
you
start
getting
ten
thousand
parts
in
the
name
space,
so
moving
of
aws
as
batch.
So
this
client
was
using
aws
batch.
They
were
not
doing
anything
fancy,
but
you
know
they
thought
that
they
was
batch
was
a
very
simple
tool
to
work
with,
and
the
functionality
of
aws
batch
is
is
way
more
limited
than
argo
is.
C
We
replace
this
with
with
argo.
Argo
rest
api
that
we
manage.
We
managed
our
installation
for
them
and
then
they
get
the
rest
api
in
the
rv
ui.
C
They
very
quickly
said
you
know
came
through
that.
Okay,
we're
testing
this.
We
want
to
be
able
to
submit
sort
of,
submit
a
workflow
template
with
some
parameters
through
rv
ui
and
doing
that
is
kind
of
clunky,
and
you
know
you
have
to
end
up
typing
yaml.
So
for
someone
who's
not
not
hasn't
worked
with
yaml
before
you
know
some
sort
of
scientist
or
something
like
that,
you
can't
just
jump
into
the
argo
ui
and
submit
the
workflow
template.
C
Another
feature
gap
is
that
aws
batch
makes
it
very
easy
to
sort
of
aggregate
and
watch
your
your
logs
through
aws
cloud
cloud
logs,
whatever
it's
called,
and
especially
when
you
are
you
know,
quickly.
Sort
of
churning
through
your
workflows
and
we're
deleting
pods
sort
of
managing
and
searching
through
logs
is
not
easy
using
argo
tools.
C
So
we
we
built
something
quickly,
just
exposing
all
the
logs
we
collected
in
the
search,
build
the
quick,
graphing
dashboard
and
then
use
the
awesome
link
feature
in
the
inargo
to
on
the
workflows,
just
link
out
to
an
external
grafana
dashboard
where,
where
they
can
search
and
look
for
all
their
all
their
logs
and
as
I
mentioned
earlier,
the
the
kind
of
the
the
biggest
pain
points
is
that
these
these
guys
won't
want
a
ui
to
to
monitor
their
workflows
and
when
you
have
thousand
plus
workflows
running.
C
At
the
same
time,
the
argo
or
ui
sort
of
workflow
is
just
kind
of
grind
and
still
halt.
That's
something
that
we
would
definitely
definitely
want
to
help
out
with
and
contribute
to
to
sort
of
optimize
that,
but
we
haven't
had
a
possibility
to
do
it.
Yet
all
in
all
the
migration
was
was
not
a
big
deal
kind
of
both
seen
from
us
and
the
clients
perspective.
C: So I'm going to talk a bit more about our experience running Argo as a managed service. We deploy one Argo installation per tenant, per customer. The reason we went with this is because we don't want to force everyone to be on the same version — we want to be able to have different upgrade paths — and also to ensure separation between the clients.
C: This is something that might change in the future, but right now we're pretty happy with it. And Argo's simplicity in design — meaning you don't necessarily need a bunch of databases and a bunch of different components that talk to each other — makes it very easy to deploy it in different namespaces.
C: Without a lot of overhead. We manage our upgrades via Argo CD, of course. By default we use the k8s API executor, which we have been moderately happy with: when you have a lot of workflows, the performance suffers, and there are some issues — which I believe have been fixed in recent pull requests — where, if you have a long-running workflow and the executor's connection to the API server drops, or there's an etcd leader change, or anything like that, basically the workflow fails.
C: Which is not very good. So for some of these more trusted, higher-performance-need clients, we run them on the Docker executor, which, you know, has some safety concerns, but we solve that in other ways. We spent a lot of time digging through GitHub issues, because a lot of the issues that we run into with Argo, someone has already found or solved. We're really happy with the release cadence.
C: So even though we've run into some issues with all different types of stuff over the past couple of months, they tend to be fixed really quickly, which kind of makes working with Argo a breeze — we don't have to spend a lot of time hacking around things for our clients. Something we definitely should have had, and are trying to get, is more in-house Go talent, to be able to both fix bugs ourselves and contribute a bit to the Argo projects.
C: So I'm going to jump into the second and last case study. This is our own CGI rendering product, called Concierge Render. It's a cloud render product built for 3D designers and studios — a web app, as you'll see in a second: you upload your 3D models, you click render, and we then federate it out and render it over hundreds to thousands of GPUs.
C
We
package
up
all
the
rendering
software
in
docker
containers
and
then
we
use
argo
to
to
sort
of
orchestrate
the
render
workflow
to
select
the
things
we
render
on
a
cpu,
gpu,
node
and
so
on.
Our
our
web
app
backend
integrator
argo
over
the
kubernetes
api.
The
reason
we
went
this
way
instead
of
the
argo
res
server
is
that
we
need
some
functionality
that
argo
rest
doesn't
provide
like.
We
need
to
be
able
to
watch
workflows.
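Watching workflows over the Kubernetes API, as described here, would mean streaming watch events for the `workflows.argoproj.io` custom resource with a Kubernetes client and reacting to status changes. This hedged sketch shows only the pure part — consuming one such event; the event shape and names are illustrative, and the actual watch call (which needs a cluster) is omitted.

```python
def workflow_phase(event):
    """Extract (name, phase) from one Kubernetes watch event for an Argo
    Workflow object. Argo reports progress under status.phase
    (Pending/Running/Succeeded/Failed/Error); a brand-new object may not
    have a status yet, so default to Pending."""
    obj = event["object"]
    return (obj["metadata"]["name"],
            obj.get("status", {}).get("phase", "Pending"))

# One event in the shape a watch stream would deliver (illustrative):
event = {
    "type": "MODIFIED",
    "object": {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Workflow",
        "metadata": {"name": "render-abc123"},
        "status": {"phase": "Succeeded"},
    },
}
name, phase = workflow_phase(event)
```

A backend would run this in a loop over the watch stream and push phase changes out to the web app.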
C
We
also
write
some
special
code
to
update
workflow
templates
in
argo
via
ci
cd,
where
we,
you
know
they're.
C
The
workflow
templates
is
named
something
regular
like
render.jamo
in
the
in
the
git
repo,
but
then
once
they're
run
through
cd,
they
get
renamed
to
render
dash
their
md5sum,
and
then
we
have
something
that
when
we
submit
workflows,
we
replace
all
the
references
you
know
to
render
with
render
dash
and
v5
sound,
and
this
allows
us
to
have
multiple
versions
of
workflow
templates
deployed
at
the
same
time,
in
the
same
name
space.
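The renaming scheme just described can be sketched in a few lines: hash the template's manifest, and rewrite every `templateRef` in the workflow to the versioned name before submitting. This is an illustrative reconstruction, not the speaker's actual code; the manifest content and template names are placeholders.

```python
import hashlib

def versioned_name(base, manifest_text):
    """'render' plus its manifest's md5 -> 'render-<md5>', so several
    versions of one template can coexist in a single namespace."""
    return f"{base}-{hashlib.md5(manifest_text.encode()).hexdigest()}"

def rewrite_template_refs(node, mapping):
    """Recursively rewrite templateRef names to their versioned names
    before a workflow is submitted."""
    if isinstance(node, dict):
        ref = node.get("templateRef")
        if isinstance(ref, dict) and ref.get("name") in mapping:
            ref["name"] = mapping[ref["name"]]
        for v in node.values():
            rewrite_template_refs(v, mapping)
    elif isinstance(node, list):
        for v in node:
            rewrite_template_refs(v, mapping)
    return node

manifest = "templates: [render]"   # stand-in for render.yaml's content
name = versioned_name("render", manifest)
wf = {"spec": {"templates": [{"steps": [[
    {"name": "do-render", "templateRef": {"name": "render", "template": "main"}}
]]}]}}
rewrite_template_refs(wf, {"render": name})
```

Because the name is content-addressed, re-deploying an unchanged template is a no-op, and in-flight workflows keep referencing the exact version they were submitted against.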
C: I'm going to give you a quick demo — or, actually, Brian, who's on the call, is going to give you a quick demo — of the product and how it looks in Argo when we spin up a render. Brian, I'm going to let you take over.
A: Brian, are you muted? Yeah.
H: All right. So, first, I'm going to start with just an example of what we're going to be doing here. This window has a version of Blender open, with the scene that we're going to transform from this rough wireframe, untextured image into a 145-frame scene. Blender is an open-source product that's used by everyone from small hobbyists who are just getting started, to Hollywood-type studios.
H: The user can see frames in real time as they come in, but for us this has really been a game-changer in the way we manage workflows — just having the flexibility to have the step functions in place to say: okay, if something goes wrong here, we're going to do retries in a different way than we did last time.
H: All right — so it's so awesome that we can go right into the Argo UI and see the logs in real time; it makes it really easy for us on the administration side to see what's happening and make sure jobs are running. Okay, so we've got our first frames, and we've turned that wireframe image into this photorealistic rendering.
H: I can get access to this directly myself. We've onboarded a few studios that are using Argo directly — that have brought it into their own CI/CD pipeline — and they like it because they can get access to more GPUs and they have more flexibility. We actually have some folks that do their own programmatic execution. So we're extremely happy with it; we just went live with this new upgrade a couple of weeks ago, but so far it's leaps and bounds better than what we had before.
C: Okay, thank you, Brian. I just want to tie in — because you asked Michael that earlier, what features we use in Argo. A lot of our workflows use retries; as I mentioned, retries are very, very good for us.
C: But, you know, just out-of-the-box retries with backoffs have been super helpful for both of our use cases. We leverage workflow templates quite a bit, where we try to make the workflows less complex by splitting them up into main workflows and then, you know, one workflow template per render engine, and so on.
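The out-of-the-box retries with backoff mentioned here are configured per template with a `retryStrategy` stanza. The sketch below shows the stanza as a Python dict, together with a tiny helper computing the delays exponential backoff implies — all values are illustrative, not from the talk.

```python
def backoff_schedule(duration_s, factor, limit):
    """Waits (seconds) between attempts under exponential backoff:
    duration, duration*factor, duration*factor**2, ..."""
    return [duration_s * factor ** n for n in range(limit)]

# The retryStrategy the schedule corresponds to (values illustrative).
# maxDuration caps the total time spent retrying.
retry_strategy = {
    "limit": 4,
    "retryPolicy": "Always",
    "backoff": {"duration": "30s", "factor": 2, "maxDuration": "10m"},
}
delays = backoff_schedule(30, 2, retry_strategy["limit"])
```

So four retries at a 30-second base and factor 2 would wait 30s, 60s, 120s, then 240s between attempts.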
C
We
use
artifacts
in
both
http
and
raw
artifacts
to
inject
some
python,
scripts
and
stuff
that
we
need
inside
the
render
component
the
render
container,
and
that
allows
us
to
kind
of
version,
these
python
scripts
with
our
main
source
code
and
then
just
inject
them
via
our
go
artifacts
into
the
container
sort
of
at
workforce
schedule
time.
C
So
we
don't
need
to
rebuild
sort
of
our
containers
that
are
actually
our
containers
with
the
render
software
every
time
we
want
to
update
sort
of
our
python
scripts
that
run
inside
the
container.
If
that
makes
sense
yeah,
I
think
that's
it.
This
was
my
last
slide,
so
I'm
opening
up
to
any
questions
for
anyone.
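The raw-artifact injection described above looks roughly like this in a template: the script content arrives as an input artifact written to a path in the container, so the image never changes when the script does. A hedged sketch as a Python dict — the image name, path, and script are all illustrative.

```python
# Sketch of an Argo template that injects a driver script as a raw
# artifact at schedule time, instead of baking it into the image.

render_template = {
    "name": "render",
    "inputs": {"artifacts": [{
        "name": "driver-script",
        "path": "/scripts/drive_render.py",
        # Versioned alongside the main source code; injected here so the
        # render image itself never needs a rebuild when it changes:
        "raw": {"data": "print('kick off render')\n"},
    }]},
    "container": {
        "image": "example/blender:latest",   # hypothetical image
        "command": ["python", "/scripts/drive_render.py"],
    },
}
```

An HTTP artifact works the same way, except `raw` is replaced by a `http.url` pointing at the script.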
G
Hey
peter
you,
you
mentioned
that
moving
from
aws
batch
to
kubernetes
you're
able
to
get
a
big
cost
savings
where,
where
did
most
of
those
savings,
come
from.
C: So, yeah — I mean, you know, most of those savings come from just moving from AWS to CoreWeave, where we offer a wide range of different GPUs. On AWS you really have the K80 and you have the V100; we have a lot of GPUs in between, and for a lot of use cases the K80 is very underpowered while the V100 is super overpowered — so for a lot of use cases you just find a better fit.
C: Where, you know, if the spin-up time is only 10-20 seconds, they can be much more fine-tuned in the spin-up and spin-down — versus, if the spin-up time is five to seven minutes, because you spin up a VM and then join it into a Kubernetes cluster, you're going to keep more nodes in your Kubernetes cluster, which is going to cost you more money.
G
Right,
well,
that's
great
also.
You
mentioned
that
one
of
the
pain
points
was
kind
of
just
a
little
bit.
What's
your
thought
in
terms
of
like
a
table-based
gui
for
really
large
workflows
versus
having
a
making
knitting
degree,
I
mean
the
graphical
one,
more
scalable.
C: Okay, yeah — so for our use cases, we don't have the problem inside single workflows. You know, our workflows are much simpler than what we saw from MLB in individual workflows. The problem is that we push so many different workflows through, so the summary list is where ours chokes.
C
In
the
individual
workflow
views,
we
never
have
any
problems
because
they're
not
not
that
crazy
yet,
but
the
big
problem
is
when
you
want
to
list
the
workflows-
and
we
have
you
know
a
thousand
plus
workflows
in
that
list.
It
just
get
bogged
down
because
I
think
that
argo
server
is
just
sending
every
workflow
with
all
the
details
to
the
react.
Front
end
doesn't
implement
any
pagination
and
I
think
I
think
that's
why
it
gets
slowed
down
nicely.
I.
E: What kind of scalability problems have you seen? It seems like you shard the controller into many controllers per namespace — one namespace, one controller, yeah?
C
Exactly
and
a
namespace
map
to
a
client
for
us,
and
so
we
had
some
scalability
problems.
You
know
I
found
some
tickets
around
that
we
had
some
memory
leaks
that
were
pretty
interesting
and,
and
we
had
some
early
on
stuff
that
you
guys
helped
with
very
quickly.
Otherwise
the
really
did
the
scalability
problems
have
been
in
the
ui
where
the
uis
gets
bogged
down.
That's,
like
you
know,
90
percent
of
the
pain
point
has
been
like.
C
Oh
great,
the
ui
doesn't
know
again,
because
now
we
have
too
many
name
spaces
and
people
get
annoyed,
and
then
you
yell
at
them
and
tell
them,
use
the
cli
and
all
stop
being
so
they're
relying
on
the
ui
and
they're
like
well.
The
ui
is
pretty
and
yeah.
So
so
that's
that's
really
where,
where
the
pain
points
are
otherwise,
we
haven't
yeah.
Otherwise,
there's
been
very
little,
as
I
said
we
have,
we
had.
You
know
before
we
implemented
any
reasonable,
workflow
ttls.
C
Just
deleting
you
know,
10
000
workflows
with
our
go,
delete
all
takes
like
an
hour
so
that
that's
pretty
interesting
but
but
except
for
those
type
of
things,
we
haven't
really
had
any
issues
with
scalability
that
that
aren't,
you
know
also
a
problem
with
with
the
rest
of
the
kubernetes
stack.
As
I
said,
you
know
when
you
run
when
you
have
10
000
pods
in
a
name
space
kubernetes
is
not
going
to
be
super
happy
with
you.
C: etcd is not going to be super happy. So my tip is really: be aggressive with TTLs on your workflows when you have a lot of them, and with pod GC. The downside of that is that you obviously lose being able to dig into the workflow using argo get, argo list, or the UI.
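The aggressive-cleanup advice here maps to two fields on the workflow spec: `ttlStrategy` (when the completed Workflow object itself is deleted) and `podGC` (when its pods are deleted). A hedged sketch as a Python dict — the specific timeouts are illustrative, not the speaker's values.

```python
# Cleanup settings for high-churn installs: expire completed Workflow
# objects quickly and delete their pods, so etcd doesn't accumulate tens
# of thousands of objects. Merged into spec on each Workflow (or set via
# workflowDefaults on the controller).

cleanup = {
    "ttlStrategy": {
        "secondsAfterCompletion": 300,   # delete workflow 5 min after it finishes
        "secondsAfterSuccess": 300,
        "secondsAfterFailure": 3600,     # keep failures longer, for debugging
    },
    "podGC": {"strategy": "OnWorkflowCompletion"},
}
```

As the talk notes, once the pods and workflows are gone you lose `argo get`/`argo logs`, so logs need to be shipped somewhere durable (Elasticsearch, log artifacts) before expiry.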
C: So you need to offload your logs to somewhere else, like Elasticsearch and so on, because the pod is gone. And, yeah, I mean, it could possibly be an avenue to do some better, more direct integrations with Elasticsearch, maybe — or those types of databases for logs — so you can still click the logs button in the UI and it has logs from somewhere else. And I know there either is support, or there has been talk about support, for saving logs as artifacts, which is something that could be interesting.
C
We
haven't
looked
into
it
because
we
have
elasticsearch
collecting
logs
for
us.
We
don't
archive
workflows
when
they're
complete,
so
we
haven't
we
haven't
deployed.
You
know
any
of
the
database
support
in
our
go
with
passwords,
mostly
because
we
haven't
wanted
to
add
that
level.
C
Complexity,
that's
probably
something
that
we
will
want
to
do
at
some
point,
but
I
just
know:
there's
been
so
much
other
things
to
sort
of
manage
and
maintain
that
adding
the
complexity
of
sort
of
an
archive
database
and
and
managing
that
is
not
something
I've
been
ready
to
add.
Yet
so
we
kind
of
be
dealing
with
trying
to
collect
all
the
data
we
need
from
the
workflow
and
then
just
having
it
expire
using
ttl.
C
Okay,
well,
if
he
says
painless,
I'm
probably
gonna
end
up
doing
it
and
then
I'll.
Let
you
know
how
it
scales.
A: I have a question, please: how do you secure your various user interfaces? You mentioned it was multi-tenant.
C: Yes — so that's also one of the reasons we run Argo per namespace. We do soft multi-tenancy in the Kubernetes clusters, but each namespace is pretty strictly separated from the others, both through network firewall policies and all the pod security policies, and so on. So, since each customer has their own Argo installation, there's no multi-tenancy at the Argo level. And then the UIs themselves —
C
No,
we
don't
we
would,
it
would
be
in
ingress
and
it
would
be
as
some
simple
application
like
basic
off
for
that
specific
user.
We
haven't
yeah,
we
we
haven't
built
sort
of
a
unified
authentication
thing
for
all
of
our
sort
of
kubernetes
services.
That's
something
that's
coming,
but
it's
pretty
simple
yeah,
but
you
know
it's
it's
either,
or
sometimes
we
even
use
them
use
that
if
you
know
sometimes
we
even
use
our
goal,
ui
off
with
kubernetes
client
credentials.
Sometimes
users
do
that
as
well.
Okay,.
G: Peter, I have a quick question. (Yep.) You mentioned that part of the things you're working on right now is to be able to scale retries with more resources, using a pod spec merge?
C: So it's a bit too early, yeah. What we have right now is kind of packed up in the workflow, where we just have a bunch of if/else-type statements in the workflow — but we want to do something more generic, something that we could contribute back.
C
But
I
don't
have
you
know
I
don't
have
a
design
for
that
yet,
but
it's
something
that
I
would
bring
to
you
and
discuss
with
you
before
we
actually
start
coding
on
it
before
we
can
actually
implement
something
in
in
argo
itself.
I
need.
D
E
C
…the issue, and I think that it should go closed, or at least it hasn't moved. I think that a good amount of thought definitely needs to be put into this, so you don't make something that's too restrictive and only solves one use case, but you also don't want to make it too open-ended either, yeah.
G
Something that's been on my mind is being able to evaluate expressions using the Argo variable syntax, because we now expose the number of the retry that you're in as an Argo variable. So you could simply do some expression resolution with it in the pod spec merge.
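A hypothetical sketch of the suggestion above, escalating memory on each retry by resolving an expression over the retry counter inside `podSpecPatch`. This syntax was not supported at the time of the discussion; the expression, image, and numbers are purely illustrative:

```yaml
templates:
  - name: train
    retryStrategy:
      limit: "3"
    # hypothetical: an arithmetic expression resolved against the retry number,
    # so attempt 0 gets 4Gi, attempt 1 gets 8Gi, and so on
    podSpecPatch: |
      containers:
        - name: main
          resources:
            limits:
              memory: "{{= (retries + 1) * 4 }}Gi"
    container:
      image: my-trainer:latest   # illustrative image
      command: [python, train.py]
```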
C
Would that be helpful? It would, but in our case we might want to retry it on a node with different node selectors. So you might want to change a node selector from, say, a Tesla P6000 GPU to a Tesla V100 GPU, and then it's not only arithmetic.
C
Also, you then almost want to use that in a when condition: run this when the retry count is two, and then figure out how to do that without a bunch of duplication.
E
Yeah, one workaround I've suggested for this limitation is to not use the retry, but to use a loop that iterates over the variants that you need, and then when it actually succeeds you skip the remaining steps, because that's kind of a cheesy way to do a retry with variants.
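The loop workaround described above might be sketched like this, attempting a second GPU type only if the first attempt failed; the template names, node label, and image are illustrative:

```yaml
templates:
  - name: retry-with-variants
    steps:
      - - name: try-p6000
          template: train
          arguments:
            parameters:
              - name: gpu
                value: Tesla-P6000
          continueOn:
            failed: true        # don't fail the workflow; fall through to the next variant
      - - name: try-v100
          template: train
          arguments:
            parameters:
              - name: gpu
                value: Tesla-V100
          # skip the bigger GPU if the first attempt already succeeded
          when: "{{steps.try-p6000.status}} != Succeeded"
  - name: train
    inputs:
      parameters:
        - name: gpu
    nodeSelector:
      gpu-type: "{{inputs.parameters.gpu}}"   # illustrative node label
    container:
      image: my-trainer:latest
```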
C
Yeah, yeah, I get it, that makes sense; we'll definitely look into that as a workaround as well. It would still be really nice if we can come up with something that's more organized.
C
D
C
There was a question about the etcd issue. I don't think I've actually filed an issue for it, but I think there's a very recent issue or pull request. I'm going to pull that up and send it into the chat.
A
Okay, thank you very much, Peter. They were fantastic demos from both you and Brian, and from Eric today. We just really love hearing about how people are using Argo Workflows and the different use cases people are solving. It's really helpful for us in determining our plan in terms of the new features and capabilities we want to build.
A
So if anybody else wants to volunteer, it's also a really great opportunity to talk about what you and your business are doing, and I've got a few more lined up over the next couple of months that I'm quite excited about. I won't tell you too much about that just yet, but it's going to be pretty good, I think. Okay, so next on the agenda: we are currently working on features for version 2.10, and that should go to release candidate this week.
A
So probably Friday, or maybe we'll make it Monday, and there are a couple of new features in there. One is called memoization, but today I've got Bala, and Bala is going to be giving us a demo of a feature called semaphore. So Bala, please take it away.
I
Yep, thanks Alex. Hi, I'm Bala, I'm a staff engineer at Intuit working on the Argo Workflows project. Let me share my screen.
I
Yes, yes, yeah, okay. So we were working on the feature called semaphore; it has now been renamed to synchronization. Currently Argo supports rate limiting in two scenarios. One is at the configuration level, i.e. the controller level, which controls the parallelism of workflow execution. The other is at the step and task level, which rate limits within a particular workflow. Synchronization solves two main use cases. One is that the user can restrict certain workflows, not all workflows, and it supports multiple rate limits: the user can configure multiple rate limits, which can be referenced in the particular types of workflows that should be rate limited. The other is that you can rate limit specific steps across workflows. Those are the two major use cases it's going to solve.
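The workflow-level rate limit described here is configured against a ConfigMap key holding the concurrency limit; a minimal sketch assuming the v2.10 synchronization syntax, with illustrative names:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  workflow: "2"          # at most two workflows may hold this semaphore at once
---
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: synchronization-wf-level-
spec:
  entrypoint: whalesay
  # workflow-level semaphore: the whole workflow waits here until a slot is free
  synchronization:
    semaphore:
      configMapKeyRef:
        name: my-config
        key: workflow
  templates:
    - name: whalesay
      container:
        image: docker/whalesay
        command: [cowsay, "hello world"]
```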
I
The template level has the same spec element, called synchronization, with a semaphore and your key. One beauty of this feature is that the template lock applies across workflows: if two workflows have the same semaphore lock on a particular template, then it will be rate limited across the workflows.
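The template-level lock, held across workflows, sits on the template itself; again a sketch of the v2.10 syntax with illustrative names:

```yaml
templates:
  - name: acquire-lock
    # template-level semaphore: shared by every workflow referencing this key,
    # so concurrent executions of this step are limited cluster-wide
    synchronization:
      semaphore:
        configMapKeyRef:
          name: my-config
          key: template
    container:
      image: alpine:3.12
      command: [sh, -c, "sleep 10; echo acquired lock"]
```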
I
Yeah, so both workflows completed based on the configuration we have. Yep, that's it from me. Do you guys have any questions about it? Could you tell us about the kinds of use cases you would solve? So mainly, we have a use case where we don't want to rate limit all the workflows, but we need to rate limit certain workflows.
E
Another way to state it is that it's like having a priority queue whose order is determined by a semaphore. One of the design decisions is that when notifying a waiter, or choosing a waiter to wake up to acquire the semaphore, we choose the one that has the highest priority and the earliest creation timestamp. So it's a combination of synchronization features along with prioritization.
A
I don't know that feature. Does anybody else know about it?
B
depends_on_past: you can set it to true in Airflow, and what that means is that if you do kick off a backfill, it waits for the previous run to complete. So if you scheduled a job every minute and you scheduled it for yesterday, it would kick off 86,000, or not 86,000, whatever the number is, minutes' worth of jobs, and then once one completes, the next one follows.
I
A
Thank you. So that is a new feature coming in version 2.10, and hopefully it'll be ready for you to try out on Monday next week, if you want to give it a go.
A
Okay, I'm going to share my screen now, since I think the rest of the meeting is me talking, which is my favorite thing. I thought I'd initially give you a bit of insight into how we go about determining what we work on next, what our roadmap will be, and it's actually relatively simple.
A
So when you raise an issue in GitHub, you have the option to raise a bug report, an enhancement proposal, or a question, and we try to respond to all of these within 24 hours. Typically we tend to treat bug reports as higher priority than enhancement proposals, especially if we think they're a regression from previous functionality.
A
The way we then look at enhancements is we typically go into the list here and sort it by the number of thumbs-up. So if you've got an enhancement you think is particularly important, or in fact a bug report you think is particularly important, the one thing you can do to get it prioritized is to give it a thumbs-up. Now, we don't always then directly go and implement the first item on the list.
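For reference, GitHub's issue search can reproduce this triage view directly, sorting open issues by thumbs-up reactions; the label name here is illustrative and may differ from the repository's actual labels:

```text
is:issue is:open label:enhancement sort:reactions-+1-desc
```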
A
We sometimes go further down the list, because there's a workaround for an item, or it's not necessarily something we want to do, or perhaps we hope that people from the community will do it. It's quite hard for the core team to implement things such as support for Azure, for example, because we run our workloads on GKE and AWS, so it's very hard for us to test.
A
So if you've got something unusual like that, it's often a really good one to contribute. The reason I mention this is because we recently decided to work on, in version 2.11, the ability to trigger a workflow from a webhook. I'm going to give a little bit of insight into how I plan to go about doing that, with the goal of soliciting feedback from people on the meeting today, so you can talk to us a bit about it.
A
So the plan is to attempt to fulfil one, two, or three of the use cases mentioned here. The first one we think is pretty clear-cut: people want to submit a workflow from a workflow template, or potentially a cluster workflow template, based on a message. The second one is that we know people use automation.
A
Events is very much about automation: to resume a suspended workflow today, you'd use the API or the CLI, with some automation behind it. And the final one is what we want to do ourselves, which is gating a cron workflow. Gating a cron workflow basically means you schedule a workflow, but that workflow does not actually start executing until another precondition has occurred.
A
So if you want to have somebody send a webhook event, such as, for example, GitHub sending a webhook, then you need to secure the endpoint and create a token. The way we create tokens currently is basically that you create a service account in your cluster for that token, and then you can make an HTTP request to one of two endpoints. Actually, it's the same endpoint, whether or not you include the namespace or exclude it.
A
The reason you'd include the namespace is if the service account that you use does not have cluster-scoped permissions. And a bit of an overview for people: to be able to trigger a workflow based on a workflow template, you would need two permissions.
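The token setup described, a service account plus the two permissions, reading workflow templates and creating workflows, might look like this; the names are illustrative, the exact verbs are an assumption, and the token extraction assumes pre-1.24-style service-account secrets:

```shell
# service account whose token the webhook sender will present
kubectl create serviceaccount webhook-client -n argo

# the two permissions: read workflow templates, create workflows
kubectl create role webhook-client-role -n argo \
  --verb=get,list,create \
  --resource=workflowtemplates.argoproj.io \
  --resource=workflows.argoproj.io
kubectl create rolebinding webhook-client-rb -n argo \
  --role=webhook-client-role \
  --serviceaccount=argo:webhook-client

# extract the bearer token to use in the Authorization header
SECRET=$(kubectl get sa webhook-client -n argo -o jsonpath='{.secrets[0].name}')
ARGO_TOKEN="Bearer $(kubectl get secret "$SECRET" -n argo \
  -o jsonpath='{.data.token}' | base64 --decode)"
```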
A
What do you expect an event to contain? Well, an event is intended to contain any JSON payload, and then whether or not an event is used to trigger a workflow template is based on a thing called an expression.
A
I think it might be better to include an example of a workflow template that could be triggered; I'll come back to the expressions and explain the expression syntax based on the template, because I don't think this document is ordered in the easiest-to-read order.
A
So a workflow template that can be triggered based on an event simply has a new field in its spec called event, with a field called expression. An expression is not the same as the curly-brace syntax you might be used to for parameters. It's actually similar to the depends syntax, and in fact it's the same as the depends syntax, in that it's evaluated over an environment, and the environment is basically the message.
A
So this expression here says that the event must have a message in its payload, a field called message, and it must have a metadata entry of x-argo-e2e. Metadata basically means the same as an HTTP header, so you have access to any HTTP headers that start with the x- prefix, which would be things like x-github-action or x-gitlab-action. What's not shown in this example is that you'd also have access to the subject of the claim set, and that is intended to allow you to filter out events based on who sent them. So the idea is, if you're accepting events from GitHub and from GitLab, I don't know who would do that, but let's just use those as canonical examples, you might not want a workflow triggered based on both a GitLab message and a GitHub one. You can use the claim set.
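Putting that together, here is a sketch of the proposed syntax as described in the meeting; this reflects the proposal under discussion, so the released feature may well differ:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: webhook-example
spec:
  entrypoint: main
  # proposed field: trigger this template when the expression evaluates true
  # against the incoming event's JSON payload and HTTP headers
  event:
    expression: payload.message != "" && metadata["x-argo-e2e"] == ["true"]
  templates:
    - name: main
      container:
        image: alpine:3.12
        command: [echo, hello]
```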
A
The claimSet.subject is kind of the username of whoever it is, and that might be a bit longer than this if it's a service account. Let me see if I can get an example to show you.
B
A
Just paste in the URL. So a service account has a slightly different subject syntax: system:serviceaccount:namespace:service-account-name. So that might be, for example, jenkins; that's the one I tend to use.
A
Okay, so does anybody have any questions about that so far? The events will come in, and this will potentially result in a workflow template being triggered. Eric, you're asking: is this a consolidation of Argo Events and Workflows? Let me come back to that at the end of the discussion and tell you what the differences are between the two of those.
A
Okay, and so here's an example of triggering that above event with the authorization token, resulting in the workflow being triggered and printing out the message.
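The trigger request might look like this, assuming the Argo Server's default port and the proposed events endpoint (which could change before release); the token is the bearer token created for the service account earlier:

```shell
curl https://localhost:2746/api/v1/events/argo/ \
  -H "Authorization: $ARGO_TOKEN" \
  -H "X-Argo-E2E: true" \
  -d '{"message": "hello events"}'
```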
A
A suspended workflow is one that contains a suspend node template, and it will suspend when it gets to that particular node. You can then resume it either from the user interface, by clicking on the resume button, or by sending an HTTP request, using the CLI and so forth, to execute a resume command. Both of those require the person sending the message to understand Argo Workflows and the format and syntax of our workflows.
A
Events allows you to connect clients who don't understand or don't know about workflows, but do know how to send webhook events. So here's the syntax for a template: a suspend template is simply one that has the suspend node, and then it has an event which has an expression, which is very much the same thing, and this will execute and suspend at the point of that suspend node.
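A sketch of the proposed suspend-plus-event syntax as described; again, this mirrors the proposal, not necessarily the released implementation, and the payload field is hypothetical:

```yaml
templates:
  - name: approve
    # the workflow pauses here until a matching event arrives
    suspend: {}
    event:
      expression: payload.approved == true   # hypothetical payload field
```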
A
We would also like to support manual intervention and automatic resumption on timeout, because these are both features that suspend nodes already support, so you could do that from the CLI if your workflow goes into the suspended state and for some reason you don't get the message that you want, or you have some kind of outage related to it. And then, finally, this is just an example of gating a cron workflow.
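Gating a cron workflow, as described, would just schedule a workflow whose first node suspends until the precondition event arrives; a hypothetical sketch built from the proposed syntax above:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: gated-cron
spec:
  schedule: "0 * * * *"
  workflowSpec:
    entrypoint: main
    templates:
      - name: main
        steps:
          - - name: wait-for-data     # gate: suspend until the event arrives
              template: gate
          - - name: process
              template: process
      - name: gate
        suspend: {}
        event:
          expression: payload.dataReady == true   # hypothetical payload field
      - name: process
        container:
          image: alpine:3.12
          command: [echo, processing]
```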
A
Okay, so does anybody have any questions about those, before I talk a little bit about whether this is a consolidation of Argo Events and Workflows? Silence, okay. I know this meeting's gone quite long, so maybe that's why. So, just to talk about the consolidation of Argo Events and Workflows: we know that people often use Argo Events and Argo Workflows in concert, but we also know that Argo Events is an additional software component you deploy, whose configuration is non-trivial. In fact, Derek Wang is currently working on making that easier to use. There isn't really a goal to merge the two; in the same way, we don't have…
A
The goal of the cron workflow was never to bring in that kind of calendar functionality from Argo Events. This is intended to solve very, very simple use cases, so it was not expected to support or understand anything more sophisticated than very basic webhooks.
A
Yep, cool, okay. So we've not had a lot of questions about this, so we will perhaps come back to it later on. If you want to provide feedback, that would be fantastic; you can just come onto GitHub, and there's a ticket for that already.
A
Okay, I wanted to talk a little bit about a discussion we've had recently about producing a template catalog. The Argo template feature has come into full force in versions 2.8 and 2.9, and it allows you to basically create templates, but we actually know that people have templates and want to share them and use them. The plan is to see if there's strong interest in this, and to find out what kind of templates people would like.
A
Okay, I'm going to take that as a Terraform template. Christine, can you expand on what you'd like from Terraform?
F
Well, of course. Right now we use Concourse pipelines to run our Terraform, and we've been looking to move them into Argo Workflows. At the moment we utilize workspaces, so, in short, if you have multiple workspaces and you want to apply this Terraform across those workspaces concurrently, for example, that would be really nice.
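A template along these lines might fan Terraform applies out across workspaces; everything here, the image tag, workspace names, and flags, is illustrative:

```yaml
templates:
  - name: apply-all-workspaces
    steps:
      - - name: apply
          template: terraform-apply
          arguments:
            parameters:
              - name: workspace
                value: "{{item}}"
          withItems: [staging, production]   # runs concurrently, one per workspace
  - name: terraform-apply
    inputs:
      parameters:
        - name: workspace
    container:
      image: hashicorp/terraform:0.12.29
      command: [sh, -c]
      args:
        - |
          terraform init -input=false &&
          terraform workspace select {{inputs.parameters.workspace}} &&
          terraform apply -input=false -auto-approve
```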
A
So I think we're just looking for the kinds of things you're interested in and, more importantly, whether people are willing to submit and contribute their own templates to the catalog as well, because that'll probably be the one thing that makes it a success: a healthy ecosystem of contributed templates. It's probably good to compare it to GitHub Actions.
A
So if you go into GitHub Actions, of course they have a really large catalog of different actions you can use, and the Tekton CD catalog also has a really good selection of Tekton tasks you can import into your system, so it would be really good to have something similar to those.
A
Okay, I'm going to stop sharing my screen now. So thank you all for coming along today; I hope you've enjoyed all the different demos and discussions that we've had, and we do this on every third Wednesday of the month. If you want to come and ask any more questions of people, please drop into our Slack. We've got a couple of people asking about the serverless workflows group Slack channel, so I've added that to the document.
A
If you want to find out where that is. And of course, I hope you all stay safe and have a great week.