Description
00:00:00 Introduction
00:03:30 Argo and KNative - David Breitgand (IBM) - showing 5G infra automation use case
00:31:30 How New Relic Uses Argo Workflows - Fischer Jemison, Jared Welch (New Relic)
00:59:00 Editing Workflows in VSCode - Paul Brabban
01:08:00 Argo Workflows v3.0 - Alex Collins (Intuit)
http://bit.ly/argo-wf-cmty-mtng
A
Okay, so thank you for coming today. I'm aware that everybody's quite busy this week, with KubeCon North America on, and I'm sure people are keen to see some of those presentations. Just to give you a little bit of an overview of what we'll be doing today: we've got a couple of different demos around infrastructure automation, from David as well as from New Relic, who are quite competent Argo users.
A
Okay, now, if you haven't already added yourself to the document here, please add yourself under the attendees, so we know who comes. We're particularly interested in what you use Argo Workflows and Argo Events for, and we'd like to learn a lot about that. This is important to us, so we can start to tailor where we invest our time and energy, to make sure we support your different use cases.
A
So it is really useful; in fact, the more detail you can give, the better. For those of you who have not joined before or don't know so much about Argo: Argo CD, Argo Rollouts, Argo Workflows and Argo Events are all Argo projects that help with things such as running workflows, scheduling actions on your cluster, performing GitOps continuous delivery, and rolling out new versions of applications. We have two community meetings a month. One focuses on Argo CD and Argo Rollouts, which is really more about application delivery, and we have an Argo Workflows and Argo Events community meeting on the alternate Wednesday, every third Wednesday of the month, where we talk about things like ML and batch and so forth. For this one we've got a couple of Argo engineers here on the call: myself from the core team, and I believe we've also got Bala, and probably Simon as well, so they can help answer any of your questions.
A
If you do want to ask a question, the best way to do that is just to ask when it seems appropriate; if you think it needs a longer answer, maybe you can hold off to the end. You can also ask in the chat room, and we'll read out your questions to whoever's presenting at that point.
A
We are recording this, and we'll put the video up on YouTube later on for people to watch. Okay, any questions? I think we're good, aren't we? Okay. So, David, are you ready?
C
Yes. I have a slightly unstable connection, but I hope it will be okay. I actually got disconnected and reconnected just now, so I hope it will be fine throughout the presentation.
C
Can you see my screen, something on my screen? Yeah? Okay, thank you. Right. So I'm going to talk about serverless Argo Events, and how we used Argo Events and Argo Workflows in a project called 5G-MEDIA. It was a European project; now it's completed. I will give you the motivating use case, but I would like to start with a small feature that I want to present.
C
So basically, I would like to start by presenting myself. I'm a researcher at the IBM Haifa Research Lab. I was giving a presentation at KubeCon, and Alex was in the audience, and I'm very grateful that he actually invited me to give this talk, so I really appreciate it. I have a web page; you can check it for the publications and patents and so on. I'm a Senior Technical Staff Member, which means I'm mostly on the architecture side of things: technically leading people, algorithm research, and so on.
C
Currently I focus on serverless technology. In the past I contributed to OpenWhisk, and I have applied OpenWhisk and Knative in various contexts, ranging from big data analytics to serverless at the edge, serverless with GPUs, media-intensive applications, and so on.
C
Recently, my research agenda revolves around 5G networks orchestration, and promoting Kubernetes-native, infrastructure-style orchestration in 5G; specifically, we look at 5G MEC issues, network slices, and stuff like that. Most important, I'm extremely fortunate to work with very, very smart and talented people, Padilla and Avi, my colleagues who participated in and contributed to this work; they're on the call. So I'm almost done with my shameless plug of our lab. That's what our lab looks like; really nice.
C
As for the talk at KubeCon Europe, "Using Argo and Knative to Orchestrate Media-Intensive Services at the 5G Edge": I'm not going to repeat that talk on this call today, but I will mention the highlights of it.
C
So we have this project that I mentioned. It's called 5G-MEDIA: a programmable edge-to-cloud virtualization fabric for the 5G media industry.
C
Basically, because I can't see you, I cannot actually ask you to raise your hand if you have experience with telcos and MANO, and Open Source MANO in particular. Probably a few people, or maybe more than a few people, here know what I'm talking about.
C
So, the telco management and orchestration solutions are really not flexible enough, and really not cloud-native enough, to allow complex expression for the cutting-edge applications that you want to put at the edge of the 5G network.
C
And initially we were kind of stuck without an orchestration tool. Yeah, now Zoom tells me that my connection is unstable; do you still hear me well, or is there a problem? (You sound absolutely fine.) Oh, okay, good. Argo really was a real lifesaver for us in the project, and I'll explain why. We so fell in love with Argo Workflows and Argo Events that we started using them in our other projects.
C
All of these projects are research projects, but, as you know, IBM uses Argo in production as well, and we work with people in production. So there is a more and more positive attitude towards Argo, and I'm very happy about this; it's absolutely a great tool. Maybe, if there are other opportunities to discuss it, I will also talk about how we use Argo in other projects, but not today.
C
So, that was the introduction. Let me now talk quickly about a small feature that we thought about, which we thought might be useful and interesting to the Argo community. And if not, just throw rotten tomatoes and say "you just don't understand what you're talking about, it's not important," and so on; we are really not dogmatic about it, we just thought it might be nice.
C
It's easy to do, and we did a prototype of it, and we have a GitHub repo for it. The only thing I have to say, as a disclaimer here: the implementation uses an old version of Argo, but we can rebase if the community finds it interesting, and we can submit it through the regular process and so on. So what is this?
C
There are pros and cons, of course. The pro: you may really have a proliferation of these gateways and sensors; somehow we managed to build an architecture in 5G-MEDIA where we had very many of them.
C
You would like to conserve resources in a resource-constrained environment, and if there are many such pods, maybe they do not need to run all the time. So you can get some sort of statistical multiplexing gain, and you will be able to do more work on the same capacity. Basically, the reason they proliferated with us was that, once we understood the traditional MANO solution would not be extremely good for us,
C
we used Kubernetes itself as the orchestrator, with Argo as the workflow and event manager, and we onboarded network services directly to Kubernetes. The entry point into the orchestration of these services was a pair of gateway and sensor, at the onboarding stage.
C
For whoever knows what onboarding means in telco: it means that nothing should run at this point; you just register your service with the system, and then you instantiate it for the time needed. And that means that when we did this onboarding, it created gateways in large amounts, and we didn't want them running all the time, but rather to be there, known to Kubernetes, and only be used when we actually need them.
C
Of course, if you make it a Knative service you have cold starts, so it's probably not good for real time; it's a trade-off, basically. So I can show you a short demo. I'm short on time, but I think I'm pretty much okay; I'll try to speed up a little bit. So let me show you a really, really short demo. Just tell me if you can see the fonts all right. So basically, first, let's take a look at what's running here: nothing is running except the controllers, the gateway, and the sensor.
C
Let's take a look at this YAML. I don't know if you can see here, but in the specification of the gateway, for example, it says a Knative HTTP gateway type; that's kind of a new type that we added. What the implementation inside is, is really not very important right now. In the sensor, you also see that this is a new type of sensor, called a Knative sensor.
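As a rough sketch only (the prototype's actual field names aren't shown in the talk, so the `type` values and names below are assumptions based on the description, not the real code), the gateway and sensor specs being described might look something like:

```yaml
# Hypothetical sketch of the Knative-backed Argo Events resources described
# above; "knative-http" / "knative" are illustrative assumptions.
apiVersion: argoproj.io/v1alpha1
kind: Gateway
metadata:
  name: onboarding-gateway
spec:
  type: knative-http        # new gateway type: runs as a Knative Service,
                            # so it scales to zero when no events arrive
  eventSourceRef:
    name: onboarding-event-source
---
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: onboarding-sensor
spec:
  type: knative             # new sensor type: also scale-to-zero
  dependencies:
    - name: onboard-request
      gatewayName: onboarding-gateway
      eventName: onboard
  triggers:
    - template:
        name: onboard-workflow
        k8s:
          operation: create
          source:
            resource: {}    # the Argo Workflow to run on onboarding
```

The Gateway kind shown here matches the older Argo Events API the speaker's disclaimer mentions; later Argo Events versions replaced gateways with EventSources.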
C
They are created, and now we would like to see what happens to them. So they're created, they start running. I guess you have a pretty good idea of what's going to happen next: you see that one of them will start terminating now, right, because it's already past the grace period of Knative. So these Knative gateways and sensors start terminating, and they are there, but they are not consuming any resources now; they became serverless.
C
Argo, in the broader context, gives you some interesting stuff, and later, if you want to give some feedback on that or ask questions, we can come back to the demo and that feature. Okay, so back to the slides. We have this problem: we have media-intensive applications, they need to run at the 5G edge, and we need to orchestrate them somehow.
C
So, why Argo and Knative, and how did we use them? What's the problem, again? What we want to do in that project is to facilitate deployment of third-party software at the edge, and to do it cost-efficiently. So it should be instantaneously elastic, but easy to scale down, with fast time to market. That's very difficult to do with VMs, and I do not need to spend too much time in this audience trying to convince you.
C
It's really not doable with VMs. But what are we after here with these media-intensive applications?
C
So basically, we just need a mechanism to do this orchestration on demand, for this infrastructure to be propped up where needed, when needed, for the exact duration it's needed. And these were our use cases. We had a very interesting use case on tele-immersive gaming, working with specialists in the gaming domain. Basically, it's about democratizing tele-immersive gaming. Nowadays you only get this type of experience in amusement parks, but in the future, when 5G is around, games like this can be commonplace, mainstream. So what happens here?
C
We send one frame to each edge where a spectator is, and you have a portfolio of transcoders, so that each spectator gets the quality of service that is most suitable for her terminal. That was one use case, and, as you can imagine, people just come and want to start games, so everything is really serverless here. And what does that mean at the 5G edge? It means you need to start all kinds of container network functions on demand, connect them, and create a service out of them.
C
Each such orchestration is a flow triggered by some events, and you really cannot do this fast enough and flexibly enough through the MANO solutions that the telco world has today. But Argo helped us do that. Now, we had another very interesting use case, which subdivided into two use cases related to remote media production.
C
One very interesting use case is about mobile journalism. The journalist is involved in some event (we actually did it in the field, right), and now she wants to produce the content. Moreover, all these journalists compete over who will be first on the newscast, and they do not have enough bandwidth to send high-quality content to the headquarters. So they transmit to the nearest 5G edge, and there, on demand,
C
a cognitive infrastructure is set up to support production: for example, to create captions on demand, or face recognition that looks up a wiki to see who you are looking at, and so on. Then the produced content is transferred to the newscast, ready for the program, and the editor looks at it and puts it directly into the news. And so this way they get
C
more capabilities by using this infrastructure at the edge, rather than, you know, just sending unedited video to the headquarters, which is a much slower process; we sped it up quite considerably. And then the last use case is related to ultra-high-definition content distribution. We helped our partner there, who actually produces this.
C
They develop caches for the luxury sector, like yachts and hotels and conference centers, for ultra-high-definition content. You do not want to have caches running all the time in all the locations where somebody could possibly take a look at your content. So you recognize where the demand is, and then you set up:
C
You prop up these caches on demand, and these are again container network functions; you need to orchestrate this whole tree of caches and configure them, and so on and so forth. And in all three use cases, we used Argo universally to create all the virtual infrastructure for these applications. Moreover, at the edge you have limited resources; the applications compete for resources, so you need to do some arbitration. You need to go to the optimizer and the scheduler and ask: please, who can use the GPU here?
C
Can this application use it, or that application? And then gluing all of that together in a principled manner: we did it with Argo Workflows. I don't really have time to go into all the details here, and I'm almost at the top of the hour for my slot, so I just want to summarize; I will skip over this one.
C
Basically, the telco requirement was that their MANO tools would have visibility into what's going on in this Kubernetes domain, but all the real stuff is delegated to Argo Workflows and Argo Events. You see there are many of these Argo sensor and Argo gateway pairs; each pair is actually used per service descriptor.
C
Each is related to a service in use, and if you want to tell this service something, if you want something to happen inside this service, you talk to the gateway. It's like a personal orchestrator for this instance of the service; it talks to the sensor, then an Argo workflow is triggered, and then something happens, and it happens the way we want. We were really impressed by how well Argo worked.
C
So that's the conclusion, basically: Argo was very, very helpful, and we found it very easy to master. What we did in this project was not a very large-scale system, but it was a very complex system, a very experimental system, and we found Argo very helpful for doing this fast experimentation. Maybe the only thing I can complain about with Argo is that it sometimes was not fast enough, because every step you run basically has to pop up a container.
C
So we have some ideas on how maybe you can pre-warm them; maybe you could have a feature in Argo where, if you need some pre-warmed containers, you can trade that off against the cost you need to pay for those pre-warmed containers. But most of the time Argo fit the specifications we had to meet, providing rich features. And the feature that I presented to you, for example, making Argo Events serverless: maybe it's an attractive feature for the community, maybe not!
C
Now, our experience with Argo got us thinking: do we really need any orchestration engine besides Kubernetes? Maybe we should treat Kubernetes as what it is, a smart orchestrator, rather than as a usual virtualization manager, the way telcos treat it. And, of course, there is this question of workflows versus operators. A workflow has a beginning and an end; because we use Argo for orchestrating infrastructure, which is maybe a slightly non-conventional use of Argo, whereas in the cloud-native world you never deploy your application once and assume it always runs.
C
You always redo it, redo your operations. But we have some ideas on how maybe you can marry the advantages of the workflow with the long-running nature of the operator. We really haven't thought it through completely yet, but we have a feeling that maybe you do not need to select one or the other; maybe you can somehow benefit from both. So, thank you, thank you very much. I hope I stayed on budget more or less, and I'll be happy to take your questions, if you have one.
C
Oh, we're getting it already. Well, it's 2020! It's a pilot here, as you know. And the 5G we're getting now, it's not the 5G that I'm talking about here; the 5G that we're getting now is just slightly better than 4G in most cases. But it's a regular story, a very typical story in telco.
C
Oh, I can hear you. Can you hear me now? That's better, yeah? Okay, so I need to shout a little bit. To finish with some optimism: I want to say that, although some people believe 5G is actually the cause of corona, if we had this 5G infrastructure already in place now, with applications like those I've presented to you, for distance learning, teleconferencing, holographic communication and so on, I think our experience now would be much better.
A
Okay, great stuff. Well, thank you very much, David. If anybody's got any other questions they want to ask, or anything that jumps into your mind, just ask in the chat room and we can discuss those shortly. Next we've got Fischer Jemison and Jared Welch, I hope I pronounced those names correctly, from New Relic, and you're going to be talking a bit about how you guys use not just Argo Workflows but other Argo projects. Do you guys want to take over the screen share?
D
All right, are you ready to go? Good to go? All right, sweet. So my name is Fischer Jemison, I'm here with my colleague Jared Welch, and we're going to be talking about how we use Argo Workflows at New Relic.
D
We are currently hiring, if you really like this talk; we've got a link in the slides here. Jared and I specifically are on the Build and Deploy Tools team at New Relic. Most relevant to this talk: we own all of the CI/CD infrastructure at New Relic, and we own and operate basically every Argo project component that we use, in one way or another.
D
So, for our architecture overall: we're midway through an AWS migration right now, so our compute is split between cloud and private data centers. We shard our cloud infrastructure across a number of different Kubernetes clusters in AWS, where each one is kind of its own isolated cluster with associated AWS resources.
D
And different workloads have different numbers of clusters associated with them, maybe as few as two to four, or as many as 15 to 20 at our current scale, and these are really very big clusters, on the order of hundreds of nodes, so pretty substantial. At our current scale we have almost 3,000 applications in our production Argo CD instance, which we use for all of our users' Argo CD apps, and you can see that on the right here.
D
According to our internal metrics, we ran 10,000 deploys to Kubernetes clusters in the last month, I believe, using our continuous delivery platform, which we're going to talk about in a sec.
D
The most important thing projects do related to deployment is defining environments, which can sometimes be grouped into environment groups. On this slide we have an overview of what that data model looks like: one project has many environments, and an environment group may also have many environments. So, getting a little more specific about environments:
D
So if a project deploys into Kubernetes, you'll have one environment for every Kubernetes cluster you deploy into. We configure stuff like secrets and environment variables on a per-environment basis, although we have tools for inheriting those into individual environments if you define them at the top level. We also allow people to group environments into environment groups and then control how changes roll out within that group, which we'll talk about in a sec; we use Argo Workflows for that.
D
So, next, going over some of our Argo Workflows use cases. The three I'm going to talk about in this section start with triggering CD syncs: we use Argo Workflows to decouple Argo CD syncs from GitHub master merges.
D
With Kubernetes, we really like working with the kind of modern, declarative pipelines that are easy to define in code, or define in a program and then instantiate. Argo Workflows also integrates really easily with our existing tooling, which is all Go servers, and we have a really easy time working with the Argo API using the Argo client.
D
So, for our sync workflows: when we do a deploy to Kubernetes in Grand Central, that triggers one sync in the corresponding Argo CD application, where we have one Argo CD app per environment per application. We define a default sync-and-wait workflow, but we also let people define their own custom workflows.
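A default sync-and-wait workflow of the kind described might be sketched roughly like this, using the standard `argocd app sync` and `argocd app wait` CLI commands (the app name, image tag, and credential handling are illustrative assumptions, not New Relic's actual setup):

```yaml
# Illustrative sketch only: a Workflow that syncs one Argo CD app
# and then waits for it to become healthy.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: sync-and-wait-
spec:
  entrypoint: sync-and-wait
  arguments:
    parameters:
      - name: app
        value: my-service-prod-cluster-1   # hypothetical Argo CD app name
  templates:
    - name: sync-and-wait
      container:
        image: argoproj/argocd:v1.7.6      # any image with the argocd CLI
        command: [sh, -c]
        args:
          - |
            argocd app sync "{{workflow.parameters.app}}" &&
            argocd app wait "{{workflow.parameters.app}}" --health --timeout 600
```

Driving the sync from a workflow step like this is what allows the sync to be decoupled from the GitHub merge and composed into larger deploy workflows.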
D
So, when you have an environment group of sometimes 15 to 20 clusters, we wanted to let people do custom rollouts within that group, with a configuration that was really heavily inspired by Argo Rollouts, if you've looked at the canary rollouts there. So people can define, for example, like in this screenshot:
D
"I want to deploy one environment, wait 30 minutes, deploy 20% of my environments, wait another five minutes, deploy two more environments, and then deploy the rest of my environments." That's kind of a contrived example, but it shows all the options people have here. Then, when someone triggers a group deploy, we read in their Grand Central configuration for that environment and we dynamically generate a deploy workflow
D
in Grand Central, which we then send to Argo Workflows to create and run the group deploy. So here's a series of screenshots of what that looks like; we can't do a demo, unfortunately, but we do have these recorded. This is the Grand Central UI, where you select which release you want to do.
D
This is actually deploying to a group environment, so it'll tell you which environments you're deploying to, and then this is what the workflow looks like when it kicks off: it'll deploy to the one environment and then pause on this wait step, which is configured as five minutes; then, when that completes, we have this fan-out to deploy two more environments, then wait again, then deploy the remaining seven environments.
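The group-rollout configuration described above, modeled on Argo Rollouts' canary steps, might be sketched like this (the key names are illustrative assumptions about Grand Central's config, not its real schema):

```yaml
# Hypothetical group-deploy configuration in the spirit of Argo Rollouts'
# canary steps; key names are illustrative, not Grand Central's actual schema.
environmentGroup: my-service-production
steps:
  - deploy:
      count: 1          # deploy one environment first
  - wait: 30m
  - deploy:
      percent: 20       # then 20% of the environments in the group
  - wait: 5m
  - deploy:
      count: 2
  - deploy:
      rest: true        # finally, everything remaining
```

A config like this would be read at deploy time and expanded into the fan-out workflow shown in the screenshots, with each `wait` becoming a pause step.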
D
And finally, as a general catch-all for the rest of our use cases: Grand Central is kind of our entry point for teams using Argo Workflows. We let people define what we call generic workflows, where you're not deploying any Kubernetes resources; you're just triggering a workflow when you click deploy, and teams use that for all sorts of stuff.
E
Thank you, Fischer. So I'm going to be talking about what we've codenamed Argo Relay; go to the next slide. So Argo Relay is our code name for a project that is a set of Argo Workflows for automating our infrastructure build-outs. Each of those clusters we were talking about earlier, like we said, is pretty large; they contain components from lots of different teams, you know, as many as 10 to 15 teams, with AWS resources associated.
E
So there's a lot of work that goes into building those out, and it was previously, you know, mostly manual. Relay is our attempt to automate this in a way that makes it extremely simple to stamp out our horizontal units of scale, and it's, you know, a mix of Terraform, Kubernetes automation and, of course, Grand Central deploys. So all of the stuff we just talked about with those deploys, we run those as part of this workflow as well; go to the next slide.
E
So how does it work? It's not huge; I know some people run very huge ML workflows on here, so this isn't too huge compared to that. But it's broken into steps; we use the Argo Workflows DAG to express dependencies, and that's been extremely useful for making sure we have the ordering correct and that services are stood up in the order they need to be. There are, of course, lots of inter-service dependencies: things that depend on Kafka being up, things that depend on, you know, other consumers being up. The default behavior we started with, and this is kind of what inspired the Relay name, was sending a Slack message and having folks click a button saying, "Yes, my steps are done."
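The DAG-based ordering being described might look roughly like the following (the task names and the placeholder step template are made up for illustration, not New Relic's actual workflow):

```yaml
# Illustrative DAG sketch: infrastructure steps run serially, then service
# deployments fan out once their dependencies (e.g. Kafka) are up.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: relay-buildout-
spec:
  entrypoint: buildout
  templates:
    - name: buildout
      dag:
        tasks:
          - name: terraform-apply            # serial infrastructure leg
            template: run-step
          - name: kafka
            dependencies: [terraform-apply]
            template: run-step
          - name: consumers                  # fan out once Kafka is up
            dependencies: [kafka]
            template: run-step
          - name: service-deploys
            dependencies: [kafka]
            template: run-step
    - name: run-step
      container:
        image: alpine:3.12
        command: [echo, "step placeholder"]
```

The `dependencies` lists are what enforce the ordering the speaker mentions; tasks with the same satisfied dependencies run in parallel.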
E
Marking that a step is complete is all handled by Argo Events, which we didn't have time to cover, but there's some custom behavior in there using Argo Events, and so that kind of inspired the Relay name. It was basically a workflow to act as an orchestration layer, or a coordination layer, instead of everyone just working in a Google Sheets spreadsheet. Teams own their own steps in this top-level workflow, and we've increasingly been adding automation
E
Steps
to
this,
instead
of
like
the
manual
manual
click
a
button
when
we're
done
running
all
of
our
scripts
manually
and
most
of
those
are
grand
central
deploy.
So
we
have
we've
created
like
a
standardized
template
that
actually
triggers
the
aforementioned.
Like
argo,
cd,
sync
grand
central
deploy,
the
other
nice
thing
is
that
teams,
because
it's
an
arduino
workflow
and
it's
just
running
you
know
a
docker
container.
E
If you zoom in, there's a lot of stuff going on; some of those steps are actually fairly large. This is kind of what it looks like: you can see the first leg is the infrastructure build-out stuff that happens serially, you know, Terraform applies and things like that, and then once we get into the service deployments it starts fanning out as those different teams' steps take over. We're running this really frequently, you know, two-plus times a week.
E
The workflow usually runs for a couple of days, so up to 48 hours of actual execution. We've had some pain around retries and restarts, which I think is getting better; we're trying to upgrade to Argo 2.11 pretty soon here, which I know comes with some improvements around the retry and restart story.
E
And then we just want to cover some of the future stuff: how it's been going, and what we're looking forward to next. What's been great is that this has allowed us to develop new features really, really quickly; we're not reinventing the wheel constantly. Once we got the basics in place, you know, some of the base core workflows, we could reuse those all over the place, and that makes it easy to enforce,
E
you know, versions, enforce compliance, and also fix bugs, because we fix things in one place instead of a hundred places. It's been really great working with the open source community; we're getting questions answered really quickly, we've had some PRs approved really quickly, it's been a really nice experience, and the Slack is super active. Much better than Spinnaker, which was our previous continuous delivery system that just did not work out. If anyone else has used Spinnaker, well, if you're here, you're probably no longer using it, I hope.
E
It's been super great using Argo, because every time we're like, "oh man, we need to do this," there's usually a feature for it, or there's a feature on the roadmap for it, which is not always the case with open source projects, or any project really. There have been some rough edges; we've been dealing with kind of a Workflows learning curve, and, you know, New Relic is kind of still new to Kubernetes as a company.
E
Of course, I don't know if anyone else gets tripped up on this, but the whole inputs-versus-arguments thing still confuses me sometimes, aka "where the heck do I put this input field?" That definitely trips up some of our users as they're starting to write their own workflows. So we've appreciated the docs improvements, and we've of course been writing some of our own docs
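For readers who hit the same inputs-versus-arguments confusion: in Argo Workflows, a template declares `inputs.parameters` (what it accepts), while the caller supplies `arguments.parameters` (the values passed in). A minimal sketch:

```yaml
# Minimal sketch of inputs vs. arguments in Argo Workflows:
# the caller passes "arguments", the callee declares "inputs".
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: inputs-demo-
spec:
  entrypoint: caller
  templates:
    - name: caller
      steps:
        - - name: greet
            template: greeter
            arguments:              # caller side: supply values
              parameters:
                - name: who
                  value: world
    - name: greeter
      inputs:                       # callee side: declare what it accepts
        parameters:
          - name: who
      container:
        image: alpine:3.12
        command: [echo, "hello {{inputs.parameters.who}}"]
```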
E
that wrap up some of our learning and understanding and, you know, put some New Relic spin on it. And then, of course, the YAML: YAML is an okay format, but it's not the best format for everything, and sometimes we feel like we're the YAML engineering team, which I'm sure other people feel as well. Next slide, please. Some of the stuff we've been dealing with: stability, scaling, and monitoring. Monitoring all these workflows running and deploying has been a little challenging.
E
Of course, Argo has standardized on Prometheus metrics, and we're New Relic, so we, you know, have somewhat competing solutions there. We do have a product that scrapes Prometheus metrics into New Relic, but occasionally those are difficult to work with, and basically we've been having trouble getting visibility into some of this stuff. So we're hoping to look at ways we can get metrics out of workflow steps, or emit events from workflow steps, or something like that. And then, kind of unrelated to Workflows:
E
Argo CD scaling has been a huge problem for us. We didn't cover it previously in the talk, but we actually have another instance of Argo CD that does some background syncs for some of our core Kubernetes infrastructure, so there are like another 3,000 apps behind the scenes that most users don't know about. We're hoping the Argo CD 1.8 release will solve
E
basically all of our problems there, fingers crossed. The other thing we've been looking at is making sure these big relay workflows and sync workflows are idempotent and kind of self-healing, so that there isn't so much manual work involved in restarting, retrying, or recovering from errors. We're hoping to get to a point where we could run it continuously if we needed to. Next slide. As for what the future holds, we're hoping to just automate all of the things.
E
Basically, workflows are kind of our unit of automation. We recently got a New Relic metric provider merged into Argo Rollouts, so we're hoping to use that in the future to do all sorts of analysis at deploy time, both for a specific environment and as part of group deploys: kind of mimicking the Argo Rollouts analysis steps, but doing it on a broader scale, where each step is, you know, a deploy to a whole environment. Also replacing a legacy deploy scheduler
E
that currently does our auto-promotions; we're hoping to replace that with Workflows. We don't think it'll be difficult, really. And then, finally, getting to a point where, when people come to us asking "how do I deploy X?", the answer is: go write a workflow. Nobody's building any custom tooling, nobody's standing up new services or anything like that — it's just "build a workflow and away you go."
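The Rollouts analysis steps the speaker wants to mimic are driven by an AnalysisTemplate; with the New Relic provider they contributed, one might look roughly like this. The NRQL query, threshold, and names are illustrative, not New Relic's actual configuration:

```yaml
# Hypothetical AnalysisTemplate using Argo Rollouts' New Relic metric
# provider: fail the analysis if the app's error rate exceeds 1%.
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: error-rate-check
spec:
  args:
    - name: app-name
  metrics:
    - name: error-rate
      successCondition: result.errorRate < 1
      provider:
        newRelic:
          # Illustrative NRQL; the provider evaluates the returned fields
          # against successCondition.
          query: >
            FROM Transaction SELECT percentage(count(*), WHERE error IS true)
            AS errorRate WHERE appName = '{{args.app-name}}'
            SINCE 10 minutes ago
```

Running an analogous check as a workflow step per environment is what "doing it at a broader scale" would amount to.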
E
That's all we've got. Jared is jwelsh92 on GitHub — there's not a ton in there, but there are a couple of cool things going on — and Fischer is, of course, Jemison on GitHub.
E
Yeah, we use Argo Events a little bit so far. I'm actually looking at using it more — well, this week I'm working on using it to send notifications from that big relay workflow when a pod fails or something like that, so that people aren't sitting around waiting to see what happens.
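A simpler alternative to the Argo Events route described here is a workflow-level exit handler that fires a notification whenever the workflow finishes. A sketch — the Slack webhook secret and message format are hypothetical:

```yaml
# Illustrative exit handler: "notify" always runs when the workflow ends,
# and {{workflow.status}} resolves to Succeeded/Failed/Error at runtime.
spec:
  entrypoint: relay
  onExit: notify
  templates:
    - name: notify
      container:
        image: curlimages/curl:7.72.0
        command: [sh, -c]
        args:
          - >
            curl -s -X POST -H 'Content-Type: application/json'
            -d '{"text": "relay workflow {{workflow.name}}: {{workflow.status}}"}'
            "$SLACK_WEBHOOK_URL"
        env:
          - name: SLACK_WEBHOOK_URL
            valueFrom:
              secretKeyRef:
                name: slack-webhook   # hypothetical secret
                key: url
```

Argo Events would allow richer routing (per-pod failures, multiple consumers), which is presumably why the team is reaching for it instead.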
E
So we're actually looking at doing that. And then we use Argo Rollouts — we have it installed on all of our clusters, and I think three quarters or more of those 3,000-some applications we're running are using Argo Rollouts, some of them with lots of instances. So it's been interesting.
E
Right now I think we're just using S3 as artifact storage. I think it's basically just storing the logs; we have a handful of workflows that pass data between steps, but not too many. A lot of the steps are fairly isolated, so it hasn't been too much of a bottleneck for us yet — we'll see how that goes. We haven't had to deal with volumes, or shared volumes, or anything like that so far.
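Passing data between steps through the configured S3 artifact repository looks roughly like this — the file names, images, and step names are illustrative:

```yaml
# Illustrative two-step workflow: "generate" uploads an output artifact to
# the artifact repository (S3 here), and "consume" downloads it as an input.
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: generate
            template: generate
        - - name: consume
            template: consume
            arguments:
              artifacts:
                - name: message
                  from: "{{steps.generate.outputs.artifacts.message}}"
    - name: generate
      container:
        image: alpine:3.12
        command: [sh, -c, "echo hello > /tmp/message.txt"]
      outputs:
        artifacts:
          - name: message
            path: /tmp/message.txt   # uploaded to S3 when the step finishes
    - name: consume
      inputs:
        artifacts:
          - name: message
            path: /tmp/message.txt   # downloaded before the container starts
      container:
        image: alpine:3.12
        command: [sh, -c, "cat /tmp/message.txt"]
```

Because each artifact round-trips through the repository, mostly-isolated steps keep this from becoming a bottleneck.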
E
Like Jenkins? Oh yeah, the CI system is Jenkins, which Grand Central orchestrates. Users don't have to create jobs manually in Jenkins — for all the Grand Central-managed CI, Grand Central will, you know, template out jobs for them — and then there are a lot of other Jenkins instances that teams run for various
E
You
know,
automation
or
you
know,
builds
that
are
kind
of
non-standard
and
we're
seeing
a
lot
of
folks
kind
of
reach
for
workflows
first,
instead
of
jenkins
for
for
things
that
they
need
to
automate
or
run
on
a
schedule.
Especially
that's
been
one
of
the
cool
things
about
moving
to
kubernetes.
E
Is
we
used
to
run
a
marathon
and
there's
like
no
there's
no
way
to
just
run
a
job
on
a
cron,
so
people
stand
up
hold
jenkins
to
run
a
script
every
month
every
week
or
something
like
that,
whereas
in
kubernetes
you
can
use
a
batch
job
or
a
cron
workflow.
If
you
want
more
flexibility
and
just
like
that,
you
can
run
something
on
the
schedule
and
it's
it's
easy
and
you
can
deploy
it
through
grand
central
with
argo
cd.
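The "whole Jenkins for one scheduled script" case collapses to a single manifest with a CronWorkflow — the schedule and job below are illustrative:

```yaml
# Illustrative CronWorkflow: run a small job at 06:00 every Monday.
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: weekly-report
spec:
  schedule: "0 6 * * 1"
  concurrencyPolicy: Replace   # a new run supersedes a still-running one
  workflowSpec:
    entrypoint: report
    templates:
      - name: report
        container:
          image: alpine:3.12
          command: [sh, -c, "echo running the weekly job"]
```

Being plain YAML, it can be committed to a repo and deployed through Argo CD like any other resource.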
A
And do you — I'm asking all the questions, so if anybody else has questions, please jump in — do you GitOps your workflows?
E
Yes and no. As Fischer was mentioning, we have those group deploy workflows, and those are actually generated dynamically at deploy time using the Argo Workflows Go types. But the vast majority of the other workflows are GitOps-installed with Argo CD. We use templates very, very heavily — I think for every one workflow file we probably have something like ten workflow template files sitting around — so we use those primarily, actually.
F
E
We have a mix. A good chunk of our templates are kind of collections of standardized steps — some of them are, you know, a whole bunch of Argo CD steps of various shapes and sizes.
E
They're all hung off the same workflow template file, but they're consumed at different layers depending on what else is calling them. That big Argo relay workflow we were talking about — we're looking at potentially splitting it up into smaller pieces. Right now it's one workflow file, so we'd make it into workflow templates, which would be sub-DAGs, all attached to one top-level DAG that we'd execute. So it's a mix.
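The template-library pattern described here — a GitOps-installed WorkflowTemplate consumed from other workflows via `templateRef` — can be sketched like this. Names, images, and the sync command are illustrative:

```yaml
# Illustrative shared template library, installed once via Argo CD.
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: argocd-steps
spec:
  templates:
    - name: sync-app
      inputs:
        parameters:
          - name: app
      container:
        image: argoproj/argocd:v1.7.8   # hypothetical image/version
        command: [argocd, app, sync, "{{inputs.parameters.app}}"]
---
# A workflow that reuses the shared step through a templateRef.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: deploy-
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:
          - name: sync
            templateRef:
              name: argocd-steps
              template: sync-app
            arguments:
              parameters:
                - name: app
                  value: my-app
```

Splitting a monolithic workflow file into templates like this is what turns the relay workflow into reusable sub-DAGs.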
F
Quick other question while I've got you: in those templates, do you tend to use any kind of templates that run in parallel and have to be aggregated? Because I know there's been some stuff around aggregating artifacts versus parameters and things like that. Is that something you've run into, or is it just not a use case that you have?
E
Yeah — it's more that we use the DAG for the ordering; I don't think there are actually any outputs other than whether the steps were successful, right? So we kind of use the DAG as a way of checkpointing — making sure that, you know, these five steps are complete before we go on to the next thing that depends on them.
F
Yeah, that makes sense — cheers, thanks.
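Using a DAG purely for ordering and checkpointing — no outputs flowing between tasks, just success gating — looks roughly like this (task names are illustrative):

```yaml
# Illustrative DAG: "gate" only runs once both upstream tasks have
# completed successfully, which is the checkpointing behavior described.
spec:
  entrypoint: relay
  templates:
    - name: relay
      dag:
        tasks:
          - name: step-a
            template: work
          - name: step-b
            template: work
          - name: gate
            dependencies: [step-a, step-b]
            template: work
    - name: work
      container:
        image: alpine:3.12
        command: [sh, -c, "echo done"]
```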
G
I've also got a question — this is really impressive. You mentioned that this is a lot of YAML engineering, and that you sometimes have trouble knowing what an argument is, where the input has to go, et cetera. So I was wondering: since you're telling other teams to write workflows too, how are you actually testing the workflows, and developing them — basically making sure you don't break your infrastructure?
E
Yeah, we have a mix of things. We have a staging environment that's locked down to basically just our team, where we can run stuff. We also provide a repository with a kind cluster — a bunch of YAML and a script — for basically standing up our entire Argo infrastructure in kind.
E
So you just run a local script and it will pull all the secrets from Vault, stand up the kind cluster, stand up Argo CD, and then use the app-of-apps pattern and self-managed Argo CD to install the rest of the components, including all of our standard workflow templates. We've seen a lot of people use that, and then there's a handful of teams that install their own.
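The app-of-apps bootstrap described here centers on one parent Argo CD Application that points at a directory of child Application manifests, which then install everything else. A hedged sketch — the repo URL and paths are hypothetical:

```yaml
# Illustrative parent "app of apps": Argo CD syncs this one Application,
# which in turn creates the child Applications found under apps/.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argo-infrastructure
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/argo-bootstrap.git   # hypothetical
    path: apps                 # directory of child Application manifests
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Applying just this manifest to a fresh kind cluster (after installing Argo CD) is enough for the rest of the stack to converge on its own.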
B
I like what you said — automate all the things with Argo products. This is something I'm trying to sell to our company.
B
It's just that I feel there's some hesitance from developers, or from the people who should be using the workflows — there's a learning curve, it's something new, people are afraid. How does it feel in your company when you ask developers or your engineering teams to write workflows?
D
We provide a lot of support for teams as they're moving into — excuse me — as they're moving into Kubernetes on our platform, so we're really readily available to answer questions as they come up. I don't know if you have anything to add to that?
E
No — yeah, it's a lot of support and consulting, and providing a lot of workflow templates. A lot of the time, when folks are writing their own workflows, at least some of the steps in there are templates that we've provided, because we've written a fair number of generic steps that work for a lot of things. Yeah, it hasn't always been easy — some folks struggle with learning Workflows — so we've tried to write docs, make sure we have docs available, and provide support.
A
Great stuff, guys — and will you share your slides with us?
A
Cool — a Google Slide is brilliant, brilliant. And I'll probably come back and ask you guys about doing a guest blog post at some point in the future as well. A couple of other people have promised to do that for me, so that'll be coming up over the next couple of months, where we talk a little bit in our blog about the use cases and maybe do a bit of a, you know, different kind of dive into what people are doing.
A
Okay, fantastic. Thank you very much, guys. So next we've got a short how-to from a guy called Paul Brabban. I don't know who Paul works for — maybe you could tell us a little bit about your organization. We know editing YAML can be a bit of a pain for people, and so we want to provide better IDE support. Paul's going to talk a little bit about editing workflows in VS Code. Paul, are you ready?
H
Straight away — that's a good start. Hi, yeah — I'm Paul, I'm an independent consultant, and I'm currently working with a UK retailer who needs some data science work. Rather than go for Airflow like everybody else, I thought we'd try something different — a little more lightweight and Kubernetes-native. But yeah —
H
One of the challenges I've certainly got is that there's a lot of YAML, and folks don't like editing it very much. So I put a bit of time in to try to get the IDE support that we have in IntelliJ into VS Code as well. I'm just going to quickly share into the chat the link to a JSON schema that Alex has put together, which we'll be using in the demo.
H
Yeah — great, fantastic. Okay, so what I've done is uninstalled everything, and we'll start from scratch. The first thing we're going to do is get the VS Code YAML plugin — the Red Hat YAML plugin. Now, if you've got the Kubernetes community plugin installed then you already have it, but if not, it's available in its own right, so just install that quickly. See, a proper demo — things could go wrong.
H
Okay, so that's installed now — nice and simple, and it's from a reasonably reputable publisher, so it should be okay for you. The project I have open is the Argo project itself, with all the examples, so I'm just going to use the hello-world example from it to show you the kind of support you get. For example, if I want to add a new element here to this workflow step, I hit Ctrl+Space to see what my auto-complete options are.
H
Hopefully that will change when we get the schema set up, so we should configure the YAML plugin with the schema. Back to the extensions window here — extension settings, just to speed up getting there — and we've got "YAML: Schemas" here in the settings as well, so we'll click that. It's a JSON document, and we're going to add the schema now. Oops.
H
What
did
I
just
do
there
we
go
so
the
first
thing
we've
got
to
do
is
get
that
the
url
that
I
shared
in
the
chat.
Let
me
just
grab
that
I
think
I
have
that
now
there
we
go
so
I
I
presume
this
is
probably
going
to
have
a
location,
looks
more
stable
and
it
will
be
updated.
Updates
will
appear
automatically,
but
that's
this
is
kind
of
where
we
are
at
the
so.
Basically
the
structure
of
this
dot,
this
this
elements
you've
got
yeah
schemas
the
first.
H
The
key
is
the
the
link
to
the
schema
itself
and
the
second
part
is
just
a
file
glove
to
for
the
files
that
we're
going
to
apply
the
schema
to
so
this.
The
glob
I've
got
here
is
which
can
apply
to
everything
under
examples.
H
Examples,
slash
any
subfolder
and
then
anything.yaml.
I
have
some
problems
with
well.
I
had
kubernetes
the
community
schemas
in
here
as
well,
which
overlapped
with
the.
H
So
I
just
got
my
cat
scratching
at
the
door
which
which
overlapped
if
they
were
both
trying
to
do
yammer
documents,
and
I
was
getting
strange
errors.
So
one
thing
to
be
aware
of
to
try
this
if
you
do
have
overlaps
on
the
gloves
and
you
might
see
funny
errors-
and
I
ended
up
naming
my
workflows.website
to
get
around
that
in
my
real
projects.
Okay,
so
that's
set
up.
I
just
saved
that.
Hopefully
we
go
back
to
hello
world
yaml
and
there
are
no
errors
in
here.
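The resulting settings.json entry looks like the snippet below. The schema URL is the one shared in the chat — shown here at the path where the schema lives in the argoproj repository today, so check the current location — and the glob matches the examples directory as in the demo:

```json
{
  "yaml.schemas": {
    "https://raw.githubusercontent.com/argoproj/argo-workflows/master/api/jsonschema/schema.json": "examples/**/*.yaml"
  }
}
```

As noted in the talk, keep this glob from overlapping with any other schema association (such as the Kubernetes extension's), or the extension can report confusing errors.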
H
But, for example, if I try to create a parameter without a name, I've now got an error. The error is that I've got a missing property, "name", in this element here, and the description just gives you text from the schema up at the top there as well. So this is the kind of support that you get. I don't know what the name was, but there we go — it's in the message. You also get some type support: if I try to give it a name of 1, I get an error, because I'm not allowed to use a number where a string is expected. So it's really basic support at the moment, but it is helpful. Back to here: what can I actually apply at this point in the template? That makes more sense — I can apply continueOn, onExit, template, et cetera.
H
These are actually all valid at this point in the schema, and again it will give me a little bit of help to figure out what kinds of types I can actually use at each point. So it's fairly basic support, but I presume we can start raising PRs and issues and extending what's covered by the schema, if we want to do that. And that's essentially it — all the workflow types should be supported. So there's a cron workflow down here somewhere... there we go, cron workflow. For example — when I say the schema is fairly weak — it will allow you to remove the schedule element from a cron workflow, which is obviously
H
not legal — I think that's not legal, or shouldn't be. So there are some things that are allowed that shouldn't be, but certainly the support you do get is very helpful, and again you can see you've got options here that are valid; a lot of the options there aren't valid at this point in the spec. And that's basically all I had to talk about. Any questions, or any thoughts?
A
So Jared says that he's found the IntelliJ plugin helpful — I'm guessing he's talking about the Kubernetes one. But I think when I spoke to you earlier this week, you mentioned this is possible to do in the free version of IntelliJ as well.
H
That's right, yeah. So, on the IntelliJ side, the Kubernetes plugin doesn't work on IntelliJ IDEA Community Edition — it has to be the Ultimate Edition — which is why I started looking at this, because otherwise I would just have used IntelliJ. The trick is that the schema documents used by that plugin are kind of Kubernetes OpenAPI documents, I think, whereas this is just a plain JSON schema, which is, you know, a standard way of validating YAML documents — which are really just JSON. So you can certainly use the native IDEA functionality to validate YAML documents: in your settings you can see JSON schema settings — same thing, you define where the schema is located and you provide a way of linking it to documents in your project. And there's also a plugin for —
H
what's it called — Sublime Text. I suspect most reasonably comprehensive editors will have some support for validating a JSON or YAML document with a JSON schema, so hopefully this really levels the playing field in terms of support across editors.
A
I can tell you for sure that the JSON schema is actually more accurate than the CRD validation, because the Kubernetes OpenAPI specification is actually a subset of JSON schema, and it doesn't allow cross-referencing to other types. So it's actually not possible to get it to work for workflow steps — it's a deficiency of the Kubernetes OpenAPI specification. It means you can't fully describe workflows, because our schema is relatively complicated for a custom resource.
H
I was also thinking about the linter as well — I don't know whether we're trying to unify the things that check quality behind this one schema, so that we can then contribute to it and cover all the funny cases that we find. Yeah.
H
So yeah — so here, for example, I can supply both a template and a templateRef, which I think is —
A
Cool — thank you very much, Paul. So we're a bit over time, but everybody seems to be hanging on in there, so I'll try to go through this last bit relatively quickly — maybe spend about ten minutes on it — and I'll explain as we go why I don't want to spend too much time on it. There is actually a rationale for that; it's not just standard work avoidance.
A
So: we're going to be producing a new version of Argo Workflows, and it'll be called v3. Now, normally a v3 is a breaking change for most applications — you don't really expect an upgrade to go particularly well between a v2 and a v3 — and that's actually not our intention at all; I want to make that very clear up front. The reason we're moving to v3 is because of how Go modules work.
A
Now, when we do that, we're planning to build out some new user interface features, and we're going to separate the user interface — which currently lives in the same code base, with the same manifests — into a new repository, allowing you to deploy the user interface separately, or to choose not to deploy it at all. There are a number of reasons behind that, largely around improving our own developer velocity — letting us iterate on the user interface much more quickly. Okay, so what else do we want to get out of that user interface?
A
Well, we want to provide some support for Argo Events in it, and I've got two links for people here in this document. I'm just going to drop them into the chat for you — actually, let me just drop the URL for this document, which should be public; if it's not, let me know.
A
So, there's that — no, that just didn't work; let's try again. There we go — that's the document there, and there are two links in it; I've highlighted them in yellow. One is to an online application that you can log into and test out — I'll just show you that very quickly — and the other is a link to this UI working doc, which is this next tab. This UI doc is just intended to be a list that people can add to and provide feedback on what they think.
A
So if you want to access this interface, go to workflows.apps.argoproj.io and log in using your GitHub login; that will give you read-only access to this user interface, and you can have a go at it yourself. You'll notice there are a couple of new icons in the left-hand tab: this one is an events icon; this is the existing workflow icon — we're using slightly different iconography; then there's a workflow event binding icon; and finally the workflow templates one is of interest.
A
So
if
I
need
to
read
out
the
page,
I
think
I
need
to
oh
what's
going
on
here.
This
will
be
very,
very
short
demo
if
it
doesn't
load
there.
We
go.
A
A
A
A
So these slides — I'll just talk through a couple of the areas I want to cover quite quickly. One thing is that we have a new view — it doesn't quite look like this anymore; it's slightly different — that shows you the relationships between Argo Events entities. You can open it and see what sensors and what triggers you have, and it will also show animations when events go from one item to another.
A
These
are
some
of
the
examples
of
the
icons
you'll
see
in
the
user
interface
and
what
I
really
wanted
to
show
you
was
the
new
workflow
editor,
so
we're
going
to
have
a
workflow
that
allows
edits
that
allows
you
to
edit
your
workflow
workflow
templates
in
a
visual
format,
currently
is
only
kind
of
implemented
for
workflow
templates.
It
allows
you
to
do
things
like
add
dags
and
tasks
to
it.
A
Okay — as I mentioned, I don't want to go into a great deal of detail. We'll just see if it's back up... no, it's not back up; I'm not really sure what's happened. So if you're interested in getting involved in that — providing feedback or testing it out — have a look at this document. Hopefully I'll get things working again later on, and we'll have an opportunity for you to provide feedback and help guide the vision for it as well.
F
It's the ingress controller — the ingress controller is down. I'm trying to scale it up right now.
A
Okay, thank you — Jesse is going to try to sort it out, but it probably won't be available before the end of this meeting, so I'll let you know in the argo-workflows channel.
A
Okay, there was a question in the chat: any plan for when Argo Workflows 2.12 will be released? So, Argo Workflows 2.12 is currently at RC2 or RC3 — Simon's overseeing that release at the moment. We tend to gather a bit of feedback, which takes a couple of weeks, so I'd imagine maybe one to two weeks from now you'll see the final release of it available, if you're desperate to try it out.
A
Brilliant. Oh, okay — so, as I mentioned, you can have a look at the events; as I said, it's changed in terms of the coloring in the interface. This particular one will show you events — I think it fires every ten seconds; it's a proof of concept. There we go — you can see that's triggering a sensor, and there are new workflows in here. You can have a look at these workflows, and you can see the workflow there as well.
A
This has been set up with a workflow event binding that listens to messages from GitHub, so you'd be able to see workflows triggering from GitHub. And maybe we can have a look at the new workflow template editor very quickly. This is the new workflow template editor: you've got controls to do things like, you know, add a new container to it, and you can wire those containers up and play around with it. This is quite new, and I'm aware it needs a bit more work.
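A WorkflowEventBinding like the one driving this demo maps an incoming event to a template submission. A sketch — the selector expression, template name, and payload fields are illustrative:

```yaml
# Illustrative binding: when an event matching the selector arrives,
# submit the referenced WorkflowTemplate with a parameter taken from
# the event payload.
apiVersion: argoproj.io/v1alpha1
kind: WorkflowEventBinding
metadata:
  name: github-push
spec:
  event:
    selector: payload.repository.name == "my-repo"
  submit:
    workflowTemplateRef:
      name: ci-build
    arguments:
      parameters:
        - name: revision
          valueFrom:
            event: payload.after   # the pushed commit SHA, in this sketch
```

Events posted to the Argo Server's event endpoint (a GitHub webhook, here) then show up as new workflows in the UI, as in the demo.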
A
The very last thing I've added to the community meeting document is a few promotions for people presenting about Argo at KubeCon North America today, tomorrow, and on Friday. So if you've signed up for that and you have access to the virtual platform, you can check those out as well.