Description
6:00 SSO + RBAC Demo
22:00 Pod Names v2
23:00 Azure Orkestra
A
So good morning, everybody — I hope we're all good. It's not morning for some of our presenters today; I think it's evening, and maybe a different kind of morning, for some of the presenters. So it's awesome that people are so happy to come along and talk a bit about what they're doing today.
A
So this is the agenda for today. We're going to have a presentation from Scott on Azure Orkestra — he'll be talking a little bit about that and how they use Argo inside that system. Then we're going to have some short, like I say, lightning demos — typically three-to-five-minute-long demos of some new features. This section of the community meeting is new, and it's kind of an opportunity for anybody
A
who's working on new features for Argo Workflows or Argo Events to come and show off their work and what they're working on, and also maybe gather a little bit of feedback as well. So we're going to have, hopefully, three lightning demos. Basanth is going to demonstrate a new kind of extension to how we do RBAC that allows you to set up RBAC with SSO, but have the users configure that RBAC, rather than requiring a system administrator.
A
Thank you, Kane. And then JP is going to be talking a bit about pod names. Pod names v2 is kind of a beta feature in version 3.2, which basically means that pod names, rather than having auto-generated names, are now based on the template that's executed. That's to help users with debugging their pods when they're running — it makes them really easy to find, especially for simple workflows. And then, if there's time, I'll talk a little bit about the plug-in proof of concept and the work that I've been doing this week.
A
If you have a question, you can obviously ask it, or feel free — typically people wait until the end of a presentation or demo to ask questions, and that's a good time — and you can obviously follow up on Slack if you want to ask more questions.
A
If you want to watch this later on or share it with anybody: I will hopefully put the recording of this onto YouTube either today or tomorrow — it depends how long it takes to upload — and I like to add the chapter marks to it, so you can skip to the bit you're particularly interested in ("that starts at two minutes, or three minutes"); those deep-link into the video. I'll share the video on the Slack chat later on. Any questions before we get started?
C
Okay, today we're going to talk about Azure Orkestra. I'm going to make the assumption that most folks on the call have not heard about this, so we're going to go from the beginning and then go through where Argo plugs in.
D
Okay, hello guys, good morning, good evening. So yeah, just a quick intro about myself: I'm Basanth, and I've been working for Intuit for about four years now. I'm a senior software engineer, and currently I'm working on a product called QuickBooks, and within QuickBooks my team currently works on reporting. So just a bit of background around how we have been involved.
D
So basically, we have about 50 to 60 workflow templates that we have written using Argo Workflows, and we use them for things like starting AWS EMR or orchestration jobs, rotating AMIs, consuming messages from Kafka, doing DB schema upgrades, and a lot more. So yeah, as of now I would believe that my team is a very prominent user of Argo Workflows, and yeah, with that
D
we have been very happy, and we are trying to contribute some things back. Okay, so moving along. Today I'll be talking about what I would like to call SSO RBAC namespace delegation, and, as the name suggests, this feature revolves around the SSO RBAC capability that was added into Argo Workflows.
D
Okay, so this is the agenda that I've planned. I'll start with a quick recap of how SSO RBAC works as of today, then we'll go into the typical Kubernetes setup that we have at Intuit and the SSO use cases that we have, then we'll talk about the problems that arise with the current implementation, and then we'll talk about the new feature before moving on to the demo. Right, so just a quick recap.
D
So this is how users configure SSO RBAC today. Essentially, the person or the team that is installing Argo configures service accounts in their namespace — typically the argo namespace. So you would basically create, say, a service account for an admin user and provide it with an RBAC rule annotation and a precedence.
D
And what you are saying is: if the person has a group that's called admin, then you will associate the user with this particular service account, and after this association is done, you will use this particular service account and the associated role to do the operations in the workflow. So just to show, right — okay.
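As a rough sketch, an admin service account configured this way looks something like the following — the name and the group are illustrative, but the annotations are the ones the SSO RBAC feature matches against the user's OIDC token:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user            # illustrative name
  namespace: argo             # the installation namespace
  annotations:
    # Matched against the groups claim in the user's SSO token.
    workflows.argoproj.io/rbac-rule: "'admin' in groups"
    # If several rules match, the highest precedence wins.
    workflows.argoproj.io/rbac-rule-precedence: "1"
```

The service account is then bound (via a RoleBinding) to a Role that defines what the matched users can actually do.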
D
Yeah, so this is my local environment, and here I've configured a service account called argo-server with some RBAC rule, and when I try to log in — essentially with Dex, in my local environment — I will be able to just do that. Currently, this particular service account has permissions only for the argo namespace.
D
So that's why, when I try to log into any other namespace, it just gives me an error, and here in the user section you'll see that the service account used is argo-server. So that is just a quick recap on how RBAC works today. Now, moving along: this is the typical Kubernetes cluster setup that we have at Intuit.
D
So typically, a specific business unit is given a Kubernetes cluster, and it typically has, say, one namespace for the Argo installation, and then there are different services that are deployed in the Kubernetes namespaces. For example, here you can see that service one has a couple of namespaces — basically a logical grouping of namespaces — and for each service there are different Kubernetes namespaces associated. You can see basically two pre-prod namespaces and two prod namespaces, and this particular whole set of namespaces
D
is managed by, say, team one, and likewise for service two and team two. Now, the use cases that we have are that some members in team one have to be given, say, read/write permissions only in the pre-prod namespaces, and some have to be given read/write permissions for all the namespaces, including the prod namespaces, and so on and so forth. And this is pretty much easily configurable: you can just add a service account in the installation namespace and get your thing, right.
D
But you will essentially need to create a service account per user-and-namespace combination and then try to manage those resources and permissions, and this typically gets hard, because all of that is managed in one single namespace, for which your team does not have permission. So these are some of the challenges that we have been facing at Intuit with this particular Kubernetes setup and the way SSO RBAC works today, and this is what I have listed here.
D
These are the issues that I've listed. Basically, one is that the config is only at an installation level, so any changes that are required have to be done by the person or the team that installed Argo — it is basically Alex's team who manages that, and essentially it is burdensome for that team. And the fun fact is that we have actually mentioned it in our documentation as well: that many complex rules will be burdensome for that team.
D
So that's just a funny thing that came up. Apart from that, basically, managing the config is hard — hard in the sense that one user can only ever be mapped to one single service account, and only that service account can be used. Essentially, this makes it hard for us to do multi-tenancy — to have these configurations at a namespace level, where they can be controlled and managed better. And the thing that I talked about — granting different permissions for different users and for different namespaces — is cumbersome.
D
So all of these issues are something that we have been facing, and this is what led us to start the new feature of namespace delegation. Cool, yeah. So let me talk about how you can leverage this feature to better use SSO RBAC. Okay, yeah — you'll still be using service accounts, but essentially you'll have the ability to create service accounts in your own namespace. So there are two —
D
there are two things that you need to answer whenever you want to grant somebody some permissions. One is: can a user log in to this particular cluster? And two: can a user perform some operation in my namespace? For the login, essentially, you will still be using the service account in the default installation namespace.
D
So here you can see that my-namespace is essentially the namespace for which I have permissions. I have created my service account with this particular name, and I have attached the right permissions — read/write rules — using the role binding, and here I am saying: allow a person only if, say, my-team is in their groups. So this way we have separated two things.
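Put together, a namespace owner's configuration might look roughly like this — the names and the group are again illustrative, and the Role is trimmed down:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa                  # illustrative
  namespace: my-namespace      # a namespace the team owns
  annotations:
    workflows.argoproj.io/rbac-rule: "'my-team' in groups"
    workflows.argoproj.io/rbac-rule-precedence: "1"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-readwrite
  namespace: my-namespace
rules:
  - apiGroups: [argoproj.io]
    resources: [workflows, workflowtemplates]
    verbs: [get, list, watch, create, update, delete]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-sa-workflow-readwrite
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: workflow-readwrite
subjects:
  - kind: ServiceAccount
    name: my-sa
    namespace: my-namespace
```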
D
So one is the login, and then there's the operation part, and the owner of this particular namespace is able to configure the service account for which RBAC has to be used. And like this, it just reduces a lot of dependency on the installation team and basically makes the whole thing self-serve.
D
So this is how the feature works. Yeah — before I move on to the demo, any questions or comments? "So one question I have is: as we said, one service account can be associated with one user, right? So how are we associating it here? We are saying 'user in groups', right, so are we getting a different user?" Yeah — so, okay: if I'm a user and I'm making a request in, say, my namespace, then I'll be using this particular service account.
D
If I make a request in, say, a secondary namespace, then the service account of that namespace will be used. So that provides a way to essentially map one user to multiple service accounts — but they still belong to different namespaces, and the owner of each namespace has control over whether to give you permission or not.
D
Okay, so here you saw, right: this is the service account that is present only in, say, the argo namespace, and that's why I am able to perform operations only in argo, and if I switch, then it's like: you can't do anything, right? So let us see. So yeah, let me first create, say, a new namespace — say "my-
D
namespace". Okay, so now I can, say, create workflow templates. Now, if I just remove the workflow templates from this permission and I refresh — I don't have permission, right? So essentially it's the same feature, but it's just extended to be at a namespace level, and much more configurable by the owner of the namespace. And it reduces a lot of dependency on the person who installed Argo. So yeah, that was a bit about the demo, and I've just listed out the benefits of this.
D
So one is that it's configured at a user level, and users can define their config in their own namespace, which reduces dependencies and essentially makes the whole process self-serve. There would be cases — say there is a production incident and you would want to promote a user to admin for a small period of time — where, in the previous way, you would have to raise a ticket and ask the other team.
D
But now you can essentially do it yourself, and that saves a lot of time. The second thing is that things are now context-aware, in the sense that the Argo Server knows that the person is making this request in my particular namespace — so let me look at the service accounts and the rules in that namespace, for which the user has more control. Apart from that, you can map a single user to different service accounts based on your context, and thereby easily manage permissions for different namespaces.
E
So how does it know to use the namespaces? Is it automatically scanning?
D
So if you think about it, right: here, when I say workflows/my-namespace, essentially this particular string, my-namespace, goes all the way to the back end, and each request object — if you take, say, a workflow request, or a workflow template request, and everything else — has this namespace associated with it, using which we determine: these are the service accounts which we need to look at and then associate with the user.
E
Okay, and then how does the operator actually opt out of this? Because let's say they don't want it — they're worried about privilege escalation, for example.
D
Yeah, so that is something which we have been discussing — so, currently, what we have decided:
D
basically, this feature is currently in beta, and if you want to enable it, you will need to set this environment variable to true. If you have not set it, then the SSO delegation won't kick in. And I think, going forward, we can even provide an argument to the Argo Server itself, to use this feature or not.
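For reference, in the argo-server Deployment that toggle looks something like this (the variable name is the one the feature gates on; everything else is standard Kubernetes):

```yaml
# argo-server Deployment, excerpt
spec:
  template:
    spec:
      containers:
        - name: argo-server
          env:
            - name: SSO_DELEGATE_RBAC_TO_NAMESPACE
              value: "true"   # the feature is off unless this is set
```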
A
Yeah, I'm just going to add one thing to this: it's a little bit smarter than just pulling the namespace out of the URL. You can actually intercept the gRPC calls, and you've got structured data at that point — the specs — and obviously anything with a namespace has a GetNamespace method, so you can duck-type it to get the namespace out.
D
One more thing — just one more thing: we have added a small optimization. Until now, every call made to Argo Server used to fetch all the service accounts from the Kubernetes APIs — basically, a single call was made for each request. Now we have added an informer to cache all of that, so going forward — basically in v3.3 — you should see a lot fewer calls being made to the Kubernetes API server. That's just one more thing.
A
Okay, so we've got another demo of a new feature, from JP. JP, you ready?
F
Yeah, thank you. So, pod names v2, as Alex mentioned at the beginning of the presentation, includes the template name along with the global workflow name. It currently is available only if you're using the CLI, in 3.2.1 through 3.2.3, but the full UI support will be coming in 3.2.4.
F
It will generate pod names that include the template name. So here we see that the global workflow name is dag-diamond-steps and that each of these is invoking an echo template, and this is really useful for troubleshooting purposes. So if I go to the CLI and do kubectl get pods -n argo, I can find the pod
F
that's invoking this echo template, and this is just generally pretty useful: if we notice that certain pods — or certain templates — are functioning well versus not functioning well, I can go in and troubleshoot the ones that are functioning poorly based on the name, whereas without this information that was previously pretty difficult. To enable this, you set an environment variable of POD_NAMES equal to v2 in both the workflow-controller and the argo-server deployments, and that should be ready in 3.2.4. Real quick demo, but I hope you all liked it.
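A sketch of that toggle — the same variable goes in both Deployments:

```yaml
# workflow-controller and argo-server Deployments, excerpt
env:
  - name: POD_NAMES
    value: v2   # pod names become roughly <workflow-name>-<template-name>-<suffix>
```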
A
Thanks, thanks JP. So what we're going to do now is flip back to our main feature, which will be Scott talking about — hopefully talking about — Azure Orkestra, and not about internet bandwidth issues, which we all have, I think. Good. Thank you, Scott — take it away!
C
Okay, second time lucky — hopefully this will work better. So, a little bit about me quickly, because I wasted a bit of time before: I work for Microsoft, I am a cloud developer advocate on cloud native — I run the cloud native team there. Mostly we work on Kubernetes; we do work on some other stuff as well, like wasm and a few other bits and pieces. My passion is container runtimes. I write mostly Go.
C
I do a bit of Rust now. I collect comic books and, as you can see behind me, I love Funko Pops — I collect those as well.
C
So, a quick overview of what we're going to do today: just look at what Orkestra is and where Argo fits in. I'm going to go through this fairly quickly — this was meant to be a slightly longer presentation, but I've cut it down a little bit.
C
So, one of the most important things that we wanted at the beginning of Orkestra: it's 100% open source. It covers release orchestration and lifecycle management. It started off with a problem around deploying applications with Helm, but it's growing further than that, and I'll explain that in a second.
C
So this is the architecture here. It's built with all CNCF projects, and basically what it does is build complex application DAGs and allow you to do deployments of the individual components of your application — including service mesh, including Gatekeeper, including storage, including anything — by just adding a custom CRD into your deployment and then deploying out your application. And you can see down the bottom there you've got Helm, Argo Workflows, the Helm controller, you've got ChartMuseum, and Keptn to do your —
C
you know, the testing. So this is kind of what the architecture looks like: you've got the application resource group, which is a custom resource, and you deploy that into Orkestra.
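As a rough idea of what that custom resource looks like — this is a sketch modelled on Orkestra's ApplicationGroup CRD and its public examples; the chart details and names are illustrative:

```yaml
apiVersion: orkestra.azure.microsoft.com/v1alpha1
kind: ApplicationGroup
metadata:
  name: bookinfo              # illustrative
spec:
  applications:
    - name: ambassador        # deployed first
      spec:
        chart:
          url: https://app.getambassador.io
          name: ambassador
          version: 6.6.0
    - name: bookinfo
      dependencies: [ambassador]   # DAG edge: waits for ambassador
      spec:
        chart:
          url: https://nitishm.github.io/charts
          name: bookinfo
          version: v1
```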
C
It will look at the Helm charts, the Argo Workflows will be automatically generated, and then it kicks into creating the DAG and deploys out your application for you. We've also got a staging registry there, using ChartMuseum. So basically, what this allows you to do — this is kind of what a standard application looks like:
C
you've got a network layer, a security layer — there's a whole heap of stuff that you probably want in your application, and your application might not be able to tolerate Kubernetes just deploying everything at once through the reconciliation loop. You might need to make sure certain things are there before you deploy other things, and this is where Orkestra comes in: it understands and creates the DAG of how all your infrastructure and application fit together, and then deploys it out for you. That's what this does.
C
So the problem it's trying to solve: previously, when people were doing these deployments, they were probably using CI/CD and having different pipelines to go install Istio and make sure that was there, install, say, Open Policy Agent and make sure that was there, make sure the database was there, and then they would have stacked Helm charts and some sort of testing in between. This allows it all to be seamless, in the one workflow.
C
So the original use case for this was 5G, for the core network function. They had a very complex set of needs for infrastructure, they were all on Kubernetes, and they wanted blue-green deployment.
C
They wanted auto-remediation on failure — you can see there's a lot that they needed — and they found limitations in Helm, and this is where this project got its legs. So you can see here, this is just a platform dependency:
C
you can see there that they needed the RBAC rules, Open Policy Agent, a bare-metal load balancer, Istio, then they had some stateful Redis, and then all the pods were deployed out. So all these components needed to be there prior to the pods being able to spin up, and they needed low latency, and the desired-state loop of Kubernetes would make things fail.
C
So this is built solely for Kubernetes — we haven't extended it out past there yet. (I should have turned off my Outlook notifications.) It's a declarative approach, and it's GitOps-compatible. Basically, what we wanted to do is abstract away the complexity from the end user and make it feel like a normal Helm workflow.
C
So you can see that you can have your application chart with application dependencies, and it will use the workflows to work out what it needs. I am actually an ex-Puppet employee, so this feels very similar to what Puppet used to do for infrastructure back in the day: it builds the graphs — the DAG — understands where everything goes, and allows you to map out very, very complex things and deploy them without having to —
C
By default, we deploy Argo CD — you can change that if there's already a need to, like someone else is using something else, but by default we use Argo. We have got an executor container image that allows you to build your own testing plug-in.
C
So again, this is what it's worked on. You can see that, by default, even if the end user uses Jenkins, they'll still have an Argo installation installed, because we're using it in the back — we're using Argo Workflows as part of the workflow for the Orkestra operator. So you can see there that it's going to deploy Argo as well. This is the roadmap of where we're going — Keptn is actually there, we've got that out now, so that's actually in the repo
C
if you go have a look. And this is a quick diagram of how the workflow works for the plug-in: you've got the Orkestra operator, you've got the executor, and it triggers the action in the control plane to deploy out the workflow, and then that deploys into Kubernetes.
C
So the more complex view of this, with testing: you can see that we generate it through Argo — it's launched, it deploys the Helm release, Keptn triggers an evaluation of whatever it is you've deployed, and you can then have your testing systems plug into the query results.
C
You could then have metrics from Prometheus to say whether something is successful or not — that could be done on any custom Prometheus metric you liked. Then, once it's reconciled, it goes to the Helm controller and the deployment is met. So you can see there how you can start to break down a whole bunch of complex deployment and testing strategies using the automated Argo Workflows, built in with Keptn.
C
And yeah, this is just basically a network function — this is how the network function operator worked in real life, which is very similar, so you can see there: this is a real-life use case of what went on.
C
So this is just the calls between the plugin system and the trigger deploy inside Keptn, to allow it to see what's happening on the control-plane side and speak to the deployment. I was going to do a demo, but I'm actually not going to do that, because my computer is running at 100% and I don't want to attempt the demo.
C
You've got a whole bunch here — you can just deploy this out, and it'll automatically create the upgrade workflows for you, and you can see what's been deployed there. This is just using the Istio Bookinfo app, and you can see it automatically create the workflows and build the DAG for you of the dependency graph of the application you are deploying. So, I rushed through that a little bit more quickly than I wanted to, but I didn't want my slides to stuff up either.
G
Yeah, it seems to be a lot. I mean, it's basically — it's giving me the feeling it's a little bit heavy. I know it's powerful, but it's also giving me the feeling it's heavy and can do a lot of things, and it seems to me I have to use the whole set of tools there to really use this.
C
You can use that, or — if there's a version which matches — you could use something else for ChartMuseum and stuff. We deploy it and keep it in our namespace; it's still available for you to use if you wanted to. But yeah, at the moment, at this stage, there is a dependency on all the upstream components.
C
Okay — and Natasha just put in a demo that was done at a KubeCon pre-event, and that will show a working demo on YouTube. Thank god for videos when I'm having technical difficulties.
C
That is a very good question for Nitish — maybe he can answer that in chat; I'll see if he'll answer that in chat.
C
Yeah, we have a dependency on Helm, for the Helm controller.
A
Okay, great stuff, Scott — thanks for coming along to show that to us today. You know we love seeing different use cases for Argo Workflows, not just machine learning; it's quite nice to see some infrastructure automation stuff as well, because we do many different things. I will take over now. I'm just going to give people a little bit of an overview — and we only have a few minutes left, so it'll be pretty rapid — of a project that myself, Bala and Michael have been working on.
A
Intuit gives us, once or maybe twice a year, one week to work on a specific project that interests us, and the thematic thing for our project this time was to escape from the monolith — systems that allow us to break down monoliths — and what we wanted to do was look at plugins as a way to do that for Argo Workflows.
A
So, the goals for plugins — the reason to use plugins, and I think this is very general to any software, not just Argo Workflows — is that we wanted something to allow people to write code in any language. Obviously, Workflows is written in Golang, but most of our users use Python, so it would be pretty neat if you could extend Argo Workflows using Python rather than Golang. It needs to be relatively simple to do. So the idea is that a plugin actually just runs as an RPC sidecar — we're not wedded to this concept yet;
Yet
we
talked
a
bit
about
using
wasm
as
a
as
an
alternative
technology
for
this
as
well.
But
ideally
it
should
be
very
easy
to
write
your
plugins
and
iterate
your
plugins.
In
fact,
you
should
be
able
to
start
a
plugin
at
run
time,
and
then
you
stop
and
start
it
and
continue
to
kind
of
iterate
with
it.
A
so you're not left without access to that feature when you want to do it. And finally — what's not stated in this list — is the ability to extend Argo Workflows with proprietary or internal tools that you cannot open-source. There's a whole bunch of things you might want to do with workflows, and a lot of companies can't contribute back to open source due to, hashtag, security, for example.
A
So what does a plugin boil down to? Well, there are three software components inside Workflows — the executor, the controller and the Argo Server — and so there are three types of plug-ins: an executor plug-in, a controller plug-in and an Argo Server plug-in. We are not doing Argo Server plug-ins, so I'm not going to talk about those.
A
I think that's a really interesting area as well, but we're really focusing on executor and controller plug-ins, because the controller and executor are really the core components of Argo Workflows — that's where all the power is, that's where the engine is — so adding capabilities there is really useful.
A
So what has been built? Well, you basically have to enable it in the workflow controller, and then we'll have a little look at what a controller plug-in would look like. There are a couple of different APIs you can implement as a controller plug-in. One is a workflow lifecycle hook, which basically means your plugin will be invoked before or after the workflow is reconciled — or operated on, same kind of thing — and that allows you to modify or validate that workflow before it's executed.
A
So you could use that to add labels and annotations to it, or perform validation, or build any kind of notification plug-ins or exporting systems with it. As part of our POC we've built a PagerDuty plugin that reports failed workflows to PagerDuty, and a Slack plug-in that reports your workflow status to a Slack room — all those kinds of things you can build relatively straightforwardly like this. There's also a node lifecycle hook,
A
which ties into node execution, and this is driven by some rather old Kubeflow requirements to basically short-circuit the execution of a node or a template within a workflow. It allows you to go to an external system before that node is executed, or to perform actions around the lifecycle of a particular node. Again, you can do notifications with that, or you can do things like caching and memoization, and another thing it allows you to do is non-pod tasks.
A
So if you don't want to run a pod as your template, we can do other stuff as well. And then the third type is a parameter substitution plugin, which allows you to add your own curly-brace parameters into a workflow that people could use — perhaps that might be some versioning information or something else like that.
A
So this is as it stands today — I think there's a bit of discussion about what the packaging format is — but basically, you implement a particular API endpoint and you send some kind of response back from it. In this example, this Python endpoint will be invoked every time there's a pre-operation; it'll just print out the word "hello" to the console and then return a response, and that's relatively straightforward. And then I can basically install this Python plug-in by using a kustomize patch.
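To make that concrete, here is a minimal sketch of what such a pre-operate hook could look like — the endpoint path, port and reply shape follow the POC as described in this talk, so treat them as illustrative rather than a stable API:

```python
# Hypothetical controller-plugin sidecar for the POC's workflow
# lifecycle hook: invoked before each reconciliation ("pre-operate").
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hook(BaseHTTPRequestHandler):
    def do_POST(self):
        # Path name is illustrative; the POC wires one endpoint per hook.
        if self.path == "/api/v1/workflow.preOperate":
            args = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
            print("hello", args["workflow"]["metadata"]["name"])
            body = json.dumps({}).encode()  # empty reply: don't mutate the workflow
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

HTTPServer(("", 7522), Hook).serve_forever()  # port is illustrative
```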
A
So this is the patch I could use — or it could be my own image. I'm just taking advantage of Python being an interpreted language here to put this relatively simple plug-in straight into my workflow controller. And then the controller needs to discover plugins, so basically I'll create a ConfigMap for that plugin, and that just tells it where to find that controller plugin. Okay — so as an example, let me see if we have a good example to click into.
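The discovery ConfigMap might look roughly like this — the label and keys here are assumptions based on how the POC was described, not a finalized format:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: hello-controller-plugin        # illustrative
  namespace: argo                      # the controller's namespace
  labels:
    # Assumed marker label so the controller can discover plugin ConfigMaps.
    workflows.argoproj.io/configmap-type: ControllerPlugin
data:
  address: http://localhost:7522       # where the sidecar from the patch listens
```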
A
This one will send a PagerDuty notification. It basically gets the old status, the new status and the workflow name, and if it's gone from Running to Failed, it will then create a PagerDuty event for that service, and that will appear in my console telling me I've got a failed workflow, if I want to alert — all that kind of stuff. Pretty simple stuff.
A
Then let's look at an executor plugin. Executor plugins are run as part of the executor rather than part of the controller, so they run in the user's namespace rather than the controller's namespace, and the nice thing about an executor plugin is that, as a user, I can configure it myself — I don't need my system administrator to install the plugin for me — and that plug-in, of course, is isolated to that namespace.
A
So even as a user, I can write a pretty wonky plug-in and it's contained within that namespace for me. There's only one API endpoint for this — there's only one kind of plugin you can write at the moment for the executor, and I think there'll be another one in the future, because it has popped up in various issues — and it's basically "execute a template". This example is wrong, so let's go and find a correct example of an executor plugin to show people what that looks like — let's use the Slack one as the introduction.
A
So it's a plug-in, and under that I've got "slack", and I've got text saying, you know, "workflow has finished", and that'll be parameterized just like a normal workflow. And then obviously I've got a configuration here, and this is my Slack plug-in. Some interesting things about this: again, I've got a description of this executor plug-in.
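The template side of that looks roughly like this — the `plugin` block takes the place of `container`/`script`, and the `slack` key is whatever the plugin listens for:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: slack-example-
spec:
  entrypoint: main
  templates:
    - name: main
      plugin:
        slack:
          text: "{{workflow.name}} finished!"   # parameterized like any template
```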
A
It's not just the address that it should run on, but also the container to run. So when the agent is started up as part of your workflow — the agent's a new feature in version 3.2, responsible for making HTTP requests — the agent is also now responsible for executing plug-in templates, and this container will start up as a sidecar. I'm importing some secret data into it, which is really important, because that's the Slack URL — it's in a secret.
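The plugin's manifest boils down to a ConfigMap describing that sidecar — a sketch, with the image, port and secret name illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: slack-executor-plugin
  labels:
    workflows.argoproj.io/configmap-type: ExecutorPlugin
data:
  sidecar.container: |
    name: slack-executor-plugin
    image: example.io/slack-plugin:latest     # illustrative image
    ports:
      - containerPort: 4355                   # illustrative port the agent calls
    env:
      - name: URL                             # the Slack webhook URL, from a secret
        valueFrom:
          secretKeyRef:
            name: slack-executor-plugin
            key: URL
```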
A
So that allows me to configure the plugin based on secrets in the user's namespace, as a user. And then I've got this here, and what this plug-in does is, you know, listen on that API endpoint. If the invocation is a Slack plug-in, then it will create a request to the Slack API with the text from it, and that will send me a Slack message — relatively straightforward. And finally, it'll respond back with some information about the outcome of the execution of that particular plugin.
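A compact sketch of that handler, assuming the single "execute template" endpoint described above — the path and reply shape mirror the executor-plugin contract as presented here; everything else is illustrative:

```python
# Hypothetical Slack executor-plugin sidecar.
import json, os, urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

SLACK_URL = os.environ["URL"]  # injected from the secret in the plugin manifest

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        args = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        slack = args.get("template", {}).get("plugin", {}).get("slack")
        if self.path != "/api/v1/template.execute" or slack is None:
            self.send_response(404)  # not ours: the agent can try other plugins
            self.end_headers()
            return
        req = urllib.request.Request(
            SLACK_URL,
            data=json.dumps({"text": slack["text"]}).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
        # Report the node outcome back to the agent; the message shows in the UI.
        reply = {"node": {"phase": "Succeeded", "message": "sent Slack message"}}
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(reply).encode())

HTTPServer(("", 4355), Handler).serve_forever()
```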
A
It'll say that the plugin was successful, and that will appear in my node graph — it'll show a green tick next to that particular template — and the message here will actually be shown, so I can report back additional diagnostic information in the plugin's message. And of course, things like inputs and outputs also work with this, so I can create a template plugin that consumes inputs and produces outputs, if I want to. Okay.
A
So to show that — again, it's, you know, 20 lines of Python to execute this, a bit of configuration, and that's kind of it. I just wanted to talk about the failure modes briefly, because that is important for a plug-in and I think it's not fully explored — the two areas that are not fully explored are failure modes and performance.
A
So basically, the assumption is that when the user writes the plug-in, it might be a bit wonky, and the workflow controller should be tolerant of unreliable plug-ins that occasionally just error out for an unexpected reason — that needs to be contained. Typically, the failures from a plug-in fall into either a transient error or a fatal error.
A
A fatal error is anything that's not a 503 or some kind of timeout — all those sorts of signs — and that basically results in the workflow failing, which is obviously what you'd want for a template invocation: the invocation fails, you want it to fail, and maybe you'll have some retry strategy on it so we try again later on. But for a controller plug-in, that fatal failure is a bit more serious, and it needs to be contained inside the plug-in itself — again, the workflow might fail.
A
So there's, I think, an unexplored area here around what failure means, how we should tolerate it, what kind of retries we should do, and how we should treat a failure. Is there a type of plugin that's a critical plugin, which fails the workflow, and a non-critical one, which we ignore? You know, if we can't send a notification to an external system, do we fail the workflow, do we retry later on, or do we ignore it? There are a few options there. And then performance is pretty important.
A
We have people talking about running workflows with a hundred thousand nodes in them now; we know that we have people running 20,000-plus-node workflows, and there's basically an O(a × b) cost to this, where a is the number of plug-ins and b is the number of nodes. It's quite possible that a very large workflow will result in a very large number of network calls.
A
I don't necessarily want our solution to this to be like what we have for Istio, where we always recommend people just disable Istio — it'd be nice to support those cases, and that's where the discussion of WebAssembly comes in. Because an HTTP call is never going to be fast: even if you're using Unix domain sockets and keep-alive — that's as fast as you can go — it's still significantly slower than an in-process call, like ten times as slow.
A
So we may want to look at wasm as a way to let people author plug-ins, but we haven't really explored that either. Okay — you can try this out, by the way. If you want to give it a go, you can pull the pull request, build it and play around with it yourself, and it would be great to hear people's feedback on that.
A
I'm not sure I understand this question — can we pull logs from the executors into the Argo controller? Yes, but you probably wouldn't want to do that, because of scaling issues: the Argo controller might be managing tens of thousands of nodes, even higher numbers than that, and pulling logs into the controller obviously consumes memory, CPU and network bandwidth, so that doesn't scale.
A
How about — if you want to follow up in a bit with that question, you can obviously come ask on Slack.
E
I would think it's possible. I think one thing that a plug-in could do is recognize the storage location where the logs were placed. I don't know if it would have access, but I mean, you could give the controller access, and then the controller, as a plug-in, could download and extract the logs. But I don't think I would recommend putting that much burden on the controller.
B
Okay, yeah — if we have another two or three minutes, I can show the live hook also.
A
I think we should probably wrap up on that, because we're just out of time and people always have eleven o'clock meetings — and we also have an 11:20 meeting to prepare for.
A
Great. So, just some conclusions: thank you very much to everybody who did a presentation today or a little lightning demo. I want us to do more lightning demos in the future, so if you want to come and do one, that's great as well. On Wednesday the 15th of December we'll have Andre and John from Kubeflow Katib, and they're going to be doing a kind of presentation demo for us as well, so I think that'll be pretty neat.
A
As I mentioned, the video will be up on YouTube. If you do want to present or do a demo, just reach out to us on Slack — myself or Bala — and we can sort that out for you. And if you want to get more involved — for example, doing blog posts or anything like that — just reach out to us as well; it would be great to hear from people on that.