Description
Flowify, an open source GUI for Argo Workflows - Adam Cheng @ Equinor
Exploring Messaging Systems with Argo Events - David Farr, Staff Software Engineer @ Intuit
See Medium post for more info: https://dfarr.medium.com/exploring-messaging-systems-with-argo-events-a259f663bd30
GitHub repo for POC: https://github.com/dfarr/kafkanaut
Please help us by filling out this quick survey! (est. 2 minutes): https://forms.gle/JZELvyfoGuncemxi7
A
All right, now that it's four minutes after the hour, we can officially get started. I'm Caelan, nice to meet everybody here, good to see you, and happy holidays. Thanks for joining us at the end of the year; I know things get busy, or at least family time becomes more important, so thanks for stopping by. We have some exciting presentations today. I work in the community as a contributor on the project, so you'll see me in Slack; if you have any issues, feel free to reach out. I focus on Argo Workflows, primarily through my work at Pipekit, which is a startup that provides a multi-cluster control plane for Argo Workflows.

Today the agenda is just community presentations. Joining us are Adam from Equinor and David from Intuit; thank you both for coming to share these exciting projects with us. We'll kick it off with you first, Adam, with Flowify. I'll let you give the full intro, but it's a nice GUI for Argo Workflows that you all developed at Equinor, so I'm really excited to see the latest update from you. Then we'll continue with a presentation from David related to Argo Events and event streaming. I think you have the screen, Adam, right? Is it working on your end?
B
Yep, just shout if you don't see my screen. Thanks for inviting us to this community meeting. I'm Adam. A little bit of a plan for the next 10-15 minutes: we're going to show who we are and why we built Flowify, give you a bit of a demo, and cover what we plan to do with it within Equinor. For those who don't know us, we are not an IT company.
B
We are an energy company based in Norway, working in renewable energy, carbon capture, and oil and gas. If you want to learn a little bit more about us, it's equinor.com. Part of our strategy is open source: we embrace open source, and by default most of the software we develop should be open source. That's why Flowify is open source and why we are sharing it with the community here.
B
Flowify started as a project from several of the super users of Argo Workflows within Equinor. We enjoy the simplicity of the YAML file, and we enjoy its Docker-based Kubernetes technology, but we wanted to push it a bit further: can we bring the benefits of Argo to people who don't know how to code? Because, for the same reason, Argo Workflows can feel foreign to data scientists who are not very good at coding.
B
They sometimes have good ideas and a lot of scripts, but they struggle to bring them to the majority of the people who would use them, the scientists and engineers, because those people don't really know how to run, for example, a single Node.js or Python script. We also lack a lot of the software developers needed to actually bring this into production. So we were thinking: could Argo Workflows be the bridge to bring some of this over?
B
Not
that
complex,
but
quite
beneficial
script
or
or
small
tools
into
the
majority
of
the
engineers,
with
some
with
a
loan
local
UI
for
Argo
workflow.
That's
how
we
begin.
B
So what we want to achieve is that you can get going fast. You just need to work with a Docker container; you don't have to know how it actually works within Kubernetes or within Argo Workflows. Everything, building and wiring workflows together, can be done with a GUI. You don't have to worry about the YAML file. You don't have to worry about the Argo Workflow manifest.
B
We sort that out for you, and the data scientists, or people with some coding background, can focus on building a Docker container. With a few lines of configuration they can build what we call a component, which is basically a container template in an Argo Workflow, and that can be reused by other people: they can grab components and build them together into an Argo workflow. So hopefully, when it works, you don't have to know how to write the Argo Workflows manifest.
B
It's a little bit like the Lego brick analogy: the data scientists can focus on building the bricks, the single Lego bricks, and then the user can take different kinds of Lego bricks and build their house. They don't have to know how a single brick is built. I doubt anyone knows the dimensions of a single Lego brick, but we can all grab them, they all interlock with each other, and you can build houses with them.
B
That's the same concept. So let me go through it step by step, starting with access control. Access control within Equinor is role-based, and in Flowify we map roles to something called a workspace, which is a project or a team within your enterprise. Each workspace is a sandbox, equivalent to a Kubernetes namespace. Inside a workspace, a workspace admin is able to configure Kubernetes secrets and volume mounts for the users, which you can then use in the components and in the workflows. We don't do any verification ourselves; you can use any OIDC client.
B
The only thing we do in the Flowify access control is take the JWT token. Currently that means Azure, because that's what we use, but it's quite straightforward to add other OIDC providers as well. So, to quickly show you what it actually looks like, I'm going to do the demo. This is what Flowify looks like: it's a UI plus a server. You can see workspaces.
B
These are actually mapped one-to-one to Kubernetes namespaces, and they have their own Kubernetes secrets that you can use in your Argo Workflows, and their own volume mounts that you can use in your workspace. Access is role-based, from the roles in your JWT token. If you go inside one of the workspaces, you can see the existing workflows available to you, and this is the admin page, where you can configure some of the advanced settings.
B
This is something that you, as a workspace admin, can configure, so you don't have to let the user worry about it. So this is the first step into the Flowify model: the workspace. It's a sandbox environment; every sandbox has its own secrets and so on, and workflows are also restricted to their workspaces. So let's go into the more useful stuff.
B
We built two concepts called bricks and graphs, which are abstractions over Argo's ContainerSet template. A brick is basically an instruction to run a single container, and a graph is basically multiple container templates that interlock together and become a workflow. I'll come back later to why we differentiate bricks and graphs, but this is mainly for the super users, the people who build components, to worry about; the normal user who builds a workflow doesn't need to worry about it. The fundamental part, the lowest level, is what we call a component, which can be a brick or a graph. You can create a component by clicking this button.
B
Let me show you an example of a brick. We can view this example: it's basically an instruction to run this container, with an entry point and a bunch of args. On the left-hand pane is what's specified for the input and the output, and this gets mapped into the arguments. So if you have a Flowify input called array, it will map to this array input and be passed into the entry point of your Docker command.
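For readers who know Argo Workflows, here is a hedged sketch of the kind of manifest such a brick could transpile to. All the names (image, template, parameter) are invented for illustration and are not Flowify's actual output:

```yaml
# Hypothetical Argo Workflows manifest a Flowify brick could generate.
# Template, parameter, and image names are made up for illustration.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: brick-example-
spec:
  entrypoint: brick
  arguments:
    parameters:
      - name: array                # the Flowify input named "array"
        value: "[1,2,3]"
  templates:
    - name: brick
      inputs:
        parameters:
          - name: array
      container:
        image: ghcr.io/example/my-tool:latest   # the user's Docker image
        command: ["/entrypoint.sh"]             # fixed entry point
        args: ["{{inputs.parameters.array}}"]   # Flowify input mapped to an arg
```

The point of the brick abstraction is that the user only ever sees the `array` input, never this manifest.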
B
So this is just a node, your box, and a way to construct the entry point command that runs the Docker container. This is what we call a component, but you can also have multiple components, multiple bricks, that interlock together into a graph, which is basically a workflow. The benefit is that you're able to abstract a lot away from the user, because at the end of the day the user will only see the input and the output of the component.
B
They don't need to know whether it's a single container or multiple containers. So you can go to a graph and add multiple existing bricks; these are all bricks, so you can add them together and then link them using this drag-and-drop interface. You have the Flowify inputs mapped into the bricks, so what does the user see? They only see array and index.
B
They don't see what's inside, but that is essentially a DAG. So the next part is slightly different: we call it a component workflow. As I said, it's about abstraction. You can have a single brick, which is a single instruction to run one container, or a DAG, a chain of containers, as a component. What's important is that you can abstract all of this away from the end user, who only sees the inputs and outputs and can go to the workflow and chain components together. If a component is a graph, that means you can have a graph within a graph, which becomes a nested DAG; or it can be bricks connected to bricks, which is just a simple DAG, and you run that as a workflow.
B
That's the level of abstraction we're trying to provide to the user. So, say my persona is now a normal user: I don't know anything about Docker, I don't know anything about YAML. What I can do is come in and create a new workflow. You just click create, and it works. I have this one already done, so you can create a graph.
B
It has some type checking, but we haven't really come to a conclusion on what's best, because you don't want to restrict the normal user too much; we haven't found the sweet spot yet. But there is some minimal amount of type checking on what the type of an input should be, and then you can chain components together.
B
I can't connect these two, because the output is a parameter and this one is asking for an artifact. When they match, I can put them together. The user can also use the volume mounts and secrets set up for them by the workspace admin, so you can take whatever is available there. We don't have one in this workspace yet; I believe we don't have a secret. If there were one, you could pick the secret and it would be passed into the environment as a secret, as you would do in a plain Argo Workflow. The user can just run it, without a single line of code. This one can be a brick, this one can be a graph, but it doesn't matter: the implementation of the component itself is completely hidden, so they don't get scared.
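In raw Argo Workflows terms, passing a workspace secret into the environment corresponds to the standard Kubernetes `secretKeyRef` pattern. A minimal sketch, with the secret name, key, and image all invented:

```yaml
# What "pass a workspace secret into the environment" looks like in a
# plain Argo Workflows template; names here are hypothetical.
templates:
  - name: uses-a-secret
    container:
      image: ghcr.io/example/my-tool:latest
      env:
        - name: API_TOKEN
          valueFrom:
            secretKeyRef:
              name: workspace-secret   # a secret in the workspace namespace
              key: api-token
```

Flowify's pitch is that the workspace admin creates the secret once and the end user only picks it from a dropdown.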
B
So this is what we have built at the moment in terms of the user interface. The architecture is here. For the deployment, for the OIDC flow, we just use oauth2-proxy, but it doesn't really matter which one you use: as long as you send your JWT as a bearer token in the authorization header, it's fine. Flowify just gets the token, verifies it, and reads whatever claims you use to identify the user, and also the roles.
B
As I said, we only have Azure, because that's all we have in Equinor, but we are very happy to help with any other OIDC provider. Here we need some help from the community, at least to get some example tokens for us to work with, because we don't have other cloud providers in use yet. And then everything you see, the components and workflows, all these documents have an intermediate layer.
B
We have something called the Flowify manifest, an intermediate layer before we generate the Argo Workflow manifest, and we store it in MongoDB. The Flowify manifest is created when users construct a workflow, and when you submit it, the Flowify server generates the appropriate Argo Workflow manifest and just uses the Argo Workflows API to submit it. The server then checks the progress, and Argo Workflows itself is responsible for the execution of the workflow, because we're not going to reinvent the wheel on all the hard work the Argo Workflows project, including several people within Equinor, has done.
B
What we want to do is leverage the power of Argo Workflows and build a wrapper layer on top of it to make it easier to use. If you go to our documentation site, we have a local Docker Compose setup that you can spin up and play with; we have an example database in there as well, so just follow the instructions. If you want to deploy it in your institution, have a go and get in contact with us. We are very happy to help you set it up.
B
We have a dedicated team to support this. This project came out of Equinor research; it has now exited research and been handed over to our IT operations team. The aim is to spend the bulk of next year bringing it into production, and that's why we want to show it to the community: to get some opinions on how this works and hopefully create a community around it as well. The team is on the right.
B
We have basically four and a half people working on it, so don't hesitate to contact us. We have a discussion board on GitHub; just post anything you need help with, and we're happy to help. If you see missing features, or if you can't get the local test environment running, give us a shout, and have a look at the documentation. We do need some help to see what works best beyond Equinor.
C
This looks great. Is it fair to summarize this as a UI for constructing workflows? Instead of writing YAML, which for meaningful workloads is complicated, this provides a way to drag and drop components, connect one to the other, and eventually, behind the scenes, Flowify will create the Argo Workflow, right?
B
Sorry, no, go ahead. Yeah, that's true. What you do is create these components and workflows; we create an intermediate document called the Flowify manifest, which is a simplified version, and then we use that to construct the Argo Workflow. If there's a need, we can also give you the capability to retrieve the generated Argo Workflow YAML file, the Argo manifest, if that would be useful.
B
We thought about that. Beyond this, we also have some more advanced features we are testing, like conditional statements and parallelization. I think the parallelization works quite well.
D
So are you doing some additional parallelization outside of what Workflows itself does, or making use of that existing feature, but in a user-friendly way?
B
Yeah, so we do have a way. If you can see my screen, let me show it: we have something called a functional component, which is map, which is parallelization. It allows you to say "I want to run this workflow multiple times," and then you can map this one over multiple inputs.
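In plain Argo Workflows, the fan-out that such a map component abstracts away is usually written with `withItems` or `withParam`. A minimal sketch, with invented names, of what a map might compile to:

```yaml
# A plain Argo Workflows loop roughly equivalent to running a component
# in parallel over a list; template and image names are illustrative only.
templates:
  - name: fan-out
    steps:
      - - name: run-component
          template: component
          arguments:
            parameters:
              - name: item
                value: "{{item}}"
          withItems: ["a", "b", "c"]   # one parallel step per list item
  - name: component
    inputs:
      parameters:
        - name: item
    container:
      image: ghcr.io/example/my-tool:latest
      args: ["{{inputs.parameters.item}}"]
```

Each item spawns its own step, and Argo runs them concurrently by default.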
D
Pretty nice. I could see that even for people who know how to use the YAML, it would be nice to just be able to connect these things without having to write the YAML itself.
C
You mentioned data scientists, for example, being one of the users of the system, so they are going to have to write some of their own code as well, right? Say I'm a data scientist and I want to do some model training, or query S3 and then do some sort of data cleanup and whatnot.
B
So, say we go to one of these simple components. This is a very simple one: it just makes an HTTP call to get a random number. What you have to do is create the Docker image: you just need a Docker image that runs this Node.js script, or a Python script, for example. That is the only requirement, because, at least for Equinor, Docker is the standard way to deliver any application. So as long as the data scientists know how to package a Python script into a Docker image, it's straightforward.
E
If people want to, oh sorry, go ahead. Is there a UI for adding these components? Can you do this all from the Flowify UI?
B
Yes. And I have an argument here that takes the Flowify input you saw before. So whatever string goes in here, say url equals XYZ, would be appended to the array of arguments and then go into your Docker entry point. Basically, it's a way to concatenate your entry point arguments. So the only thing you need to do in the code is make sure you have a Docker image.
C
I guess as time goes by these images would get built for various different use cases, so they could get reused across teams, across projects, and across different workflows as well.
B
Yeah, so this is something we have. It's called the Marketplace at the moment. Every single component you make goes into this Marketplace, which can be used from every workspace. In the future, we're also considering that some teams may have secrets and may want components that stay within their own workspace, but we haven't implemented that yet. Everything you see in the Marketplace can be used.
B
We haven't thought fully about reverse transpilation yet, so at the moment it's one way: we just go from Flowify to Argo. The only thing we reverse-transpile is probably the job manifest, because we need to transpile that back so people can see it in the UI; the Argo manifest itself is not reverse-transpiled at the moment.
A
Open sourcing all this, this is amazing. We've shared the docs link here, so check that out. Once again, Adam, anywhere else you want to point people to check out?
A
Yeah, thank you, Adam, great presentation. All right, let me hand it over to David now: Exploring Messaging Systems with Argo Events. David, I think you had a blog post.
E
Otherwise, I'm just going to talk. Thank you.
E
There we go, thanks, Caelan. Hi everyone, my name is David Farr. I work at Intuit. I'm not on the Argo team at Intuit; I'm actually on the data platform team, but I get to work in close conjunction with the Argo team, and primarily we tend to work on Argo Events: we make contributions to the Argo Events project. I recently wrote a blog post called Exploring Messaging Systems with Argo Events, so please check that out.
E
If you haven't had the opportunity already, I'm going to post a link to it in the Zoom chat; it's also in the community Google Doc, and there's also a link to a short survey. We're interested to learn a little bit more about how folks on the call might be using Argo Events, using sensors in particular, and the impact the choice of event bus technology has on the way sensors work, particularly on the way they scale.
E
At Intuit we use Argo Events to coordinate Argo Workflows: when one workflow completes, we run other workflows, and this is done through Argo Events. One of the issues we have with Argo Events is that it necessitates a large number of pods in our Kubernetes cluster, because Argo Events creates one deployment per sensor. Because of the way clusters are managed at Intuit, we have a pool of IP addresses, and every pod consumes one IP address from that pool. If we have too many Argo Events deployments, we get contention on this resource, and because of this, at Intuit the total number of pods in any given cluster must adhere to a strict resource quota.
E
That's just the reality of the way things are, so we know this is a problem. We also know that dependencies and triggers, which are part of a sensor definition in Argo Events, can be aggregated together into a single sensor: both fields are arrays. This would relieve our issue, since we'd be putting all of these workloads into a single Kubernetes deployment and therefore fewer pods.
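Since `dependencies` and `triggers` are both arrays, several event/trigger pairs can share one Sensor, and therefore one deployment. A minimal sketch, with invented event source and workflow names:

```yaml
# One Sensor aggregating two dependencies and two triggers, so a single
# deployment serves both; all names here are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: aggregated-sensor
spec:
  dependencies:
    - name: upstream-a-done
      eventSourceName: workflow-events
      eventName: upstream-a
    - name: upstream-b-done
      eventSourceName: workflow-events
      eventName: upstream-b
  triggers:
    - template:
        name: run-downstream
        argoWorkflow:
          operation: submit        # submit a workflow when events arrive
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: downstream-
    - template:
        name: log-it
        log: {}                    # simplest built-in trigger, for debugging
```

The trade-off being discussed is that packing everything into one sensor saves pods but concentrates all that work on a single processing pod.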
E
You might be aware that, unfortunately, the sensor deployment in Argo Events as it is implemented today doesn't actually horizontally scale. If you've ever specified replicas greater than one in the settings of a sensor (you can do this), any additional sensor pods on top of the initial one will actually just be in standby mode to enable high availability. There's a leader election that occurs, and if the leader pod fails at any given point in time, a different pod is elected as leader; but at any given point in time, only a single pod is processing events in an Argo Events sensor.
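The setting being described is the Sensor's `spec.replicas` field; at the time of this talk, extra replicas act as hot standbys behind leader election rather than as parallel processors. A minimal sketch, with invented dependency names:

```yaml
# Requesting two sensor pods for high availability; only the elected
# leader processes events, while the other pod waits on standby.
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: ha-sensor
spec:
  replicas: 2                    # > 1 adds standby pods, not extra throughput
  dependencies:
    - name: example-dep          # hypothetical dependency
      eventSourceName: webhook
      eventName: example
  triggers:
    - template:
        name: log-trigger
        log: {}
```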
E
So this actually presents a unique opportunity for us, because we could kill two birds with one stone. We could resolve the issue we're seeing on the ground at Intuit, where we've just got too many pods in these clusters, while also enabling horizontal scalability on these sensors. Then we can increase the performance and throughput of these sensor applications for all users of Argo Events, which is something I think would be beneficial outside of just Intuit; the project might benefit from it on a wider level. In addition to that, we also suspect we can improve the consistency of the application while we're at it.
E
It's probably too much to go over in the community meeting, so I would ask you to take a look at the blog post if you're curious to learn about this in much more detail; hopefully not too much detail, the blog post is a little bit verbose. But long story short, in the blog post I put forth an argument concerning Jetstream, which, if you're familiar with Argo Events, is the current Argo Events event bus: the messaging system used to handle these transitory events.
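For context, the Jetstream event bus David refers to is configured through the EventBus custom resource. A minimal sketch (the version value is illustrative and should be pinned to a concrete release in real use):

```yaml
# The default-named EventBus backed by NATS Jetstream, the messaging
# layer Argo Events uses between event sources and sensors.
apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  name: default
spec:
  jetstream:
    version: latest   # pin a concrete Jetstream version in practice
    replicas: 3       # Jetstream runs as a clustered StatefulSet
```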
E
So please check it out if you're interested. I do want to say that my team doesn't have a vested interest one way or the other: whether we go Pulsar, we go Kafka, we go with some third party we're not even thinking about, or we maintain Jetstream, because it's the one that's already there and already been implemented.
E
We're mostly just looking at this from the perspective of: we want to achieve horizontal scalability in Argo Events, and we suspect that Kafka or Pulsar might be a good choice. But if you have a different opinion on the matter, we would love to hear from you.
E
We would love to hear whether horizontal scalability is something you're interested in. Is this a problem you're seeing in your Argo Events deployments? I've put together a quick survey; there's a link to it in the Zoom chat, and there's also a link in the Google Doc.
E
It should only take about three minutes to fill out, so please go ahead and do so if you're so inclined; we would love to hear from you. And with that, I think we can just open it up to questions.
D
One thing I just wanted to mention: Jetstream is sort of managed by Argo Events, whereas I guess you're looking at Kafka and Pulsar as something that would be external to Argo Events, as an optional bus, right?
E
Yeah, that's a good call-out. I don't think we, or users of Argo Events, would want a built-in deployment of something as heavy as Kafka in the cluster. It's probably something a lot of companies already have dedicated in-house, so this would probably be an optional add-on to Argo Events.
C
One quick question: I'm not as intimately familiar with Argo Events as maybe some of you are, but two questions, actually. One is, why does every sensor need its own deployment? And secondly, even in the common case, you would see maybe five, six, eight types of sensors at most, right? So that would mean six or eight IP addresses, but that's not a lot.
D
I mean, you can roll triggers into sensors, so you can have multiple triggers in a single sensor which are all doing different things. But...
E
Each is independent; those are the core concepts of Argo Events. A sensor is the object that acts on events and triggers actions based on those events.
A
And fill it out, I guess. My question for you, David, was just to dig in a little bit more, if you're comfortable sharing: what's one of the use cases you saw from the machine learning platform perspective at Intuit that really drove why you'd need this new architecture? It would be helpful to hear.
E
Yes. So at Intuit, and specifically on my team, this is what my team works on. It's not part of the Argo project; it's an in-house application.
E
It's called BPP, batch processing, and in BPP any developer at Intuit can create a batch processor. You do this in what we call the paved road: it's a UI where you go and author a batch processor. So any developer at Intuit can create a batch processor.
E
They of course have to write the code, the logic; at the end of the day you get a Docker container out, the usual stuff. These batch processors can optionally be wired together into a pipeline, and underneath the hood this all uses Argo Workflows.
E
However, the way one workflow invokes another workflow is through Argo Events; that's how it's implemented on our team. What you end up with, in the total potential space, is a graph where every node is one of these BPP processors and every edge between nodes is a sensor, or something that manifests in Argo Events.
E
In the worst-case scenario this actually grows quadratically, because in the worst case you might have every single BPP processor invoking every other processor when it finishes. In the real world it obviously doesn't work like that, but that's the worst possible case, and that's the reason we see so many sensors: the potential number of edges grows with the square of the number of processors.
A
Very cool, thanks for sharing that. I think some of your colleagues did a talk at ArgoCon related to this; we have the link below in the community document, if you scroll back down to October or so, and they went through that whole architecture, I think.
A
That was it, okay, yeah: in the workflows track and the events track at ArgoCon. Check that out; I think we have the link down below in the community doc. Any other questions for David?
A
Awesome. Well, thanks everybody for joining. If you enjoyed the presentations today, give us a clap emoji in Zoom. We'll see you in January, and if anyone else has talks or demos like this they want to share, or blog posts like David did today, it's perfect for the community meeting. Just DM me on Slack, at Caelan, you'll find me in there. Happy holidays, everybody, thanks for joining. Thanks.