From YouTube: ☀️ GSoC CloudEvents plugin mentor meeting 2021.7.26
Description
Discussion on architecture, including consideration of Knative eventing architecture, especially its use of brokers. Lots of discussion on the CDF SIG Events Protocol Proof of Concept: https://github.com/cdfoundation/sig-events/blob/main/poc/README.md
Super interesting work! ☀️
B
No, I thought you were gonna do the introduction.
A
Oh okay. Welcome to today's GSoC mentor meeting for the Jenkins CloudEvents plugin. We are speaking about Shruti's work in the last week and moving the plugin towards a first release.
B
So I did review the pull requests and got through most of them. Just one thing I wanted to say: the README would be nicer with a TOC, a table of contents. That's it, and I think we can move on to doing the first release this week. We can catch up on Wednesday, or even tomorrow, to do that. So what are your next steps for the plugin, then?
C
Okay, got it, yeah. So in the past week we talked with the events SIG team, just to get a better understanding of how they have been developing their events sink for the Keptn and Tekton integration. I was able to clear up some things in terms of how we can design a sink which is agnostic and can deal with different kinds of events, in different structures, coming in from different sources.
C
So I think if we can talk a bit about that, so we have a better understanding of the infrastructure that can be in place for the stage where Jenkins acts as a sink, I think it'd be...
C
...that would be helpful in moving forward and thinking through everything that's needed for Jenkins as a sink. Last time, Kara and I were also discussing how to design the sink so that it is a naturally cloud-native sink: one which can deal with faults, with network and transient failures, and can also handle retries.
C
But
retries
is
also
applicable
to
jenkins
as
a
source,
but
that's
something
that
can
be
accomplished
in
an
easy
manner.
But
for
jenkins
is
a
thing
you
know.
So
when
we
were
talking
with
the
unsight
team,
they
they
specifically
had
sort
of
a
a
middleware
I'd
say,
which
is
dealing
with
events
coming
in
from
tecton
and
captain.
So
they
have
like
captain,
inbound
and
tucked
on
inbound
and
like
outbounds,
for
both
of
them
and
what
they're
doing
is
basically
like
transforming
the
the
cloud
events
which
is
going
out
and
then
they're.
C
They have a middleware in between, which is like a CloudEvents broker, which deals with receiving and sending the events. And if our sink is purely an HTTP request/response system, like if we're only receiving events in that manner, you know, only via a POST request to the endpoint where it's present...
C
If we have Knative integrated with this sink, what would that look like? It's going to be two different tools, but also, when someone is configuring Jenkins as a sink, would we want to spin up Knative, so we can configure it as infrastructure-as-code or something similar? I was actually curious to hear your opinions on the Jenkins-as-a-sink infrastructure, specifically on using a pub/sub-like infrastructure, or going with a different protocol for both the sink and the source.
C
For example, if we want to introduce Kafka both for Jenkins as a source and as a sink: CloudEvents does support a protocol binding for Kafka. And if we want to go with a queueing infrastructure, there's RabbitMQ, which supports the kind of system we might want to work with, a fault-tolerant, transient-failure-tolerant system.
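For context, the Kafka protocol binding mentioned here carries a CloudEvent's context attributes as `ce_`-prefixed Kafka record headers (in so-called binary mode), while the event data becomes the record value. A minimal sketch of that mapping, with illustrative class and method names rather than any real SDK (in practice the `io.cloudevents` Java SDK's Kafka module handles this):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the CloudEvents Kafka protocol binding in "binary" mode:
// each context attribute travels as a record header prefixed with "ce_".
// Names here are illustrative, not part of any real SDK.
public class KafkaBindingSketch {

    /** Map CloudEvents context attributes to Kafka record headers. */
    public static Map<String, String> toKafkaHeaders(
            String id, String type, String source) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("ce_specversion", "1.0");
        headers.put("ce_id", id);
        headers.put("ce_type", type);
        headers.put("ce_source", source);
        return headers;
    }
}
```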
B
So I think we can try implementing retries, probably with HTTP. But if you want to start playing around with the Kafka protocol binding for CloudEvents, what you could start doing is just set up Kafka.
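The HTTP retries discussed here could be sketched as a retry loop with exponential backoff around the delivery attempt. This is only an illustrative sketch, not the plugin's implementation; the send action is abstracted as a `Callable<Boolean>`, where a real sink would wrap an HTTP POST:

```java
import java.util.concurrent.Callable;

// Hedged sketch of retrying a CloudEvent delivery with exponential backoff,
// the kind of transient-failure handling discussed above.
public class RetrySketch {

    public static boolean sendWithRetries(Callable<Boolean> send,
                                          int maxAttempts,
                                          long initialBackoffMillis) {
        long backoff = initialBackoffMillis;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                if (send.call()) {
                    return true; // delivered
                }
            } catch (Exception transientFailure) {
                // treat exceptions as transient; fall through to backoff
            }
            if (attempt < maxAttempts) {
                try {
                    Thread.sleep(backoff);
                } catch (InterruptedException interrupted) {
                    Thread.currentThread().interrupt();
                    return false;
                }
                backoff *= 2; // exponential backoff between attempts
            }
        }
        return false; // gave up after maxAttempts
    }
}
```

A broker-based design would instead delegate this loop to the broker, which is the trade-off being weighed in this conversation.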
B
I haven't worked much with Kafka, but you could just set up the infrastructure for now, start playing around with it, and along the way you will see problems which come up and things which could be better. But in terms of implementing pub/sub and everything from the Jenkins side directly, I'm not so sure about that.
B
We
probably
it
probably
would
be
good
if
all
of
that
stuff,
like
even
retries
and
all
these
things
actually
would
be
managed
by.
C
Yeah, and for Knative, that's what the broker is doing. So you have events coming in from Tekton, using sort of a Tekton outbound, sending a specific kind of event which Knative is expecting over to the event broker. And the event broker, the CloudEvents broker, is the one implementing the retries, and also implementing sort of an asynchronous way of handling all of the events and messages coming over.
C
So with the Jenkins sink, that's my only concern, because I remember them saying that all the different sources have different data, or different ways in which they have structured their CloudEvents.
C
I don't remember them saying that any specific solution for designing an agnostic sink has worked, because everything is different. So when we're thinking of implementing a generic sink, we might also want to think about having that middleware, which can handle both the different kinds of protocols and the different structures of data coming in, and then shape it into the form Jenkins should receive.
A
Shruti, from my understanding, and you can correct me if I'm wrong: the way they're using the broker, this middleware, is what enables the sink to be agnostic to the underlying technologies. So this is what essentially provides a translation layer, whatever the underlying transport is, whether you're doing RabbitMQ or Kafka or whatever else underneath.
C
Yeah, so for them, what I understood was that the responsibility of creating events which are understood by both the sender and the receiver lies with both the source and the sink. I can actually go back and pull out the PoC that they have. Also, there's a pub/sub-like plugin for Jenkins, which is a light version of pub/sub, not necessarily something that will be helpful, but I've definitely looked into it.
B
If you think we can use the existing logic in that pub/sub plugin you're talking about, could you send the link to that plugin?
B
Maybe it's a good idea to experiment with it. So initially, why don't we try playing around with those Knative brokers and see how that goes?
C
Yeah, that sounds like a good idea. And another thing is: if we do want to support just Knative, where or how should I, like, as a part of the...
B
When we try that with Knative, does it mean that we support only Knative? Because I think CloudEvents are agnostic that way. Anything that reads CloudEvents will read a CloudEvent, whether or not it is supported by Knative.
C
Yeah, so the Knative broker specifically works with CloudEvents. It can work with other structured event data, but I think the Knative broker itself only works with CloudEvents. But if we want to substitute this with something... can you guys see my screen? Yeah, so here's the PoC that they had.
C
They had the Tekton outbound, which is sending CloudEvents over, and then the broker is receiving them, and then they have a trigger. So if any event comes in looking like it's from Tekton, that trigger, you know how we were mentioning setting filters on the ce-type and ce-source, that's what the trigger is going to do: basically set filters.
C
So the trigger is going to be like: okay, if any event comes in with a Tekton ce-type, then send it over to this Keptn inbound, and the Keptn inbound is what transforms it into something Keptn can use.
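The trigger-as-filter idea being described could be sketched as a predicate on an event's ce-type that routes matching events to an inbound transformer. All names below are hypothetical; Knative implements this with Trigger objects and subscriber URLs, not Java code, so this is only a model of the behavior:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Illustrative sketch of the broker trigger described above: a trigger is a
// filter on CloudEvents context attributes (here just the ce-type) plus a
// destination "inbound" that transforms matching events.
public class TriggerSketch {

    record Trigger(Predicate<String> typeFilter, Function<String, String> inbound) {}

    /** Route an event (represented by its ce-type) through the first matching trigger. */
    public static String route(List<Trigger> triggers, String ceType) {
        for (Trigger t : triggers) {
            if (t.typeFilter().test(ceType)) {
                return t.inbound().apply(ceType); // hand off to the inbound transformer
            }
        }
        return null; // no trigger matched; the broker drops or dead-letters the event
    }

    public static List<Trigger> demoTriggers() {
        List<Trigger> triggers = new ArrayList<>();
        // "if any event comes in with a Tekton ce-type, send it to the Keptn inbound"
        triggers.add(new Trigger(
                type -> type.startsWith("dev.tekton."),
                type -> "keptn-inbound received " + type));
        return triggers;
    }
}
```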
C
So, as Kara was saying, the CloudEvents broker is agnostic, because it's only actually dealing with the triggers, which are looking for a particular type of event. But both of these services, Tekton and Keptn here, have their inbounds and outbounds.
C
You
know
which
are
sort
of
dealing
with
that
conversion
of
events
into
a
format
which
can
be
used
by
each
other.
So
if
tecton
has
this
outbound
captain
has
this
inbound
and
what
this
is
going
to
do
is
basically
transform
the
events
coming
from
tecton
into
something
that
captain
can
use
and
understand,
so
that
that
was
one
of
the
thing
which
was
I
like
thing.
C
The thing about Knative which is really interesting is that it works with CloudEvents as they are. So a person can set triggers for a Knative service which will be running inside of, or maybe as a part of, our plugin.
C
It's sort of that interoperability between different open-source tools operating together. So the user would have to set up these triggers, but those triggers won't be inside Jenkins as a sink; they would live inside the Knative service which is running as part of this plugin. The only remaining thing is about these inbounds and outbounds, because those are specific, you know how you mentioned the direct interoperability system, so again having those agents and that kind of system. So that is one thing.
B
I'll tell you what it will look like. The Keptn bridge and the Tekton dashboard are the front end. The CLI is there too, but okay, forget about the CLI; that will be the Jenkins front end, that will be Jenkins on port 8080, from where we can check the Jenkins UI. Okay, Keptn inbound... so Tekton triggers are basically webhooks.
B
Okay-
and
these
are
these
are
event
listeners
which,
with
which
we
can
use
which
technon
can
use,
and
then
these
are
created
on
tecton
site
and
then
consumed
by
the
broker,
where
a
trigger
object
is
made
by
the
k
native
cloud
events
by
the
by
k-nator,
and
this
trigger
object
is
created
with
the
webhook
url
given
by
tecton.
Okay.
So
that's
what
it's
triggering
same
thing
is
with
kept
on
inbound.
Do
not
look
at
the
so
these
two
are
technically
the
same
because
they
are
both
web
hooks.
B
There's a CloudEvents controller for Tekton. What the CloudEvents controller for Tekton does is just send CloudEvents: you configure a sink where you want to send CloudEvents, and the CloudEvents are sent to that place. That's all that's happening. So what is happening here is that all the events are just sent to the CloudEvents broker over here, and the trigger is the one actually doing all the work. Now, to translate this into Jenkins...
B
So right now, what we are done with is the outbound, right?
B
You
would
say
that
we
are
done
with
the
outbound
and
we
are
kind
of
working
on
the
inbound
inbound
stuff
so
and
we
are
trying
to
make
the
inbound
stuff
a
little
better,
so
in
terms
of
the
inbound,
which
is
jenkins
as
a
sink.
B
So
this
is
that's,
that's
the
inbound
stuff
that
we
still
have
to
work
on
and
like
make
it
a
little
better.
So
that
is
our
next
step
in
this
process,
but
for
now
what
we
can
do
is-
and
I
think
it
will
be
a
good
exercise
as
well,
but
like
just
play
with
the
same
architecture
and
forget
about
this,
I
don't
know
what
this
cloud
events
player
is.
Very
frankly,
I
think
it
might
be
a
ui
from
which
you
can
yes,
this.
B
Okay,
so
it's
it's
basically
like
a
more
sophisticated
socket
for
so
the
cloud
events
player
view
monitor
and
measure
without
events.
Okay,
so
what
we
can
do
as
a
good
exercise
is
we'll
just
replicate
this
entire
architecture
for
jenkins,
and
then
we
will
remove
captain
out
of
the
picture
and
then
slide
in
jenkins,
and
then
obviously
we
won't
be
able
to
or
do
any
inbounds
right
now.
But
what
we
can
do
is
we
can
do
outbound.
B
You will have to configure a Knative trigger to say that once this Jenkins type of event comes in, trigger a Tekton pipeline which does something. Okay, so I think this would be a really good experiment you can do next week, and this way you can probably get an idea of what the PoC will look like. You can also showcase this to the events SIG: that you have done a PoC of Jenkins interoperability with Tekton. And obviously, if that works for Tekton, it will work with Keptn as well, because you'd probably have to create a similar trigger.
B
The only things you would have to change in that trigger are the webhook it's calling and the payload that is there. So I think that's a good next step you could take. But I don't know if I'm just blabbering or if I've answered any questions.
C
No, no, it does make sense, and I am following; you have more insights on this, since you have worked with this system. The only thing I'm looking at right now is understanding that, because I remember them mentioning how the inbound and the outbound for both Tekton and Keptn are also doing a bit of manipulation to adjust the event on the way over.
B
I can help you set it up if you want, and we can kind of do a PoC with Jenkins and Tekton.
C
Yeah, oh yes, I was going to say that. So, if we do decide in the future to move with this architecture, I was also wondering, going back to the question of how we can enable this thing in the middle, the Knative system, alongside Jenkins, like the CloudEvents plugin for Jenkins. I'm just thinking a bit about whether there needs to be a script that can also start this architecture, but that would also need, like, a pod.
C
Not thinking of it necessarily as a dependency, but more or less: if we need Knative as sort of a service from inside the Jenkins plugin, as in "this always needs to be there", then yeah, maybe we can call it a dependency, but the plugin obviously can work without it.
C
We
don't
always
like
need
k
native,
but
if,
if
someone
happens
to
do
or
happens
to
initiate
this
system
running,
he
wouldn't
want
to
go
back
and
set
this
up
on
k
native,
but
they
like
need
a
system
which
is
you
know,
asynchronous
and
which
can
similar
to
like
a
native
can
handle.
All
of
this
like
have
that
cloud
events
broker.
C
So
if
we
end
up
using
like
kafka,
you
know
some
other
pop
sub
mechanism,
or
some
cue
we
still
have
to
we
would
we
might
like
have
to
make
so
any
structure
or
any
format
of
like
supporting
cloud
events,
because
we
we
don't
have
the
trigger
inside
of
jenkins
as
a
single
source
right.
We
only
have
that
inside
of
k
native
so
for
that
filtering
and
then
sending
it
over,
because
the
idea
here
we'll.
B
Let's do one more example after this. Once you are able to achieve this bit of exercise, where you replace Keptn with Jenkins and test with the Jenkins outbound against the Tekton inbound, we'll do one more exercise where we replace this middle bit, the CloudEvents broker, with something else. We could probably discuss that next time; I'll try to figure something out, but maybe we replace it with Apache Kafka.
B
You
know
and
see
how
we
can
make
that
work,
because
the
point
of
indirect
interoperability
is
that
you
don't
you
it.
It
should
all
be
fairly
decoupled
and
you're
just
using
like
on
data
packets
to
tell
each
other.
What's
going
on
what
is
happening
and.
B
Focus
is
that
so
from
what
you're
saying
right
now,
it
seems
that
you
are
considering
this
cloud.
Events
broker
probably
has
a
hard
dependency
at
some
point,
but
that
won't
be
the
case.
B
So
you
pro
and
at
that
point,
when
we
do
the
second
exercise,
we
also
get
a
better
idea
of
the
event
filtering
or
something.
If
you
want
to
do
in
on
our
side,
what
that
should
look
like,
what's
important,
what's
or
just
some
simple
filtering,
even
maybe
use
rejects
or
something
we
could
do
that
on
our
side.
But
I
think
it's
a
good
idea
to
do
some
like
test
architectures
like
in
this
case
like
this
is
actually
a
very
good
diagram.
B
Okay, I just checked some documentation: they have their own Kafka broker as well. So maybe, instead of completely replacing it, you could extend this to use the Kafka broker, to send things to Kafka initially, and then after that remove it completely and just connect Jenkins through Kafka to Tekton, or something like that. I'm not sure how well that will work.
B
My
lingua
is
not
on
point
and
then
we
can
do
an
experiment
where
we
remove
the
k
native
based
kafka
broker
and
describe
with
a
normal
kafka
broker.
We
could.
C
...decide on giving users the ability to choose the protocol binding. So do you guys think that the protocol binding should be something the user enters, or, if a person is selecting a particular kind of broker, it's just set automatically? Yes, I think that's the easier way; I think that's the better idea.
B
Yeah,
it
should
be
fairly
easy
because,
like
you
can
you
can,
you
can
say
something
like
protocol
binding
and
then
in
a
drop
down
they
can
select
kafka
or
http
or
whatever,
and
then,
when
you're
creating
the
yeah,
it
should
be
fairly
easy.
I
think
just
like
when
you're
creating
it
you'll
have
a
switch
case.
It
should
be
simple
enough.
C
Yeah, so, like you suggested, having an abstract sink, and different kinds of sinks just extending from it and building on it. I think that can be really helpful here, because we'd have an HTTP sink extending from the abstract CloudEvents sink, and then we'd just have different kinds of sinks for sending events. So I think we might also not even need...
B
Thinking
if
yeah
abstract
makes
a
lot
of
sense
with
my,
but
the
final
product
is
gonna,
be
a
json
right
like
like,
like
a
json
object
like
a
cloud
events
object.
The
final
product
is
a
tower
console
right,
so
I'm
just
thinking
instead
of
doing
the
abstract
thing.
Would
it
be
better
to
have
a
sort
of
switch
case
which
manages
all
this
and
because
I
I
don't
want
you
to
waste
too
much
time
on
the
ui.
C
I think, in terms of just making different sink classes, that way we can separate the kinds of things. So the UI has a sink type, right, so a person can see the sink type, and what this can also do is separate the UI sink type from the implementation sink type, so the things are kept separate.
C
What this will really need is to just look at the place where we are sending events and see: if the sink type is HTTP, then use the HTTP sink class to send events; if it's a different format, then use that format. So we can use a switch case there and then go into the classes, because we'd still have to do the design: the cloud event is going to look different per sink.
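The two ideas just discussed, an abstract sink that transports extend, and a switch over the UI-level sink type that picks the implementation, could be combined roughly like this. All class names are made up for illustration; the real plugin's classes may differ:

```java
// Sketch combining both ideas from the discussion: an abstract CloudEvents
// sink that concrete transports extend, and a switch over a UI-level sink
// type that selects the implementation. Names are illustrative only.
public class SinkSketch {

    abstract static class CloudEventSink {
        /** Deliver a serialized CloudEvent; returns a delivery description. */
        abstract String send(String eventJson);
    }

    static class HttpSink extends CloudEventSink {
        @Override String send(String eventJson) {
            // a real implementation would POST eventJson to the configured endpoint
            return "POST " + eventJson;
        }
    }

    static class KafkaSink extends CloudEventSink {
        @Override String send(String eventJson) {
            // a real implementation would produce a record via the Kafka binding
            return "PRODUCE " + eventJson;
        }
    }

    /** Map the UI sink type to an implementation: the "switch case" idea. */
    static CloudEventSink forType(String uiSinkType) {
        switch (uiSinkType) {
            case "http":  return new HttpSink();
            case "kafka": return new KafkaSink();
            default: throw new IllegalArgumentException("unknown sink type: " + uiSinkType);
        }
    }
}
```

This keeps the UI-level type a plain string while the per-transport details live in the subclasses, which matches the "keep the UI sink type separate from the implementation sink type" point above.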
B
Even I would have to check before I say anything, really, because, to tell you the truth, I'd have to figure it out too. But what we can do is sync sometime this week, once we gain enough info on what that is, and on my side I'll also check out what we can do. But I think to start off with the Kafka bits, we can just use the Kafka broker which Knative provides, and start off with that before moving directly to Apache Kafka.
C
I'm really sorry; the whole thing with the power issues and the power cuts here really derailed what I had planned.
B
I can't say much about that, but I hope you get power on a more regular basis. Anyway, when are you free this week? You can actually just ping me; I'm pretty much free this entire week, so just ping me when you're free and we can catch up on the release bits, and then, after that, if you want to sync on the architecture bits, I can help you with that as well.
B
I don't know what else might be needed, but you were able to install all the CRDs that you need, and Knative is something you were able to install in minikube. Probably the next thing you can do is figure out what triggers are, how to set up triggers and all those little bits. And then, after that, you have to figure out running Tekton locally on your minikube and using the CloudEvents controller for it.
B
I
don't
know
if
you
need
that
and
setting
up
the
triggers,
which
you
will
definitely
need
for
the
trigger
template
trigger
bindings
and
whatever
things
are
there
in
the
triggers
controller,
you
have
to
learn
a
little
about
them
and
then
just
create
the
event
listeners
and
then
give
it
to
the
cloud
event
triggers
and
then
once
and
set
jenkins
as
a
source
on
one
of
the
broker.
I
think
so
we
can
talk
about
that
sometime
this
week.
B
Not exactly; I did deploy it, but I couldn't get to running things with it. I just got a little confused with the abstractions and the wording they used for certain things, but I will probably have to check it out at some point, and we can just have a hacking session and figure out what Keptn is about. Because, as far as I know, they...
B
Like
they
really
embrace
cloud
events
and
try
to
work
with
them
as
much
as
possible
and
use
that
as
their
main,
medium
of
communication
kara,
like
do
do,
let
me
know
if
I'm
going
wrong
here,
because
I
feel
you
know
a
lot
about
captain
compared
to
what
I
do.
B
Yes, yes. So maybe in the next meeting what we could do is have a Keptn session, after the hacking session maybe, and figure stuff out. Whatever questions you might have about Keptn, we can ask Mauricio when he comes to the next meeting. We can make it a point to call him in for the next meeting for the Keptn discussion.
C
I don't know, but I'm just going to go out on a limb and say that it's the summer, probably. But I'm not sure; Kara, you might know more about it.
A
Think
that
is
exactly
correct.
It's
due
to
the
summer,
we
did
the
same
thing
with
the
interoperability.
C
I have also started with Jenkins as the sink, and again, the one thing is the setting up of, I would say, triggers, because we have been talking in terms of triggers for Knative. That's why, when I was mentioning that dependency on Knative, one thing was leveraging the triggers that Knative has. Because, yes, it would be easy to implement it inside of Jenkins...
C
But
if
we
are
already
working
with
another
sort
of
system
that
that
is
natively
made
to
trigger
and
filter
based
on
a
cloud
events
metadata.
I
think
my
question
here
would
be
that.
B
Yeah,
we
that
would
be
jenkins
as
a
sync
right.
B
Yeah,
so
probably
in
that
one
sink,
what
we
can
do
is
we
can
have
a
object
called
triggers
which
the
user
can
set
up
and
then
certain
like.
If
you,
if
in
the
sync
you
say
you
want
to
create
three
triggers,
let's
do
three
different
things
and
the
first
one
would
be
something
like
you
know,
just
start
job.
B
One
second
would
be
like
start
job
to
start
a
top
three,
but
you
could
do
that
like
and
I
think
before
we
reach
that
we
have
to
kind
of
like
redo
the
ui
move.
It
move
it
to
manage,
manage
jenkins
and
do
all
that
sort
of
stuff,
but
yeah
you
can.
You
can
start
off
with
that.
Whenever
we
do
it,
we
can.
C
So yes, the work on that: I have been on it, and I would say that most of it has sort of been moved out, but it's not very clean, because I was just rushing over it and writing it down. So I would still move probably everything, and then design the idea of Jenkins as a sink specifically aligned with triggers and filters.
B
Yeah,
I
feel
that
there's
a
lot
of
stuff
that
we
need
to
do
with
jenkins
with
us
as
a
sink.
It's
it's.
Probably
it's
probably
a
better
idea
to
do
it
in
some
time.
As
we
understand
you
know,
the
architecture
builds
itself
and
how
the
sync
would
be
used
and
getting
started
with
the
architecture
with
the
cloud
events
broker
and
everything
is
a
good
place
to
start
like
once.
We
start
understanding
the
one
one
flow
which
is
jenkins
to
techcom.
B
Initially we can make design document drafts, and we can get some help from those with how we can design that part, or see if we can just use the implementation that they've already done and kind of refactor it for CloudEvents.
C
Yeah,
yes,
that
does
sound
good
and
most
of
triggers
for
this,
like
with
the
gainitive
broker,
they're,
usually
sat
on
the
header
fields
or
not
the
headers,
I
would
say
the
metadata
about
the
event.
So
if
do
we
still
wanna
sort
of
have
that
structure
specific,
like
probably
like
in
this
introductory,
and
the
starting
out
phase
wanna,
keep
the
filtering
option
for
the
body
or
the
data
specifically
of
the
event
as
well.
B
Yeah
we
should.
We
should
allow
the
user
to
filter
everything
very
quickly.
The
boy
feels
like
we
should
allow
them
to
filter
each
field
individually
with
a
different
filter,
each
part
of
the
header,
each
different
header
with
a
different
filter
and
then
the
body
itself
with
a
different
filter
like
the
body
contains
this.
Does
this
like?
I
think
we
should
allow
the
user
to
do
that.
B
Yeah,
because
if
we
just
allow
only
a
few
kind
of
filters,
like
only
the
header
or
you
know
some
parts
of
the
body,
I
don't
think
we
can
do
that,
but
just
the
header.
I
think
it
will
be
good
for
to
start
off,
but
we
should
allow
them
to
do
everything.
C
So if you guys can see my screen right now: we're still inside the PoC, and we're looking at how they're designing the triggers for Keptn, like the Tekton outbound.
C
Yes,
so
like
the
events
which
are
going
out
from
tucked
on-
and
so
you
know
they
have
this-
let's
see
so
they
have
like
the
bindings
they
have
like
100.
maxi
type.
This
is
the
c
type
that
an
artifact
was
published.
C
You
know
this
they're
doing
like
bits
with
expression
replacing
register
whatever,
and
then
they
have
like
these
bindings
where
they
have
okay,
the
name.
This
is
the
name
like
sh
captain
context
and
body
dot.
Sh
captain
context,
so
this
is
like
this
like
this
is
the
binding
with
like
the
the
event
data
whatever
is
present
inside
of
the
event
body
so
like
obviously,
here
it's
like
that,
they
are
aware
of
the
kind
of
events
which
gets
triggered
from
decked
and
so
like
the
entire
body
and
the
entire
life.
C
I
think
that
we
can
take
reference-
some
like
somewhat
reference
for
this,
but
for
us
it's
going
to
look
very
different
because
we
obviously
can't
just
like
give
them
like
findings
like
okay
or
just
like
filters
like
okay.
This
is
going
to
be
inside
of
the
body
and
then
body
dot
whatever,
because
we
don't
know
where
it's
coming
from.
C
So
that's
also
one
thing
that
we
have
to
think
about
of
how
we
can
allow
user
to
go
very
modular
with
whatever
filters
they're
setting
on
the
body,
because
our
sink
is
agnostic
and
it
doesn't
know
how
to
park.
It
knows
how
to
parse
it,
but
but
it
doesn't
know
how
to
find
that
particular
piece
inside
of
the
body.
B
Yeah,
if
you
remember,
we
discussed
this
last
time
with
the
cell
interceptor
so
discussed.
B
So
if
you
can
go
back
to
the
yaml
for
triggers
and
you
go
back
to
the
filter
that
was
there,
I
can
show
you
so
what
is
happening
over
there,
that
they
are
using
an
interceptor
which
is
a
common
expression,
language
interceptor,
which
we
saw
last
time
and
the
language
that
they're
using
over
there
with
the
header
dot,
match
something
something
so
that
language
is
the
common
expression
language,
and
I
don't
think
we
have
a
java
library
so
that
we
could
use
which
we
could
use.
B
But
it
is
definitely
some
an
idea
with
which
we
can
go
ahead.
So,
instead
of
cl,
we
could
use
something
else
for
filtering
and
we
can
obviously
start
off
with.
We
can
obviously
start
off
with
something
else
like
you
know,
just
simple
matching:
we
don't
even
need
any
kind
of
library,
but
in
due
time,
what
we
can
figure
out
is
what
we
can
use
for.
This
sort
of
you
know
data
extraction
from
the
cloud
events
which
the
users
can
use
ahead
in
their
whatever
or
triggers
they're
using.
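The "simple matching, no library" fallback mentioned here, combined with the earlier point about filtering each field with its own filter, could be sketched by pairing each event attribute with a regex. This is purely illustrative and is not the CEL interceptor, just a plain-Java stand-in for it:

```java
import java.util.Map;
import java.util.regex.Pattern;

// Sketch of the "simple matching" fallback discussed above: instead of CEL,
// each filter pairs an attribute name with a regex, and an event passes only
// if every configured filter matches its attribute.
public class SimpleFilterSketch {

    /** True iff every (attribute -> regex) filter matches the event's attributes. */
    public static boolean matches(Map<String, String> eventAttributes,
                                  Map<String, String> filters) {
        for (Map.Entry<String, String> filter : filters.entrySet()) {
            String value = eventAttributes.get(filter.getKey());
            if (value == null || !Pattern.matches(filter.getValue(), value)) {
                return false; // missing attribute or regex mismatch
            }
        }
        return true;
    }
}
```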
B
It lets them run those in their pipelines. What this allows the user to have is the dynamic nature of the pipelines, which is changing the variables and running them. The changing-the-variables part is completely done by the triggers themselves, so the DevOps person, whoever it is, doesn't have to think about it.
B
So
this
is
definitely
like
some
feature
which
we
which
we
should
have,
and
so,
whenever
we
are
done
with
the
initial
implementation
of
this
thing,
so
once
once
we
reach
the
trigger
stage
and
we
are
able
to
do
match
we
at
that
point,
we
should
figure
out
when
we
like.
What
do
we
need
to
do
for
this
value.
C
Yeah, because, if I'm understanding correctly, the interceptor they're using is more specifically for the matching, but the bindings are what we were talking about for extracting, say, the job name from the event body that's coming to Jenkins. You know, so Tekton says: okay, I have completed this artifact, and now I want...
C
I
want
to
start
a
job
with
this
name,
so
that's
so
something
like
maybe
body,
dot,
jenkins
job
name,
and
that
would
be
that
would
have
the
nemo
jenkins
job
name
and
then
we
would
trigger
a
job
like
using
using
that
particular
variable,
so
so
the
the
interceptor
it
I'm
not
sure
if,
like
using
that
inside
of
I
mean
yes,
we
can
use
that,
obviously
it'll
be
used
inside
of
even
body,
but
that
modularity
of
extracting
specific
values-
and
I
think
that's
one
thing-
that
the
the
system
for
the
poc
with
captain
and
tucked
on
yes,
like
the
inbounds,
is
sort
of
helping
with
that
they
know
what's
coming
over.
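The "body.jenkinsJobName"-style binding just described could be sketched as a dot-path lookup over a parsed event body. The body here is modeled as nested Maps, as a JSON parser would produce, and the field name is the hypothetical one from the discussion, not an attribute any real system defines:

```java
import java.util.Map;

// Sketch of the binding-style extraction discussed above: resolve a dot
// path such as "data.jenkinsJobName" against a parsed event body. The body
// is modeled as nested Maps; all names are illustrative.
public class BindingSketch {

    /** Resolve a dot-separated path against nested maps; null if any hop is missing. */
    public static Object extract(Map<String, Object> body, String dotPath) {
        Object current = body;
        for (String key : dotPath.split("\\.")) {
            if (!(current instanceof Map)) {
                return null; // path descends into a non-object value
            }
            current = ((Map<?, ?>) current).get(key);
            if (current == null) {
                return null; // missing field along the path
            }
        }
        return current;
    }
}
```

Because the sink is agnostic, a user-supplied path like this is one way to let them point at "that particular piece inside the body" without the plugin knowing the event's structure in advance.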
B
So
as
we
move
forward,
we
will
obviously
have
to
give
examples
of
how
to
use
this
plugin
with
different
systems.
So
at
that
point,
we'll
have
to
give
examples
of
like
what
it
would
look
like.
B
You
know
to
use
it
with
tecton
and
there
might
be
an
examples
directory
where
we
can
keep
the
stuff,
and
we
should
do
that
and
if
you
notice
here
this
is
this
part
of
the
eventing
is
not
that
dynamic
like
they
know
where
the
certain
things
are
like
the
trigger
id
the
context
and
all
so
these
things
we
need
to
be
sure
of
these.
These
are
some
things.
We
need
to
be
very
sure.
That's
why
and
that's
why.
The
format
that
we're
using
for
the
event
in
the
cloud
event
also
matters
a
lot.
B
So
yeah,
but
I
I
think
I
think
we
can
actually
take
this
conversation
ahead
at
a
later
date.
As
we,
you
know,
okay,
so
right
now
we're
just
kind
of
getting
acquainted
with
this.
So
I
know
like
there's
like
ideas
flowing
and
all,
but
slowly
as
this
kind
of
you
start
getting
used
to
these
ideas,
you
will
figure
out
that
you'll
figure
out
the
right
path
to
take.
B
So
it's
good
that
this
is
simmering
in
your
head
right
now,
but
once
you
start
working
with
the
architectures
and
stuff,
I
think
you'll
get
a
better
idea
of
what
stuff
should
look
like
what
it
should
be
because
you
you
would
have
done
enough
for
like
you,
you
would
have
seen
enough
things
to
know.
B
What
needs
to
be
done
ahead
like
for
like
right
now,
you're
looking
at
these
tecton
triggers
like
you'll
work
and
work
on
setting
them
up
at
some
point,
you'll
figure
out:
okay,
captain
inbound
outbound:
this
is
how
it
works.
B
Then
you
will
get
a
better
idea
of
like
how
the
cloud
event
side
cloud
events
plug-in
side
of
things
should
look
like,
because
users
who
are
using
these
things
once
they
switch
over
to
something
else,
they'll
they'll
have
a
similar
concept
in
their
head
and
they'll
also
be
able
to
work
with
the
same
concepts
more
easily,
and
it
will
be
easier
for
us
also
in
developing
this.
C
Yeah, that's a good idea. All right, well, thank you!
E
Thank you. Well, everyone had to go.
C
But also, thank you, everyone, and thank you, Kara.
A
All right, good meeting, guys. We'll be in contact over the week. Thank you very much for being here, and thank you very much for your work.