From YouTube: SIG Interoperability Meeting - Sept 16, 2021
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
A: Yeah, not necessarily, but making it rather easy for teams to switch also.
B: I guess, while we're waiting, y'all can add your names to the HackMD doc.
B: We'll first just go over some of our action items and ongoing discussion topics, and then we'll have your presentation, leaving the rest for last.
B: Actually, can I ask you to share, Fatih? Is that all right?
B: Okay, so we'll start with our action items. Welcome to the Interoperability SIG, everyone. We have Shruti, who is going to give a presentation, and a demo if there's time for the demo: a presentation on the CloudEvents plugin for Jenkins and improving interoperability for Jenkins in the cloud. So that should be great; it's really exciting. But first we'll go over some of our action items and ongoing talks. So, action items: we have Matty to start a new discussion on other metadata standardization efforts.
B: That's great. I actually had some questions myself on how this relates to SPDX becoming a recognized standard for software bills of materials, and how this would bolster all the associated technologies. I don't know if you want to say something about that.
E: Yeah, I mean, based on what I read in the standard, it seems pretty orthogonal to SPDX. SPDX tends to define some attributes for the files that could be in a software release or a software package, but in-toto defines a way to describe how different parties can check each other's work using cryptography: how that stuff was produced, and whether the output is well formed, and things like that. So they seem pretty complementary.
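To make the comparison above concrete, here is a minimal sketch of in-toto-style "link" metadata: a record of who ran a supply-chain step, what went in, and what came out, which other parties can then verify. The field names follow the in-toto link layout, but the step name, command, and file contents are invented for illustration, and a real link would also carry a signature.

```python
# Sketch of in-toto-style "link" metadata: records how an artifact was
# produced (step, command, inputs, outputs) so that downstream parties can
# verify each other's work. All concrete values here are hypothetical.
import hashlib
import json

def digest(data: bytes) -> str:
    """SHA-256 hex digest, as used for material/product hashes."""
    return hashlib.sha256(data).hexdigest()

source = b"print('hello')\n"      # hypothetical input file contents
artifact = b"compiled-bytecode"   # hypothetical output contents

link = {
    "_type": "link",
    "name": "build",                            # the supply-chain step
    "command": ["python", "-m", "compileall"],  # what was actually run
    "materials": {"hello.py": {"sha256": digest(source)}},
    "products": {"hello.pyc": {"sha256": digest(artifact)}},
}

# A verifier can recompute the hashes and compare them against the link;
# a full in-toto layout additionally signs this structure.
print(json.dumps(link, indent=2))
```

This is what makes it orthogonal to SPDX: SPDX describes the contents of a release, while a link like this describes the step that produced it.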
B
Yes,
definitely
it's,
I
think
it's
great
news,
because
the
more
standardized
this
becomes,
the
more
that
it's
clear
how
people
should
approach
this
is
is
really
really
useful
for
any
practitioners.
It
really
helps
to
just
remove
a
set
of
questions
and
make
the
best
path
easier
for
people.
So
that's
good.
How
can
we,
as
I
said,
move
forward
your
discussion
and
what
should
we
be
focusing
on.
E: I think maybe it would be helpful if we could get some perspectives on whether the standard is well suited to different CD- or CI-related tasks, and how people could see it fitting into problems that they've seen. I'm not much of a cryptography expert.
E: So it's kind of hard for me to make a judgment call on what's missing, what's good, or what could be improved there, or if there are any use cases that are important for CD that just haven't been considered or addressed in that standard.
E: Yeah, or if there are any use cases that seem important in that space that just aren't addressed at all.
D: Well, I actually forgot to comment on the discussion you opened, Matty. I have been looking at in-toto, and I found the plugin created by the in-toto... what was it, Santiago?
D: I also found a GitHub Action for automatic generation of SPDX bills of materials, so it looks like different people are trying to bring these things into the CI/CD domain. But there doesn't seem to me to be too much, I'll say, cross-collaboration going on: in-toto is doing something, SPDX is doing another thing, and we are talking about these things here within our group.
B: Okay. And we've kind of touched on SPDX becoming an internationally recognized standard for software bills of materials, which is pretty exciting and good news. Version three of that is in progress right now, and so again we're just looking at the question: what could we, as a SIG, contribute to that effort?
G: Yeah, can I add some context for Jenkins? We were working on the Snyk security integration for Jenkins, which is a bit tricky because of our packaging format. The agreement there was that we would be producing an SBOM and uploading it directly to Snyk servers, and as part of that I created basically an SBOM XML generator.
D: I think, again, as a user, when I think about all this SBOM stuff: we all use some CI/CD tool, Tekton or a major orchestrator or whatever. Since this supply chain and SBOM stuff became pretty hot lately (it has always been important, but it's more obvious now, with the White House executive order and the NTIA releasing white paper after white paper), I think, as a user, this should somehow be built in, de facto, into the CI/CD technology.
D: So users don't need to spend a lot of time learning what the supply chain is, what a bill of materials means, or how much effort they should put into getting this thing up and running. And I don't know, it's like you need a bill of materials for the bill-of-materials tools to get started. You know, there are different tools, like SPDX, becoming...
D: So that's kind of my feeling after spending some time reading all these things. So, how can we make things easier for, and I don't mean anything bad when I say "average developer", how can we make these things easy for developers to use? So that they become part of their work, rather than being enforced by some security person within their organization, and so on.
G: So for that there are two parts: first, generation of SPDX, and second, consumption. Generation can be done relatively easily, because there are not that many developer tools on the market; there are common tools like Maven and Gradle, only a limited number of ways to build projects, and SPDX generation could be integrated there out of the box. Basically this is what we want to do for Jenkins plugins: just on our platform here, you can take my Maven plugin or whatever and generate this SPDX definition, or another one.
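The kind of out-of-the-box generation described above can be sketched in a few lines: a build tool that already knows its dependency list emits an SPDX-style document. The top-level field names below follow the SPDX 2.x JSON layout, but the project name and package list are invented for illustration; a real generator would fill in licenses, checksums, and relationships as well.

```python
# Minimal sketch of an SPDX-style SBOM that a build tool could emit from
# its resolved dependency list. Top-level fields follow the SPDX 2.x JSON
# layout; the project and dependency values are hypothetical.
import json

def make_sbom(project, dependencies):
    """Wrap a resolved dependency list in a minimal SPDX-style document."""
    return {
        "spdxVersion": "SPDX-2.2",
        "dataLicense": "CC0-1.0",
        "SPDXID": "SPDXRef-DOCUMENT",
        "name": project,
        "packages": [
            {
                "SPDXID": "SPDXRef-Package-" + d["name"],
                "name": d["name"],
                "versionInfo": d["version"],
                "downloadLocation": d.get("url", "NOASSERTION"),
            }
            for d in dependencies
        ],
    }

# Hypothetical dependencies a Maven-style build might have resolved.
sbom = make_sbom("my-jenkins-plugin", [
    {"name": "commons-lang3", "version": "3.12.0"},
    {"name": "jackson-databind", "version": "2.13.0"},
])
print(json.dumps(sbom, indent=2))
```

The point of integrating this into the build tool itself, as discussed above, is that the tool already has the authoritative dependency list, so the developer gets the SBOM for free.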
D: If you start thinking about what else we, as users, need to be careful about when it comes to consuming dependencies, you start with the SBOM, hoping that the SBOM will give you some kind of starting point, at least telling you what you are bringing in. So: you know what you are bringing in; what you do with it, like whether you include it in your product development or not, is the second step. But the very first step is simply to have that SBOM in place, which is not the case.
D
Unfortunately,
yes,
some
communities
start
adopting
that
at
least
they
are
generating
some
kind
of
s-bomb,
but
for
majority
of
the
communities,
it's
not
there
and
just
generating.
That
is
a
problem
and
the
other
thing
like
it's
tooling
technology
problem.
For
example,
again
I've
been
looking
at
spdx
a
lot.
There
are
a
few
tools
that
are
supporting
different
programming,
languages
and
frameworks,
and
you
know,
package
managers
like
mainland
go
and
whatever,
but
those
two
they
don't.
D
You
hit
that
problem.
One
business
example
I
faced
was
like
we
analyzed
same
primary
force
with
two
different
tools:
one
tool
reporter
65
dependencies,
the
other
two
reports
400
dependencies,
so
which
one
is
correct.
According
to
both
of
them,
like
both
the
two
developers
they're
each
correct.
It's
kind
of
you
know
it's
not
just
genetic
exponent,
but
just
identifying
what
are
those
secondary
dependencies.
D: SPDX, again: they host some tools in their SPDX organization, and this SPDX SBOM generator is one such tool. But there was a discussion about not doing that, because if they go down that path, some people may think those tools are blessed by SPDX. Currently SPDX is there for the spec, and this discussion was on their mailing list, so they are talking about not hosting tools under SPDX.
G
They
totally
do
they
approach
also
wooden
host
tools,
but
the
specification
could
be
more
explicit
what
expected
from
the
listing,
because
yeah,
because
they
want
to
show
60
another
shows
400
dependencies
and
both
of
them
are
right.
The
problem
is
rather
than
the
specification,
because
it
wasn't
explicit
enough
to
say
what
needs
to
be
displayed.
D
But
is
it
spdx
problem?
Actually
I
think
it
goes
before
then
that,
because
spdx
is
just
like
sp
x
gives
you
certain.
You
know
way
of
creating
s-bomb
with
you
know,
references
and
you
know
these
relationships
and
filing
for
licensing
and
so
on.
But
what
gets
important
that
final
report
comes
from
the
you
know,
package
manager
and
by
after
the
dependencies
are
analyzed
by
some
tool
before
you
actually
start
generating
spda
access
place
like
even
before
spdx.
G: Yeah, right, but we can invert it. I'm a tool developer reading the SPDX specification: how do I understand what exactly I should produce as output? For me it's a real problem, because it's exactly what I'm working on at Snyk right now; it's just not SPDX, it's another JSON format. Today we just list all dependencies at the library level, but if I develop a tool which produces an SPDX-compatible document, as a tool developer I need enough information to understand what I need to produce.
D: Sorry for the lengthy discussion. I kind of got confused when I started comparing all these different things with each other; it's very difficult to just pick and choose one. Again, I'm not after picking one over the other, and I was hoping that when I ran these things I'd get pretty identical output, but that wasn't the case.
B: Okay. Any more comments or things to bring up for the SPDX and in-toto discussion?
H: So when Kate was presenting, she talked about, I think it was, pedigree and provenance: including build information into it. Has anybody heard anything more about that discussion?
H: Yeah, I didn't see any updates on it, or answers, but I was wondering if there was anything more on that part or not. I'll take that as a no, then.
E: Yeah, I was just wondering if maybe it would be a good idea to reach out to Kate offline to see what she thinks the next step is, since she kind of started the document. But I think it would also be helpful to have more input on the discussion from different folks: either agreeing or disagreeing about whether some of these additions are useful, or maybe adding suggestions and new ideas.
D: Related to that, actually, I bumped into a thread happening on the SPDX mailing list. If I can find it... I don't know, the SPDX groups. They are thinking of having some kind of more flexible approach; I don't know the exact words they're using, but they're thinking about some kind of profile thing, depending on the use case, and build could be one of the...
D
Sorry,
anyway,
they
were
talking
about
some
profiles
and
so
on,
but
yeah.
I
think
it's
right
time
to
bring
these
things
to
spdx
comment.
While
they
are
working
on
version
three,
that's
so
what
I
was
trying
to
say.
B: So, are you involved in that work, Matty, or anyone else?
D: So I'm wondering if it makes sense to reach out to Santiago as well with regards to input, because Kate, during one of our meetings in the past (it was April or something), talked about SPDX. Maybe, if people like Matty find it useful, we can try inviting him to one of our upcoming meetings, to hear more about in-toto and perhaps ask questions.
B: Sure. And actually, looking at these action items, I was so excited about the SPDX and in-toto discussion that I skipped forward right into it. So, Fatih, you were to look at integrating GitHub discussions with Slack. How is that going?
B: Good. Any other comments or things to bring up for the in-toto and SPDX discussion? Then we can move forward.
B: Cool. We should look at the Interoperability SIG roadmap; it would be great if everyone looked at it. I feel like it should evolve a bit from where it is. Does anybody have anything they'd like to say about that now?
B: Please jump in and say how you think the roadmap should be evolving. It seems it naturally is; we have different new focuses. But if anybody's thinking about anything in particular, you can jump in now.
D: I was thinking about this, actually. Maybe we can mark some things as done where they've become part of our day-to-day work within the SIG, like knowledge transfer. I think today's presentation from Shruti will be our 25th presentation or something, so it's happening, and we should continue doing it, but at the same time it would be good to say we have been doing this already, so we kind of hit our aim. And we are collecting end-user requirements.
D: I think we had some references to events, so we can consider ourselves successful there, given that we now have a SIG for events doing cool stuff. I think we can capture those things and mark them as done, and add new areas. Supply chain seems to be a natural topic to look at, based on past conversations and on the original discussion that started with standardized metadata.
D: So that's kind of how we can evolve this. In addition to that, for example, the Shipwright project joining the foundation was good to see, because originally our focus was mainly on CI/CD orchestration tools, like Spinnaker, Tekton, Jenkins, Jenkins X and so on. Given that the foundation is broadening its reach to other frameworks and other areas within the CI/CD domain, like builds and tests and so on, perhaps we can...
D: Yeah. I don't know how far we could go with standardization, or what standardization means to different people. At least, and again this ties back to the metadata thing, when a test framework produces some kind of result, or at least an "I am done, tests are successful", that information gets sent back to the orchestrator, for example, and then to the developer.
D: Yeah, I think we can, you know, since the Events SIG is there, drop those. I don't know if "dropping" is the right word, but at least say this work is already being driven by the Events SIG, and close collaboration between the Interop SIG and the Events SIG is part of our roadmap. We will not be doing these things ourselves, the events stuff, but we will be talking and working together. That's the way we can perhaps talk about it in our roadmap.
D: Yeah, what I was trying to say is that some of these things can be achieved by other means, not just events; but some of them are, I think, exclusive to events. So we can at least point to them, and we'll see later what we should drop.
B: So for the roadmap, it's really a matter of drawing the lines of the different work streams a little differently. We have regular ones, like knowledge sharing; some work streams, like event standardization, have really been taken over as their own subject area by the Events SIG and are less on our roadmap; and then we have a new developing focus on SPDX, and quite a lot on metadata, which should probably be in here.
D
Yeah,
that's
like
that's,
been
what
we've
been
talking
about
since
beginning
of
this
year.
Yeah
and
the
other
thing
is
like
policy
is
not
here,
so
policy
could
be
made.
Part
of
the
roadmap
continue
digging
the
domain
a
bit
more
because,
like
communities
start
talking
about
policies,
they
want
cb,
ci,
cds
and
so
on,
like
since
we
start
talking
about
these
things,
we
should
continue
talking
about
these
things.
Maybe
someone
comes
and
starts
a
policy
which
could
be
a
big.
You
know
win
for.
D
Well,
I
can
take
an
action
and
send
a
pull
request,
heavily
removing
event
stuff
and
referring
to
event
seek
and
adding
some
headings
for
metadata
or
supply
chain
in
general,
perhaps
and
then
others
by
chain.
Xbomb
is
one
thing
that
is
late
to
metadata
and
so
on
and
policies
kind
of
relate
to
supply
chain
as
well
because
like
if
you
have
some
dependency
within
your
s
bomb
with
normal
durabilities,
your
policy
should
block
that
progressing
further
within
your
pipeline
and
so
on.
So
it's
kind
of
it
looks
a
bit.
B: Good, all right. I'll work with you a little bit on the roadmap; we can do that async, outside of this meeting. Any other commentary on the roadmap? Would anyone else like to add something?
B: Okay, great. I'm super excited, because Shruti is here to present her work on Jenkins interoperability with CloudEvents and her work on the CloudEvents plugin for Jenkins.
C: Yes, thank you so much, Kara. I'm super excited to be here and to share the work. I'll go ahead and share my screen.
C: So this is the CloudEvents plugin for Jenkins, and of course the purpose behind it was to enhance interoperability between Jenkins and, primarily, CI/CD tools which are already using CloudEvents as part of their event-driven architecture. This has been developed as a GSoC project alongside Kara and other members of the CDF community, also taking inspiration from some of Andrea's work on the Events SIG PoC with Tekton and Keptn.
C: This is the repository of the plugin, and there is also an article on interoperability. I feel like you all are not going to need it, because it really talks about interoperability: the need for it in CI/CD systems, direct and indirect interoperability, and how the Jenkins CloudEvents plugin eliminates the need for maintaining external tools for talking and working with other CI/CD systems which are using CloudEvents.
C: So again: the CloudEvents plugin, and direct versus indirect interoperability. We have multiple tools in a CI/CD workflow; as you know, workflows are very complex these days and they do tend to have multiple tools. It becomes really hard to communicate, because each of them is speaking a different language; essentially each of them has a different payload for what its events look like. And so, with the CloudEvents plugin, we wanted to integrate CloudEvents inside of Jenkins.
C: Part of this, scrolling back down to the article and looking at one of the PoCs, is something that was taken as inspiration from Andrea's and the Events SIG team's work on the PoC with Tekton, Keptn, CloudEvents and also Knative. We were working alongside this and developed a PoC for the CloudEvents plugin, and we will take a look at that in a bit.
C
I
will
move
on
ahead
with
so
the
plugin
allows
users
to
configure
jenkins
as
a
source
and
a
sync
going
to
emit
and
consume
cloud
events,
and
so,
as
I
said,
this
was
a
gsap
project.
In
the
first
part,
we
made
jenkins
as
a
stories,
and
I
started
working
on
jenkins
as
a
sync,
so
for
jenkins
as
a
source.
Again,
the
idea
is
getting
a
url
for
the
sync
and
choosing
all
of
the
events
which
will
be
sent
over
to
the
sync
via
http.
C
So
why
use
the
cloud
events
plugin
for
jenkins,
a
obviously
the
big
thing?
Is
it
standardizes
communication
between
jenkins
and
other
ci
cd
system,
allowing
that
indirect
and
easy
interoperability,
where
you're
not
needing
to
maintain
external
tools,
to
talk
with
every
other
service
which
is
in
that
pipeline?
But
we
are
just
using
that
one
common
language
for
events,
all
the
tools
can
understand.
C
Obviously
the
end
tool,
or
the
sync
whatever
the
sync
is-
is
going
to
be
needing
some
sort
of
configuration
on
its
end
to
figure
out
how
it's
going
to
treat
that
particular
event
similar
to
how,
if
you're
communicating
with
a
person
using
the
same
language
that
we're
understanding
you
know,
each
person
will
will
like
communicate
or
do
a
task
specific
to
what
was
said
to
a
similar
idea
here.
C
And
we
can
build
complex,
end-to-end
pipelines,
extending
multiple
ci
cd
systems,
which
use
cloud
events
without
needing
any
extra
tools
or
without
needing
any
extra
efforts,
except
obviously
for
the
sink
to
see
how
it's
going
to
need
that
or
how
it's
going
to
use
that
particular
event
or
events
which
are
coming
out
from
jenkins
and
integrate
other
systems
with
jenkins,
and
they
loosely
couple
scalable
into
agnostic
manner.
Again
we're
removing
that
tight
coupling
by
creating
us
like
creating
a
way.
C: Again, because we can move ahead adding a lot of different tools into our pipeline, into our workflow, in a way that's tool-agnostic, and we're eliminating the need to maintain tool-specific adapters for communicating with systems. So direct interoperability, having a particular per-tool plugin or adapter, is eliminated, because it's a common language and a common event format which gets emitted from Jenkins and also consumed by Jenkins, which other tools can easily use.
C: So we struggled at releasing the plugin for a bit, but the good news is it's now released; the first iteration is out, and we'll take a look at how the UI and the configuration look right now. We did the design in two different iterations. In the first one, the Jenkins CloudEvents plugin UI for Jenkins as a source was under the Jenkins global config. Let's go back to the dashboard.
C: So we're looking at the global configuration, to make sure that it's going to apply across all of the global settings, and then we are choosing the sink type. This was tested out with a Kafka sink as well, and that works too, but here we're just choosing the standard HTTP sink. And this is the broker URL, which is taken from the PoC that we'll look at in just a bit, and all of the events which are supported right now.
C: So we have job events and node failures, and we're willing to add events for test suites and whether a test failed or passed; there was a question that came up during one of the Events SIG meetings, and I think it's a really good idea to implement that inside of Jenkins, as we'd spoken about.
C
So,
yes,
the
type
of
events
were
already
looked
at
and
the
structure
of
the
events
which
is
going
to
be
sent
by.
So
this
is
how
the
structure
is
going
to
look
like
for,
let's
say
the
queue
event.
We
have
standard
cloud
events
metadata
the
spec
version
id
the
type
source
and
the
event
data.
Obviously
the
event
metadata
payload
is
going
to
look
different
for
each
kind
of
event
and
again
it
is.
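The envelope just described, standard CloudEvents metadata wrapping event-specific data, can be sketched as plain JSON. The metadata attribute names (specversion, id, type, source) come from the CloudEvents spec; the type string, source URL, and data fields below are illustrative stand-ins, not the plugin's exact schema.

```python
# Sketch of a CloudEvents envelope like the one described above: standard
# metadata (specversion, id, type, source) plus event-specific data. The
# type string, source, and data fields are hypothetical, not the plugin's
# exact schema.
import json
import uuid

def make_cloud_event(event_type, source, data):
    """Build a CloudEvents-style JSON envelope around event data."""
    return {
        "specversion": "1.0",     # CloudEvents spec version
        "id": str(uuid.uuid4()),  # unique per event
        "type": event_type,       # lets sinks filter on the kind of event
        "source": source,         # the emitting Jenkins instance
        "datacontenttype": "application/json",
        "data": data,             # payload differs per event kind
    }

queue_event = make_cloud_event(
    event_type="org.jenkinsci.queue.entered_waiting",  # hypothetical type
    source="http://jenkins.example.com",
    data={"displayName": "my-build-job", "status": "WAITING"},
)
print(json.dumps(queue_event, indent=2))
```

Because the metadata attributes are uniform across tools, a sink only needs to understand this one envelope, while the per-event differences stay inside `data`.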
C: It is up to the sink to decide how it's going to work with any event data that's emitted. And the second iteration was when we started having questions about that.
C: What we decided on was using a Knative CloudEvents broker, the Knative default broker, which listens for CloudEvents from Jenkins. We also had a Knative trigger, to test how filtering based on attributes, on metadata and event data, is going to work, and that goes directly on to the Tekton trigger, which triggers a TaskRun inside Tekton. So I'll move on, showing some files here.
C: There are a few files, and all of them again take inspiration from the Events SIG PoC, the Knative default broker; I'm just showing the two or three files that are relevant to this particular PoC with the CloudEvents plugin.
C: So here is the trigger for Knative: the Knative CloudEvents broker and the Knative trigger right here. This is the file for that, where we're specifying a filter on the CE type, this particular attribute. We just wanted to test whether filtering can work as well, because this is also an important concept for Jenkins as a sink; I just wanted to test it out, and that's why we're implementing filtering on the CloudEvent metadata.
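The filtering being tested here can be modeled in a few lines: a Knative trigger filter does exact matching on CloudEvents attributes (such as `type`) and only forwards events whose attributes all match. The attribute values below are illustrative, not the PoC's actual ones.

```python
# Rough model of Knative-trigger-style filtering: exact match on
# CloudEvents attributes; only matching events are forwarded to the
# subscriber. The event types shown are hypothetical.

def matches(event, trigger_filter):
    """True if every attribute in the filter matches the event exactly."""
    return all(event.get(k) == v for k, v in trigger_filter.items())

# Forward only queue "entered waiting" events to the Tekton trigger.
tekton_filter = {"type": "org.jenkinsci.queue.entered_waiting"}

events = [
    {"type": "org.jenkinsci.queue.entered_waiting", "source": "jenkins"},
    {"type": "org.jenkinsci.job.completed", "source": "jenkins"},
]

forwarded = [e for e in events if matches(e, tekton_filter)]
print(len(forwarded))  # only the first event passes the filter
```

This is also why well-chosen `type` strings matter: they are what sinks key their filters on.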
C: So again, this is one of the definitions for the Tekton triggers, where we're extracting information from the header, the CloudEvents metadata, and from the CloudEvents body: the CE type right here, and the display name for the particular kind of job. Since we're looking for "entered waiting", we're talking about a specific job inside of Jenkins, and this is going to be passed as parameters into the TaskRun, where we're just echoing it out to test...
C
If
the
entire
system
is
going
to
work
or
not
so
go
go
back,
and
also
so
here
we
have
the
the
using
default
broker.
We
didn't
test
this
out
with
a
craft
card
broker,
but
I'll
just
use
a
default
broker
and
configuring.
So
the
broker
is
already
added
and
I'll
just
save
this
information
and
all
of
these
events
will
be
sent
over
to
the
k
native
broker.
C: Over to the Tekton dashboard that I have running. This is where we can see all of the payload information that we sent, and whether the parameters were being used correctly. So this is the job name that we set, and the event type; we have the job name and the event type right here. And again, the idea is to see, for Jenkins both as a source and as a sink...
C
What
kind
of
filters
can
we
set
on
event
body
on
event
metadata,
and
how
can
we
pass
that
information
between
different
ci
cd
systems?
So
if
I
had
another
sync
inside
of
this,
if
I
had
like
captain,
it
would
be
a
very
similar
process
of
sending
events
over
just
specifying
another
another
thing
to
use
and
I'll
move
back
to
the
poc
just
in
just
for
one.
Another.
C
Second,
is
so
so
this
the
same
thing
can
also
work
alongside
having
multiple,
multiple
sort
of
tools
which
are
also
defined
as
async
for
jenkins,
and
also
we
can
reverse
this
and
test
this
out
with
when
jenkins
is
the
sink
has
been
developed,
which
is
definitely
worse
and
should
be
getting
to
it
pretty
soon
and
yeah.
So
again,
going
back
to
the
questions.
Why
did
we
choose?
The
k
native
broker
was
again.
We
wanted
to
implement
that
transient
fault,
tolerant
way
and
canada.
C
The
k-native
broker
really
helped
us
with
that
and
implementing
the
retry
strategy
and
implementing
asynchronous
communication,
making
sure
that
all
of
the
events
are
not
being
lost
and
being
sent
to
to
the
same
as
they
are,
even
if
they're,
not
as
they're
happening,
but
also
maybe
later
on,
and
did
we
achieve
our
goals,
building
a
tool,
agnostic
system.
So,
as
I
said,
we
tested
this
out
with
captain
and
also
we're
having
a
kafka
broker.
So
it
did
work.
C
So
it
seems
like
a
positive
effort,
and
that's
really
amazing
and
obviously
would
love
to
hear
your
guys's
thoughts
and
opinions
and
hears
again
a
big
thank
you
to
cdf
and
g-stock
for
an
amazing
summer
and
really
amazing
project,
and
this
was
really
fun
building
but,
more
importantly,
it
became
really
all
into
perspective,
and
we
designed
this
psc
and
built
out
with
not
just
jenkins
but
other
tools
which
are
using
cloud
events
and
just
understanding.
How
was
the
scope
of
of
interoperability
with
cloud
events
is
well
stop
sharing
here.
C: Thank you so much; it was so much fun developing it, and working alongside all of you has been really amazing. It's just such a big scope, and I was realizing it every time I was working: this is so vast and so huge and so fun.
F: Nice presentation, thanks. I do have one question; I guess I missed a bit of that part. I understood that the plugin can be used for Jenkins to act as a sink as well as a source; is that correct?
E: Thanks for the presentation. I was curious: when you're using Jenkins as a source, how did you decide to go about representing the different events that are internal to Jenkins as CloudEvents? Was there a standard that you followed, or was it something you had to come up with on your own?
C
Yeah,
that's
actually
a
really
great
question,
and
so
for
just
designing
events.
I
think
a
lot
of
it
came
from
existing
in
plug-ins
and
how
they
are
structuring
their
events,
especially
for
you
know,
if
you
have
a
github
kind
of
event
where
something
isn't
published,
so
you
want
to
do
something
or
a
notification
status
plugin.
That
was
also
one
of
the
plugins,
so
taking
a
mix
of
inspiration
from
all
of
those
plugins
and
making
sure
that
it's
representative
enough.
C
So
if
another
ci
cd
system
depends
on
a
particular
kind
of
information,
it
should
be
getting
that
information
so
also
working
with
the
entire,
like
the
team
for
the
plug-in
and
deciding
on.
If
this
you
know,
structure,
if
this
metadata
of
this
data
is
enough
or
if
another
cicd
system
is
depending
on
it,
would
they
be
able
to
go
off
of
it
and
build
the
information
that
they
need.
E
Yeah,
that's
really
interesting
to
hear
it
seems
like
it
could
be
kind
of
a
good
input
to
the
kind
of
the
events
interest
group.
C
Yeah,
I
think
that's
a
great
idea,
because
we,
I
think
I
definitely
was
looking
for
if
there
was
a
standard
of
what
can
we
send
over.
What
does
another
system
expect?
How
how
can
we
make
sure
that
the
other
system
gets
the
exact
information
that
it's
going
to
need?
But
I
think
for
now
is
just
looking
at
okay.
What
can
we
extract
from
this
event
or,
for
example,
if
it's
a
q
entered
waiting
event,
what
all
information
can
we
extract
and
just
like
putting
it
all
together
and
then
send
it
over?
E
And
just
kind
of
following
up
on
that,
like
did
as
part
of
your
project,
did
you
end
up
creating
any
documentation
of
like
what
the
what
the
event
formats
are
or
like
what
the
different
fields
that
you
added
were,
or
is
it
like
the
best
way
to
learn
about
that,
would
would
that
just
be
to
kind
of
look
through
the
code
and
figure
out
how
each
product
is
being
handled.
C
So
for
how
each
event
is
emitted,
like
the
the
payload
itself,
all
that
particular
information
is
inside
of
the
the
github
repository
and
also
where
the
plugin
is
available.
C
But
in
in
terms
of
you
know,
what
kind
of
events
are
I
mean
also,
that's
the
information
that's
on
there,
but
we
are
still
sort
of
like
looking
into
implementing
other
event
kinds
inside
of
jenkins,
like
whatever,
like.
Whatever
is
internal.
You
know
like
test
passing
and
all
of
that
kind
of
event.
C
So
for
that,
like
while
it's
in
development,
I
think
we'll
still
have
to
sort
of
look
through
the
code
to
understand
okay,
what
needs
to
be
done,
but
what
what's
already
published
and
what's
actually
available
and
the
event
type
and
decline
and
everything
that's
on
the
repository.
C
I
I
think
because
it
has
like
an
event
metadata
and
event
data
structure.
Each
of
them
is
very
different
for
the
kind
of
event,
so
I
can
share
my
screen
again.
C
So,
for
example
like
if
you
have
a
q
entered
waiting
event,
the
event
metadata
event.
Data
looks
very
different
than
how
a
build
event
is
looking
like.
So
you
know
like
it
has
information
about
a
build
and
scm
and
all
of
that
stuff,
whereas
the
q
event
and
metadata
is
looking
different
and
the
reason
behind
putting
this
here.
C
It
was
also
taken
from
one
of
the
candidates
service
for
sockeye,
and
it
was
helping
us
visualize,
and
it
was
just
understanding
that
any
service
which
is
being
built
on
using
cloud
events
should
be
able
to
put
or
set
filters,
so
it
should
know
what
it
is
going
to
get.
I
think
that
was
inspiration
behind
putting
all
of
that
information
about
events
here.
B: I'm just adding the link to our HackMD docs for the plugin. I realized that we had two links to the article, not the actual plugin, which is kind of too bad, but the source is there now.
B: Awesome. Any other last questions before we wrap up?
B: Okay, great. Thank you once again, Shruti; that was a fantastic presentation and great work. And for me it's really nice, because working on this project and supporting Shruti has really been great: the Events SIG has been really supportive, and what we've talked about in the Interop SIG, and the wider CDF community, has fed into it. That has been a really, really great experience for me and for the entire project. So thank you all.