From YouTube: GSoC Phase I V2
Description
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
Hello everyone, my name is Shruti Chaturvedi, and I am one of the GSoC students this year, working alongside a team of some really amazing mentors on building the CloudEvents plugin for Jenkins, which aims at enhancing interoperability between Jenkins and other CI/CD tools. Before I dive into the demonstration of the plugin itself, let's talk a bit about interoperability: the need for it, and what it can look like between different systems as workloads become more and more complex.
Let's think of a real-world example: a bunch of traders wanting to do business with each other, but the catch is that each of the traders speaks and understands a different language, and they must understand each other's languages in order to do business with one another. Now, trader A hired a translator to do business with trader B, so all communications between these two traders are carried out pretty well by this translator.
So that is a direct coupling created between the two services wanting to communicate. Client plugins, adapters, agents: these are some examples of direct interoperability in the technical world. Then enter CloudEvents, a way to achieve indirect interoperability in the tech world. Think of it as if the traders had a common business language which each trader in the market has to know in order to communicate and do business. Even if they each use a different native language, they all have to know and understand this common business language so that they can do business.
This way, the need to develop explicit ways to talk with each other is eliminated, because even if the traders are speaking different languages, they all understand this one common language. CloudEvents achieves exactly that between different systems. It defines a standard specification which all the systems involved can understand, and therefore using CloudEvents removes the overhead of developing adapters and additional plugins for every service that we might want to talk with.
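As a concrete illustration (not from the talk itself): the CloudEvents 1.0 specification requires the context attributes `specversion`, `id`, `source`, and `type` on every event, with the payload carried separately in `data`. A minimal sketch of building such an event in Python; the event type string and the job data here are made-up values resembling the Jenkins events shown later in the demo, not the plugin's exact output:

```python
import json
import uuid

def make_cloudevent(event_type, source, data):
    """Build a dict following the CloudEvents 1.0 structured layout.

    specversion, id, source and type are the four required
    context attributes; everything job-specific goes in data.
    """
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),          # unique per event
        "source": source,                 # who emitted it
        "type": event_type,               # what happened
        "datacontenttype": "application/json",
        "data": data,
    }

# Hypothetical event resembling the "entered queue" events in the demo.
event = make_cloudevent(
    event_type="org.jenkinsci.queue.entered_waiting",
    source="job/test2",
    data={"displayName": "test2", "queueId": 12},
)
print(json.dumps(event, indent=2))
```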
Basically, you can think of it as allowing Jenkins to enter this market of tools where each tool speaks a different language, and using CloudEvents here is going to standardize that common language, so all of the tools end up speaking and understanding the same language of CloudEvents. The CloudEvents plugin for Jenkins allows Jenkins to be configured as a source and a sink for CloudEvents from other services, which will facilitate communication with these tools.
It makes communication super easy and also helps build complex workflows, and this plugin can be configured, as I said, as a source or a sink depending on a user's need. So in a way we are saying that Jenkins can speak and understand this language of CloudEvents: we are standardizing the way these tools communicate, we are eliminating the need to build complex pipelines, and we are allowing integration with other tools in a super easy manner. It is also pretty agnostic. All right, so with that, let's move on to the demonstration.
Here we will see all of the events coming in. We have configured Sockeye as a sink, and all of the events arriving at Sockeye will be presented here: all of the event metadata will be present inside the event attributes, and all of the event payload, or event data, will be present inside the data column. We will take a look at how we can configure a particular sink, in this case Sockeye, inside Jenkins. So let's go back to the Jenkins service that I have running here.
As I said, this is running on a Kubernetes cluster. Going back to the plugins, we already have the CloudEvents plugin installed. As it says, this plugin allows Jenkins to be configured as a source and a sink. Amazing. Let's also take a look at the global configuration: this is where we'll be configuring all of the information necessary for Jenkins to send events to a sink, that is, to configure the CloudEvents plugin with Jenkins as a source.
Saving this information and taking a look at the jobs we have configured, and the jobs we will be triggering: we'll start with the job test2. Let's look at the job. Here's the description; it's a test job. This job is parameterized, and this job also has SCM configured, so we'll also take a look at what happens when we are triggering or updating the SCM and that triggers a job inside Jenkins.
A
So,
as
I
said,
we're
pulling
the
scm,
we
also
have
another
project
that
will
be
triggered
as
soon
as
this
job
is
built,
so
we'll
be
able
to
take
a
look
at
all
of
those
events
for
test
two
and
also
the
test
job
that
gets
triggered
inside
of
the
sockeye
service.
Saving
this
information,
let's
see
if
we
see
something
interesting
and
we
did
on
the
right,
you
see
the
first
event
that
we
got
was
the
job
updated
event,
and
this
is
of
the
source
was
job
test
2..
So this is a job, the name of the job is test2, and here's the UUID alongside the event data. We have the user ID and the username (I have signed in as myself) and more information about the event itself. The information that we are seeing here is the attributes and the data. This is all CloudEvents information: the context attributes, that is the id, the source, and the type, help a sink figure out whether this is something that it wants to work with or not.
As I said, this is a standard kind of language. So whenever a sink receives any event which has these particular event metadata attributes configured, it knows exactly what they mean, and then it can filter out whether this is something that the sink wants to act on or not. The source or the type of the event is going to give information about the particular event that's being emitted from a source, and the data is more information about that particular event. It carries the information which is relevant to the kind of event that occurred, in this case a job being updated, so the data is going to look different for each of the event types.
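A sink's filtering step, as described above, can be sketched roughly like this; the type and source strings are hypothetical illustrations, not the plugin's exact values:

```python
def sink_wants(event, wanted_types, wanted_source_prefix):
    """Decide, from the context attributes alone, whether a sink
    should act on an incoming CloudEvent (given here as a dict)."""
    return (
        event.get("type") in wanted_types
        and event.get("source", "").startswith(wanted_source_prefix)
    )

# Only act on job-started events coming from jobs under "job/".
incoming = {
    "specversion": "1.0",
    "id": "42",
    "source": "job/test2",
    "type": "org.jenkinsci.job.started",
    "data": {"number": 7},
}
print(sink_wants(incoming, {"org.jenkinsci.job.started"}, "job/"))  # -> True
```

The point the talk makes is exactly this: the sink never has to parse the payload to decide whether the event is relevant; the standardized metadata is enough.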
Let's trigger the job and take a look at what happens when a job gets started. Okay, so let's build with parameters. All of the events which are emitted here, as you can see, are emitted in a sequence: as things happen, the events come out in order. The first event that happened was queue-entered-waiting, and here is the event data which is emitted whenever a job enters the queue.
We also see the type and the source: it was still the job test2 which triggered the event, and we have more information, for example the queue ID, or the duration that it was in the queue. The next types of event that got triggered were a job-started event and a job-completed event. When the job was started, we had more information about that particular build; for instance, we had the number of the build.
We had the timestamp when that build was started. Moving on, again it's happening in a sequence: first it was started, then it got completed, and then, as I said, since this had another job configured that it's going to build, the test job started building as soon as test2 got completed. Going back to the dashboard.
Let's also take a look at the test job here. This is the test job which got triggered, and here's some more information about it: the test job entering the queue, that is, the queue-entered-waiting stage, and as soon as the test job started building, here's information about it. We have the display name, we have the URL; we don't have an SCM state here, because this job is not configured with an SCM.
So it's polling the SCM; let's be patient for a second, and we'll take a look at the test2 job. That's what we're hoping for: test2 first entering the queue, then the whole sequence happening, and then test2 triggering another job, which is the test job. We should be able to take a look at all of those events happening here.
Let's switch over to Jenkins, okay, and wait for a second. Let's hope that it does happen fast.
A lot of the time, there are things we want to make sure of. Okay, so, as you can see, something got entered in the queue. Sometimes we do want to make sure that our SCM is also configured right, and that the sink information that we are entering is right: it's an HTTP sink, so we want to make sure that we are only sending the kind of information which is relevant here.
So, as you can see, test2 entered waiting, it left the queue, and then it started, and it has information about all of that. Moving on, here it's going to have information about the SCM: the branch, or the commit ID. Amazing. And as soon as the job test2 was completed, we also had the test job started.
So this is what it's going to look like, all in a sequence, with event information specific to the particular kind of event that's emitted. Obviously, the event metadata keys are going to remain the same, but the values themselves are going to change, and they're going to contain the information which is relevant to that particular event. So any sink which is receiving these events will be able to filter them based on the event metadata.
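Since the demo's sink is an HTTP sink, here is a minimal sketch of what such a receiver could look like. This is an illustration, not Sockeye or the plugin's actual code; it assumes events arrive in CloudEvents binary content mode, where the context attributes travel as `ce-*` HTTP headers, and it filters on a hypothetical `org.jenkinsci.` type prefix:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # events the sink decided to keep

class CloudEventSink(BaseHTTPRequestHandler):
    def do_POST(self):
        # In binary content mode, context attributes travel as ce-* headers.
        attrs = {k[3:].lower(): v for k, v in self.headers.items()
                 if k.lower().startswith("ce-")}
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # Filter on the metadata alone, as described in the talk.
        if attrs.get("type", "").startswith("org.jenkinsci."):
            received.append({"attributes": attrs, "data": json.loads(body)})
        self.send_response(202)
        self.end_headers()

    def log_message(self, *args):  # keep the demo output quiet
        pass

def run(port=8080):
    """Serve the sketch sink on the given port (blocks forever)."""
    HTTPServer(("localhost", port), CloudEventSink).serve_forever()
```

Calling `run()` would let you point a CloudEvents source at `http://localhost:8080/` and watch matching events accumulate in `received`.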
So this was Jenkins as a source, and here is what we want to do for Jenkins as a sink: a service similar to Sockeye, where we are giving users the ability to add filters to the events which are coming in, and then, based on those filters, trigger specific actions. So if an event is being emitted from inside of Tekton, we can specify that we only want to listen to the event type where a pipeline was updated inside of Tekton.
That is what we want to do for Jenkins as a sink. So this was phase one, and that's our plan for phase two. Thank you so much for your time. We are still working on building this out, and also on testing and integrating Jenkins as a source and as a sink. If interoperability between different systems, and standardizing the way systems communicate, is something that you're interested in, we're looking for feedback on both Jenkins as a source and Jenkins as a sink, so please share your opinions and your feedback with us. Again, thank you so much; I hope this was helpful.