From YouTube: CDF GSoC 2021 Final Presentations
Description
CDF GSoC 2021 Final Student Presentations
Recorded on Aug 24 and Sept 2, 2021.
Intro: 0:00
Presenting the CDF: 1:13
Jenkins intro: 4:13
Spinnaker intro: 6:02
Demo list: 6:27
Git credentials binding for sh, bat and powershell: 7:08
Conventional Commits Plugin for Jenkins: 17:18
try.spinnaker.io: 31:04
CloudEvents Plugin for Jenkins: 43:13
Security Validator for Jenkins Kubernetes Operator: 1:04:07
Jenkins Remoting Monitoring: 1:18:37
A
Today, we're going to be talking about the projects our students worked on, but first we're going to have an introduction to the Continuous Delivery Foundation. Then our students are going to present their work. This will be followed by questions and answers. Links to the phase 1 demo slides and phase 2 demo slides are provided here.
B
So the Continuous Delivery Foundation was founded by Google, Netflix and CloudBees back in... I want to say 2018, 2019; it's a little early in the morning for me, sorry, I don't remember the exact dates. It's a sub-organization of the Linux Foundation with the purpose of furthering the development tool stack as it relates to, in particular, continuous delivery, and helping drive industry standards.
B
We became a sponsor for Google Summer of Code last year for the first time. The Jenkins project has a long-standing history with Google Summer of Code, so the foundation joined in along as well, and we've had students also contribute to their projects. So it continues to be an area where we love to see investment. Next slide.
B
Next slide, yeah. So it's our second year, as I said. We had 22 project proposals and were able to accept six projects. This year we had three to four mentors per project. It's really great; the various projects get really excited to support the students in their efforts.
B
Additional projects that can be considered include Jenkins X, Tekton and Screwdriver, as well as Ortelius, and next year we will have another new project online for students to take a look at, which is Shipwright. So we're expanding the number of projects that are available, and hopefully you can spread the word amongst your friends who want to apply with student proposals next year. We will hopefully have a plethora of opportunities.
B
I am pleased to report that all the students passed their midterm evaluations. I haven't seen the latest results yet for the final, but it has been a very successful Summer of Code season. Next slide.
B
Hopefully, all of you are aware that you have the ability to record a lightning talk. It could be a talk that you recorded for this, or another variation, to submit to DevOps World. We have reserved time there, so you have a chance to see your project at an industry event, which we hope you will choose to join.
C
I would say a few things, if you don't mind. Of course, yeah. First of all, thanks to all six students working on the Continuous Delivery Foundation GSoC projects, and thanks to everyone who was working on Jenkins; we had five great projects.
C
Four of them are focused on Jenkins in the cloud and on cloud deployments. If you take a look at these presentations, so CloudEvents, remoting monitoring with OpenTelemetry, and the security validator for the Jenkins Kubernetes Operator, all of them strengthen Jenkins' position in cloud environments, and this is exactly what we need for the project. All of these projects are an important part of the Jenkins roadmap.
D
So, under the project overview: the project involves extending the Credentials Binding plugin to create custom bindings for two types of credentials, which are username/password and SSH private key. These bindings are then used to automate the authentication task when performing any git operation using command-line git through sh, bat or PowerShell in a pipeline job.
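As a sketch of what such a binding could look like in a declarative pipeline (the binding name and parameters here are assumptions based on the description above, not a reference for the released plugin):

```groovy
// Hypothetical illustration of the username/password git binding.
pipeline {
    agent any
    stages {
        stage('Push') {
            steps {
                withCredentials([gitUsernamePassword(credentialsId: 'my-git-creds')]) {
                    // git picks up the injected credentials,
                    // so no manual workaround is needed in the URL
                    sh 'git fetch --all'
                    sh 'git push origin HEAD:main'
                }
            }
        }
    }
}
```

The point of the binding is that every git invocation inside the block authenticates automatically, regardless of whether it runs through sh, bat or PowerShell.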
D
Now, why were these bindings required, or what was the motivation behind them? Firstly, when it comes to performing a git operation using a pipeline script, there is not much support provided, and the user had to depend on various workarounds through the Credentials Binding plugin or the environment directive.
D
Also, four encryption algorithms are supported, namely RSA, DSA, ECDSA and Ed25519.
D
So now we'll look at the configuration of this project. Here I'm performing a simple git checkout on a remote repository hosted on GitLab, which is a private repository.
D
So now, moving on to the road ahead: here we will be discussing the tasks that need some work even after the GSoC program. That includes adding more automated unit tests and making minor bug fixes and code improvements. But apart from all that, the major task is releasing the git SSH private key binding.
A
Are there any questions on this project for Harshit, who is online right now with us?
A
All right, thank you very much, Harshit. We're going to move on to the next presenter.
A
Okay, so do you want to share your screen, share your presentation, and go at your own pace?
E
We can move to the next slide, please. So today I'll be talking about what conventional commits are, the Conventional Commits plugin for Jenkins, how to use the plugin (I'll show you a demo), extending the plugin, and next steps, followed by Q&A if there are any questions.
E
Yep, that's it. Okay, thank you so much. So conventional commits are a lightweight convention on top of our commit messages. They are made so that commits are human-readable and it is easy to write automation tooling around them. Conventional commits dovetail with semantic versioning. So can we move to the next slide, please?
E
Here are some examples of conventional commits. As I was saying, conventional commits dovetail with semantic versioning, so they follow this pattern of major, minor and patch versions. A chore is a conventional commit which basically says not to bump any of the three versions.
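For reference, the mapping between commit types and version bumps follows the Conventional Commits specification (the messages below are illustrative examples, not taken from the demo):

```text
fix: handle empty tag list        ->  bumps the patch version
feat: add hello-world action      ->  bumps the minor version
feat!: drop legacy config format  ->  bumps the major version
  (or any commit whose body contains "BREAKING CHANGE: ...")
chore: update CI badge            ->  no version bump
```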
E
Yes, thank you. So, right: this plugin determines the next semantic version. It takes in the following things: the git commit log of a repository, the latest tag, and the current semantic version that the project is at. Sometimes the latest tag and the current version mentioned in the configuration file will be different; the plugin handles those situations as well. Currently we support six project types, that is Maven, Helm, Gradle, Python, Make and npm.
E
We can go to the next slide. So, using the plugin: the plugin is available at plugins.jenkins.io under conventional-commits, and you can also download it from the update center. We are using JEP-229 to release the plugin on every feature. The recommended usage, and you'll see this in the demo, is adding a step in the Jenkins pipeline, and it works in both declarative and scripted pipelines.
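A minimal pipeline using the plugin could look like the following (the step name `nextVersion` is an assumption based on the demo that follows, so check the plugin's README for the exact API):

```groovy
pipeline {
    agent any
    stages {
        stage('Version') {
            steps {
                script {
                    // Assumed step: reads the commit log, latest tag and
                    // current version, and computes the next SemVer.
                    def version = nextVersion()
                    echo "Next version: ${version}"
                }
            }
        }
    }
}
```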
E
Let's get started. Okay, so let's see what a minor version bump looks like. I have a sample Maven project; I'll show you the source code, which is on GitHub as well. You can see there's just one tag, at 0.1.0, and I recently pushed a commit adding a feature that adds a hello-world action.
E
So now let's try bumping the major version. I have a sample repository here with me; it's a sample Python repository. As you can see, there are no tags present, and I have made a breaking-change commit. I'll show you the current version of the project; it's usually in a setup.py or config file, and I have it in the config file.
E
Here's the script: we are just cloning the project and calling the next version step. Apply, save, and I'll build it now. What we are trying to see here is that, as the current version is 0.0.0 and the commit is a breaking change, it should bump the major version and give the next version as 1.0.0. Let's see the logs. So for the next version it says "no tags found", and, as there were no tags, 1.0.0.
E
So here we are back at the sample Maven project pipeline, and I have modified the pipeline a bit to add build metadata through the Conventional Commits plugin. I'll show you the pipeline here: I have used environment variables to add build numbers using the optional build metadata parameter. The rest of the steps remain the same, and I am finally printing the next version. So let's run it.
E
As it's getting built: if you remember, this is the same project that I used to demonstrate the minor version bump, so we know that the next version should be 0.2.0, along with the build number. So here's the print message, and it is 0.2.0, along with the build number, 6.
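The build-metadata variant described above might look like this in the pipeline (the parameter name `buildMetadata` is an assumption inferred from the demo; SemVer appends metadata after a `+`):

```groovy
script {
    // Assumed optional parameter: appends "+<value>" to the computed
    // version, e.g. 0.2.0+6 when BUILD_NUMBER is 6.
    def version = nextVersion(buildMetadata: env.BUILD_NUMBER)
    echo "Next version: ${version}"
}
```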
E
What I recommend is that you go to the GitHub repository of the Conventional Commits plugin and look at all the options that are available to manipulate the pre-release feature. We have three: pre-release, naming the pre-release; second is preserve pre-release, that is, keep the existing pre-release, whose default value is false; and finally we have increment pre-release, where we increment the pre-release, whose default is also false. The last two are booleans.
E
So if our current version is 0.1.0-alpha and we have a fix, which increments the patch version, and we have preserve pre-release and increment pre-release set, then our final version would be 0.1.1-alpha.1: the patch bump because of the fix, alpha because we have preserved the pre-release, and the trailing .1 because we have incremented the pre-release.
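Putting the three options together, the example above could be expressed as (parameter names are assumptions based on the option names described, so verify them against the plugin's documentation):

```groovy
script {
    // preRelease names the pre-release, preservePreRelease keeps an
    // existing pre-release label, incrementPreRelease bumps its
    // numeric suffix.
    def version = nextVersion(preRelease: 'alpha',
                              preservePreRelease: true,
                              incrementPreRelease: true)
    // With current version 0.1.0-alpha and a fix commit, this
    // would yield 0.1.1-alpha.1 as described above.
    echo "Next version: ${version}"
}
```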
E
So here's the version that's been created. I think we can go back to the slides.
E
So, extending the plugin: as you can see, we had only six project types that we supported. Suppose you want to add a new one; let's take the example of a Go project type. You just have to create a public class and implement the following three methods. The first is check: the check method returns a boolean, true or false; it checks whether the project at the given repository is of that particular project type.
E
We usually check for the configuration file; for example, if it's a Maven project, we'll check whether the pom.xml exists or not in the given directory. Second is get current version: it reads the current version from the configuration file of the project. And finally there is write version: writing back the calculated version to the file. Thanks to Kristin, this was super easy.
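A pseudocode-style sketch of the three methods described above, for a hypothetical Go project type (the interface and method names are assumptions based on the talk, not the plugin's actual API):

```java
// Sketch only: assumes a ProjectType extension point like the one
// described above exists in the plugin.
public class GoProjectType implements ProjectType {

    // check: is the repository a Go project?
    @Override
    public boolean check(File directory) {
        return new File(directory, "go.mod").exists();
    }

    // getCurrentVersion: read the version from the config file.
    @Override
    public String getCurrentVersion(File directory) {
        // sketch: really parse the version line out of go.mod
        // or a dedicated version file
        return "0.0.0";
    }

    // writeVersion: write the calculated version back to the file.
    @Override
    public void writeVersion(File directory, String nextVersion) {
        // sketch: really rewrite the version line in the config file
    }
}
```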
E
Moving on to next steps: yes, the next step would be to write back to the various configuration files. Right now Maven and npm are done; Gradle, Python, Helm and Make are left, so that would be my next step. And we would love to hear your feedback and suggestions for the plugin on GitHub and elsewhere.
F
I'd like to start off my presentation by giving a little primer on what Spinnaker is. Spinnaker describes itself on its website as an open-source, multi-cloud continuous delivery platform that helps you release software changes with high velocity and confidence.
F
Spinnaker supports deployments on all major cloud providers, such as AWS, Azure, Google Cloud Platform and Oracle. Spinnaker's biggest selling point is its continuous delivery features. It supports advanced deployment strategies such as red/black rollouts, which deploy a new version of your application alongside the existing version and destroy the old version once the new version is ready to go.
F
There are tons of other features available in Spinnaker, and I recommend you read about them on our website if you're interested. Spinnaker was originally developed by Netflix to serve as their own private deployment platform, but it was released to the public in 2015, and since then many other companies, such as Google and Airbnb, have also adopted it as their own primary deployment platform. In 2019, Spinnaker was donated to the CD Foundation.
F
I clearly remember the first time I tried to install Spinnaker, and it was extremely difficult, to say the least. I spent countless hours searching through random GitHub issues, looking through Stack Overflow, and digging up random messages from Slack just to get the main UI of Spinnaker to appear on my computer. I think probably one of the biggest reasons why it's so difficult is that there are so many dependencies required to actually get Spinnaker running. For example, you need an external storage provider, like an S3 bucket.
F
If you compare this to a project like Jenkins, all you need to do to run Jenkins on your computer is have Java installed and double-click the JAR file. Having a sandbox environment where users can go in, deploy some pipelines and test out the Spinnaker UI is something that I really wish I had when I first heard about this project.
F
We support the AWS load balancer controller so that users have an easy way of accessing their deployments in their web browser. We also have a private image registry hosted on AWS, so that we can get around any rate-limiting issues and so that we can verify the authenticity of each image that we allow users to deploy.
F
We have also installed a special admission controller to block any images that are not from our private registry for user deployments. We have a couple of default pipelines that users can deploy, and we also have an auto resource-cleanup pipeline that deletes any unused resources after a certain period of time.
F
So here's a quick demo of the cleanup pipeline. Here's another pipeline that has stuff deployed to a cluster, and if we go back to this cleanup pipeline, it usually runs automatically every 30 minutes or so, but we can just run it manually for this example.
F
The detailed auth flow for Spinnaker can be seen in the diagram below. Here's the auth flow from the user's perspective: once they go to the try.spinnaker.io website, it redirects them to Google OAuth, where they can select which account they want to log in with, and after that they are authenticated, so they can see the pipelines and applications that we have set up.
F
If we query the API for more specific information about our account, we can see the roles listed here. This account has the role "public", and it also shows the email and the name associated with this particular account.
F
So here are the areas for future improvement. I think it would be nice if there were more default pipelines or more interesting containers to deploy, as there are only really three examples that we offer to users at this time. I originally planned to support user-created pipelines, so that it would be a little more interactive, but due to time constraints and security concerns we were unable to add this to our final project for the summer. However, the groundwork is already set for limiting which specific containers users are allowed to deploy.
F
Additionally, we only support users deploying to a standard Kubernetes cluster, but I think it would be a lot more interesting if users could deploy cloud-specific services such as Google's App Engine or Amazon's EC2 instances. For the long-term viability of this project, I think it'd be worthwhile if we pursued a hybrid tenant solution.
F
As I wrap up my presentation, I would like to give a couple of announcements. The alpha release of a live hosted version will be out very soon; unfortunately there was a delay due to infrastructure setup, but the link to the live hosted version will be posted on the CD Foundation Slack and the Spinnaker Slack, so keep your eyes out for that.
F
Finally, the single source of truth for this project can be found in the link below. This is a link to the Spinnaker docs, and I'll be updating this site frequently with the live link once that shows up, as well as the Slack channel that we can use to discuss any concerns or feedback that you might have about this project.
G
Hello everyone, my name is Shruti Chaturvedi, and in this session we are looking at the CloudEvents plugin for Jenkins. This has been developed as a Google Summer of Code project under the CDF, and the idea behind this project, or this plugin for Jenkins, has been to enhance interoperability between Jenkins and other CI/CD tools.
G
So, you know, each of these tools can have its own different and specific way of describing events, which makes it very, very hard to design systems around events, or an event-driven architecture, because there's no common way: each event is going to look different for a particular tool. By using CloudEvents, we're basically standardizing; we're saying that all of the events should have this particular structure, and that makes it very, very easy to define and design event-driven architectures.
G
And we wanted to bring that standard specification, that common way of consuming and emitting events, inside Jenkins, and that's the CloudEvents plugin.
G
So during the phase 1 demos we talked quite a bit about indirect interoperability, about CloudEvents, and why we want to use this particular idea of interoperability; here is the link, a YouTube video, if this is something you'd want to watch. And if you want to read more about interoperability, about CloudEvents, how Jenkins is interoperating and using CloudEvents, and also why we wanted to implement it in the first place, here is a Medium article.
G
So, the CloudEvents plugin for Jenkins: it allows users to configure Jenkins as a source and/or a sink, emitting and consuming CloudEvents. As we said earlier, using CloudEvents is going to standardize the way that events are both emitted and consumed, because it gives us a simple design for what an event should look like. So, with Jenkins as a source:
G
We are defining that all of the events which Jenkins emits should be CloudEvents, and when Jenkins is a sink, we are defining how Jenkins will consume those CloudEvents.
G
So why would you want to use the CloudEvents plugin for Jenkins? Obviously, the first reason is that it standardizes communication between Jenkins and other CI/CD tools, and not just other CI/CD tools, but also tools outside CI/CD which use CloudEvents, and this is going to allow indirect interoperability. We talked a bit about interoperability in the phase 1 demos.
G
But to give you an idea about indirect interoperability: it's the idea that we do not want to have a direct one-to-one relationship between our systems which are interoperating. There is this common language which each of the tools understands, and all we are going to do is make our system interoperate with other systems using CloudEvents, so using this common language.
G
The second reason is that we can build complex end-to-end pipelines spanning multiple CI/CD tools (and, again, not just CI/CD, but also other tools which use CloudEvents) without needing any extra effort. And when I say without needing any extra effort, what I mean is that we will not need to design specific translators to talk with other systems; we will only need this common language in order to design a system which spans multiple tools and multiple systems.
G
All we will really need is for any sink which is consuming CloudEvents to define how it wants to use a particular kind of event coming from a particular kind of system. So that's the logic behind Jenkins as a sink: Jenkins, as the sink, will understand this common format, which it is going to be consuming from different systems.
G
You know, three or four different kinds of systems; but the logic here is simply how we want to use those particular events which are coming from all of these different systems.
G
And the third reason is integrating other systems with Jenkins in a loosely coupled, scalable and tool-agnostic manner. Again, this is tool-agnostic: we are not designing this for any particular tool, but it can be extended to any tool which can consume CloudEvents. With Jenkins as a source, this can be anything, whatever is consuming CloudEvents: Jenkins, as the source, is going to be emitting CloudEvents, and they're just going to be out there.
G
It is scalable in the sense that this can scale to several services, not just two or three, but any service which understands and uses CloudEvents. And it is obviously loosely coupled, because we are not creating that direct one-to-one coupling, or the one-to-one agent or adapter which we would otherwise need to talk to that particular service. Again, it eliminates the need to maintain tool-specific adapters for communicating with systems; so, as you can imagine, it is a much simpler way of communicating.
G
So imagine if we have 10 different systems in our pipeline and each wants to talk with the others; it would be quite hectic to define ways to communicate with these 10 different systems, because each of these systems would have a different way. But using CloudEvents, we just have a common language that each of these tools is going to understand, emit and consume, making all of our design very easy. So that's why the CloudEvents plugin is your go-to if you're designing such an event-driven system. And some super great news is that the CloudEvents plugin is now released. From phase one, one of the questions that came up was when it's going to be released, and the good news is it's now released; you can check it out right here, download it, and obviously, please do provide us with your feedback.
G
So during the phase one demos we saw the Jenkins CloudEvents plugin UI for Jenkins as a source: how we can configure the CloudEvents plugin for using Jenkins as a source, which is going to be emitting CloudEvents that other systems can consume. We also saw the types of events which are supported by the Jenkins CloudEvents plugin; going back down here, we have queue events, we have build events, we have job events, and also node offline or online events.
G
We had some questions for ourselves, and these questions made us think more broadly about the CloudEvents plugin for Jenkins, not just in an individual sense, but also about what this is going to look like when Jenkins is interoperating with many different CI/CD tools, and when a user is building a pipeline which has different CI/CD tools which need to interoperate.
G
So the first question we had was: how can we implement a transient-fault-tolerant way of sending CloudEvents? This is especially important for an event-driven architecture, because we want to make sure that no event is lost to network failures. And how can the plugin handle asynchronous communication? Very important for implementing an event-driven architecture through the CloudEvents plugin for Jenkins. The second question was: if we were to implement asynchronous communication inside Jenkins, how should we do that?
G
The third question was: how can this plugin work alongside other CI/CD tools that use CloudEvents? Again, it's very important to make sure that the CloudEvents plugin for Jenkins allows Jenkins to achieve that initial goal of enhancing interoperability between different systems in a much easier way, without needing to maintain specific adapters for each different system.
G
So all of these questions led us to designing a proof of concept using the Jenkins CloudEvents plugin and tools which have been using CloudEvents, specifically CI/CD tools. This proof of concept has been inspired by the Events SIG at the CD Foundation; they have something very similar with Tekton and Keptn, where both of these systems act as a source and a sink, sending and consuming CloudEvents. And the way they implement that fault tolerance and asynchronous communication is through a Knative CloudEvents broker: the broker, by default, uses CloudEvents and transfers them between different systems, so it acts as that middleware, handling asynchronous communication and handling all retries and other network failures which might occur. So that concern is taken away from both Tekton and Keptn, and in our PoC that handling of network failures is taken away from the CloudEvents plugin; it's sort of creating an abstraction at the Knative broker layer rather than inside our plugin.
G
So what do we have inside the Jenkins PoC for the CloudEvents plugin? Jenkins as a source, sending CloudEvents, and Tekton as a sink, consuming them. We also tested this out with Keptn, and also did a test of what this would look like when using Kafka.
G
But in this particular PoC we are only looking at Jenkins and Tekton, where Jenkins is sending CloudEvents to a Knative CloudEvents broker. The Knative CloudEvents broker has the idea of a Knative trigger; you can think of a trigger as a filter, which filters on specific attributes of the CloudEvent metadata.
G
For example, ce-type. Looking here, we have ce-type, ce-specversion, ce-id and ce-source; we can specify any of those inside our Knative trigger, and only an event whose attribute matches will be passed on further; all of the other events will not go beyond that layer. Any of the events which pass the Knative trigger will move on to the Tekton trigger, and the Tekton trigger will be receiving the CloudEvents; this is where we can extract event-specific information from the CloudEvents. For example, we can extract the number of executors, or we can also extract event data or event metadata, however we like, and pass that information on and trigger a TaskRun or our PipelineRun, however we have defined it inside the Tekton definition. So now we will be moving on to the demonstration and taking a brief look at the YAML files behind the PoC. All right.
G
So what we're looking at here are brokers in the knative-eventing namespace. We have two different kinds: the default broker and the Kafka broker. The default broker is a very simple CloudEvents broker, which is going to be dealing with CloudEvents to transfer messages between subscribers, sinks and sources, essentially. The default broker is the only one we will be talking about in this PoC.
G
This is the broker definition, a very simple default broker, and this is the trigger. As we saw in the image for our PoC, the Knative trigger is going to define a filter, and it is going to filter events on the specific attribute that we have defined. The event attribute which we have defined here is type, or ce-type (we refer to it as type because it is the CloudEvent-specific attribute), and the ce-type is the queue "entered waiting" event.
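A sketch of the two Knative manifests being described (the exact ce-type string for the queue "entered waiting" event and the subscriber service name are assumptions; check the plugin documentation and your Tekton EventListener for the real values):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: knative-eventing
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: jenkins-queue-trigger
  namespace: knative-eventing
spec:
  broker: default
  filter:
    attributes:
      # only events with this ce-type pass through (assumed value)
      type: org.example.jenkins.queue.entered_waiting
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      # assumed name of the Tekton EventListener service
      name: el-jenkins-tekton-listener
```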
G
So here is the event that we are looking at, and this is the only event which will be passed through to the subscriber of this Knative broker, Tekton; the queue "entered waiting" event is the only one which will pass through. The next thing that we have is the Tekton trigger, and the Tekton trigger is what is going to receive that entire event and then extract information.
G
So here is where we are extracting information, the job name, from the CloudEvent, and this is the information that we can use in our TaskRun and our PipelineRun, however we want to use it. And this is not only specific to the event metadata; we can also use the event data. Again, this is according to the needs of the user, so you're open to using it...
G
...however you would want to use it. So what I'm going to do is copy this particular URL; this is going to be the URL of the sink, the Knative CloudEvents broker, for the CloudEvents plugin for Jenkins right here.
G
So I'm going to paste this, and when I paste this, all of the events of the types which are checked here will be sent over to the sink, but whatever is sent from the broker to Tekton will be that one event which we are filtering on, the queue "entered waiting" event. So I'm going to save this configuration, and what we're looking at here are TaskRuns inside the Tekton dashboard. So when I run this job, I should see one single TaskRun, not two, not three.
G
So we're looking at this test, and this is the one I will be triggering here; it will only trigger as soon as that event is received inside the Knative CloudEvents broker, filtered, and sent over to Tekton, and Tekton will trigger a TaskRun if an event of that particular type is received. For now we're only seeing one event, and that's good: the jenkins-tekton TaskRun, and this is what we have defined. So, yeah.
G
So this is the only event that was received by Tekton, because all other events were filtered out. And if, for example, my Tekton were not available, or for some reason there were some other network failures, it would be the job of the Knative eventing broker, the Knative CloudEvents broker which handles CloudEvents, to retry through all of those network and transient failures.
G
So that was the end of the presentation. Thank you so much, everyone; a special thank you and shout-out to the CD Foundation, GSoC, and all the mentors on this project for such an amazing summer. This might be the end of the GSoC talk, but this is definitely not the end of me contributing to, and being a member of, an absolutely amazing community. And if you have any questions or feedback, we would all love to hear them.
A
Do we have comments or questions?
H
One, please. Okay, so my name is Siri, and I was a mentor on the OpenTelemetry instrumentation of Jenkins remoting. I would be interested in understanding what the semantic conventions are that are used in the events. We can communicate events, we have a shared data structure, but what is the commonly known set of common attributes, shared across all these CI/CD tools, so that they can interoperate together?
C
I have some basic answers, but we can follow up on this later on the call. But actually, CloudEvents is very pluggable.
C
And yeah, one thing which was mentioned: CloudEvents by itself basically doesn't mean that the events themselves are very much standardized.
C
So that's why there is a project started in the Continuous Delivery Foundation about creating a standard for CI events. There is a separate project started, basically as a spin-off of the Events SIG. There is the Interoperability SIG, and the Interoperability SIG created another special interest group in the CDF, called the Events special interest group, and this Events SIG is actually working on an open standard that would unify these events across multiple systems.
J
Hi everyone. Today I am going to demonstrate my work on adding a security validator to the Jenkins Kubernetes Operator. So, the security validator: what is the problem it is solving? In the Jenkins custom resource, we are defining everything in a declarative manner; the custom resources and the plugins are being declared in this fashion.
J
So there can be security vulnerabilities present in the plugins, and they are not visible to the end user. To solve this problem, the security validator is being added to the operator. The security validator is nothing but a validation webhook that operates before the object is persisted to the etcd cluster; that is, it operates before creating or updating a Kubernetes or Jenkins custom resource object. So the webhook is different from the validation that we are doing in the reconciliation loop.
J
That validation happens after the object is persisted to the etcd cluster, so it is slow; in contrast, the webhook is quite fast. Using the webhook is completely optional for the user, and it can be easily installed via Helm, or there are kubectl manifests that can be used to get the webhook up and running.
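As a rough sketch of what enabling the validating webhook through the operator's Helm chart might look like (the value keys below are assumptions for illustration, not taken from the demo; check the chart's own values.yaml):

```yaml
# Hypothetical values fragment for the jenkins-operator Helm chart.
webhook:
  enabled: true    # deploy the validating webhook alongside the operator
cert-manager:
  startupapicheck:
    enabled: true  # webhook TLS certificates are managed by cert-manager
```

With values like these, a single `helm install` would bring up the operator, the webhook and its cert-manager dependency together.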
J
Also, we are using cert-manager as an external dependency, because we have to manage TLS certificates, and that's why cert-manager is used. So let us move to the demonstration. I'll be using an image that I have built locally to launch the operator along with the webhook. So this is the image that I have built locally: it is the operator with the security validator.
J
The webhook can be installed with the Helm chart, and the chart will also install all the external dependencies, like cert-manager; those will be installed when we enable the webhook. Also, it is advisable not to launch the Jenkins CR along with this Helm chart, because the webhook actually takes some time to get up and running, and if I install the Jenkins custom resource together with the webhook, I won't be able to validate the security warnings.
J
Yeah, it actually takes some time for the operator to get up and running. It will first initialize the plugin data cache and all those things, so it generally takes around a minute or two to get the operator up and running, and then we can launch the Jenkins CR.
J
Okay, so now the operator is up and running. Let us try to create a Jenkins custom resource. I have defined some Jenkins custom resources, and in the first example I am creating a CR where some of the plugins contain security vulnerabilities: for example, for this VNC Viewer plugin we are using version 1.7, which has security vulnerabilities. So let us try to create a new CR.
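A minimal sketch of the kind of Jenkins custom resource being created here, with a deliberately outdated plugin version (the field layout and plugin name are illustrative, based on the jenkins-operator CRD shape rather than the exact file shown in the demo):

```yaml
apiVersion: jenkins.io/v1alpha2
kind: Jenkins
metadata:
  name: example
spec:
  master:
    plugins:
      - name: vncviewer   # illustrative plugin name
        version: "1.7"    # old version that the security validator would flag
```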
J
Yeah, so it throws errors specifying the plugins that contain security vulnerabilities. It specifies that these user-defined plugins have security vulnerabilities, and we have these four plugins with security warnings. In the upcoming example I have kept all of the plugins but updated their versions to versions which do not have security vulnerabilities.
J
So let us try to create a new CR from here, but first of all let me see the logs of the operator.
J
So now let us see the logs. Yeah, it wrote a response with allowed set to true, meaning we are allowing it to create the object, and it sends a 200 response. So that's pretty much it. Apart from that, we have to specify whether we want to validate the security warnings or not; for example, in this particular example, we can set this flag to false.
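The response shown in the logs is a standard Kubernetes AdmissionReview reply. A minimal sketch in Python of constructing such a reply (the vulnerable-plugin check here is a hypothetical stand-in, not the operator's actual Go implementation or a real advisory feed):

```python
# Sketch of a validating-webhook decision: allow the request unless a
# plugin in the spec appears in a known-vulnerabilities table.
VULNERABLE = {("vncviewer", "1.7")}  # illustrative data only

def admission_response(uid, plugins):
    """Build the 'response' part of an AdmissionReview reply."""
    bad = [(p["name"], p["version"]) for p in plugins
           if (p["name"], p["version"]) in VULNERABLE]
    if bad:
        # denied requests carry allowed: false and an explanatory status
        return {
            "uid": uid,
            "allowed": False,
            "status": {"code": 403,
                       "message": f"plugins with security warnings: {bad}"},
        }
    # allowed requests are answered with HTTP 200 and allowed: true
    return {"uid": uid, "allowed": True}
```

For instance, a spec containing the flagged version would be rejected, while an updated version passes through.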
J
So there is always room for improvement. Some of the areas where we can extend the project or add features: one of them is to have a post-install hook. As you have seen while installing the webhook, it takes some time to get ready, so what can be done is to have a Helm post-install hook that checks whether the webhook is ready, and only when it is ready is the Helm installation completed.
J
Then the user knows that they can now create a Jenkins custom resource. Another area where the webhook can be extended: we can move the validation logic that is currently done in the controller into the webhook, and there are other sorts of validation logic that can be implemented in the webhook.
J
One more feature: right now we are validating the plugins, and each plugin has dependencies. What can be done is to traverse the whole dependency graph and validate all of a plugin's dependencies as well, because those plugins are also being installed, so validating them before installation would make sense. That is another cool feature that could be implemented.
K
And to achieve this purpose we set the goals below: one is to collect telemetry data, including metrics, traces and logs, from the remoting module with OpenTelemetry, and the other is to send the data to an OpenTelemetry Protocol endpoint. OpenTelemetry is the next industry-standard observability framework for cloud-native software. It handles three types of telemetry data at once, logs, metrics and traces, and thereby it enables integration between the different types of telemetry data. And in this Google Summer of Code project…
K
Yes, the agent is okay, and we set these environment variables; this specifies the location of the target OpenTelemetry Protocol endpoint.
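The standard OpenTelemetry environment variable for this looks like the line below; the endpoint address is just an example for a local collector, and the demo's monitoring engine may use its own variable name:

```shell
# Point the exporter at a local OpenTelemetry Collector (OTLP/gRPC port).
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
```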
K
Then I will execute the remoting agent. Here you need to load the monitoring engine as a Java agent, and you also need to specify the logging property file. Then the agent is connected, and let's explore the telemetry data on Grafana.
K
And regarding the logs: in the error log you can see the stack trace for the error. Okay, then next I will show the agent metrics from the Prometheus data source. You can filter the metrics by metric type and service instance id, or by other attributes. And yes, so far we collect only very general metrics, like JVM load, JVM memory usage, etc.
K
First, you can control which metrics to collect. The monitoring engine can collect many kinds of metrics, but you may want to collect only JVM metrics, because you already collect the other kinds of metrics with another tool, so we offer a feature to filter the metrics by name using regular expressions.
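The name-based filtering described here can be sketched in Python; the regular expression and metric names are illustrative, not the engine's actual configuration syntax:

```python
import re

def filter_metrics(names, pattern):
    """Keep only metric names matching the configured regular expression."""
    rx = re.compile(pattern)
    return [n for n in names if rx.search(n)]

metrics = ["jvm.memory.usage", "jvm.gc.time", "system.cpu.load"]
# Collect only JVM metrics, e.g. because CPU metrics come from another tool.
jvm_only = filter_metrics(metrics, r"^jvm\.")
# jvm_only == ["jvm.memory.usage", "jvm.gc.time"]
```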
K
And I prepared two types of demos, so that you can try them out very quickly. One uses Docker Compose, like the demo I showed before, and the other uses Kubernetes. The Docker Compose demo is the easiest way to try out our monitoring engine: it sets up all the services you need, and each service is pre-configured.
K
So what you need to do is clone the repository, change directory to the example directory and run docker-compose up, and then you can explore the data on Grafana. And I also prepared the Kubernetes plugin integration demo. In this demo the service instance id will be the node name, so you can find the target logs and metrics more quickly. I'll show you a quick demo for this Kubernetes setup.
K
And here two agents are connected, and these agents emit their telemetry data. So let's see Grafana.
K
So it's much easier to access the logs, and you can also see the metrics from Prometheus as well.
K
For example, the count of reconnections, or the average offline time in a day, may help Jenkins admins check connectivity. I conducted a user survey in phase one, and I found that connectivity is one of the main factors for highly available agents. So this is important, and users should be able to configure the OpenTelemetry service name, the service namespace and the remoting resource attributes.
K
Yes, that is all for my presentation. Google Summer of Code with Jenkins was a great experience for me. Thank you so much to the mentors and everyone involved in organizing this Google Summer of Code. Thank you very much.
C
I would like to just say thank you, thank you for the contributions, because yeah, it's a great project which is super valuable to the Jenkins ecosystem. You can see that there are so many projects happening around the observability space, and agent monitoring has always been a big issue for Jenkins end users.
A
All right. Can you please… I think I can override your screen share.
A
Okay, thanks for the presentation and those comments. Are there any other comments or questions?
C
Okay, sorry. In addition to the Jenkins Kubernetes Operator, there is also the classic Jenkins Helm chart, and there are a lot of security-related questions there today in terms of management and configuration, including YAML files. So I wonder whether the results of your project could potentially be applied there too.
J
The network is quite unstable, so that's why… Can you hear me now? Yes? Hello. Yeah, so regarding your question, what I have understood is that you are saying that with the Helm charts, we can use the charts to install the operator, and we could apply those results there as well.
C
Hello! Yes, though, it's…
A
Yes, okay. So thank you for the time and for answering questions. We can continue answering questions in the chat; I believe that will be easier.
A
All right, let's move on to concluding the presentations today. So, the last couple of slides.
A
We have a feedback form, and I invite everyone to visit it and give us feedback regarding the Summer of Code program and this presentation. And this is actually the end, so we'll turn off the recording, and then everybody can ask more questions in a more open channel and speak freely to each other. All right, and it's actually the last slide, so thanks everyone. Let me stop the recording.