From YouTube: GSoC CDF Meetup: Google Summer of Code Midterm Demos
Description
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
A
Good morning, good afternoon, good evening, and welcome to today's GSoC Phase One coding demos, essentially the midterm demos for this year's GSoC cohort for the Continuous Delivery Foundation.
A
So, for today's agenda: we're going to be introducing the CDF's participation in GSoC and a little bit about the CDF itself. Then we will have project demos by students, which will be around 15 minutes each, after which there will be a short Q&A time of about five minutes before we move on to the next presentation. However, after all the student presentations, we will stop recording and invite everyone to speak freely together, and have a moment where we can just share stories and ask each other questions.
A
A little more relaxed chat. The link to the slides is here, because some of the slides have additional links about our students or about the projects, so you may very well want to click on that and have it handy. So, the CDF has participated in GSoC for the second year this year; we were really excited to be a GSoC organization again. As you can see on the previous slide, which I went over very quickly, there is a link to a blog post where we're celebrating the cohort we had this year.
A
So thank you very much to all our mentors who have been involved in supporting our GSoC students, and really to all of our community members who, in their own ways, have stepped in at various times and, you know, welcomed our GSoC students and supported them.
A
So, for the CDF this year, the two CDF open source projects participating in GSoC are Jenkins and Spinnaker, and all of our students have passed their midterm evaluations, which happened last week. So it's all going rosy. Our students have been fantastic, and we're really happy to celebrate them, especially at this time. This year has been hard for everyone in various ways, and it is incredibly impressive how dedicated our students have been and how hard-working they are.
A
So now, this is going to be really fun. We're going to have Tara Hernandez, who is on the CDF Technical Oversight Committee and is one of the GSoC org admins this year. She will be speaking a bit about the CDF and the CDF's participation in GSoC.
B
Hi, my name is Tara Hernandez, and I'm a member of the Technical Oversight Committee for the Continuous Delivery Foundation, an organization founded as a space to really dig into the art of the systems of software development. The best-written code in the world isn't that interesting if nobody can use it, and sometimes figuring out the best way to get product to customers, and keep it running 24/7, can feel as hard a technical challenge as developing the product itself, so the projects we work on in the CD Foundation try to help with that.
B
Our mentors come from a variety of companies: Google, Netflix, JFrog, CloudBees, Verizon Media, and many others who invest heavily in software infrastructure tooling. This is our second year pitching projects to the students of Google's Summer of Code, and last year was a huge success for our students; some of them continue to participate in our community to this day. See for yourself what they thought of the program and, at the end, you'll see links on where to get more information. Hope to see you this summer.
C
Hi, I was a Google Summer of Code 2020 student at Jenkins. I worked on a project where we wanted to improve the performance of a plugin called the Git plugin; our aim was essentially to reduce the time taken to run a Jenkins job. Jenkins, being a wonderful community of developers, is a great place to start your open source journey, and I'll tell you the reasons why I'm saying that.
C
The first reason is that at Jenkins, you get to work alongside experienced developers who are willing to give their time and motivation to you at each step of the development journey you're going to have during the summer. Secondly, GSoC at Jenkins is one of the easiest ways to deploy your code to a huge community of users, considering the fact that Jenkins is one of the most popular automation tools used in the software community.
C
Third, each project that you see in the list is chosen with due diligence and careful consideration, making sure that the mentor you get has sufficient availability to help you, and also that the project has a huge community value, which is very beneficial for you as well as for the community.
A
Now, we'll just quickly go over some of our communication channels, should you wish to reach out to the projects when these presentations are done. So, for Jenkins, we have a Gitter channel for GSoC and a mailing list for GSoC. We also have regular office hours on Wednesdays; please do feel free to just drop in and ask any questions you have, it's very open. In addition, you can check out the main page for GSoC on the jenkins.io website.
A
Hopefully that will answer lots of questions, or point to places to find those answers. Similarly, for Spinnaker, they have a main page for GSoC under the docs on spinnaker.io, and they have a dedicated Slack channel within the Spinnaker Slack.
A
Almost all of the projects have their own respective communication channels; the Spinnaker one is on the Spinnaker Slack, but a lot of the others are actually within the CDF Slack.
A
So, okay, thank you once again to our amazing students, our incredible mentors, and the org admins. And please keep in mind that we are following the CDF's code of conduct for these presentations, which essentially comes down to being very open and welcoming with everyone.
A
There is a link, should you wish to learn more about the code of conduct. For the presentations themselves: the presenters all have the ability to speak at any time during this webinar, but please do keep yourself muted when other people are presenting. For attendees, there's the Zoom chat for asking any questions that you have, and we will be monitoring it. And then, after the presentations, as I said before, we'll turn off the recording and just have a moment to speak more freely with each other.
E
Yes, I am talking about my project, Remoting monitoring with OpenTelemetry. Can I share my screen? Okay.
E
Thank you. The purpose of this project is to minimize the downtime and setup cost of Jenkins agents, and, to achieve this purpose, the goal of this project is to collect monitoring data, that is, measuring and troubleshooting data, of the Remoting module with OpenTelemetry. The monitoring data will include the traces of the launching procedure of Jenkins Remoting and of the Jenkins agent, and the metrics of the Remoting JVM, process, and system. Next slide, please.
E
Thank you. Oh yes, and my project sends monitoring data to an OpenTelemetry protocol (OTLP) endpoint, and which OpenTelemetry endpoint to use, or how to visualize the data, is up to the users; users need to set up these services on their own.
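To make the traces described here concrete: an OpenTelemetry trace is a tree of spans, each carrying a name, start and end timestamps, and key-value attributes. The sketch below builds such records as plain Python dictionaries; the field names loosely mirror the OTLP JSON encoding, but the span names and attribute keys are hypothetical stand-ins, not the plugin's actual output.

```python
import json
import time
import uuid

def make_span(name, attributes, start_time, end_time, trace_id=None, parent_id=None):
    # Minimal span record in the spirit of an OpenTelemetry span; a real
    # exporter would emit the OTLP wire format instead of ad-hoc JSON.
    return {
        "traceId": trace_id or uuid.uuid4().hex,
        "spanId": uuid.uuid4().hex[:16],
        "parentSpanId": parent_id,
        "name": name,
        "startTimeUnixNano": int(start_time * 1e9),
        "endTimeUnixNano": int(end_time * 1e9),
        "attributes": attributes,
    }

# Trace a pretend agent launch: one root span for the whole launch, with a
# child span for the protocol handshake (names are illustrative).
t0 = time.time()
handshake_start = t0 + 0.01
handshake_end = handshake_start + 0.05  # stand-in durations
t1 = handshake_end + 0.01

root = make_span("agent-launch", {"jenkins.agent.name": "demo-agent"}, t0, t1)
child = make_span(
    "protocol-handshake",
    {"jenkins.remoting.protocol": "JNLP4-connect"},
    handshake_start, handshake_end,
    trace_id=root["traceId"], parent_id=root["spanId"],
)

print(json.dumps([root, child], indent=2))
```

A backend such as Jaeger or Zipkin reconstructs the launch timeline from exactly this parent-child structure.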
E
Yes, thank you. Seventeen respondents out of 28 use a Docker image, so I'm planning to publish a Docker image to make it easy to use our monitoring feature. Next slide, please. And the survey also tells me five common causes of agent unavailability.
E
Next slide, please. Users also think that information like channel connectivity, memory usage, or CPU utilization is needed. Next slide. And one respondent said,
E
even though there are already a lot of tools to collect these general metrics. Another respondent said that having an archive of nodes, with access to their logs or events, would have been nice; he said so because, to have an archive of agent logs, we need to transfer them using some tool, and OpenTelemetry has the capability to handle logs. Logs are out of scope for this Google Summer of Code, but I want to handle them in the future. Next slide.
E
Next slide, please. I worked mainly on the concept in Phase One. In my first implementation, I prepared a Jenkins plugin that sends the monitoring program from the Jenkins controller, but it's not robust against agent restarts, and it is not able to collect monitoring data before the initial connection. Next slide, please. The user survey suggests many users think it's important to have monitoring of agents before the initial connection. Next slide.
E
So I changed the structure: the user will download the monitoring program, called the monitoring engine, which is a JAR file, and place it on the agent node in the provisioning phase. Next slide, please. And the next step is to complete the prerequisites for the Remoting instrumentation which I created before, plus support for monitoring configurations, a Docker image for instrumented agents, [inaudible]. And I'll show you some demos. Can I share my screen? Thanks.
E
Yes, okay. I will follow the README of our repository. The first step is to build the monitoring engine, but I did it before the presentation, so I skipped this; and second, download the custom Remoting JAR file from here.
E
And also a Jenkins SSH agent. I'm using my repository's example directory; we have a demo directory with a Jenkins WebSocket agent, and I also have a Jenkins JNLP agent, but I will start the Jenkins JNLP agent manually now. So, to start the JNLP agent...
E
In the Jaeger UI, under the Jenkins agent service, we can see three spans: one is for the SSH agent, one is for the WebSocket agent, and one is for the JNLP agent. This is our JNLP agent, and we can confirm that the JNLP protocol is used, and which version of the Remoting module it runs, and this information can also be seen from Zipkin.
A
Thank you, Akira. Now we have time for some Q&A with Akira.
E
It's difficult to understand the Remoting module, and it's difficult to instrument the Remoting module with OpenTelemetry, because I need to know Jenkins Remoting very well, and also OpenTelemetry.
A
Okay, well, we can save them for afterwards as well, when we can speak in a little more relaxed fashion. But next up we have Shruti, who's going to present on the CloudEvents plugin for Jenkins.
H
Let's take a real-world example: a bunch of traders wanting to do business with each other, but the catch is that each of the traders in this space understands a different language, and they must understand each other's languages in order to do business with each other. Now, trader A hired a translator to do business, and all communications between these traders are carried out pretty well by this translator.
H
So there is a direct coupling created between the two services. To communicate, one builds client plugins, adapters, agents: these are some examples of direct interoperability. Enter CloudEvents, a way to achieve indirect interoperability in the tech world. So think of it as if the traders had a common business language, which each trader in the market has to know in order to communicate and do business; so even if they use different languages, they have to know and understand this common business language so that they can do business.
H
This way, the need to develop explicit ways to talk with each other is eliminated, because even if the traders are speaking different languages, they all understand this common language, which is all that is needed. CloudEvents achieves exactly that in an event-driven system: it defines a standard specification which all the systems involved can understand, and therefore using it removes the need, or reduces the overhead, of developing adapters and additional plugins for services that we might want to talk with.
H
Basically, you can think of it as allowing Jenkins to enter this market of tools where each tool speaks a different language, and using CloudEvents here is going to standardize that common language, so all of the tools involved are speaking and understanding the same language.
H
So the CloudEvents plugin for Jenkins allows Jenkins to be configured as a source and a sink for cloud events, which will facilitate communication with other services, make it super easy, and also help build complex workflows. This plugin can be configured as a sink or a source, depending on the user's needs. So, in a way, you're saying that Jenkins can be configured so that it can speak and understand this language of cloud events.
H
So, as I said earlier, this plugin allows users to configure Jenkins as a source and a sink, and in this demo we will be looking at Jenkins as a source, and we will be configuring a sink that Jenkins will send the events to. So here we are looking at AWS EKS, that is, Kubernetes running on AWS, and we have a Jenkins service running inside, and we also have another service called the Sockeye service, which is a Knative service developed by Scott Nichols, and what this service is going to do is help us visualize
H
all of the events coming in. So we have configured Sockeye as a sink, and all of the events coming into Sockeye will be presented here. All of the event metadata will be presented inside the event attributes, and all of the event payload, or event data, will be present inside the data column. So we will take a look at how we can configure a particular sink, or Sockeye as a sink, inside Jenkins.
H
So, let's go back to the Jenkins service that I have running here; as I said, this is running on a Kubernetes cluster. Going back to the plugins: we already have the CloudEvents plugin installed and, as it says, this plugin allows Jenkins to be configured as a source and a sink. Amazing. So also, let's take a look at the global configuration; this is where we'll be configuring all of the information necessary for Jenkins to send events to a sink, or to configure the CloudEvents plugin, that is, Jenkins, as a source.
H
Now, these are the kinds of events you want Jenkins as a source to send over to the configured sink: job created, job updated, and so on. These events will be sent, as and when they're triggered inside Jenkins, to the particular sink, with all of the event metadata and all of the event data, and each of them will be sent as a CloudEvents-compliant event.
H
Saving this information, and taking a look at the jobs we have configured and the jobs we will be triggering: we'll start with the job-two event, so let's look at job test two. Here's the description; it's a test job. This job is parameterized. The job also has SCM configured, so we'll also take a look at what happens whenever you're updating the SCM and that triggers a job inside Jenkins.
H
So, as I said, we're polling the SCM. We also have another project that will be triggered as soon as the job is built, so we'll be able to take a look at all of those events for test two, and also for the test job that gets triggered, inside of the Sockeye server. Saving this information, let's see if we see something interesting; and we did, right: you see, the first event that we got was a job-updated event, and the source was the job test two.
H
So this is a job, and the name of the job is test two, and here's its UUID, alongside the event data. So we have the user ID and the username (this is, I have signed in as myself), and more information about the event itself. So the information that we are seeing here is the attributes and the data; this is all CloudEvents information. The CloudEvents attributes, such as the ID, the source, and the type, help a sink figure out if this is something that it wants to work with or not.
H
So, as I said, this is a standard kind of language. Whenever a sink receives an event which has these particular pieces of event metadata configured, it knows exactly what it means, and then it can filter out whether this is something that the sink wants to act on or not. The source or the type of the event is going to give information about the particular event that's being emitted from its source, and the data is more information about that particular event.
H
So it's giving information which is relevant to that particular kind of event, and this is going to look different for all of the events.
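For reference, a CloudEvent is a small envelope of required context attributes (specversion, id, source, type), optional ones, and a payload; that is what lets a sink decide, from the envelope alone, whether it cares about an event. A minimal sketch, with a hypothetical event type and payload standing in for whatever the plugin actually emits:

```python
import json
import uuid
from datetime import datetime, timezone

def make_cloud_event(event_type, source, data):
    # The required context attributes (specversion, id, source, type) follow
    # the CloudEvents 1.0 spec; the type name and payload fields used below
    # are made up for illustration, not the plugin's real schema.
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": source,
        "type": event_type,
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": data,
    }

event = make_cloud_event(
    "org.jenkinsci.job.started",             # hypothetical type name
    "job/test-2",                             # source: the job that emitted it
    {"displayName": "test-2", "buildNumber": 7},
)
print(json.dumps(event, indent=2))
```

The metadata the speaker walks through in Sockeye's "attributes" column corresponds to everything here except `data`.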
H
And all of the events which are emitted here, as you can see, are emitted in a sequence, so as things happen, they're going to be emitted in that sequence. The first event that happened was test two entering the queue, and here is the event data which is presented whenever an item enters the queue.
H
So we also have this: the type, or rather the source, was still the job test two, which triggered the event, and we have more information, for example, the queue ID, or the duration that it was in the queue. The next type of event that got triggered was a job-started event, and eventually job-completed, again. So here, what happened was: when a job was started, we had more information about that particular build, right; we have the build number, and we have the timestamp when that build was started.
H
Was it the test-two job or the test job? Going back to the dashboard, let's take a look at the test job here. So this is the test job which got triggered, and here's some more information about the job being in the queue and entering the waiting stage; and as soon as the test job started building, here's information about the test job. We have the display name, we have the URL; we don't have an SCM section here, because this is not configured with SCM.
H
So it's polling the SCM; let's keep patient for a second, and we will take a look at test two again. That's what we are hoping for: test two first entering the queue, then all the sequence happening, and then test two triggering another kind of job, which was the test job. So we should take a look at all of those events happening here.
H
Let's switch over to Jenkins. Okay, let's wait for a second; let's hope that it gets it right. Right, okay, so this is taking a second.
H
You know, a lot of the time, something that we want to make sure of... okay, so, as you can see, something is running. Sometimes we do want to make sure that our SCM is also configured right, and that the information that we are entering doesn't have issues, so we want to make sure that we are only giving the kind of information which is relevant here.
H
So, as you can see, test two entered waiting, it left the queue, and then it started, and it has information about that. Moving on, here it's going to have information about that particular SCM: the branch or the commit ID, anything. And as soon as the job test two was completed, we also had the test job started.
H
So this is what it's going to look like, all in a sequence, with event information specific to the particular kind of event that's emitted. Obviously, the event metadata keys are going to remain the same, but the values themselves are going to change, and they're also going to contain the information which is relevant to that particular event, so any sink which is receiving this event will be able to filter an event based on the event metadata. And so, moving on.
H
So this was Jenkins as a source, and this is what we want to do for Jenkins as a sink: a service similar to Sockeye, where we are giving users the ability to add filters on the events which are coming in, and then, based on those filters, trigger specific actions. So if an event is being triggered from inside of Tekton, we might specify that we only want to listen to a type where a pipeline was updated inside of Tekton.
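The filtering described here can be sketched as a predicate over the CloudEvents context attributes alone, without ever parsing the payload; the type strings and field names below are made up for illustration:

```python
def matches(event, type_prefix=None, source=None):
    # A sink decides from the context attributes (type, source) whether it
    # wants to act on an event; the attribute names follow CloudEvents 1.0.
    if type_prefix is not None and not event["type"].startswith(type_prefix):
        return False
    if source is not None and event["source"] != source:
        return False
    return True

events = [
    {"type": "dev.tekton.pipeline.updated", "source": "tekton/pipeline-a"},
    {"type": "org.jenkinsci.job.started", "source": "job/test-2"},
]

# React only to Tekton pipeline updates, as in the example from the talk.
selected = [e for e in events if matches(e, type_prefix="dev.tekton.pipeline")]
print(selected)
```

Because every producer wraps its events the same way, the same predicate works whether the event came from Tekton, Jenkins, or any other CloudEvents-compliant tool.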
H
So this is what we want to do for Jenkins as a sink. This was Phase One, and that's our plan for Phase Two. Thank you so much for your time. We are still looking at building this out, and also at testing and integrating Jenkins as both the source and the sink. So if interoperability between different systems, and standardizing the way systems communicate, is something you're interested in,
H
if it's something that you're interested in, we're looking for feedback on Jenkins as a source and Jenkins as a sink, so please give us your opinions and your feedback. Again, thank you so much; I hope this was interesting.
A
Great, thank you so much, Shruti; that was a fantastic presentation. I am so sorry that we had some sound issues. I am in London and they're mowing the lawn outside, so I muted myself and unfortunately dropped a little bit of sound there, which is too bad, because Shruti's explanation of interoperability is one of the very best I've ever heard. So, just so you know, this entire presentation will be online afterwards, but we will also take all the individual pre-recorded demos and put them online as well.
H
We are hoping that within this week we will push our first release, and then we can go around, and, you know, it's also important for us that the users and everyone in the community are using it. So hopefully this week there will be a release, and you all can start using it.
I
Thank you. And another question, rather about processes: I felt that the Events SIG in the Continuous Delivery Foundation is actually about standardizing events for CI/CD systems. Did you have a chance to present in this SIG, or to communicate with SIG members, to understand what the status is and how to align the projects?
H
Yes, and we also had a meeting about this yesterday with the Events SIG, looking at a way of standardizing events, and hopefully we'll be able to also present information with them, because this is not just, you know, Jenkins, or just CloudEvents, but it is more about interoperability between Jenkins and other open source CI/CD tools, and also CI/CD tools in general which are using CloudEvents-compliant events.
I
Yeah, definitely, thank you. And yeah, I'm looking forward to seeing more collaboration, and I think it's really a great thing to have in the Jenkins community. There are also a lot of potential opportunities for projects because, for example, you produce cloud events, we have monitoring with OpenTelemetry, and all these things can be integrated with each other and connected to whatever central monitoring systems; I think it would be a good end goal for us as a project.
H
Yes, and the last thing that I'll add on to it is that it's been really fun working on this because, again, as you said, this is interoperability, essentially, between so many open source systems, not just Jenkins, and not just CloudEvents alone. Just the idea of innovating or iterating over something that has so many tools involved in an open source ecosystem is really, really cool. So this has been really fun.
J
Oh, I have a video, so let me just play that.
J
Hello, my name is Daniel, and today I'll be presenting Phase One of my Google Summer of Code 2021 project. My project's name is try.spinnaker.io: explore Spinnaker in a sandbox environment. I'd like to start off my presentation by giving a little primer on what Spinnaker is. Spinnaker describes itself, on its site, as an open source, multi-cloud continuous delivery platform that helps you release software changes with high velocity and confidence. As you can tell, it's quite a mouthful, and I'd like to break down these buzzwords one by one.
J
Spinnaker supports deployments on all major cloud providers, such as AWS, Azure, Google Cloud Platform, and Oracle. Spinnaker's biggest selling point is its continuous delivery features. It supports advanced deployment strategies such as red/black rollouts, which deploy a new version of your application alongside the existing version, and destroy the old version once the new version is ready to go.
J
So you need an external storage provider, like an S3 bucket. You need to have a Kubernetes cluster that has at least 16 GB of RAM and four cores. You also need to set up the cloud providers that you want to deploy to, and you need to do a lot of networking to expose the UI, the API, and whatever services you're providing.
J
If you compare this to a project like Jenkins, all you need to do to run Jenkins on a computer is to have Java installed and double-click the JAR file. Having a sandbox environment, where users can go in, deploy some pipelines, and test out the Spinnaker UI, is something that I really wish I had had when I first heard about this project.
J
Regarding
the
infrastructure
of
our
project,
I
decided
to
go
with
a
multi-tenant
solution
on
an
aws
eks
cluster.
This
means
that
all
the
users
will
be
sharing
a
single
spinnaker
instance
on
this
lab.
All
the
infrastructure
is
codified,
using
terraform
and
as
simple
as
running
one
command
to
get.
Try
it
out.
Spinach.I
o
writing
on
aws
spinnaker
and
its
associated
configurations
are
installed
using
armory's,
open
source
spinnaker
operator.
J
After you log in with the username and password, here's the Spinnaker UI. So right now, we support creating resources using manifests, so here is a sample app we came up with.
J
Another feature we have is our private ECR registry, which stores all of our Docker images and, as you can see, this is not the default Docker Hub that many people are used to; this one is hosted by us. The reason why we do this is so that we don't hit any rate-limit issues, and so that users have to deploy images that we have already hand-selected for our application, so users can't deploy their own malicious Bitcoin miners or anything like that; and, to show that, all other public images are blocked.
J
It deletes all the deployments, all the services, all the storage-related things, and the secrets that have been deployed by Spinnaker, which means deployed by the users on our application.
J
I also plan for users to be able to create their own pipelines in Spinnaker, but this sidecar would also auto-delete them after a certain period of time, similar to how it auto-deleted the resources that users have created. I also plan to install Falco onto our Kubernetes cluster, which is a security tool that logs any suspicious activities, and I also plan to collect some metrics.
A
Fantastic, thank you, Daniel. Now we have a few moments for Q&A with Daniel.
I
Yeah, having an instance like try.spinnaker.io is really good, and I think it's something we should actually encourage for the Continuous Delivery Foundation projects, because being able to try a tool quickly is essential. And yeah, maybe installing Spinnaker is not easy, but, for example, installing Jenkins isn't that easy either, especially if you talk about configuration as code and modern approaches, so it's worth investing some time in making it possible.
J
So I guess you have to take a break and then try to log in with, like, a new account, and see what you can do and what you can't do. And regarding security, I would just say a good practice is to give the least amount of privileges to a particular user.
J
So in my case, we're restricting users to only deploy from our private repository, instead of the public one, which mitigates a lot of risk.
D
I think it's an unfortunate sort of historical truth that infrastructure-type deployments often don't receive the same scrutiny as a so-called, you know, production application or production environment, but with the advent of public cloud being a target for a lot of things like this, raising overall awareness is definitely a good thing.
D
I mean, we could look at SolarWinds, right: they came in through TeamCity, which was completely wide open; it was not an application designed to be run in a public setting.
D
You know, historically it's like, oh well, the firewall rules will protect us, but we need to not think that way. So this is awesome; I'm glad that we have some sort of security-mindedness here.
A
Thank you, Daniel; that was fantastic. Next we have Aditya, who will speak about the Conventional Commits plugin for Jenkins.
K
I hope I'm audible and yeah, please bear with me if something goes wrong because of my connection, or my Indian accent, or just my headphones dying out on me; I feel their batteries are a little low. Okay, so I'll get started. The Conventional Commits plugin for Jenkins is a GSoC project, as you all know, and I'm working with wonderful mentors: Gareth, Christian, Olivia, and Allan. Let's move to the next slide, please.
K
So today's agenda is: I'll talk about what conventional commits are; what this plugin that does something with conventional commits is doing; how to use the plugin, and the demo will be mostly around that; then I'll tell you all the next steps; and, finally, the Q&A. So, conventional commits. Can we go to the next slide, please?
K
Oh, I did not know there was some kind of animation over there, yeah. So conventional commits are a lightweight convention on top of commit messages: they give our commit messages a structure. And why do we need this structure? Because writing automation tooling gets really easy once we have some sort of structure to the commit messages.
K
Yes, I covered that already, and it works hand in hand with semantic versioning. So can you please put up the next slide? I have some examples over there, and you can learn more about conventional commits from conventionalcommits.org. As you can see, here are four or five examples of conventional commits. The first one is chore: this is for very rudimentary things, like adding a file to .gitignore or something like that.
K
Then we have fix, for bug fixes; this is analogous to a bump in the patch version in semantic versioning. Then feat, that is, adding a feature; this is analogous to an increment of the minor version in semantic versioning. And finally, there's breaking change, and there are two examples of a breaking change here; there are multiple ways to write it. It is an increment of the major version, and the reason I think it's in all caps is because it is shouting: breaking change.
K
Yes, we can move ahead to the next slide. So now I'll talk about what this plugin is doing with all this information from the conventional commits. What it is doing is basically reading all the commit messages and telling us what the next semantic version is. So, versioning is very important in software engineering, and most of us use Git nowadays, so, using all those Git messages where we have tagged versions, what it considers is: one, the Git log.
K
Second, the latest tag; and third, the current version. So, using this information, it sees what the latest tag is, and all the commit messages from that tag till the latest commit are considered to calculate the next semantic version. We currently support the following project types: Maven, Gradle, Python, Node, Make, and Helm. We plan to add more project types as the need arises; for now, these are what we have supported.
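The calculation described here, taking the commit subjects since the latest tag and applying the largest bump they imply, can be sketched as follows. This is a simplified reading of the conventional-commits rules, not the plugin's actual implementation:

```python
import re

def next_version(current, commit_messages):
    # "BREAKING CHANGE" (or a "!" after the type) bumps major, "feat" bumps
    # minor, "fix" bumps patch; anything else (chore, docs, ...) is ignored.
    major, minor, patch = (int(p) for p in current.split("."))
    bump = None
    for msg in commit_messages:
        if "BREAKING CHANGE" in msg or re.match(r"^\w+(\(.+\))?!:", msg):
            bump = "major"
            break
        kind = re.match(r"^(\w+)(\(.+\))?:", msg)
        if kind and kind.group(1) == "feat":
            bump = "minor"
        elif kind and kind.group(1) == "fix" and bump is None:
            bump = "patch"
    if bump == "major":
        return f"{major + 1}.0.0"
    if bump == "minor":
        return f"{major}.{minor + 1}.0"
    if bump == "patch":
        return f"{major}.{minor}.{patch + 1}"
    return current

# A feature since the last tag bumps the minor version, as in the demo:
print(next_version("0.1.0", ["chore: add .gitignore", "feat: add hello world action"]))
# → 0.2.0
```

A breaking change would instead yield a major bump (for example, 1.2.3 becomes 2.0.0), and a repository with only chore commits keeps its current version.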
K
We can go ahead to the next slide. So, using the plugin: the plugin is available at plugins.jenkins.io/conventional-commits. We are using JEP-229 to release on every feature, and the recommended usage is to add a step in a Jenkins Pipeline. It is quite easy to use; I will be showing you all next.
K
So, for the demo, I'll go ahead and share my screen.
K
Yeah, I'm on this side, okay. So this is my local Jenkins instance, and this is the project that I'll be using for the demo. As you all can see, it's a sample Maven project for conventional commits. I'll show you the tag: I've tagged it 0.1.2.
K
This is the latest tag, and the latest commit is a feature, adding a hello-world action. I'll go to my Jenkins dashboard, and I have a sample Maven project over here; for simplicity, I'll just build it. I've already built it once; I hope it passes. It takes a couple of...
K
You can go ahead and see the logs: it clones the project, and I'll show the pipeline next. It's not complete yet.
K
Yes, now it is. Oh, I think I clicked the wrong logs.
K
So the latest tag was 0.1.0 and, as you all can see, that was a feature, so it should bump the minor version, and it did: this is 0.2.0. Now, you can have it both ways; you can see the pipeline script for the plugin, and it's quite simple. What I am planning to do next is create a pipeline live, and it won't take a lot of time.
K
I'll just quickly create an item, say "sample python project pipeline". Okay, and actually I have a script ready; I'll just paste it. What the script does is clone the Python sample project, which is, I think, this one, and you can see there are no tags. So what if there are no tags available? We will see that case right now. We are just calling the next-version step here, and save. Now...
K
Okay, this one was done pretty quickly; we'll see. So it says it did not find any tags, and it shows what it thinks the version should be, and why. Let's see the commits: we have all the chores and one feat, so that should increase the minor version, and again it is working correctly. Now let's change something. Is my VS Code window visible? Yeah, it is; it's just a little bit off on my side.
K
These two are the standard files that need to be present, and we have all our source code in the source directory. So let's say I change this "run" to "main", and here, the call as well. And just for demo purposes I'll commit this as a breaking change.
K
So, I'll just push this. Yes; so the next step is to add support for pre-release and build-metadata information. Currently what we are doing is supporting the standard version, that is, major, minor and patch, and there is no pre-release or build-metadata support present.
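For reference, the full SemVer 2.0.0 grammar the speaker alludes to adds a pre-release part after a hyphen and build metadata after a plus sign, e.g. `1.4.0-beta.1+build.42`. A small illustrative parser:

```python
import re

# SemVer 2.0.0 shape: MAJOR.MINOR.PATCH[-PRERELEASE][+BUILD]
SEMVER = re.compile(
    r"^(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)"
    r"(?:-(?P<prerelease>[0-9A-Za-z.-]+))?"
    r"(?:\+(?P<build>[0-9A-Za-z.-]+))?$"
)

def parse(version: str) -> dict:
    """Split a semantic version into its five components."""
    m = SEMVER.match(version)
    if m is None:
        raise ValueError(f"not a semantic version: {version}")
    return m.groupdict()
```

For example, `parse("1.4.0-beta.1+build.42")` separates the pre-release `beta.1` from the build metadata `build.42`, which is the extra information the plugin does not yet handle.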
K
That is the next step that we are going to take. And, as I showed you, it outputs the next version; so after this, what we are planning to do is write that version back to the project file, like the setup file in Python, or pom.xml for a Maven project. That is planned, and we would love to hear your feedback and suggestions. Moving on: you can go to GitHub, or join the Gitter channel for this project, and... oh, I have nothing more to add.
D
This is pretty cool. Oops, sorry. One question I had: if I was following along correctly, you were using the commit comment to trigger the breaking change, for example. Do you see possibilities of building in a little more programmatic detection, sort of increasing the smartness of the system?
K
Definitely, yes, that can be done, and it's a wonderful suggestion. I'll have to look into it, to be honest: the way people write code, and what changes constitute a breaking change, in order to detect that.
D
Well, okay, so like phase one of that, for example: think, if you have a test suite and you go through everything, the initial build goes through, most of the tests pass, but one of the tests fails.
D
Ideally, that's not a breaking change; no, that's actually a test failure. But it's worth thinking about how you encourage sort of a developer cultural change as part of the infrastructure, and more about how we progress on policies that the tooling then reinforces. Anyway, that's like phase six, don't worry about it right now, but it's an interesting thing to think about for the future on some of this stuff: your infrastructure is not just the thing that tells you things.
K
Okay, so yes, I did not do that, and it would require credentials. Yes, we can definitely have the Credentials Binding plugin help us there.
A
Hey, any... any questions?

L
So, coming to the project overview: as I mentioned previously, this feature allows Jenkins Pipeline users to run authenticated Git commands in sh, bat and PowerShell. For now, we are providing support for two protocols, HTTP and SSH, to make Git authentication operations work. Yeah, next slide, please. So the main motivation behind this credentials binding, first, was the lack of flexibility in the pipeline job, because in a freestyle job we have...
L
We have a Git publisher that can perform authentication operations for the user, but when it comes to a pipeline job, we lack this. So this was the main motivation. Also, as I mentioned in the slides, adding Git-publisher logic would be too hard-coded for a pipeline job: a pipeline is all about the flexibility it gives to users, so adding a Git publisher would restrict the flexibility that is provided by the pipeline job.
L
Another reason was the workarounds: the workarounds being performed by users, as you can see on Stack Overflow, take a lot of time and are actually quite confusing to understand as well. So this will save users a lot of time and will also help make their work efficient.
L
Another reason was that this issue was among the top five Jenkins enhancement requests; I guess, yeah, it's mentioned beside the top five. Okay, so coming to the implementation strategy: for this, we are using the Credentials Binding plugin as a dependency in the Git plugin. The Credentials Binding plugin provides us with the wrapper named withCredentials.
L
Basically, the purpose of the Credentials Binding plugin is to bind the credentials the user provides to environment variables and make them available in a pipeline job. We are using the same strategy, but in this case we are using it to perform authenticated Git operations on behalf of the user. So users don't have to do anything: the credentials will be provided by the Jenkins Credentials plugin, and the user will bind them.
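Putting the pieces together, usage could look roughly like this sketch. The binding symbol `gitUsernamePassword` and its parameters are my assumption of the shape being described, so check the Git plugin documentation for the exact names:

```groovy
pipeline {
    agent any
    stages {
        stage('Authenticated push') {
            steps {
                // The wrapper injects git credentials for every CLI git call
                // inside the block (binding name and parameters assumed).
                withCredentials([gitUsernamePassword(credentialsId: 'my-creds',
                                                     gitToolName: 'Default')]) {
                    sh 'git fetch --all'
                    sh 'git push origin HEAD:main'
                }
            }
        }
    }
}
```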
L
So, coming to the deliverables: for phase one, I have developed authentication support for the HTTP protocol and, as I mentioned previously, we are using a withCredentials binding. We chose CLI Git specifically, to limit the binding's scope to the CLI Git tool, and we have implemented this feature in the Git plugin because it seems better supported there: there are certain methods useful to this particular feature that are available in the Git plugin. Now, coming to the environments: yes, this supports a lot of different operating-system environments. We have tested on different Linux distributions such as Ubuntu, CentOS and a lot more.
L
I don't remember the whole list, but we have tested on various architectures as well; my mentor Mark has tested the binding on a lot of architectures and operating systems. Another deliverable for this, as I mentioned, is limiting the scope of the binding to CLI Git. And another thing that I forgot to mention was checking for a specific Git version.
L
So, as you can see, there are two options. First is the credentials: the user has to select the credentials they want to provide. Then comes the Git tool name: I have a Git tool named "Default"; I have created a global tool configuration for it, and we can generate the snippet.
L
Once we have generated the snippet, we can copy it into a pipeline job script. I have created a pipeline job script for the demo. What I'm doing here is cloning a private Git repository and building the project; then, when that is successful, I am adding a tag and pushing the tag to the repository.
F
I'm not seeing any questions in the Q&A panel either. I've got a few of my own, but they're sort of self-promoting, so I'm hesitant to ask them. Harsha, you've been absolutely wonderful on this project, thank you. Any insight you'd like to share on security concerns, or issues that were of worry for you as you were developing this, or things like that?
L
Well, a security concern for now? I don't think so, but for our next binding I guess there might be some security concerns, because we are creating a file on the user's system. There might be some permission issues there that could cause a problem. For the username/password binding we didn't encounter any issues like that, but for the SSH binding I think there would be some issues. For the username/password one, everything is okay from my side.
A
Thank you, Harsha. I like the ongoing concern for security that we're seeing across the projects; I think that is really beneficial for our ecosystem, and hats off to our GSoC students for taking up the mantle and working on these projects. Next we have Pulkit, who will be presenting on the security validator for the Jenkins Kubernetes Operator.
M
Okay, so first I am going to give a brief introduction to the Jenkins Kubernetes Operator. As the name suggests, we are running Jenkins on top of Kubernetes; but we can't run Jenkins on top of Kubernetes directly, because Jenkins is a stateful application, while plain Kubernetes workloads suit stateless applications. So that's why we need a custom operator to run Jenkins on top of Kubernetes.
M
So this is the whole purpose of running the operator on top of Kubernetes: to make it more manageable.
M
About the architecture of the operator: we are launching the operator as a Deployment, and we are using role-based access control to give it access to certain Kubernetes resources. We give it access to our custom resources, the Jenkins resources that we have defined, and various other resources like Secrets, ConfigMaps and Services.
M
Custom resources: we are launching Jenkins instances as a custom resource. We have extended the Kubernetes API, and we are launching these instances as custom resources, defining them in a declarative manner. So, for example, we have various fields: in the spec section, we have defined fields for the master as well as jobs, and the platform as well.
M
So, for example, for defining the plugins: in this custom resource we have to specify them in the spec section for the master. To specify what plugins the master is going to use, we have to specify the name of each plugin; these are all user-defined.
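As a sketch of the declarative definition being described, a Jenkins custom resource with user-defined plugins might look like this. The field names follow the jenkins-operator conventions, but treat the specifics as illustrative:

```yaml
apiVersion: jenkins.io/v1alpha2
kind: Jenkins
metadata:
  name: example
spec:
  master:
    plugins:
      # Each entry is user-defined: name and version of a plugin
      # the master should install.
      - name: simple-theme-plugin
        version: "0.7"
```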
M
So, for example, a particular plugin may have security vulnerabilities. We have these security warnings listed on the site, but the user can't know whether there is a security warning, because they are just using the declarative syntax. That's why we are introducing the validation, and that's what the security validator is all about. About the implementation: I am using a validating admission webhook to implement the security validation. What happens is, when a request for an object comes to the Kubernetes API...
M
If there is a validating webhook configuration, then, before persisting the object to the etcd cluster, the API server first sends the request to the validating admission webhook, which in turn calls the webhook server, which is nothing but a server attached to the Kubernetes operator itself. So this validating admission configuration is consulted before creating an object: whether an object will be created or not depends on whether the webhook approves the request. So, about the implementation...
M
TLS certificates: the webhook serves its API over HTTPS, for better security as well. That's why we need TLS certificates to establish a connection between the webhook and the Kubernetes API server; that's how the whole communication works. So the webhook is...
M
And the Service then redirects requests to the operator, so this whole communication happens over HTTPS, and we need valid TLS certificates to establish a valid connection. That's where cert-manager comes in: cert-manager is a Kubernetes add-on used for provisioning and renewal of TLS certificates.
M
With cert-manager we are creating certificates: we create objects of type Certificate.
M
So, in my implementation, I had to implement just two functions: a function for updating and a function for creating a new Jenkins custom resource. First, I added a new flag named ValidateSecurityWarnings to the custom resource definition: when this particular flag is enabled, then, and only then, will the validation run. It is like a common switch for enabling or disabling it.
M
If my current version lies between the first affected version and the last affected version, then there is a security vulnerability in my plugin as well. So that's the logic I am using: I am comparing these semantic versions and, based on the comparison, I approve or decline the request for the creation of the object. This is the whole validation logic that I'm using. So, for testing, we are using this.
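The comparison just described, checking whether a plugin's version falls inside a warning's affected range, can be sketched as follows. This is a simplification that assumes plain numeric `major.minor.patch` versions:

```python
def as_tuple(version: str) -> tuple:
    """'1.2.3' -> (1, 2, 3), so tuples compare component by component."""
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(current: str, first_affected: str, last_affected: str) -> bool:
    """True when the current version lies inside the inclusive affected range."""
    return as_tuple(first_affected) <= as_tuple(current) <= as_tuple(last_affected)
```

A version inside the range means the warning applies, so the webhook would decline the object-creation request; outside the range, it approves it.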
M
So that's the whole implementation I have discussed; that is where we are right now. What I have done is: I have scaffolded a new webhook, I have generated the manifests for cert-manager, I have implemented the business logic, the validation itself, and I have updated the Helm charts to include the manifests for cert-manager, and I am able to run the Helm tests as well. In phase two I have to improve the quality of my code and work on unit tests, as well as the end-to-end tests.
M
Apart from just validating the security warnings, what we can do is use the validation webhook to implement all other sorts of validation as well. Right now I have some ideas for that. First is to check the required core version: there is a required Jenkins core version for a particular plugin, and if the required core version of that plugin exceeds the current version, then the pod crashes. To move this check out of that code path, I plan to implement this validation in the validator webhook as well.
M
That way it will be checked before the creation of a Jenkins custom resource. The next plan that I have in mind: right now we perform validation logic in the reconciliation loop, and because of that, if the validation fails, the pod crashes, so that's not a good way of doing the validation. So if we...
M
Instead of defining those default values in the reconciliation loop, what we can do is define those values in a defaulting admission webhook as well, so that, after defaulting, after the default values are defined...
M
Severity levels: for example, whether a particular security warning is rated medium, of no severity, or of high severity. I want to implement that in the custom resource as well, so the user has a way of specifying those warnings, and we validate only for those warnings that have a severity level equal to or higher than the chosen option. So that's all from my side; thank you for watching.
M
The ValidateSecurityWarnings field that I talked about: I am setting it to true here. This is a toggle switch for whether we want to do the validation or not.
M
Yeah, so I have built the Docker image locally, because it takes some time to build the image and run the minikube cluster. I have the minikube cluster up and running, and I have deployed the operator in the default namespace.
M
Apart from that, I have configured cert-manager and it is also running; I have generated the Certificate resources, and everything is set. So let's start with launching the CR. When I try to... yeah, this is the command to launch the Jenkins instance, and this is the message that I'm getting from the webhook. The webhook actually throws an error message if any particular security vulnerability is detected.
M
These are the security vulnerabilities: it throws us an error message saying that these two plugins have security vulnerabilities, and it denies the request for the creation of the object. Apart from that, if we want some more details on the security warnings and the error messages, the user can check the logs. In the logs we are getting various sorts of information: we get the security message as well as the link to the security advisory.
H
That looks super cool, Pulkit. I was just wondering, or just had a question: so you have all of these security vulnerabilities presented to your user when they're installing a plug-in. What happens if they still want to go ahead with installing all of the plug-ins and don't really care about the vulnerabilities, which are, like, the...
M
Yeah, so then the Jenkins instance is created, and it doesn't show any error message.
H
So I'm guessing that this is going to take away presenting vulnerabilities for all plugins. Is there a way to go to a more modular level and only skip vulnerabilities for certain plugins? I'm guessing that just removing the field that you removed from the YAML is going to take away the entire scanning of the vulnerabilities, but what if I just want to do it...
D
I just think it goes back to that earlier conversation we had about: do you have a secure infrastructure or not? It sounds like in Pulkit's system it's kind of all-or-nothing for everything, and, you know, can you have exceptions for a plug-in you really want that isn't secure? I think there are sort of two topics around that. One is: do you want to commit to having a secure system, in which case do you need to replace that plug-in, right?
D
Policy-Wise
or
you
know,
do
you
want
to
have
the
ability
to
override
it,
in
particular
instances,
even
in
a
short-term
environment,
and
I
think,
from
a
incremental
process
perspective,
it
would
be
ideal
to
be
able
to.
I
like
isolate
a
particular
plug-in,
but
as
a
best
practice,
I
think
you
know
peer
pressuring
the
larger
community
to
have
to
fix
their
plug-in
so
that
they're,
not
you,
know,
swiss
cheese
worth
of
security
exposure
is,
is
the
more
ideal
outcome.
M
The community constantly tries to fix security vulnerabilities. So, from a user's perspective, for example, if the user is using a particular version of a plugin, there may be security vulnerabilities for that version, but for...
D
Yeah, I almost wonder about the idea of attestation. This is a much broader concept, and maybe the Jenkins marketplace already has this, but I'm just thinking: if we had this as part of the validation of published plugins, you'd get like a badge saying "yeah, this one is known to be secure or not", on an ongoing basis.
I
Actually, it was even on our GSoC project ideas list, maybe a few years ago; we discussed it multiple times, but we have never implemented it. One of our contributors, Daniel Beck, created a kind of rating field in our update center, so we can sort the plugins by rating, and the rating calculations can basically be updated in our code base.
I
But if we talk about particular badges: what we do now is documentation as code, so any badge you put in the documentation gets displayed. You can check our plugin marketplace; I know many plugins already have various badges, let's say passing security scans, being compliant with...
F
Pulkit, one of the things you had mentioned was version numbers and version-number comparison, and you used the phrase "semantic versioning". Have you encountered any problems or issues, or anything you've learned there, because of the wide range of versioning schemes that Jenkins plugins use? There are some that are absolutely not semantically versioned, or there's some variant of semantic versioning; has that caused you any challenges?
M
No: we are using a particular format. For example, for a version, we have the first number as the major version; then, after the dot, we have the minor version; and then we have the patch version. That is the versioning schema we are using, and that is the schema I have taken into account for comparison.
A
Those are all our student presentations for today. Thank you very much for presenting; your work is amazing and very appreciated. These are great contributions to both the Jenkins and Spinnaker projects, as well as to the wider CDF community. We thank you for being here, for making your contributions, and for your enthusiasm and your hard work. Thank you also to all our mentors and our org admins. We do have a feedback form for today's webinar series; please go to that link and fill it out.
A
That
would
be
great,
and
now
what
we'll
do
is
I'm
going
to
stop?
The
recording
should
be
good
and
then
promote
all
attendees
to
be
panelists,
which
will
enable
you
to
speak
and
share
your
video,
and
we
can
all
just
be
together
having
a
more
free
chat.
Is
there
anything
else?
Anyone
want
to
add
anything
before
I
stop.
I
Just thanks to all students and participants. It was a great session, and thanks a lot to all the students working on the Jenkins and Spinnaker projects and contributing to the ecosystem, because, even while working on just the two projects, there is a lot of interoperability effort helping much bigger communities, and I'm looking forward to seeing the results in one month.