From YouTube: Securing the Delivery Pipeline 2021 02 24
A
So yeah, thanks for joining the Jenkins Contributor Summit again. Today we have a discussion about securing Jenkins delivery pipelines, and two days ago we briefly discussed topics you would like to cover. So, firstly, problems we experience with plugins; then what we would do, including taking a look at JEP-229 from Jesse.
A
Looking at our pipelines and discussing how we want to improve them, and what infrastructure we need for that.
A
Yeah, so, long story short: currently developers release from their own machines or from their own automation setups, which means that we basically trust maintainers to build the components properly, including plugins and libraries available within Jenkins.
A
Yes, the Jenkins core itself is built by an automation environment, but even in the Jenkins core we include components, let's say Jenkins modules or libraries like annotation-indexer, which are built locally.
A
Well, JEP-229 is for continuous delivery, but yes, it also provides automation, so JEP-229 can be one of the answers. Though JEP-229 basically says that we would be delivering some Jenkins components from GitHub Actions, right?
E
Yeah, the motivation section certainly talks about use of local builds, as opposed to builds via automation, but yeah. It is just one particular approach to doing this; there are certainly lots of other approaches that are possible. This one seemed like the path of least resistance, given the infrastructure that we had available to us, basically.
C
So, Jesse, just a quick question. If I understand all these concerns correctly, the idea is: we want essentially the developer-laptop-independent release environment of JEP-229, but perhaps we also want a solution for people who might not be comfortable with this model of "if you merge a PR, it will automatically be released".
E
Only if the changelog includes at least one change in a user-facing category, like bug or enhancement, that type of thing. So yeah, the idea there, I think they talked about that: some of the motivation is just that I, at least, have often seen the case where somebody files a pull request, it goes through a lot of review cycles and finally gets approved, then somebody merges it, and then three and a half months pass and they post a comment in GitHub like "by the way, I actually needed this".
E
"For some reason, could you please do a release?" And then it's another process for the maintainer to go and look that up, get everything ready and do a release or something. So the continuous delivery part of the motivation is to avoid that delay. But yeah, you can simply comment out the push trigger if you don't want any automated releases, in which case you still have the workflow dispatch that GitHub Actions has, where you go into the Actions tab of the repository and click, you know.
C
Well yeah, but the point is we don't want the secret to be shared, for one thing, because we want the fine-grained permission control; and the other thing is people need to be aware that this is happening. So setting it up at an organization level for the existing jenkinsci org, with the 2000 plugins that we have, is simply impractical.
E
Yes, so it actually does use Jenkins for the CI part. The deploy phase deliberately bypasses any kind of testing whatsoever, because it's only using a commit that's already been verified by the Jenkins CI, which is assumed to be doing the cross-platform and possibly Docker work, whatever is normally defined in the Jenkinsfile. So it's restricted just to the actual deployment step, mvn deploy.
E
You certainly could set something up like this with Jenkins. I think the tricky aspect here is that we want to have a secret that's saved per repository.
E
And we can't really use an organization folder the way we do on ci.jenkins.io for this purpose; I think it just doesn't work in terms of the access control. It's probably something that we could develop: some sort of Jenkins feature that tries to mimic the access scoping of GitHub Actions, in terms of picking things up per repository, and also using secrets per repository, or getting secrets from some other store that we would have to build for that purpose. I mean, it's certainly possible.
E
It would just be a larger implementation effort to set something up like that, because of the specific way that we have this big organization with lots of repositories, each with its own contributors; we don't have any kind of shared level of trust, and the Artifactory permissions are scoped per repository as well. So...
C
So, I mean, you don't even have extra commits if you release this way. So, a question: can you attach the statuses to any commit or pull request that you have in the repository?
C
So this does not have to happen during the release. In fact, we can run arbitrarily complex and long-running analysis asynchronously, and that doesn't have to, you know, delay the release process. People would probably riot if their releases took two hours because we added all of the tools in there.
C
At the moment, all of the regular releases are done with the Maven release plugin; we also do the staging with the Maven release plugin, but with a slightly different target, the stage target, plus we override the destination repository and we do not push the commits. So we have the private staging repo in Artifactory and upload the artifacts there; we have the private cert GitHub repo; and at first you only have the tags and commits locally, and you need to manually push them there.
C
So we probably want a more standardized release environment than we currently have to stage with, and we definitely want to continue staging, because it makes release days much less stressful. We would just need to adapt it so that a plugin can use this release environment independent of whether it has the CD trigger or is manually triggered.
C
I would expect it to be slightly more common for them to perhaps accidentally merge a pull request that looks good during that period and create a release. So the pattern of version numbers needs to be such that we have a reliable way to not end up with conflicts there, perhaps.
E
I guess, I mean, yeah: it's definitely true that you would have to either remember to disable the automatic trigger, or just hold off on merging pull requests for a few days. That's something we should think about. I think the actual process should be slightly simpler, though, because we don't have the weird situation that we do with the Maven release plugin, of us having staged "prepare for release" and "prepare for next development version" commits in the cert repo that are there just to produce a version, which you then have to merge with stuff in the public master branch. So that part should be somewhat simpler. I mean, you would just be pushing the actual security fix commit to the public repo and merging it into the current head, or it would be a fast-forward merge if there were no other commits in trunk.
E
At that point, I mean, as far as the version number goes: it just encodes what the git history looks like, which is independent of where that history was built. So...
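The idea that the version simply encodes the git history can be sketched as a small formatting helper. The function names and exact layout here are illustrative assumptions; the real scheme is defined by the project's incrementals tooling.

```python
# Sketch of incrementals-style version strings as discussed: the middle number
# is the commit count from the root of the history, the suffix a short commit
# hash, so two diverging branches can never produce the same version.

def incremental_version(commit_count: int, short_hash: str, prefix: str = "1") -> str:
    """Format a trunk version, e.g. 1.256.v1f4c2a9 (layout is illustrative)."""
    return f"{prefix}.{commit_count}.v{short_hash}"

def backport_version(base_version: str, commit_count: int, short_hash: str) -> str:
    """A backport reuses the trunk release it branched from as its prefix,
    so ordering relative to that trunk release stays unambiguous."""
    return f"{base_version}.{commit_count}.v{short_hash}"

print(incremental_version(256, "1f4c2a9"))               # 1.256.v1f4c2a9
print(backport_version("1.256.v1f4c2a9", 2, "ab12cd3"))  # 1.256.v1f4c2a9.2.vab12cd3
```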
E
If you have done something else in the public master, well, it's the same as the current situation. When you accept it, it's a little bit easier maybe, because once you merge in the security commit on top of that, that will generate a new release which includes the public changes from the past few days, plus the security fix also.
E
Yeah, yeah: if your git history branches, then there is no total order between the two branches; you can't decide which version is newer than another. I mean, it's possible to override the version prefix when you're staging to cert, if that was useful for some reason.
H
But that in itself isn't going to help, because when the security vulnerability is announced, people that have already upgraded, who've got a new plugin that had been merged to master without that security fix, can't upgrade to that security version, because they'll end up with an older or incompatible one. They'll have to wait for a release that hasn't yet happened.
C
So, for example, if a plugin is on version 5 and we create the security fix 5.1, and at the same time version 6 is created, we will also need to create version 6.1, or version 7 with the security fix, afterwards. But Jesse brings up the inclusion of the short checksum in the version number, which prevents collisions with near certainty, because the security fix will necessarily have a different checksum than whatever release is done in public at the same time.
C
So that is actually less of a problem than it is today. The only problem is that we make it much easier to get into that situation through accidental merges.
F
Is it something that could be simplified if we had a specific Jenkins instance configured for this? So we would not consider ci.jenkins.io, but an instance such as release.ci.jenkins.io, where people could provide their own credentials.
C
I mean, as far as I can tell, and correct me if I'm wrong, I'm surprised that you're bringing this up. I understand the optics of it, which was already mentioned to some degree, but we are shedding whatever services we can, because the infra team is so small and already overloaded, and now we're saying: well, let's just host another Jenkins instance. To which I'm responding: yeah, in a second VPN, because it's not going to be public, and it's also not going in the same VPN as everything else that we have.
F
Surprised,
no,
no,
no!
No!
No!
So,
basically,
the
reason
why
I
was
mentioning
the
the
gcas
can
config
is
because
you
can
easily
isolate
that
and
allow
other
people
to
manage
the
prs.
I
mean
if
you
take
the
example
of
citations
that
you
you
rely
on
the
infra
team
to
do
the
changes,
but,
for
instance,
in
the
case
of
release.ci,
just
people
who
have
access
to
a
git
repository
can
manage
the
service.
So
that's
why
I
was
suggesting
to
to
another
service.
I
mean
I'm
not
saying
that
we
have
to
do
that.
C
But that is much less accessible, and if the status is linked to a hidden VPN, a private Jenkins instance, we would need to grant at least the maintainers access to that. But additionally, even if we find a good solution to the fine-grained authentication, like the Artifactory access tokens we currently attach to individual repositories, we also need to consider:
C
If
we,
if,
if
it's
an
instance,
that's
supposed
to
be
accessible
to
maintainers,
we
need
to
maintain
the
vpn
access
to
hundreds.
H
Think
there's
a
misunderstanding,
because
the
github
checks,
api
and
jesse
will
correct
me
here.
If
I'm
wrong
doesn't
work
on
pull
requests,
it
works
on
commits.
So
you
can
push
a
check
response
for
the
release
process
to
the
commit
that
you
are
trying
to
release
that
will
then
be
visible
in
the
github
api
and
the
checks
tab
for
that
commit.
H
So there's no need to access Jenkins to see it. Well, if you need all the logs, then probably you will; but if you just need, like, the last however many bits of the logs, then that should work. And at the end of the day, if there's nothing secret in the logs, because we're hopefully masking all the secrets correctly, we could always just dump them in a gist and link the gist to the pull request. It's extra work, but it's, I think, a solvable issue.
C
Okay, that's good to know. But what I also wanted to mention is: it's not just managing access, it's also administering the instance. What I mean by that is, if I look at pull requests to core or various plugins, I occasionally see failing builds, and then I jump to Jenkins and it's "well, the connection between the controller and the agent broke", and that's pretty annoying.
C
In the worst case, someone will need to troubleshoot that, and that someone is going to be Olivier, who has a bunch of free time on his hands? It's incredible. And so that's another concern, right: there's just more work that we would need to do for an otherwise very simple process. Because, I mean, I like Jenkins, I like Pipeline, but it's not necessary for the one mvn deploy command that we need.
E
It uses the GitHub CLI, using the GitHub Actions token that you get for free with GitHub Actions; so half of the deploy script is running the GitHub CLI to do stuff in GitHub, like manipulating the release and so on. The deployment to Artifactory is a single line of this process; most of the rest is really pretty normal usage of GitHub Actions for automating things in GitHub.
E
Oh yeah, I don't know what to say. It's just that we've been discussing trying to provide some kind of CD for plugins, or non-developer-laptop-based deployment of plugins, for several years, and nobody had time to do anything about it. The advantage of this is just that it uses some pieces that we already have lying around, basically, with a relatively small new piece of infrastructure, which is the secret provisioning.
C
I mean, it really depends on what the outcome of this discussion is. If it's ultimately "JEP-229 can do these things, and we just need an extra paragraph that also acknowledges that you can configure your repository to not release on merge", then that's a possible outcome. If we decide to go an entirely different way, I see JEP-235, or whatever number, coming up with a separate proposal.
C
Basically,
I
meant
to
say
yeah
when
we,
if,
if
we
were
to
go
in
a
completely
different
way,.
F
Basically, my concern is about duplicating the Jenkins instances; that's the point that Daniel raised. If we have several, I mean, for people who want to release something, this means that we have less time to maintain Jenkins. And if a Jenkins is already a pretty busy instance, it's just harder to work on it.
E
There is an annoyance that, as far as I can tell, is just a limitation (or multiple limitations) of GitHub Actions: it doesn't use a push trigger. It actually uses a trigger on a successful check, because it's waiting not for the push, but for Jenkins to validate the push. You can set up the trigger to be activated when there is a check, but you can't say which one, I think it is; and so it actually runs a bunch of times, and then each time says: oh no.
E
This
is
not
the
check
I
was
looking
for
in
exits
and
all
of
those
exits
show
up
as
failed
builds
because
there's
no
way
to
market
as
skipped
or
aborted,
or
something
like
that,
like
you
can
in
jenkins
and
the
same.
If
you
have
a
push
that
had,
you
know
only
dependency
updates
or
something
like
that.
It
goes
halfway
through
decides,
there's
nothing
interesting
to
release,
and
then
the
only
thing
it
can
do
is
mark
itself
as
failed.
E
So
it
looks
ugly
in
the
actions
tab,
but
I
don't
know
if
any.
C
So what I'm hearing is: there are actually reasons to think about alternative approaches, other than "we are the Jenkins project, so we should use Jenkins for it".
E
Yeah, I mean, well, it's convenient. The use of GitHub Actions is convenient because we're integrating Release Drafter, because we have the GitHub API token. You get a special, temporary token associated with each Actions run that you can use for write operations, yeah.
E
Another advantage: yeah, we actually publish the release and the changelog on the GitHub releases page, so that part would be hard to replicate, because I don't think you can do that from an outside tool.
A
That doesn't provide the same API. At the last GitHub Universe, questions were asked about that; the product managers said no, no plans, which, yeah, helps with planning.
A
See
if
you
wanted,
we
could,
of
course,
run
agency's
pipeline
within
github
actions
using
jinx
file,
runner
or
whatever's
solution,
but
I'm
not
sure
whether
it
provides
any
benefit
in
this
case.
A
Well, it says standalone tool. There were some patches which allow using PCT in principle, but somebody would still need to implement that.
C
No, until recently it didn't check @Restricted either. So there are a few standards, sort of, that we have in the project that basically just never made it into the Gradle tooling. Obviously we don't use it, and so the maintainer is basically on his own there, but so, yeah.
A
We have never accepted that.
C
Right, GitLab Hook still doesn't have the security fix, and that was by far the most popular one. So that would be doable. I'm still... I think we should get back to the topic. I think it would be reasonable for us to start this with Maven support only and look into Gradle support later, or just say: well, if you want this, you're going to have to build your plugin with Maven.
E
Yeah, some of the infrastructure I suppose you could reuse. You would need to have a different action for the deployment, obviously, because it does mvn deploy; but I suppose you could reuse the Artifactory secret injection, and you could reuse the stuff that does the check for the Jenkins CI status and Release Drafter and all of that. So it would probably be something that could be built by people interested in the Gradle tooling.
E
As for Jenkins core components: as far as I know, the same system can be used for any core component other than parent POMs.
C
I mean, backports typically come up in the context of security updates, for example when the weekly has a different version of Stapler than the LTS line. Now, if we were to use this release process for Stapler or whatever (I mean, Remoting is also a candidate):
C
Whatever library we have to patch where the versions have diverged, how do the backports work? Is it just a matter of: well, we specify the full version in the POM, so it doesn't matter how the version sort order looks, we just need to know that one is the mainline and the other is the backport?
C
I specifically mean the version number; the version number is the distance to the root commit plus... okay, Jesse, you explain.
E
And sort of in the default trunk flow, for a plugin that doesn't have any special needs, the recommendation is to also use that same string as the version number of the whole component.
E
So I think you can do the same thing for backports, I just haven't tried it yet: if you're cutting a backport branch from a particular trunk branch point, you know, so you have a bunch of trunk commits going, and then you have a particular trunk release that you want to use as a base for backports.
C
Or, you know, to make sure that the ordering works, we could also do trunk count, dot, trunk hash, and then another separator, actual count, actual hash.
C
That
would
be
the
manually
provided
one
because
I
know
which
release
I
branch
off
of
to
do
the
back
port,
so
I
can
use
the
entire
version
string,
including
the
hash
as
the
prefix
there,
okay,
so
this
is
basically
we
we
can.
We
can
overwrite
this
even
in
the
default
case
that
we
see
in
block.
C
Just
because
we,
this
came
up
with
using
jenkins
for
this,
would
what
would
need
to
be
different
in
this
process,
for
it
to
be
hosted
inside
jenkins,
because
we
would
still
use
github
releases
because
we
like
them.
Apparently
the
version
pattern
would
be
the
same
because
that's
the
same
thing,
we
would
still
not
use
maven
release
plug-in.
C
We
would
probably
have
some
sort
of
marker
file
in
the
repo
rather
than
the
github
action
and
go
from
there
right.
So
if
we
were
to
decide
well,
we
want
to
host
it
ourselves
because
there
are
benefits
to
doing
that.
E
...release, and then the physical deployment step would have to be done inside a container, in some sort of isolated environment, that's just given a checkout of the right commit of the source code and the specific Artifactory token.
E
But
yeah
it
would
be
possible.
You
would
yeah
you'd
have
to
have
some
sort
of
marker
repository
or
perhaps
it
would
be
like.
Currently
in
rpu
we
have
the
cd
enabled
equal
true.
Maybe
this
is
something
that
we'd
put
into
rpu
rather
than
marking
it
in
the
repository,
I'm
not
sure
but
another
another
problem
with.
That,
though,
is
what
is
the
the
trigger
right,
so
with
github
actions,
you
know
you
have
the
choice
of
either
doing
the
manual
trigger
and
we
don't
need
to
build
the
gui
for
that.
E
It's
just
part
of
the
github
actions,
gui
and
authentication
that
someone
with
right
permission
to
the
repository
automatically
gets
the
ability
to
run
that
action,
and
if
you
were
doing
this
elsewhere,
then
you
would
have
to
come
up
with
some
other
means
of
doing
a
manual
trigger.
C
Yeah,
thank
you
yeah,
it
just
seems
like
github
does,
does
basically
not
allow
it
to
easily
leave
their
action
ecosystem.
E
It's less of an issue in JEP-229, since you normally can use an automated trigger, especially with the trick that looks for interesting changes in the release draft. But I think you still really want to have the option of a manual trigger.
E
For backport branches, I think you don't want an automatic trigger on those; I think you would want an only-manual trigger for backports.
C
So, from my point of view, I think it's JEP-229, perhaps with a few modifications to make it easier to get started for existing plugins, or to make it easier to transition. Because, personally, I would be sort of wary of a new process that automatically kicks off whenever I merge something and perhaps have not labeled the pull request adequately.
C
So
we
might
want
to
be
mindful
of
people
who
don't
want
to
immediately
hand
off
everything
to
a
fully
automated
process,
but
otherwise
I
think
this
is
this
is
a
great
design
and
and
we
we
could
adapt
it
more
or
we
should
adopt
it
more
widely.
A
Also, the new format is just not human-friendly. So if you want to send a version to a colleague, I mean on the other side of Slack, you would need to copy-paste it.
E
Yeah,
so
I've
already
had
one
complaint
actually
from
file
parameters.
Plugin
someone!
You
can
go,
look
up
the
issue,
but
someone
said
that
they
had
an
unnamed.
They
worked
for
a
large
company.
They
had
an
unnamed
internal
repository
from
an
unnamed
vendor
that
had
some
sort
of
version,
partial
parsing
script
that
didn't
like
that
version.
E
I guess. I don't know, but yeah, those sorts of things will happen. All right: you can use kind of semver-like versioning if you want to; it's up to you, in that case, to increment the major and minor portions when you think it's appropriate (that would be git commits changing those portions), and you could use the automatic piece as the patch component, I suppose.
E
I
don't
have
too
strong
opinion
I
feel
like
for
most
plugins.
It
probably
doesn't
matter.
You
know.
The
point
is
just
to
push
updates
not
to
not
to
convey
meaning
exactly,
but
some
people
are
going
to
feel.
C
I doubt it. So, I mean, for core itself there's a conversation on the dev list, I believe, to have date-based versioning or some other sort of versioning, which originally came from "we're doing so many changes and we're still at 2.x". But for the components, meaning Stapler, Remoting and so on: nobody even sees them unless they keep clicking on "About Jenkins" to see the license information, and nobody does that.
C
Extensions. So, next topic: the checks.
A
Yeah, so I guess this topic would be completely separate, because if we agree that there is a CI pipeline and there is a CD pipeline, then yeah, everything we have here is about the CI pipeline.
A
Yeah
one
may
ask
why
we
test
the
artifacts
different
from
what
we
actually
ship,
but
in
principle
you
do
the
same
for
other
million
flows.
So
for
us
one
of
the
conditions
was
about
to
link.
Currently
during
the
build.
We
run
some
static
analysis
tools
like
spot
box,
animal
sniffer
and
a
few
other
enforcer
checks,
but
we
don't
really
invoke
security
is
coming.
So
how
is
current
security
scanning
complemented?
A
If
you
have
dependable
enabled
years
depends
infrastructure
verifies
dependencies
and
sometimes
it's
a
notice
for
you
as
a
maintainer,
so
you
can
update,
but
it's
not
part
of
the
sacd
pipeline.
So
it's
basically
a
standalone
process
and
what
could
be
done?
There
are
existing
tools
like
a
wall
dependency
check
which
can
verify
dependencies
when
you
actually
run
the
build
and
pretty
much
the
same.
C
Yeah, so it comes up in the release pipeline topic because it makes some sense to ensure that we don't ship completely broken and unsafe software; and I mean, this is to an extent a problem in the project, which we can see if we look at the security advisories. I'm not a huge fan of failing builds, or even failing releases, when static analysis finds problems.
C
Especially
if
those
are
sort
of
time
dependent
processes,
so
if
I
built
something
yesterday
and
it
passed,
and
someone
published
the
cve
and
the
same
thing
from
yesterday-
you
built
today
will
fail
and
and
perhaps
even
prohibit
the
release.
I
think
that
would
be
a
problem.
Even
even
maintainers,
who
accept
the
results
of
such
scanners
will
be
annoyed
by
not
being
able
to
perhaps
it
not
perhaps
even
not
being
able
to
ship
a
hotfix
for
a
bug,
because
a
dependency
got
a
cve
and
it's
just
the
same
dependency
as
it
was
yesterday.
C
And
it's
junit
with
the
local
file
inclusion
thing
yeah,
so
we
sh
the
the
pipelines
that
we
build
and
I'm
definitely
for
adding
more
scanners,
for
example,
in
the
jenkins
file
based
ci
should
not
fail
the
build,
but
just
use
the
builds
as
an
opportunity
to
trigger
themselves.
C
So
we
can
add
more
scanners,
more
results,
but
perhaps
not
completely
block
releases
and
such
which,
based
on
the
github
statuses,
is
a
problem
because
you
cannot
have
a
successful
start,
a
pr
build
status
if
the
scanners
say
no,
so
we
need
would
need
to
separate
those
out.
There's
the
the
actual,
build
and
there's
perhaps
other
statuses.
E
Yeah, and Checks, by the way, also allows you to create a status that is neutral, so it doesn't count as a failed status for the whole pull request, but you can still have a status that appears, that's not positive, that includes a warning message; something that would show up, I guess.
C
Yeah,
I
think
that
would
be
great,
I'm
just
thinking
whenever,
whenever
the
topic
comes
up,
adding
more
checks
to
the
builds,
I'm
thinking
of
all
the
plugins
that
disable
injector
tests,
because
the
maintainers
didn't
bother
figuring
out
what
broke,
and
then
you
have
these
really
trivial
things
that
should
be
no
jenkins
plug-in
getting
through.
Additionally,
there's
there's
a
problem,
perhaps
depending
on
the
scanners,
how
they
work
if
they
need
configuration
through
the
plug-in
palm.
C
Or
are
part
of
the
jenkins
test
harness
like
the
injected
test?
Is
then
that
relies
on
the
maintainers
regularly
advancing
the
version
numbers
of
the
build
tooling.
C
Basically, any tool, I guess, could be wrapped so that it reports its output in a way that can be consumed rather easily on GitHub.
C
I mean, that already exists; that's the thing that I'm doing at the moment. So I only scan master, and this is made visible to maintainers only, in the Security tab. And the benefit there is also that maintainers don't have to change the code to make the warnings disappear; it can be managed through the GitHub UI. At least in my opinion that's a benefit, because I dislike all of the ignore-warnings annotations in the code that may be obsolete years down the road anyway.
C
Perhaps
perhaps
a
matter
of
how
many
tools
and
how
frequently
the
tools
change,
because
if
that
happens,
asynchronously
with
your
code
updates,
like
the
independent
codeql
scan
does-
and
all
that
happens
is
you
need
to
amend
the
code
again
and
again?
Who
knows
but
yeah
still
it's
it's
fairly
convenient
to
get
rid
of
findings,
which
means
I'm
actually
pre.
Personally,
I'm
pretty
okay
with
false
positives,
because
they
are
so
easy
to
dismiss.
E
You
know
rerun
locally
on
my
source
tree
until
there
are
zero
warnings
left.
However,
I
deal
with
it
and
then
commit
and
that's
the
that's,
the
fix
for
the
warnings
versus
the
code,
ql
things
they
had
to
go
around
and
go
through
in
the
gui,
and
it
wasn't
exactly
clear
whether
a
warning
would
pop
up
again
if
I
refactored
some
things
and
just
moved
the
same
code
to
a
different
place.
C
Still
I
mean
we
don't
need
to
decide
on
specific
tools
here,
there's
probably
also
the
topic
of
how
suitable
are
they
in
terms
of
I
mean
ova's
dependency
scan?
Can
it
handle
plug-in
dependencies
because
if
you
up
depend
on
the
slightly
older
release
of
I
don't
know,
script
security
plug-in
and
the
scanner
tells
you
your
plugin
is
unsafe,
but
it
has
absolutely
no
impact
on
the
runtime.
That's
probably
not
the
kind
of
scanner
we
want
to
use
in
the
jenkins
project
for
plugins,
but
the
broad
strokes
of
how
scans
would
be
integrated.
C
Does
that
seem
like
we're
approaching
some
sort
of
consensus?
I
I've
been
talking
so
much,
but
nobody
else
says
anything.
G
Just in general: Snyk is free for open source, so no need for it to be paid for by the foundation or things like that. You can also add SonarQube, because they also provide some security scanning. It's not exactly the same kind of check, because it's mainly checking the code, and not just the composition analysis like OWASP or Snyk; that could be potentially useful, perhaps in addition to Find Security Bugs, not sure exactly, but they also include some security scanning.
E
Anything
that
works
on
binaries
rather
than
source
code.
I
guess
it
would
have
to
be
invoked
either
as
part
of
the
plug-in
build
itself
or
as
a
downstream
check
or
something
right,
because
the
the
rules
for
what
physically
get
packaged
into
the
hpi
are
kind
of
complicated
and
they've
actually
changed
in
the
past
few
months.
A
Well, in such a case it's applicable only to a subset of tools.
G
Also something to mention in this topic: even if a library that is included in the HPI is never used at runtime, some scanners are discovering it, finding vulnerabilities, and so blocking the production deployment for some users of Jenkins. So is it something we also want to cover, meaning not pure security, but also the security-safety sentiment from different users? Meaning that from a security point of view there is no need to change some of those things, but for some customers it will have an impact on their deployment possibilities, I would say.
E
I
mean
that
will
show
up
as
a
part
of
the
hpi
colon
package.
I
think
it
is.
It
will
print
a
list
of
all
of
the
chars
it's
including,
and
it
uses
warning
label
for
transitive
dependencies,
but
I
would
say,
if
you're
packaging,
something
that
you're
not
using,
and
that
has
a
security
vulnerability,
and
maybe
you
should
fix
that.
A
Well,
in
some
cases
there
are
platform
specific
bits
being
packaged,
so,
for
example,
you
might
have
a
dll
resource
which
is
used
only
on
windows
and,
for
example,
we
have
such
example
in
jenkins
windows
process
management
library.
There
are
two
delays,
packaged
and
they're
used
on
a
specific
platform.
C
I mean, there might be dependency problems; like, Guava is a giant library, and there are like two classes that have very specific vulnerabilities and are used nowhere in Jenkins.
C
So I would not fault people too much for ignoring that for a while, or, you know, lowering the priority of the work to update that library. But otherwise it's something that will need to be evaluated, and if it's something that only affects specific users and environments, it's up to the users or admins to decide whether they can ignore it or not. As the developers, by default it's not obviously safe to continue shipping it.
C
C
Well — and it's not a great situation, right? We should do our best to do things like keeping dependencies updated, because people will complain if they're outdated. So giving the plugin maintainers the tools they need to understand what users might be concerned about would definitely be a big step forward. Then it's up to the maintainers to say "yeah, this looks relevant, I'm going to update it", or "no, this is too much work, I cannot do it right now" and defer it.
C
C
A
C
Right, so you mean something like — I don't know — exposing, on the plugin side, whether a plugin chooses not to release something with outstanding warnings, or something like that? Or what do you mean?
C
A
But yeah, something along these lines — there should be motivation to enable these checks even if they are disabled by default.
C
A
E
Well, I'm saying we could start setting some conditions for plugins which show up in the setup wizard, whether they're recommended by default or not. I think it doesn't matter so much — just for them to be listed in the setup wizard, or to be a dependency of something that's listed in the setup wizard.
E
C
Definitely, I agree. Obviously, if we introduce these processes now, we cannot immediately start filtering, because the setup wizard would look fairly empty — I mean, it would basically be, I don't know, Log CLI. But over time, I think we should.
C
So I just wanted to mention something else related to what Jesse just said. I think it's helpful to think of these analysis tools as being on two axes.
C
C
If you configure the version of FindBugs in the POM and you don't change the source code, the outcome is, as far as I know, always the same. Whereas there are other tools like OWASP Dependency-Check — which, I believe, can run as part of your Maven build — that contact a CVE database of sorts, so the outcome is time-dependent. And the other axis is whether it's part of the local build that you can run in your IDE.
C
What Jesse said — which is particularly convenient — and the other kind is, well, run from the outside, an asynchronous sort of process. The asynchronous, time-dependent one is a big distinction, as is the always-consistent one that depends only on your actual component and configuration and runs locally. But something like OWASP Dependency-Check I would not want to add to the parent POM, enabled by default and failing the build when it fails, because that has the bad behavior of changing whether something is buildable from one day to the next.
C
E
A
A
The question is what tools we want to start with right now, because here you have CodeQL, which is mostly done by Daniel; you have SpotBugs, already integrated in the parent POM; but so far we didn't really have dependency scans. We should generate reports.
C
I mean, if we consider the two axes I just mentioned, something like a dependency scan would probably be best introduced as a GitHub Action that runs alongside buildPlugin but only adds some additional status, rather than being part of the regular PR merge status. And I mean, we can basically just add stuff: if someone doesn't care about a new tool being added, they can ignore it, and otherwise it's a helpful service.
C
I don't see why those shouldn't be opt-out, as long as they don't end up failing builds.
A
They shouldn't — especially not synchronously, because, for example, OWASP Dependency-Check takes about five minutes if you don't have a cache, so that's out of the question. And tools like Sonar etc. also consume quite a lot of time.
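For reference, OWASP Dependency-Check can be kept out of the regular build by wiring it into the plugin POM behind an opt-in profile, so the slow, time-dependent scan never gates day-to-day development. A minimal sketch — the profile id and plugin version here are illustrative, not project policy:

```xml
<!-- Opt-in only: run with `mvn verify -Psecurity-scan` -->
<profile>
  <id>security-scan</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.owasp</groupId>
        <artifactId>dependency-check-maven</artifactId>
        <version>6.1.1</version>
        <configuration>
          <!-- CVSS scores range 0-10, so 11 means: report findings
               but never fail the build because of them -->
          <failBuildOnCVSS>11</failBuildOnCVSS>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>check</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```

Because the scan contacts a CVE database, results differ from day to day even with this configuration — which is exactly why it should stay out of the default build path.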
C
H
I was going to ask: what's the difference between the OWASP Dependency-Check scans and Dependabot security checks — as opposed to Dependabot's regular checks for everything, just having it enabled for security only?
A
H
E
Yeah, it's automatic on GitHub: if a CVE shows up in something that the Dependabot parsing algorithm considers to be a dependency — and yes, that includes test dependencies and whatever; it's not very smart, basically just some Ruby parser of your pom.xml, as far as I know — then it's going to show up in your security tab if you're a repository owner.
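As a point of reference: the security alerts described here are a repository setting, while Dependabot's version-update behavior is configured per repository via a checked-in file. A minimal sketch for a Maven-based plugin (the schedule and PR limit below are illustrative choices, not project policy):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "maven"
    directory: "/"          # location of the pom.xml
    schedule:
      interval: "weekly"
    # cap open update PRs so a noisy week doesn't flood maintainers
    open-pull-requests-limit: 5
```

A plugin that also bundles JavaScript could add a second entry with `package-ecosystem: "npm"` pointing at the directory containing its package.json.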
C
On the org level, I recently made the huge mistake of clicking "enable for all" — and that was before the ~400 plugins with JUnit dependencies got the security vulnerability report, so that was fun. But yeah, I could disable it, and on an org level we can also periodically just opt everyone in again, even if they opted out.
A
Yeah, speaking of dependencies: we talked a lot about Java dependencies, but there are some plugins which also include a lot of JavaScript stuff, and in edge cases we have libraries including other jars and DLL resources — which is probably the least important case. But for npm stuff, I guess we will also need to invent something, unless we proceed with the approach used by Ulli Hafner, where each JavaScript dependency becomes a separate HPI plugin.
H
C
I think one potential problem there: there is the js-libs approach to JavaScript libraries, but I think that's essentially long deprecated, and js-builder, I think, was also one.
C
There were approaches, but I don't think they gained a lot of traction in the ecosystem, so it's not like we have hundreds of plugins with this.
E
Problem, yeah. I think Ulli's approach makes sense — it's the same way we package reusable Java libraries.
H
What worries me is: if there is a security vulnerability in that, and there's a breaking change because we haven't updated it — or there was a breaking change because whoever produces that library doesn't care about backwards compatibility, which happens — then we're stuck. We have to do a massive coordinated fix to get everything fixed and released without breaking anything, which I don't think is really sustainable.
H
H
E
H
I was going to say, yeah, there always seems to be a new shiny thing in JavaScript land, so it might happen more often — but I don't know, I have no metrics to say either way.
A
Okay, so we are going slightly over time. Should we briefly summarize the results of the discussion and what our next steps would be?
C
And while something like a dependency scan is probably superficially helpful — to, you know, make people back off with their own dependency scan results — in terms of security benefits in the actual code, I think writing the Jenkins-specific rules and rolling them out a lot more widely will probably have the best results in, you know, actually fixing security vulnerabilities in Jenkins.
A
And taking that solo to the halfway done, why.
B
C
So there are two directions in which we can improve this. One is to improve the rules that are Jenkins-specific, and publish them as well to allow others to use them. The other is to properly integrate that into the usual GitHub pull request workflow, because right now it is, you know, just a daily scan that updates metadata attached to the repo for the latest master commit — and that's not really useful once it finds something real, or, you know, even for scanning a pull request to prevent you from introducing more problems.
A
But yeah, I think that's something we could definitely adopt. Another low-hanging fruit for us is OWASP Dependency-Check, if we really want to do that — though again, as was discussed, it's partially replaced by Dependabot, so no specific outcome right now. And we could also try enabling Snyk, although it's actually already enabled for us; the problem was...
B
B
A
A
So these buttons used to work; now they don't, so let's wait until they actually fix it. But before that, it was possible to go to Snyk and explore the plugins there, and actually, as you may imagine, most of these issues come in through dependencies and through transitive dependency resolution. And yep — as we discovered with other tools, you just have the Jenkins version, it starts to pull in Jenkins core, and then things start exploding, because you need to set up rules to ignore such things.
A
H
A
D
So I can — by the way, this is the standard from release engineering at the Linux Foundation: Snyk is available via the Linux Foundation's security system, right? That's actually what they're using under the covers, so you don't need to go applying to Snyk for an open source license — you get that as part of LFX Security.
B
D
D
A
Yeah, that's fine, because they needed to redesign the user experience — there were some collisions between the Linux Foundation and the Snyk JavaScript, so you were not able to actually scroll through all the issues in the LFX Security front end. So maybe that's what they redesigned, I'm not sure.
C
So the reason I'm asking about Snyk is: if I look at the GitHub code scanners — I've used that so far for my custom CodeQL rules — there's a marketplace for GitHub Actions that run security scans, and there is something called Snyk Infrastructure as Code that can be set up as a workflow. Is that different from this Snyk, or is it largely the same functionality, just exposed to users differently now?
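For context, the marketplace entries discussed here wrap the same Snyk service into a GitHub Actions workflow. A minimal sketch for a Maven project — this is illustrative, not an official Jenkins project workflow, and it assumes a `SNYK_TOKEN` secret has been added to the repository:

```yaml
# .github/workflows/snyk.yml — illustrative sketch
name: Snyk scan
on:
  schedule:
    - cron: '0 4 * * 1'   # weekly, so it never slows down PR builds
  workflow_dispatch: {}    # allow manual runs too
jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Check dependencies for known vulnerabilities
        uses: snyk/actions/maven@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          args: --severity-threshold=high
```

Running it on a schedule rather than per-PR matches the earlier point in the discussion: time-dependent scans should add status asynchronously instead of gating merges.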
C
Okay, so there's also Veracode static analysis — I think, Oleg, you had great experience with that?
A
C
It looks like these are essentially one click and a commit to master away from being run against Jenkins plugins. Security-wise, that looks to be largely the same list. You can see this, by the way, in every repo: if you click on the Security tab and then "Code scanning alerts", you can set up actions there.
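The "set up actions there" flow generates a workflow file in the repository. For CodeQL on a Java plugin it comes out roughly like the following sketch — the branch name, schedule, and the idea of adding custom Jenkins-specific queries are assumptions for illustration:

```yaml
# .github/workflows/codeql.yml — minimal sketch
name: CodeQL
on:
  push:
    branches: [master]
  pull_request:
  schedule:
    - cron: '0 3 * * 0'   # weekly full scan
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: github/codeql-action/init@v1
        with:
          languages: java
          # custom Jenkins-specific query packs could be referenced here
      # autobuild attempts the Maven build so CodeQL can trace the code
      - uses: github/codeql-action/autobuild@v1
      - uses: github/codeql-action/analyze@v1
```

Unlike the daily repo-level scan mentioned earlier, this attaches alerts to individual pull requests, which is the integration gap the discussion identifies.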
C
So that might also be something for us: just evaluate which of these tools make sense and can handle Jenkins plugins, and then recommend their use — maybe set them up by default in the plugin template.
B
B
C
C
E
Yeah, well — I can verify that the backport release flow actually works. I mean, I guess: just take some trivial plugin like Log CLI and try to backport something — I'd have to make something up — but just make sure that the versioning scheme works for backports, and that you can do either manual or automatic triggers, or something reasonable, in that mode. The GitHub Actions documentation is really vague about whether some triggers only apply to the default branch, or only apply when the configuration file is in the default branch, or something.
E
E
E
E
A
C
I mean, wouldn't it be easier to — I don't know — take Version Number or something trivial like that, to start integrating it, or applying the new release process there and seeing what happens?
E
C
Winstone would be a good one, if we don't want to do Remoting especially.
C
Then, for this process, I think it would make sense to look into what the plugin template should look like. We have, I think, default workflows in the plugin template.
B
E
Yeah, so, well — with the archetypes, all of the hosting-on-jenkinsci-specific stuff is currently not part of the archetype itself; it's actually injected by an archetype parameter. And I don't think we're ready at this point to turn on JEP-229 by default in the archetypes.
C
E
A
E
E
E
Yeah, I mean, we could include all of the stuff for JEP-229 in the archetype, commented out — that's one option. Another option is to have another flag in the archetype: currently there's a boolean flag to include the jenkinsci-GitHub-specific stuff; we could have another flag to include CD.
C
Yeah, so I think that would be useful, just to get a better idea of how that would look to maintainers — because from a technical point of view, I think we're there. There are already two plugins that use it in slightly different ways, at least, perhaps more: Log CLI and the JJWT API plugin, that I know of.
E
C
And File Parameters. And now the question is: how do we make this accessible to maintainers who are not also, you know, the authors of this thing?
E
A
E
C
All right, so what you're saying is: in my, well, Dark Theme — Solarized Theme.
H
I was just going to say: is that a valid version number as far as Maven and everything else is concerned? Shouldn't that be a dash, as opposed to a dot, after the 793?
H
Maven's happy with anything, because when it can't parse it, it treats it as a string — you have a version number that's a string; everything's a string. So I don't mean happy as in "is it going to blow up" — I know it's not going to blow up, and you can consume it and everything. It's more: could you compare that to something else, or is it just going to be a string comparison? And I think it's a string comparison, compared to something.
E
E
E
F
C
Yeah, but that looks like a problem that we can address, or at least figure out how to — and then we'll need to document it, because maintainers should know the options they have. I don't want anyone to say "I don't like having version 700-whatever, I'm not going to use this"; we should advertise: well, if you don't like this version format, here's how you add a one in front, or a two.
C
I think a reasonable next step for us would be to promote JEP-229 in the regular jenkins.io developer documentation, where it covers releasing or publishing plugins, and have at least a minimal introduction there, plus references.
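To make the shape of JEP-229 concrete for that documentation: a plugin opts in by checking a CD workflow into its repository, which releases from CI instead of from a maintainer's machine. The following is only a schematic sketch of such a workflow — the trigger choice and deploy step are assumptions here; the actual steps come from the JEP-229 reference implementation, which the documentation should link to:

```yaml
# .github/workflows/cd.yml — schematic sketch, not the official workflow
name: cd
on:
  workflow_dispatch: {}   # manual release trigger
  check_run:              # or automatic, once required checks complete
    types: [completed]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # a real workflow first decides whether this commit warrants a release
      # (e.g. interesting changes since the last one), then deploys with the
      # incrementals-style version scheme (like 793.v<short-git-hash>)
      - run: mvn -ntp -Dset.changelist deploy
```

This also illustrates the version-number discussion above: the `793.v…` segment after the dot is what makes JEP-229 versions look unusual to maintainers used to semantic versions.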
E
A
C
F
B
A
C
C
And I think, for those of us who are maintainers, it would be useful if we just started using this, perhaps on our less popular plugins. For example, I completely forgot that I'm the maintainer of the Solarized Theme plugin, and I think that's a plugin where I can just use it — rather than, you know, Matrix Auth; that might be a bit much as the first test-run plugin for it — and then, you know, go from there.
C
The documentation needs to cover both the version number formatting and how to do manual releases via Actions. I think those two might help in terms of getting wider adoption.
B
B
A
C
A
Yeah, so it's definitely not blocking the rollout, but eventually we will likely want to accept it.
A
C
And, I mean, especially with GitHub Actions, the GitHub Marketplace, and GitHub security scans, it seems like maintainers can just enable whatever checks they like. What we need is basically a blessed, or known-good, set of scans that aren't completely useless for Jenkins plugins — given that we do weird things with the POM, and the runtime matters more than the declared dependencies, and such.
C
I think we should definitely have a recommended set of scans that can perhaps make it into the archetype, but this is also a very low-barrier way for anyone in the project to contribute: just, you know, enable this on your plugin and see what happens.
B
B
C
So I guess we all meet again at four — so in 45 minutes or something. Isn't that the closing meeting?