Description
Tekton Client Plugin has been a great start for Jenkins and Tekton integration, but Tekton is just a Pipeline step at the moment. What if we made the Jenkins Pipeline orchestrator part (Multibranch, Jenkins UI/reporting, etc.) universal and allowed plugging in multiple Pipeline engines, including Jenkins Pipeline with and without the sandbox, Tekton, Jenkinsfile Runner, and more?
Meeting notes: https://docs.google.com/document/d/1QLWXNG23ui-LvQXth3UREzOvLYgTMSZcv-El0H141a4/edit#heading=h.rbvy02o8js5h
A: Thanks, everyone, for coming to this session. We will be talking about making the pipeline engine a first-class citizen in Jenkins, basically on par with freestyle jobs, Pipeline, and other components.
A: Okay, so what do we have? Something like a year ago, a bit more, Vibhav from Red Hat started integrating Tekton with Jenkins. There is the Tekton Client Plugin; it was also part of the work on OpenShift Pipelines. What the Tekton Client Plugin does well: it basically provides an interface for communicating with Tekton pipelines, so you can create resources, trigger builds, and retrieve some results, but basically that's it. And that's how it's implemented.
A: What are the obstacles here? Firstly, you are expected to have all the context on the Jenkins side, and you do not get all the benefits of Tekton's pipeline engine, because there is no integrated user experience. You can trigger Tekton and collect results, but, for example, if Tekton produces build artifacts like JUnit reports, you couldn't stream them back to Jenkins.
A: Jenkins itself consists of multiple components. We have the web UI. We also have what I call the orchestrator part: for example, there is a webhook receiver which triggers builds and propagates results back to the SCM. All of that is handled by Jenkins. All of that can also be handled by Prow, for example, in the Kubernetes ecosystem, but yeah. In this area Jenkins is quite strong, especially thanks to all its reporting capabilities. But there is also the pipeline engine.
A: So it means that it would be much simpler on the Jenkins side, but we could keep all the advantages of Jenkins, because currently Tekton doesn't have its own browsing UI. There are some implementations for Tekton, but I believe they are not part of the main project, right?
B: We do have a dashboard, it's part of Tekton, but it's kind of like the Kubernetes dashboard: it allows you to see resources. The Jenkins UI is more targeted towards CI/CD scenarios, where you see the pipelines and can also see the history of executions, while the execution history the dashboard presents relies on what is in the cluster, basically.
B: It will evolve over time, but the fact is that a lot of people use and enjoy the Jenkins UI, and I think for people used to working with the Jenkins UI it would be beneficial to be able to continue using that UI while relying on some of the benefits of having Tekton running underneath.
A: All right, and there are multiple ways to make it compatible; we already have many bits. For example, we have this Tekton client, which is basically based on a Java library that provides the API interactions. It's still missing a few critical bits, for example visualization of execution.
A: So one of our issues: Tekton has a huge ecosystem of various tasks that are reusable on their own, but of course it doesn't have report publishing. What we could do is take Jenkinsfile Runner and run it as a publishing step which streams the data back to Jenkins. So basically how it would look: we have Jenkins as a big orchestrator.
A: It can execute a Pipeline, or it can execute, actually not execute but trigger, a Tekton pipeline, because Tekton lives on its own in the Kubernetes cluster. But what could happen next? Some of the Tekton steps could trigger a Jenkinsfile Runner, for example for publishing.
A: And it could stream the data back to Jenkins. What that means is that we would keep all this orchestrator part: for example, JFR could provide the reporting capability, and the Tekton execution could stream all the build results and metadata back to Jenkins. We could even rebuild the pipeline graph, so that we could display the pipeline execution graph, even if it gets produced by Tekton, with all parallel steps, et cetera; it's already supported by the API.
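The graph-rebuilding idea above can be sketched in a few lines. This is a hedged illustration, not plugin code: the event shape ('id', 'parent', 'name', 'status') is an invented assumption, not an actual Tekton or Jenkins payload.

```python
from collections import defaultdict

def build_graph(events):
    """Rebuild a pipeline execution graph from streamed step events.

    Each event is a dict with hypothetical fields: 'id', 'parent'
    (None for the root), 'name', and 'status'. Returns a mapping from
    node id to its list of child ids, so parallel branches show up as
    multiple children of the same parent.
    """
    children = defaultdict(list)
    for ev in events:
        if ev["parent"] is not None:
            children[ev["parent"]].append(ev["id"])
    return dict(children)

events = [
    {"id": "root", "parent": None, "name": "pipeline", "status": "running"},
    {"id": "build", "parent": "root", "name": "Build", "status": "success"},
    # two parallel test branches hanging off the same parent node
    {"id": "test-linux", "parent": "build", "name": "Test/linux", "status": "success"},
    {"id": "test-win", "parent": "build", "name": "Test/windows", "status": "success"},
]
graph = build_graph(events)
```

A UI could then render `graph` as a stage view, with the two test nodes drawn side by side.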
B: It would be interesting to maybe highlight some use cases for this, like what would be the benefit, what would people be looking for when using this kind of combination, right? It's not entirely clear to me whether, in this setup, people would be writing pipelines using the Jenkins Pipeline format or the Tekton one.
A: I would not expect the Tekton format in this case, at least in the beginning. There could be some interoperability and conversion in the future, but the Tekton format is the most straightforward, because you have the pipeline definition and just trigger it. So what are the benefits? Firstly, Jenkins itself.
A: If you take the Jenkins controller on its own, it's not cloud native and it's not scalable enough when it comes to big pipelines. You can create multiple controllers, you can create clusters of Jenkins with integration via a message bus, but it still has a lot of limitations. But if we offloaded execution to Tekton, firstly we would get the benefits of Tekton with all its scalability features.
A: There is the opportunity to just spawn as many Tekton pipelines as needed, on different Kubernetes clusters if needed, and Jenkins itself could stay, to some extent, aside and just provide the publishing front end. And again, Jenkins itself can eventually be converted in this way, so that, let's say, there are multiple web UI providers connected in the cluster somehow; we have prototypes for that, for example Project Nirvana, which I was working on. For that, the critical part is the pipeline engine, because while the pipeline engine is integrated with the controller part, it's really difficult to do anything.
B: Right, yeah. As you mentioned, high availability, scalability and support. I think one of the main pain points for Jenkins in the enterprise, at least that I've heard, is high availability: if you want to reconfigure your Jenkins, you need to reload it and you have a few minutes of downtime. So I think in this approach you would not have the downtime for running the pipelines, but what happens if you restart Jenkins? Would that still be a single point of failure? I guess it depends on how the pipelines are triggered, and that's what I wanted to envision.
A: Actually, the classic architecture is just one of the configurations for Jenkins, and what blocks us right now is the fact that we have a lot of context in runtime. Some data can be uploaded to pluggable storage; there are implementations. There are also event systems, for example for the Blue Ocean web UI and for various logging systems, so we can stream events across multiple instances.
A: There are, of course, many edge cases where it wouldn't be possible, because Jenkins is also a webhook receiver, which is a single point of failure whatever you do. But once pipeline execution happens separately, we can actually make a lot of changes to the controller itself: basically converting the Jenkins controller into a control plane, and also having a kind of web UI which again can be scalable, and it can basically be a fully client-side application as well if needed, even Blue Ocean.
A: So for me, not having the pipeline engine on the controller would be one of the enablers for that. Some of the folks on this call actually did quite a lot of experiments in the past, so it's technically possible.
A: Yes, it would require a lot of changes in Jenkins, but if you experiment with Tekton, it would be one of the possibilities. It can be done with Jenkins Pipeline as well, without Tekton: for example, Jenkinsfile Runner is basically the same, offloading the pipeline engine to a separate instance, like a container, and streaming results back. But Tekton has its own advantages, because we hope that Tekton will become a commodity at some point. So why don't we integrate Jenkins with Tekton as well?
A: Yeah, so if we talk about real high availability, of course we will need to externalize the context, and that has been a common understanding in the Jenkins community for more than five years. If you want to change the situation, if you want to get rid of the controller as a single instance, we will need to significantly rework things. Currently Pipeline is one of the impediments, because you either have multiple controllers in some way or you create a completely different architecture.
A: As in other projects. So yeah, there are many Tekton users by now. For me it would be natural if we at least tried using this engine and provided the possibility for end users to choose whether they use Jenkins Pipeline or Tekton, because the current state of Jenkins Pipeline is, yeah, a kind of liability for us.
A: It requires a lot of maintenance, and if we talk about, let's say, a five-to-ten-year horizon, I would rather question whether we should keep maintaining Jenkins Pipeline as the main engine for such a long term.
A: I think you could keep using Jenkins Pipeline, but what engine is under the hood is a separate topic, because with Pipeline, again, it's first of all about the user experience, and yes, of course, the Groovy steps, let's say, are fine, but many of these steps can be mapped to Tekton execution. There was already a prototype presented at DevOps World that basically offloads pipeline execution to Jenkinsfile Runner and then returns the results back, and Jenkinsfile Runner can run on Tekton. For example, SAP created Steward CI, and it's basically Tekton plus Jenkinsfile Runner plus visualization capabilities.
A: In five to ten years you will still have Jenkins Pipeline, but yeah. For me it's not about saying that Tekton will be the only solution in Jenkins; for me it's rather about making it an alternative for those who want to get just Tekton execution but still get the benefits of Jenkins, like publishing, where Jenkins is really strong.
A: So for that, yeah, I really wanted to talk with those who came here, to see what would be missing and what the use cases would be on your side, because for us one of the questions is not just Jenkins and Tekton but other ecosystem parts which could be reused: for example, log streaming. If the Tekton community plans to adopt a standard log streaming solution, for example OpenTelemetry, who knows, in the future, then it would also be an opportunity to hook into that instead of inventing something of our own.
A: So for that we have the Remoting protocol, which is basically Jenkins' own implementation. Currently it can operate via WebSocket or over TCP.
A: I think, yeah, but Remoting is used to stream log information, etc., back to the controller.
A: Remoting itself is quite a stable solution, but, for example, when we talk about clustering Jenkins, the option we discussed is actually using a message bus, and during Google Summer of Code in 2017 we created an implementation for Apache Kafka. What that means is that there can be multiple controllers and multiple agents; the agents stream the data to the controllers, and the controllers can consume it just by subscribing to events. It's also one step towards high availability, because Remoting itself is point-to-point communication.
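The distinction being drawn here, a point-to-point channel versus a bus that any number of controllers can subscribe to, can be shown with a toy in-memory stand-in. This is an illustration only, not the actual Kafka-based plugin: class and topic names are invented.

```python
class Bus:
    """Toy stand-in for a message bus such as Apache Kafka: unlike
    Remoting's point-to-point channel, every subscriber of a topic sees
    every event, so several controllers can render the same build."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, event):
        # broadcast: every registered handler receives the event
        for handler in self.subscribers.get(topic, []):
            handler(event)

bus = Bus()
controller_a, controller_b = [], []
bus.subscribe("agent-logs", controller_a.append)
bus.subscribe("agent-logs", controller_b.append)
bus.publish("agent-logs", "step 1 finished")  # an agent streams an event
```

With Remoting, only one controller would have seen that event; here both controller lists end up with a copy.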
A: So in Remoting there are no broadcast capabilities of any kind, but when we switched to Apache Kafka, in that particular plugin, we were actually able to avoid that. That's why I mentioned OpenTelemetry: if Tekton streams its logs to OpenTelemetry, then obviously they can be used by any visualization tool in Kubernetes, but Jenkins, for its part, could just subscribe to this information and do the visualization on its side.
B: I'm just wondering if that's something that needs to be Tekton-specific, or if there might be something already out there for this. Because in the end, in terms of pipeline execution logs, it's all running in pods, so basically it's the logs subresource in Kubernetes. You can already get those from the API, and that's what we do in the Tekton CLI.
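For reference, the "logs subresource" mentioned here is just an HTTP path on the Kubernetes API server. A minimal helper to build that path is sketched below; the path shape follows the core/v1 API, while the helper itself is illustrative.

```python
def pod_log_path(namespace, pod, container=None, follow=False):
    """Build the Kubernetes API path for the pod 'log' subresource,
    the same endpoint that tkn and the dashboard ultimately read from.
    follow=True corresponds to the watch/streaming behavior."""
    path = f"/api/v1/namespaces/{namespace}/pods/{pod}/log"
    params = []
    if container:
        params.append(f"container={container}")
    if follow:
        params.append("follow=true")
    return path + ("?" + "&".join(params) if params else "")
```

Any client with cluster credentials can GET this path, which is why the logs are not inherently tied to any one CI tool.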
B: You can stream the logs by using tkn, which is the Tekton CLI; it basically connects to the API using the watch option, and then you get the logs there. I think that's the same thing the dashboard does, and it then uses a WebSocket to display them. So I wonder if there is anything that would already generate a Kafka-events type of stream from pod logs, Kubernetes logs.
B: Something that basically would serve the purpose, if I understand correctly.
A: For us, Apache Kafka was just an experimental implementation. I keep mentioning OpenTelemetry just because we started a lot of activities consolidating around it, but that part is at the very beginning, so let's see how it goes. For us, when we talk about pipeline information, we actually need some additional metadata, because we need to build a pipeline graph so that we can visualize steps, and the question is about the metadata that would be submitted, pretty much like the CDEvents project does for various events.
A: We would need approximately the same for the pipeline execution graph, so that, for example, when your pipeline executes a script, there is information submitted for it, so that we can trace this data and at the same time visualize it as a pipeline execution graph.
B: Yes, about that: we do have the events, and we could extend them if there is an event missing, but we do emit events for pipelines starting and stopping, and the same for tasks. We don't have events at step level today, but we could add them if needed.
B: So if you want to know where you are in the execution within the pipeline, and you want to visualize that, that gives you the data. But what about the initial static view of the pipeline? So if you have...
B: Yeah, as I said, I think if you know about the pipeline structure on the Jenkins side, you don't need to go and retrieve it, and you can get the runtime information either from the events, in CloudEvents format, or you could have, like you said in the beginning, this Jenkinsfile Runner type of execution in Tekton steps that reports back into Jenkins directly, I think.
A: For all complex data, I would expect it to be tricky, because, well, it's very tempting to have a kind of unified reporting for common operations, for example publishing unit test results. I guess for Tekton there would be a use case without Jenkins, too: to be able to visualize it somehow and maybe transfer this bit in a standard form. But that goes way beyond just Jenkins or just Tekton.
A: It's a super complex area, but for Jenkins specifically we could take Jenkinsfile Runner, because, for example, in Jenkins there is the JUnit plugin, and the JUnit plugin doesn't run the unit tests; it just processes the report and publishes the results. So we could basically have, firstly, one task which does all the build and tests, with Maven or whatever, and then just another task which connects to the same workspace and publishes the report, and basically that's it; it wouldn't be a huge performance overhead.
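The division of labor described above, one task runs the build and tests while a publishing-only step just reads the report from the shared workspace, hinges on parsing the JUnit XML the first task left behind. A minimal sketch (illustration only; the real JUnit plugin does far more):

```python
import xml.etree.ElementTree as ET

def summarize_junit(xml_text):
    """What a publishing-only step needs from a JUnit report: it does
    not run any tests, it just reads the XML the build task wrote into
    the shared workspace and turns it into publishable numbers."""
    root = ET.fromstring(xml_text)
    # a report may be a single <testsuite> or a <testsuites> wrapper
    suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
    total = failed = 0
    for s in suites:
        total += int(s.get("tests", 0))
        failed += int(s.get("failures", 0)) + int(s.get("errors", 0))
    return {"tests": total, "failed": failed}

report = '<testsuite tests="3" failures="1" errors="0" name="demo"/>'
summary = summarize_junit(report)
```

The summary dict is the kind of payload the publishing task would then stream back to the Jenkins controller.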
B: Yeah, you need to find, I guess, some kind of convention in terms of what the Tekton run emits, to let the Jenkins part know where to pick up, like, the location of the JUnit report, for instance, right?
B: If, on Tekton's side, you write this file somewhere that Jenkins can pick it up from, into a workspace, then you need the Jenkinsfile Runner to grab it from there and send it back to the main Jenkins, which will do the publishing.
A: The question will rather be about credentials and access tokens, so that this Tekton task can report back to Jenkins.
A: Basically, the Jenkins side would stay the same, at least in the beginning, so there is no need for various magic mechanisms to determine how and where to publish. But then we still have the question of how we transparently pass the information about connecting back to Jenkins, because you could, for example, just create an API token for the build and put it in a Kubernetes secret somewhere, but that doesn't seem to be the best solution.
B
No,
I
mean
we,
we
you
rely
on
secrets
for
credentials
either
in
secrets
or
you
I
mean
you
can
you
can
pass.
We
have
a
concept
of
workspaces,
it's.
B: What is the concern with having the Jenkins credentials in a secret? Are those kind of static, or...?
A: So in Jenkins, many use cases assume that there is a kind of RBAC on the Jenkins side; it can be integrated with Kubernetes or not. But when we discuss such a system, we assume that resources are shared between multiple users.
A: So, for example, you have a Kubernetes cluster, but still there are builds that belong to different users, and these users are supposed to be isolated. If you say that we push everything into Kubernetes secrets, it means that basically every pipeline execution would be able to use the API with the same credential scopes as Jenkins itself, so it could be used by one user to inject data, etc., into another build.
A: So I'm not sure. I guess for Tekton it's not a big concern by default, because if you use Tekton, the isolation is on the cluster level, but for Jenkins, historically, that hasn't been the case. Maybe it's something we need to rethink on the Jenkins side, but right now, if I asked a typical Jenkins user to basically give up on these security mechanisms, it might be a kind of uphill battle for us.
A: So basically, do the Tekton maintainers receive requests to have isolation within Tekton itself? I mean, having multiple user contexts within the same Kubernetes cluster running Tekton.
B
Yeah
I
mean
you
can
have
you
can
have
name
spaces,
you
can
have
service
accounts,
so
you
can
control
which
service
account
is
used
to
execute.
Your
builds
basically.
A: So within one execution, every pipeline task could be executed in a separate namespace? I don't have to run Tekton in a single namespace?
B: Yeah, the controller itself runs in one namespace, but the execution happens in whatever namespace you decide. Usually you will have one or more other namespaces where the execution of your runs happens, and you can control which service accounts are used, so you can have different service accounts even within the same namespace.
A: Then even dedicating a namespace might be debatable on the Jenkins side, because we have build authentication, and theoretically, even within the same project, every build can be executed by different users with different permissions.
A: Okay, so what I wanted to do with this meeting is to actually start doing some prototyping, that is, to write down a JEP draft, a Jenkins Enhancement Proposal, which would outline this system, as you can guess by now.
A
I
definitely
didn't
do
much
on
this
front,
but
I
think
that
if
such
configuration
some
looks
same
on
the
top
level,
I
could
try
writing
down,
and
yet
we
still
have
a
lot
of
gaps
in
terms
of
tectonic
knowledge,
so
if
there
is
a
way
to
get
some
validation
from
the
kickstand
community,
once
initial
draft
is
ready,
it
would
be
great.
B: At this point, the main things I think I've got on the Tekton side are the streaming of the logs, how to deal with the credentials for reporting back to Jenkins, and possibly the separation, the build separation.
A: So yeah, the Tekton CLI itself seems quite handy; we wouldn't need to do much to just get it up and running. There is a question of how best to integrate templates and have reusable components, because there is a lot of overlap between Tekton templating and Jenkins pipeline libraries.
A: Yes and no. Jenkins Pipeline executes in a sandbox, so it has access to the Jenkins system, but it doesn't have full access to internal APIs.
A: Okay, so we have invested quite a lot in this engine, because it's both an advantage and a disadvantage of Jenkins that execution happens on the controller. It saves a lot of time, but at the same time it's a massive security concern if you consider that the Jenkins controller is a shared service; the whole Jenkins controller is a shared service by default.
A: So, for example, when I was talking about the JUnit plugin, or, say, performance analysis and static analysis: all these plugins could be detached, and publishing could be done in a pipeline step or in a Tekton task, because you can just package a Jenkinsfile Runner instance with the required plugins and just, well, "just" in quotes, stream the results back to Jenkins, and it should be okay.
A: Yeah, see Steward CI; well, basically it's Jenkinsfile Runner in Tekton.
B: Yeah, we do have an internal Jenkins offering for development teams, right, and we have a public cloud offering, which is IBM DevOps, which uses Tekton.
B
So
we
have
a
tecton
cluster,
which
is
multi-tenant
with
all
the
nice
features
and
compliance
built
in
so
it
would
make
sense
for
us
to,
you
know,
have
a
because
we
have
a
techno
infrastructure
which
is
quite
proven
and
stable
and
meets
all
the
requirements
to
use
that
as
a
runtime
environment
for
the
services
on
top,
which
we
do
for
other
things.
But
yeah.
B: I know Red Hat is interested as well, and this kind of work, I mean, for IBM specifically, I know we have a lot of customers that are used to Jenkins in terms of the UI, but also the Jenkins syntax for writing pipelines. So what would be interesting for sure would be to have a setup where you can still write your pipelines in the Jenkins format but execute them on Tekton, and it doesn't have to be necessarily...
B
It
can
be
transparent
to
end
users.
At
the
end
of
the
day
that
this
I
mean
you
could
think
to
benefit
from
detecting
catalog
and
gradually,
you
know
benefit
more
from,
like
the.
B
Components
there
like
like
today,
you
could
call
out
within
a
jenkins
pipeline
to
detect
on
tasks,
for
instance,
but
still
benefit
I
mean
from
if
you
you
could
consider
the
jenkins
pipeline
as
a
kind
of
dsl
on
top
of
the
tecton
pipeline,
where
the
syntax
is
simpler
than
the
tactile
one
developing
a
tectum
task
and
techno
pipeline
is,
can
be
a
lot
of
yammu
so
for
people
that
wants
to
get
started,
it
can
be
a
bit
daunting,
while
the
jenkins
index
might
be
easier
and
more
yeah
easier
to
deal
with,
and
that's
the
feedback.
A: That's a good point, for sure. But yeah, as for implementation details, one big problem is how it's executed, because if you say that we want to execute an entire pipeline in a single container, of course we can offload it to Tekton, but it doesn't use many of the advantages of Tekton.
A
So
if
you
want
something
real,
we
would
need
to
somehow
convert
this
pipeline
definition
to
multiple
tickton
tasks
and
while
it's
possible
it
might
additional
issues.
So
declarative
syntax
would
be
easier
for
that,
but
yeah.
I
guess
many
people
on
this
call,
don't
really
use
declarative
because
declarative
results,
a
kind
of
simplification
for
junk's
pipeline,
it's
basically
syntax
sugar,
which
creates
all
these
stages
tied
to
a
single
engine
and
drags
more
more
flexible.
A
So
when
you
develop
by
client
libraries,
essentially
you
usually
opt
out
to
script
pipeline
and
okay,
yeah
and
the
yeah.
Then
it
becomes
complicated
so
actually
for
declaration
of
pipeline.
There
was
already
converted
and
jenkins
said
so:
the
declaration
of
five
interject
sex
pipelines
and
while
we
put
them
well
nzx
pipelines,
we
are
basically
on
the
top
of
techcon
2..
A
So
but
for
me
the
main
question
whether
it's
useful
to
anyone,
because
the
most
of
real
pipelines,
I
see
there
in
script
pipelines
so,
okay,
it
would
be
a
solution
just
for
a
narrow
range
of
use
cases
and
well
for
scripted
yeah
scripted.
It's.
B: ...a project that was driven mostly by IBM. Basically, Kubeflow is an engine for machine-learning, data type of pipelines, and it provides a Python DSL, and that Python DSL can compile into an Argo pipeline, or now you can compile it into Tekton pipelines as well. So there is...
B: ...your Python, via the dashboard, the Tekton dashboard, if you want to, but then, yeah, you can basically continue to write pipelines in the same format as you would Kubeflow pipelines, and then you can also visualize them in the Kubeflow UI, because they are written in the Kubeflow DSL. So it's not all or nothing, and I was wondering whether a similar approach could work for Jenkins Pipeline as well.
A: There would be a lot of restrictions, but technically it could be done. For us, the main problem would be mapping to tasks, because we definitely want to have multiple Tekton tasks, so that we have execution in different containers while passing the workspace along, and not all Jenkins pipelines are implemented in a way that allows this, though it's possible.
A
So
but
yes
splitting,
I
think
it
could
be
done
for
many
cases.
The
problem
with
that
the
compiling
jenkins
pipeline
isn't
always
possible
on
its
own,
because
it
is
a
dynamic
logic,
sometimes
it
loads
background,
libraries
etc
on
the
flight.
A: And what that means: here you have the execution, so if you want to have a stage in Jenkins, it's something like that, and that's it. Basically, you have a stage, and inside it, if you need a node, you say node, for example node('linux'), and you do something there; that's more or less common. At the same time, you don't have to do that: you can avoid using stages at all.
A
You
can
somehow
switch
between
note
implicitly
so,
for
example,
there
might
be
pipeline
library
steps
something
like
on
linux,
node
and
then
this
code
is
some
way
deep
inside,
because
in
jenkins
we
have
a
lot
of
extensions,
so
just
by
groovy
closures.
So
it
would
be
like
that
and
it
can
be
even
more
chord
because,
for
example,
in
jenkins
from
one
note
you
can
call
another
note
and
yeah
you
would
need
to
unravel
many
of
these
edge
cases.
If
you
wanted
to
just
process
the
execution
correctly
again
how
popular
it
is
yeah.
A
I
don't
think
that
it's
as
popular
as
we
would
it
would
need
to
be
to
force
to
be
concerned,
because
many
pipelines
are
really
simple
in
jenkins.
A
So,
for
example,
in
jenkins
itself,
we
have
pipeline
library,
and
I
believe
that
all
pipelines
we
have
here
would
be
easily
convertible
to
tecton,
because
here,
for
example,
there
is
a
lot
of
scripting
there.
But
ultimately
we
end
up
with
parallel
tasks
which
in
execute
on
node
and
pass
through
just
several
common
stages
like
doing
command,
doing
more
maven
and
then
just
publishing
the
results
and
archiving.
A
So
these
constructs
could
be
easily
converted
to
tickton.
The
problem
is
how
to
do
it
automatically,
because
here
you
can
see
that
the
pattern
is
probably
inverted
compared
to
tecton,
because
we
just
allocated
the
node
and
then
within
the
node.
We
do
multiple
steps
electron.
We
rather
expect
that
every
step
will
be
performed
in
a
separate
container,
so,
firstly,
we
would
execute
maven.
B: So it depends on what you need. Often the reason for breaking into tasks is that the task is a reusable block: you want to do a specific job, and you already have a task that defines how to...
A: So in Jenkins it's more common to do this publishing sequentially, because the syntax just suggests that, whereas in Tekton it could just be parallel by default.
A: Which is definitely an advantage. But yeah, if we create a compiler to Tekton pipelines, many use cases could be covered, and, of course, if you use the Kubernetes plugin or the Docker Pipeline plugin and others, they are more naturally convertible to Tekton, because they have a clearer separation between stages. In the Kubernetes plugin you can also create just a pod with multiple containers for the different stages.
A: If Carlos is still online, he can comment on that, but generally the Kubernetes plugin allows doing that: you just provision a pod and switch from container to container to get the tools you need. It's very close to what we would do in Tekton.
A: So you see more value in this kind of execution, where you have the Jenkinsfile definition but it is executed on Tekton natively, rather than in having the Tekton format in Jenkins?
B: Well, it could be both of them, but there are a lot of existing Jenkins pipelines, and if you want to, you know, run a Jenkins service, you don't want to go tell people that first they have to rewrite all their pipelines in...
A: So far the main problem is finding bandwidth, because there are not so many contributors right now who work in this domain. Vibhav did awesome work, as did James Strachan and Garrett.
A
All
of
them
contributed
to
this
one
client
planning
yeah,
it's
just
a
first
step,
but
yeah
going
beyond
it
would
definitely
require
a
lot
of
coordinated
effort.
A
So
I'm
not
sure
it's
probably
the
case
when
we
call
in
company
contributors
whether
we
could
at
least
crowdsource
some
time
for
one
of
these
projects,
so
that
we
need
to
think
what
projects
would
actually
get
the
support
because,
yes,.
A: Maybe I should talk to Andrew Bayer to see whether he would be interested, at least, to see whether we could apply what we did for Jenkins X to such a conversion, and yeah, if it's doable, why not? So, there were questions about the recording: I will upload it to the Jenkins YouTube channel by the end of the weekend, and there will be some notes.
A: But I think that, yeah, the main question, you're right, is just having some design so that we can see the use cases, because I believe that IBM and others have some.
B: Okay, yeah, but even for the Tekton format in Jenkins, I think the prerequisite would be to do this.
A
Than
it
is
done
today,
right
yeah,
it's
a
bit
more
trickier
than
that,
because
pipeline
engine
is
mostly
implemented
in
genk's
python
plug-in
set,
so
jinx
pipeline
has,
for
example,
its
own
log
storage
implementation
and
when
you're
working
on
a
plugable,
lock
storage,
we
finished
on
the
the
pipeline
part.
A: So it's usually plugins that do that, but what we would need is a kind of API in the build, where you call Jenkins and say: for this particular build, which is pending, I want to upload this report, please attach it to the build data. It's doable, but we don't have these APIs at the moment, for many cases.
A: Yeah, so it's available everywhere. At the same time, the architecture of Jenkins is a bit complicated, because historically many such interfaces are not even part of the Jenkins core.
A: So, for example, unit test reporting is a part of the JUnit plugin due to historical reasons, and there are some exceptions: for example, the Robot Framework plugin doesn't really integrate with JUnit; instead it implements the remaining bits on its own. But apart from these atrocities, it should be more or less straightforward for Jenkins.
A: Right now I would just like to see the build log, test results, static analysis, maybe coverage and performance tests; that would already be a really great start in terms of utilizing Jenkins' web UI and other analytics capabilities.
A: For example, Jenkins has an artifact traceability capability, but whether it's really useful in 2021, or whether that should just go to OpenTelemetry and bypass the Jenkins interface completely, yeah, I'm not too concerned about that right now.
B
A
Yeah, we have some time left. So would you like to discuss more on this topic, or maybe a different area?
A
B
A
B
And we need to, there are a few things with Tekton, like: okay, temporary credentials, how to hand over the locations for reporting back to Jenkins, streaming of logs, and the pipeline graph structure.
B
So then again, I mean, if we were using the Tekton format in Jenkins, I think the pipeline structure is then provided by the definition, right? So we only need to provide the runtime. And if we use the Jenkins format, translated, again we would have a static definition already in the pipeline. So it's...
B
B
C
A
We can definitely drive a formal definition.
A
So, if I'm to drive it, it's definitely not November; December, maybe a bit later. Just, yeah, I'm changing jobs right now, so my bandwidth will be limited. Yeah, makes sense, but at least to start exploring that and document the concept: without a design it's hard to communicate.
B
A
Now, well, in Jenkins we largely moved to GitHub issues. So on this particular topic, the question is where to put it, because we have so many components. So, organizational advice: I would rather keep it under the Cloud Native SIG, or maybe under the umbrella of the CDF Interoperability SIG, whatever is more convenient. And yep, for a task tracker, yeah, you could just create a new repository for that.
A
B
Oh, I mean, if it's okay for the interop folks, yeah, that works.
A
So yeah, this part I can take. Yeah, for the Jenkinsfile-to-Tekton converter, we can definitely create a task for that, but yeah.
A
Now, yeah, I'll also reach out to Vibhav and the Red Hat team to see whether they would be interested. Maybe also, say, VirtusLab, because they're currently building a Jenkins Kubernetes operator and obviously using Tekton; they could have some advantages.
A
Yeah, maybe the Jenkins X team would be interested as well, but yeah, they need to regroup. If they have some interest... because, look, they contributed a lot to the client plugin, so they could be... well, they need the integration as well. So maybe they would also be interested to see the way it evolves.
B
A
Yeah, I wanted to do it for today, but yeah, overcommitting is definitely part of my job definition. So yeah, one action item will definitely be to set up proposals.
A
What about others? We basically spent most of the time talking with Andrea, and sorry if it was difficult to follow, because yeah, I definitely didn't do my homework before this meeting. Just in case, any comments or feedback will be appreciated.
A
C
C
I personally don't run a lot on Jenkins right now, but it really just depends, because I do a lot of consulting. So I go from client to client, and you know, you never know what they're going to be running. So it's been a long time since I've looked at Kubernetes offerings, but yeah, I think I'd have to go and look at the benefits, and then, you know, where does that fit for you?
C
You know, as you're evaluating your options, just having the option of other pipeline engines is definitely something that is interesting and probably good for the community. But, like you said, there will definitely be edge cases. So you'll probably pick up some benefits from using them as an engine, and then, you know, it's like: well, what do I lose when I decide to translate my existing Jenkinsfile into one that can...
A
Run
right
so
yeah
I
was
looking
into
that
and
for
me
the
first
stable
stick
feature
was
reporting
because
from
what
I
see
not
so
many
people
already
lose
it
yeah,
though
even
us
jenkins
may
integrate
with
external
services
like
test
et
cetera,
and
it
can
be
done
with
tipton
as
well,
but
jenkins
user
experience
for
reporting
has
been
actually
quite
good.
C
Yeah-
and
I
think
you
guys
mentioned
something
about
tech,
talent
is
usually
something
done
in
concurrently
versus
sequential
and
that's
probably
definitely
some
place
that
deep
ins
could
use
some
help
they
have.
You
know
we
have
parallel
stages
to
run
things
concurrently,
but
kind
of
the
ability
to
like
display.
That
is
very
hard.
This
means
like
with
current
pipelines
right
a
lot
of
the
output,
will
be
mashed
together.
So
it's
very
hard
to
follow.
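For context, the parallel-stage construct mentioned here looks roughly like this in a declarative Jenkinsfile; the stage names and shell commands are placeholders:

```groovy
// Declarative Jenkinsfile sketch with parallel stages.
// Stage names and shell commands are placeholders.
pipeline {
    agent any
    stages {
        stage('Tests') {
            parallel {
                stage('Unit') {
                    steps { sh './run-unit-tests.sh' }
                }
                stage('Integration') {
                    steps { sh './run-integration-tests.sh' }
                }
            }
        }
    }
}
```

Both branches run concurrently, but their console output lands in one interleaved log, which is the readability problem being raised.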
C
A
So this is something we can look into. Yeah, currently, pipeline visualization in Jenkins is also a hard topic. So Tim Jacomb is working on a new plugin for browsing pipelines, but yeah, currently I'm not so sure about the future of Blue Ocean or the Pipeline Stage View, so maybe it's a topic that needs to be brought up to the authors.
A
Well, because we agree with you that Jenkins was not created for massive parallel visualization.
C
Improve
it
and
then
just
pipeline
thoughts
in
general,
regardless
of
the
executor
to
me,
I
feel
like
there
was
for
a
year
or
two
or
three.
There
was
that
kind
of
push
towards
yml
like
when
travis
got
popular
and
then
you
know
get
lab
and
then
github
actions,
but
I
think
we're
going
to
see
that
reverse
I
mean
you're,
seeing
it
in
javascript
and
python,
where
typing
and
things
like
that
are
becoming
extremely
popular
very
quickly.
C
I
think
you're
going
to
see
more
developers
wanting
to
build
their
ci
cd
pipelines
in
a
in
some
kind
of
feature-rich
programming
language
and
not
not
in
batch
or
some
type
of
job
dsl
type
type
language.
Maybe
like
descriptive
pipeline
you're
going
to
see
people
want
to
push
towards,
I
think
more,
like
full
featured
programming,
languages.
A
B
Yeah, so I think it's interesting to have a lot of power and information in your pipeline and metadata structure and support, but then writing the pipeline becomes really like a development job. So, like, writing a Tekton task, but well written, with all the input parameters and the results and things done properly, is a development job, yeah. So that's why the catalog and the usability story...
C
B
The construct that you want. But you still want the other end of the spectrum, where you want people, maybe even graphically: you have a UI where they drop the boxes and say, okay, this is my pipeline, and it connects things together, and it's very easy to do. So I think both worlds exist there.
A
B
It has an imperative side, of course: you have some script that does something. But it's declarative in the way that you say what your inputs are and what your outputs are, you know. And yeah, that's one thing that we are hitting in a few places now. In a few use cases I've seen, that is the limitation: Tekton had this concept of pipeline resources in the beginning; they could be used as inputs and outputs, and you could say this input is a git repository, the output is a container image, and so forth.
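As a rough illustration of the (since-deprecated) PipelineResources concept described here, an early v1alpha1 Tekton Task could declare a git input and a container-image output along these lines; the resource names are placeholders:

```yaml
# Early-style Tekton Task using v1alpha1 PipelineResources
# (later deprecated in favor of params, results, and workspaces).
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: build-image
spec:
  inputs:
    resources:
      - name: source       # input: a git repository
        type: git
  outputs:
    resources:
      - name: built-image  # output: a container image
        type: image
```

The typed input/output declaration is exactly what makes the declarative part of the task machine-readable, which matters for the provenance use case mentioned next.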
C
B
But now we are seeing that there are a few use cases, like provenance, for instance, where it's important to know that a certain pipeline task produces a certain type of artifact, because maybe you want...
A
It would be helpful for us. And basically, we've seen a similar evolution, for example, with GitHub Actions, because its action metadata evolved a lot. Initially it just wasn't required and you were able to write a new step, but now, if you want to put it on the Marketplace, they require quite a lot of metadata to be available, because otherwise their tooling is unable to process it correctly, which is okay.
C
A
So I think we have a basic plan, and yeah, my homework will be to actually write up something so that it can be reviewed by people who are not Jenkins or Tekton experts, but actual users, so that we could understand the feasibility of this approach for them. And yeah, I'll take some action items, and hopefully I'll find time to do it soon.
A
So basically, this syntax is rather focused, so that it can map to Tekton.
A
D
I think this is still at a very early stage, and going forward maybe we are going to design a single web interface for this integration. Yes, this sounds like KubeSphere DevOps with Jenkins, yeah. You know, we have an individual web interface for Jenkins Pipeline, which is similar to Blue Ocean.
A
Yes, and yeah, so basically you can see it using the same interface, right, yeah. So basically it's what you would like to see on the Jenkins side as well; I mean, a single web UI.
A
Yeah, well, eventually you may need some specifics of pluggability, but yeah, having such a foundation, I think, would be nice for any extensible system. And yeah, I guess you're doing more or less the same as we discussed before, for example, for Jenkins X: when Jenkins X was deciding what build engine to use, they basically plugged in multiple ones, and on the Jenkins side we actually want Jenkins itself to support multiple engines.
A
So let's see how we could unravel this Russian doll later.
A
Probably not. Then thanks, everyone. So in 20 minutes we'll have a session with Levy about Visual Studio Code and development tools, and, as we agreed, we postponed James Valrano's session by one hour, so at 1 PM UTC we will have the Jenkinsfile Runner session. And yeah, then we will basically close, and yeah, I'll take an action item to just process all the notes.
A
Well,
it's
just
a
very
small
summit
this
time.
Well,
we
could
have
prepared
and
announced
it
a
bit
earlier.
So
in
summer
we
had
50
people
participating
and
yeah.
I
think
we
can
return
back
to
the
scale
for
the
next
summit,
but
yeah
for
this
particular
one.
I
just
wanted
to
have
several
focused
discussions,
so
hopefully
we
achieve
these
goals
and
yeah
thanks
a
lot
to
andrea
for
joining
us
thanks.
I
don't
know
honestly
so
yeah.