From YouTube: OpenShift Coffee Break: Tekton in action
Description
A recurring series on OpenShift Pipelines, from the basics to advanced use cases. In this episode we talk all about CI/CD with Siamak Sadeghianfar, PM for OpenShift GitOps and Pipelines at Red Hat.
A: So usually we pick different topics for the OpenShift Coffee Break, and this one will be a special episode because it's the first one of a recurring series that we wanted to do about pipelines. We will specifically be speaking about Tekton, which is a Kubernetes-native way of doing CI/CD. Today I have Siamak, Savita and Nikhil from engineering, and we also have our lifetime guest Tero, who will still be sharing coffee with us. So if you guys can make a quick introduction of yourselves.
B: I can start, okay. Siamak here, product manager for OpenShift Pipelines, which is based on the upstream Tekton project. At Red Hat I generally look after the CI/CD space on OpenShift.
D: Yeah, I can go next. Hi, thanks for having me here on the OpenShift Coffee Break. Myself, Savita; I'm part of the DevTools group and I basically work on OpenShift Pipelines and upstream projects like Tekton and Knative. I'm currently based out of India, in the Bangalore office.
E: Hello, I'm Nikhil. I'm also from Bangalore, India. I'm part of Developer Tools at Red Hat and I contribute to the Tekton CD projects. I mainly focus on the Operator and the productization part of OpenShift Pipelines, where we bring the upstream components, like Pipelines and Triggers, into OpenShift.
F: Hi, my status changed from co-host to permanent guest, since I left Red Hat last week. I'm currently working as a senior DevOps engineer at a company called Vanguard, so I will be providing the external view of things, so that not everything comes only from the Red Hat side.
A: So the goal here today is, as I mentioned, to have a recurring series about Tekton, and of course we will also speak about the productized version of it, which is OpenShift Pipelines. The goal is to have pragmatic sessions where, as we move along through the different episodes, we learn new things about Tekton concepts and we show you how to use them. It's not a master course; it's basically showing you the different concepts, how to use them and how they work together. And because we have the great luck of having the engineering guys with us, we can also understand better what Red Hat does in the communities, how we develop things, etc. So Siamak, since you are the product manager for that, how about you tell us a bit more: how did Red Hat get involved with Tekton, why we chose it, and what we do in the communities, as a starter.
B: Sure, I'll be happy to talk a little bit about that. The continuous integration space is really not short of solutions, right? If you go as far back as 10 or 15 years, there is a very colorful landscape, a variety of different solutions that target continuous integration in different ways. They all want to solve the same problem: they want to automate the activities involved in the software development life cycle, like compiling the binary of your application.

Perhaps running some security checks against it, and there are more and more such activities. They all solve the same problem, you want to automate this, but there are different approaches, so it's quite a colorful space. And if you go a couple of years back, there is also a large number of cloud services that do the same thing; look at Travis and CircleCI and GitLab pipelines and GitHub Actions and so on.

There was no shortage of solutions, a few too many if you will. However, as we saw at Red Hat (I am part of core OpenShift; OpenShift is the enterprise Kubernetes distro from Red Hat), there is a sharp, really increasing adoption of Kubernetes across the industry. Not everywhere, but it really grows in double digits, and for a lot of organizations that are building cloud-native applications, Kubernetes is a given as the platform for deployment.

So we see this growth of applications being deployed on Kubernetes, while the CI systems are completely oblivious to the container platform. So there was this gap that was discovered in the Kubernetes community: the CI systems are very different from the infrastructure that the application is deployed on, and the same holds, to an extent, when you're using cloud services.

Even if your infrastructure is a cloud, if you're the owner of the infrastructure, then that division starts to become problematic. You can also notice there has been a lot of effort in bridging this gap, right? There has been a lot of engineering effort on how we can make Jenkins work much nicer on a container platform, and you can see similar efforts for other CI engines on other container platforms.

Most of the cloud-based CIs do recognize the enterprise needs and have a version that runs on Kubernetes, or at least the pipelines can execute on Kubernetes, and there are ways to have the runners, for example, schedule the actual execution of jobs on Kubernetes. There have been a lot of efforts to sit in between and bridge these two worlds together, which works to a certain degree, but there are still limits to that.

The obvious limits are, for example, how well the CI engine itself is suited for Kubernetes environments: the more traditional and monolithic the solution is, the more difficult it is to run on Kubernetes. Jenkins is a great example. Everyone has been a Jenkins user at some point, including Red Hat, and running Jenkins on Kubernetes itself is difficult, because Jenkins was really designed for a different era; it was designed way before containers were a thing.

So this was recognized first when the Knative project was started by Google as a serverless platform at the heart of Kubernetes, and there it was recognized that we also need a build system that is much closer to Knative, closer to Kubernetes. All the CI systems that can build container images are really good, but all these gaps make it difficult to use them with Knative. So Knative Build was formed to focus on this problem, and use cases were collected quite quickly about what Knative Build needed to do.

So that was the pivoting point: the Tekton project, then Knative Build, was broken out of the Knative project and became really what Tekton is today, a project that focuses on this gap between CI and Kubernetes, a CI framework that is really at the heart of Kubernetes and builds on the concepts of Kubernetes. So you build your pipelines with the same concepts that you're familiar with: you build a pipeline.

You have tasks, or stages of a sort, and so on, but you really build them with the concepts of Kubernetes as well. You have pods in it, you have containers, you have secrets and config maps. So for someone that is on Kubernetes, this seems a very natural step; the learning curve is extremely low, because you are reusing everything from your existing skills, even the constructs that you have on Kubernetes, and building pipelines from them.

So that was really the problem that Tekton tried to solve: to create a native CI that is built for Kubernetes, is familiar to Kubernetes users, and inherits the operational model of Kubernetes as well, so it becomes really absorbed into the Kubernetes platform. All right, so that is the background of the project, in a not-so-brief way.

It feels like ages ago, even though this was two years ago, or one and a half years ago if I'm not mistaken, perhaps less than one and a half years ago. But it is a very fast-moving project, and in OpenShift we do pride ourselves that we want to make Kubernetes simple for developers; our goal is really to be the number one developer platform on Kubernetes.

So we monitored this space very closely, and it was quite evident for us that Tekton is really the right direction for continuous integration on Kubernetes. We were already investing in Knative for Serverless, and we started investing in Tekton as well.

We have two of the really sharp engineers on the Tekton team with me on the call. We work a lot upstream, address issues, and bring in customer cases that are very relevant within enterprise environments, work through them within the Tekton upstream project, creating enhancements and implementation discussions, and then bring them down as a supported product.
F: You mentioned, Siamak, that there are several different toolings to do CI/CD. What about those customers that have a zillion lines of Jenkins code, Groovy and scripting and everything? What is the way to move forward if they want to move over? Is it lift and shift, or is it re-engineering, or is it the same concept as moving from VMs to containers, now moving from Jenkins to cloud-native pipelines? Are there any coding rules for how that should be done?
B: We have seen this in the waves of technology: as technology matures and evolves, you are left with the decision of how to move from your existing way of working to this new way that addresses some of the challenges that you have, while at the same time there's a lot of work for you, a lot of existing investment. So there's this trade-off: how do I address those challenges without standing still and spending a year just refactoring stuff, right?

This is the same conversation as with microservices and monolithic applications, or containers and VMs like you mentioned; this is the exact same type of change. What we see at a lot of our customers is that they usually draw a line at some point, and any new effort that goes into CI, into building pipelines, or into building a platform that provides CI as a service to internal teams, gets redirected to investing in Tekton.

Meanwhile they maintain and keep the existing investment and, over time, analyze and see where it makes sense to perhaps move some of those existing efforts to Tekton. But we don't see it often that people just stop and move everything that they already have on Jenkins or some other platform over to Tekton; it's a more gradual movement. But absolutely, this becomes a pivoting point where no new investment takes place, for example, in creating pipelines in Jenkins.

That's how the customers that we see usually get started with Tekton.
F: Okay, makes sense. I actually have another question for the engineering team, since you follow the upstream closely, and the upstream is driven, of course, by the customers and the users: in which direction is the upstream going?
B: Each of them brings their own use cases, the use cases of their companies or their customers, to the community, so it really grows in a variety of different directions. But I thought I could get help from Savita and Nikhil here; they focus on two of the sub-projects of Tekton, and they can talk a little bit about which direction they see those areas growing in.
E: Yeah, I think we both can mention some. So maybe you can go first, and then I can talk about other points as well.
D: Yep. So most of the time I spend working on Triggers, which is one of the sub-projects of Tekton CD, and also on the Operator, though maybe Nikhil can address most of the things about the Operator. Coming to Triggers: when the Tekton CD project started, it started with the Pipeline sub-project, and once Pipelines existed, everything, PipelineRuns and so on, had to be run manually. So they got a requirement, use cases, like: how do we schedule these PipelineRuns or TaskRuns dynamically, based on some events or some mechanism, so that those things run on their own? For that, an event-based trigger mechanism was thought up, and the Triggers project was started. Initially it started with an alpha API; that's how Kubernetes APIs work, they start with alpha, then beta, then GA, and Triggers is still in the alpha state.

It first started by adding basic integration with GitHub, because most of us are very familiar with the GitHub source code management tool, and from there it was extended to SCM tools like GitHub, Bitbucket and GitLab. After that, one flow of the event mechanism started working, and once the basic flow was in place we got a lot of requirements, user inputs, like: okay, we now want customization of these event mechanisms; we don't want to use just the basic built-in behavior provided by Triggers, we want some customization on top of that. So this way it started evolving, and now we have the customization step. Triggers also now supports a Knative-based event listener, because until now, whenever Triggers created its objects, the pods kept running even when no one was sending events, so resources were being wasted unnecessarily.

So we thought, okay, let's use the Knative-based mechanism as well, and now Triggers integrates with Knative too. This is the way we keep adding features: based on user inputs, based on use cases, and based on a lot of input from the developers and companies who started contributing. So right now Triggers is in the alpha state; maybe in another month or two we will be taking it to beta.

This is how we contribute and add new features to Triggers. Nikhil, can you just...
A: So, let's not forget, I know we are all very well versed in Tekton, and we speak about those different concepts like EventListeners and Triggers and stuff like that. But let's keep in mind that this is a 101 session; we are basically introducing it.

Although the audience might know it already, what I wanted to do first is a quick recap of those main concepts, just to set the floor, so everyone watching can understand what we are talking about, and then, if it's okay with you, we can articulate more in-depth information about those concepts. Would that work for you?

I'm going to share my screen and show you those different Tekton concepts.
F: Meanwhile, I have a question; maybe the audience doesn't know this. Is it correct that when the API is in alpha, the API might change, but after beta the API doesn't change anymore, and then of course GA is solid? That's good to know when customers start using Tekton: if they use an API which is alpha, there might be changes, even breaking changes. So that's good to know.
A: Yeah, sure: don't try to build a productized CI or a production CI based on alpha features. That's the goal, or the message, there. All right.
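For reference, this is where that stability shows up in practice: every Tekton resource states its API group and version at the top of its YAML. A minimal sketch (resource names are illustrative); at the time of this episode, Pipelines had reached beta while Triggers was still alpha:

```yaml
apiVersion: tekton.dev/v1beta1            # beta: breaking changes no longer expected
kind: Pipeline
metadata:
  name: example-pipeline
---
apiVersion: triggers.tekton.dev/v1alpha1  # alpha: the API may still change
kind: EventListener
metadata:
  name: example-listener
```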
A: So, very quickly, I'm going to introduce those concepts to set the floor. As Siamak mentioned, what we wanted with Tekton, or what the community wanted, was to basically standardize around concepts that everyone would be free to implement how they want, but make those things work using Kubernetes-native resources.

So with Tekton we basically extend Kubernetes with some new concepts. We use what we call the custom resource definition mechanism, which allows us to enrich Kubernetes with new things. At the very base layer we add three things: the notion of a pipeline, the notion of a task, and the notion of a step. A pipeline is a sequence of tasks that can run in sequence or in parallel.

Each task will run in a pod, and each task will contain several steps that run as containers in the same pod. All of those things are of course going to be defined the Kubernetes way, meaning using YAML resources, and if we look at an example of a pipeline, we see that we have several tasks that will, for example, clone the source code, then build it and then deploy it.
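A minimal sketch of that clone/build/deploy shape, assuming the git-clone, buildah and openshift-client tasks that ship with OpenShift Pipelines (parameters and workspaces are omitted for brevity):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  tasks:
    - name: fetch-repository
      taskRef:
        name: git-clone           # each task runs in its own pod
    - name: build
      taskRef:
        name: buildah
      runAfter:
        - fetch-repository        # sequencing; tasks without runAfter can run in parallel
    - name: deploy
      taskRef:
        name: openshift-client
      runAfter:
        - build
```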
A: We can have conditionals, we can have retries, we can have some execution logic built within the pipeline. But the very important thing is that, although all of those things happen in different pods or different containers, we can share data between all of those steps or tasks. So, for instance, say that you want to clone your code: this can be done by a task in a pod, but this pod will write the data to some shared workspace.
A: We call those workspaces, and a workspace is going to be, for example, a persistent volume that another pod can then mount. If we now want to build the binary of the application using something like Maven, we find the source code already there. We can also use things like subPaths to say: I'm going to clone my code into this specific folder, and I'm going to store my Maven artifacts, or cache, in that folder. So you can basically share data between the different pods that play some part in your pipeline.
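A sketch of that shared-workspace idea: two tasks mount the same workspace, each under a subPath, so the build task finds what the clone task wrote. Task and workspace names are illustrative; at runtime the PipelineRun would bind shared-data to, for example, a PersistentVolumeClaim:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: workspace-example
spec:
  workspaces:
    - name: shared-data            # declared once, bound by the PipelineRun
  tasks:
    - name: fetch-repository
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: shared-data
          subPath: source          # the clone lands in <volume>/source
    - name: build
      taskRef:
        name: maven
      runAfter:
        - fetch-repository
      workspaces:
        - name: source
          workspace: shared-data
          subPath: source          # the build finds the code already there
```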
A: If you have binaries or scripts that you need to run your CI stack, it's very easy to build them into a container image. That's one of the key skills. Tero, you asked how to move from traditional CI to Tekton: one of the, I would say, key skills is to really learn how to write Dockerfiles, or create container images, and embed whatever custom tools you need to perform your steps.

So if you need a specific CLI and there's no image that already provides it, you can build your own image, embed all the tools within that image, and then reference it within your steps as the image that will run the commands. Okay, and the final concept is the step. Basically, a step says: I'm going to use this specific image, and I'm going to run this specific command.

So, for instance, if you wanted to build your Java application, you could do a mvn package, or if you wanted to install your dependencies you could do a mvn install, etc. So basically you have a task that has several steps, and each step can run different commands with its container image.
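A hypothetical Task illustrating the step concept: each step is an image plus a command, and all steps of a task run as containers in the same pod:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: maven-build
spec:
  workspaces:
    - name: source
  steps:
    - name: package
      image: maven:3.8-openjdk-11          # any image that carries the tools you need
      workingDir: $(workspaces.source.path)
      command: ["mvn"]
      args: ["package"]
    - name: list-artifacts
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        ls -l $(workspaces.source.path)/target
```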
A: And the other concept I spoke about is how to share data between the different items: basically it's going to be a persistent volume that is shared between the different parts of your pipeline.

So these are the building blocks that you need to know about. I have also built something to talk about Triggers, but maybe let's save that for a bit later on, when Savita and Nikhil show us the demo, and try to expand the demo with this notion of: I make a commit to my code, and that triggers the execution of my pipeline. All right, so that was it for the concepts.

I hope it sets the floor for understanding what we are manipulating in terms of resources and how they get executed on Kubernetes.
E: You mean the direction in which the upstream is going, in the question?

Definitely, I think so. I can answer that question in two aspects: one is the general philosophy, and the second one is the direction in which the work is going. The general philosophy is this: in the beginning it was kind of assumed that Tekton is a tool that we use for outer-loop builds.

That is, you know, when you want to build and publish something in the enterprise. At first we did not have a focus on whether Tekton can be used for a developer's workflow, that is: I'm building software, and then I want to run some tests and then just deploy it in my local setup. But right now discussions are going on in that aspect, that is, whether Tekton should also consider the developer's local workflow.

So one aspect is that we are also thinking about ways to make sure that the CI/CD pipeline is a part of the application, instead of thinking about those two in two different terms. And the second aspect is the general work that is happening. Initially a lot of the focus was on the core functionalities: how do we write a task, how do we share tasks, and the different features that we need to share data between tasks or pipelines, such things.

But now there's a lot more focus on supporting features as well. For example, as Siamak mentioned, all the workloads run as containers, so when you run a pipeline the tasks come up and run as TaskRuns, which are essentially pods.

One example is a method to upload or save your logs somewhere, so that you can kill the TaskRun pods. And then there is a lot more workflow-related work happening inside the Operator, because initially the focus was just on installing the components, that is, you just want to install Tekton Pipelines or just Triggers; but now people want access to methods with which they can automate upgrades, configuration and customization.

A good example: we have the Operator project, which can be built for two different platforms. When it is built for Kubernetes it has a certain set of behaviors and provides the upstream Tekton Dashboard, but when we build the operator for downstream it does not ship the upstream dashboard; instead it provides some ClusterTasks, you know, which come with OpenShift. So in that aspect there's a lot of focus now on the supporting features and supporting workflows rather than on the core workflows.
A: Yeah, and speaking of that, thank you very much, Nikhil. There's a question about pipeline resources, and since you guys have been involved in the upstream engineering discussions on all those things, could you tell us why they have been deprecated? I know there are some issues with them, some limitations.

They have been mostly replaced by this notion of workspaces, where instead of referencing pipeline resources you can say: the code is in this workspace, and just use that, for example. Can you tell us why they've been deprecated, and what's the new way of doing it?
E: So I can give an overview. Initially it was thought that the way we share workflows would be using pipeline resources; there are types of pipeline resources like git, image, storage, cluster, etc. Then there was a point where people were starting to request more and more types of pipeline resources, and the types of pipeline resources are built into the core of the Tekton implementation.

That is, each time we want to add a new pipeline resource, we have to add it to the core implementation. So instead of adding more pipeline resources, we thought about typed pipeline resources, that is, providing an interface with which users, or other developers, could create custom pipeline resources without having to make changes in the core implementation.

But that did not take off; instead, I think, when Tekton started becoming popular, people were more interested in sharing workflows using tasks and pipelines. As for what a pipeline resource is: in simple words, a pipeline resource is just a collection of steps, so if you use a pipeline resource it will add a few steps just before or after your defined steps when your pipeline runs.

So the pipeline that you define is what you see running, but with pipeline resources you see these additional steps running, which might be a little unclear. So, two reasons: one, it was kind of difficult to support more types of pipeline resources, and second, I think, sharing workflows using tasks and pipelines instead of pipeline resources makes things clearer.
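For contrast, a sketch of the deprecated style being discussed: a git PipelineResource, whose type was baked into Tekton's core. The same job is done today by the git-clone task writing into a workspace, as in the earlier examples (the URL here is illustrative):

```yaml
apiVersion: tekton.dev/v1alpha1    # PipelineResources never left alpha
kind: PipelineResource
metadata:
  name: ui-source
spec:
  type: git                        # one of the hard-coded types: git, image, storage, cluster
  params:
    - name: url
      value: https://github.com/example/ui.git
```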
A: All right, thanks, yeah. So the fewer concepts we have to manipulate, the easier it's going to be to adopt Tekton in the industry. So thanks a lot. I do have one question, though. I know you guys are working on the Operator and extending it to make it easier to do things by instantiating Kubernetes concepts.

So, are there plans to have this type of generic approval concept added to Kubernetes as a CRD or something like that? Or can you tell us a bit more about how we are going to implement this notion of an approval gate using Tekton Pipelines, for instance?
B: This is a very common request, right? And it really comes from, like I said, a majority of...

Yeah, not even just traditional teams; a majority of teams have at some point been using Jenkins, right? And in Jenkins this is one of the patterns you use: you add a manual approval there and the pipeline waits while things happen outside the system. I'm not saying this exists only in Jenkins, but it immediately comes to mind from Jenkins. It is definitely a very common request, and there are discussions and requests about it within Tekton as well.

Some of the issues, when you look at the Tekton project in the community, are created as this type of larger-topic question: what should we do about manual approval? So there is definitely a use case for that and interest within the community. There is also interest in it from a lot of the customers I talk to, because, like I said, they are coming from some other CI system.

So at some point this will appear, I believe, in Tekton as well. The timeline is not known, because the discussions in the community need to consolidate and come to a level that fits the Tekton model really well. The original thought was that these are really modeled as different pipelines.

If, at any point, you have to wait for a long time and then do something else, is that really a single pipeline, or are they two different sets of activities that need to happen, which you are modeling as a single pipeline and connecting with the manual approval in between? That was the original thought, but we do also recognize the case that, even though they are separate pipelines, having them together with manual approvals makes it easier to correlate these activities to each other.

This will appear in Tekton at some point, but we don't know yet when.
A: Okay, thanks. All right, so thank you very much, guys. I believe, Nikhil, you had something to showcase of what we talked about. Would you like to go over a quick demo of Pipelines running on top of OpenShift? Because I believe we add things on top of core Tekton to have a very nice productized version in our product. Would you like to show that?
E: So we have prepared a brief demo, just to give an overview of the different things that are possible with Tekton Pipelines / OpenShift Pipelines. Savita will share her screen, and then we will take you through the demo.
D: Yep, yeah, I'll quickly share my screen. Hope it's visible now.
E: Yeah, I shall add one point here, just like Siamak said: the engine behind Tekton is Kubernetes; there is no other, separate CI engine underneath.
E: So basically we can leverage our Kubernetes knowledge: we interact with Tekton the same way we interact with any other Kubernetes resource. At the same time, one of our goals is to make sure that people get a feel that they are interacting with a CI/CD system, not with Kubernetes, and that is where the Dev Console comes in. What we try to provide is a CI/CD experience, instead of, you know, having to deal with endless YAML files.

Yes, you can always edit YAML files, but what we want to show you is how this Tekton CD, which leverages Kubernetes features, can be provided as an integrated system where you can forget that it is running on Kubernetes and use it as any other CI/CD system.
D: Yeah. So, as we talked about the Operator and the different Tekton CD components: for the installation, as Siamak said, we release OpenShift Pipelines as a productized stable build, and we recently released the GA, which is 1.4.1. So yeah, I'm just going to... this is a 4.8 cluster.

I am installing the Red Hat OpenShift Pipelines operator from the OperatorHub. You can see 1.4.1; this is the GA release we did last month. With a few clicks I am able to install the OpenShift Pipelines Operator on this cluster. You can see there are a lot of channels, but stable is the one we released recently. So I'll click on Install, and it starts installing the operator with the OpenShift Pipelines components: Pipelines, Triggers, and the ClusterTasks.
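The clicks shown here boil down to an OLM Subscription; a minimal sketch if you prefer to install the operator declaratively (channel and package name as they appeared in OperatorHub at the time):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: stable                          # the channel picked in the demo
  name: openshift-pipelines-operator-rh    # the Red Hat OpenShift Pipelines package
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```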
D: It's still installing the operator, so meanwhile, Nikhil, you wanted to add something about operators: why we started adding the Operator, why we don't just install Pipelines and Triggers separately instead, why we go through the Operator, and what benefits we get.
E: In simple terms, we use operators to embed human operator logic into software. What that means is that Tekton Pipelines, Triggers, all of those are upstream projects which need a lot of human operator manipulation, a system admin's job, to get them installed and to make sure that they are configured properly and upgrade properly without breaking your workloads. That sounds like a lot of documentation and a lot of issues.

So, instead of that, what we are trying to do is gather all the best practices and recommended practices around the life cycle of the Tekton applications and combine them in software, as the OpenShift Pipelines Operator.

And then we are trying to take this operator to different levels of maturity: initially it provides installation and upgrades, then we will start supporting metrics, and we can support backup and recovery, such things.
D: Yep, thanks Nikhil for the brief information about the Operator. You can see the operator is installed successfully, and all of the OpenShift Pipelines components, Pipelines and Triggers, get installed in the project called openshift-pipelines.

So now we are ready with Pipelines on our 4.8 cluster. Let's create one basic pipeline, to show how a pipeline is created, as part of our demo.

Let's make use of the basic +Add flow, From Git. I will be creating the pipeline for the frontend, which is the UI, so I give the Git URL of the UI repo, and it should automatically choose the runtime builder, which is Python; our UI application is written in Python.

You can see here we can choose either Deployment or DeploymentConfig; I'll go with the default configuration. Here we also get the option of whether we want to create a pipeline from the template or not, so I will be checking "Add pipeline", and then we have "Create a route to the application", which means that for this UI application a route URL will be created so that we can easily access it.

One more thing I wanted to show here is the visualization: this pipeline, when it gets created, basically has these three tasks: first it will fetch the repository, then build it, and then finally it deploys it. So now let's create the pipeline. Yep, it starts creating the pipeline, and we can see a good view of it in the Topology section of this pipeline.
B: So, Savita, did you have any Kubernetes resources in your Git repo as well, or just the source code of the application?
D
It's
so
yeah
yamas
are
there
so,
but
it
doesn't
take
these
examples.
It
just
take
the
code
right.
B
In
general,
so
you
generated
it
deployed
your
application
and
generated
those
manifests
for
you
and
all
you
have
them
there.
You,
if
you
didn't,
have
them
that
would
have
been
fine,
also
deploys
the
application,
generates
deployment
and
route
and
service,
and
also
added
a
pipeline
for
you,
based
on
takedown.
That
builds
the.
D: ...application, yes, exactly. Oh sorry, I just created my pipeline in the openshift-pipelines namespace itself, so we can see a lot of things here, but that's okay. Here you can see a pipeline for our UI application got created, but we get an error, ImagePullBackOff, because in the operator we have a restriction not to create any pipeline examples or resources in the openshift-pipelines namespace, where the actual components are running.
E: Sure, sure. As we are running short of time, you can switch to that 4.7 cluster, if you have it accessible, and then you can show the finally executed pipeline there.
E: That's also possible, okay. So, okay, that's a good question, why the openshift-pipelines namespace is different from other namespaces. Like I was mentioning before, the operator embeds a lot of operator knowledge, and one piece of that knowledge is that when you want to run a pipeline, or the Tekton application itself, it needs certain RBAC permissions and privileges so that it can run on OpenShift.

If you took the upstream release and set it up yourself, all of this would have to be done manually. What the operator does is create a service account called "pipeline" in all the namespaces, with sufficient privileges so that it can be used to run your CI/CD workloads. But the operator doesn't create this pipeline service account in namespaces with an openshift- or kube- prefix.

openshift-pipelines is a namespace that the operator will not touch, so it doesn't have that default service account which could build the image and then pull the image.
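A sketch of how that plays out: a PipelineRun in a regular namespace can rely on the pipeline service account the operator created there, whereas in openshift-* namespaces that account does not exist, hence the ImagePullBackOff seen above (the namespace is from the demo, the rest is illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: ui-run
  namespace: demo                  # a regular namespace
spec:
  pipelineRef:
    name: ui
  serviceAccountName: pipeline     # created per namespace by the operator
```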
D: All right, yeah. So that's the reason for not allowing pipeline resources to be created in openshift-pipelines. In the meantime I have created a new project called demo, and I have created a pipeline over there; this PipelineRun is in the running state. If you want to see which step is happening right now: "fetch repository" is completed, which means the cloning of the code is done; now it's doing the build.

You can see the four steps over there: generate, build, push, and digest-to-results. That is: create the Dockerfile, build it, push it to the internal registry, and finally we get the SHA, so that the same SHA will be used when we deploy things. Meanwhile, if you want to see what it is doing, we can go to the Logs section and see exactly which step is running and how the progress goes through these different steps.

Meanwhile, until it finishes, I can show a few more things here. If we go back to Pipelines, we have all these options: once we create through the +Add flow, we get a pipeline and a PipelineRun running automatically, but if we later make some edits to the pipeline and want to run it again, we can click on Start; we need to do this manual step to start the PipelineRun again.

Instead of that, to avoid that manual intervention, we can add a trigger over here. This trigger, basically, will watch for events and trigger a PipelineRun for us. And maybe, as was said, you can stay tuned for the next episode for more information about triggers, a deep dive into the trigger concepts, and how the dots are connected from GitHub to the PipelineRun creation.
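A rough sketch of the pieces the console creates for such a trigger: an EventListener that exposes a route, a binding that extracts fields from the webhook payload, and a template that stamps out a PipelineRun. Names are illustrative; the next episode covers these in depth:

```yaml
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: ui-event-listener
spec:
  serviceAccountName: pipeline
  triggers:
    - bindings:
        - kind: ClusterTriggerBinding
          ref: github-pullreq        # maps payload fields to parameters
      template:
        ref: ui-trigger-template     # creates the PipelineRun from those parameters
```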
D: So for now I'll just show the workflow here, not the concepts. You can see here we have a list of supported providers: we have Bitbucket, GitHub, GitLab. For the time being I am selecting the GitHub pull request review comment event, so that I can send a review comment to an existing pull request.

For that I am choosing the GitHub provider type, the repo is the UI one, and you can see everything else at its default values. So once I add this one... yeah, meanwhile, you can see the PipelineRun is a success. Now let's go to the Administrator view and look in the Pipelines section: as I have added a trigger now, we can see an EventListener, a TriggerTemplate and a ClusterTriggerBinding.
D: The definitions and importance of these things we can look at later. What I'll do now is get the URL for the EventListener so that I can configure it in my webhook. This is the URL; I can go to the Networking section, under Routes, and get this URL. If I hit this URL directly, I straight away get an error saying that I have not sent any body, because the GitHub pull request review comment trigger expects some content in the body section.

That's why, if you hit the URL directly, we get this error. Now let's go to the repo, which is my own forked one, so I have access to the settings.

I will go to the Webhooks section and quickly add the webhook. Here I'll just remove the existing default values, and I have pasted the webhook URL here. Tekton Triggers expects the content type to be application/json. By default the push event is selected, but I don't want the push event, because I am interested in the pull request comment event, so I choose that event here, and I add the webhook.

Once the webhook is added, once it is a success, you can see it here; no events have been triggered so far. For the time being, what I did is create one pull request and keep it ready. Meanwhile I'll just watch the PipelineRuns here: if I go to the Pipelines section and PipelineRuns, you can see there is only one PipelineRun running. So now what I will do is just add a comment over here.
A: Yeah, try to reply under the first conversation that you started.
A: It's fine, it's fine! We know it's not on our side that it's not getting triggered, so no worries; I can testify that it works. So thank you very much, Savita, and everyone. As we are getting close to the end of the show, here is what we will do for the next show:

We will talk about the whole mechanism that makes that event trigger the pipeline, with everything you mentioned, the EventListener, the TriggerTemplates, the TriggerBindings and so on, and explain how all of that works together. That's going to be our second topic, and as we go along with those sessions, we are going to extend that pipeline with more complex topics.

It was great to finally be able to start this show about Tekton Pipelines and OpenShift Pipelines, and I hope to see you soon in the next sessions.

All right, I'm going to stop the stream. Thank you very much, everyone who attended; have a nice day. Thanks everyone, have a nice day.