From YouTube: OpenShift Coffee Break: Pipelines as Code
Description
Get your espresso ready for the EMEA OpenShift Coffee Break together with Natale Vinto and Jaafar Chraibi as we go into another episode of the Tekton in Action series: Pipelines as Code! Together with Savita Ashture and Khurram Baig from the Tekton engineering team, we will discuss how to start using pipelines from your git repositories that are executed when triggered by a Pull Request or a Push.
A: Good morning, everyone. Thank you for joining us today for another episode of OpenShift Coffee Break. So today we have our usual suspects: Natale as our main co-host and myself, the other co-host, and we have two special guests today.
A: We have Savita from Red Hat engineering, and we also have Khurram. They both work on the Tekton upstream community and on OpenShift Pipelines. So basically, whenever you make a code change, you do a git commit or you create a pull request or something like that, and you want to have some automated pipelines within your workflow. That's what we are going to uncover today. So Savita and Khurram, if you want to, please introduce yourselves, and of course Natale also, but he's very famous already.
B: I am working on the Tekton project, upstream and downstream, and I mainly focus on Triggers. I have also done some work on catalogs and on metrics, as well as logging for pipelines.
A: Okay, cool. Savita?
D: Thank you. Everyone, welcome, good morning. I hope you had your coffee shot here at OpenShift Coffee Break. My name is Natale, I'm a product marketing manager with OpenShift. We are hosting this show together with Jaafar, and we are very happy to have Savita again on the stream and also to have Khurram for today's talk. Jaafar, I'm really excited to see Pipelines as Code, so I'm looking forward to seeing what it is and how it can be implemented on top of OpenShift.
A: Okay, and just one clarification for our viewers: as you saw, Savita was already on a previous episode. Our goal is to have a series, so we will be having several episodes about Tekton and OpenShift Pipelines, and as we go along we are going to explore more advanced concepts. So today we will be speaking about how to trigger pipelines from git events, and especially explaining what happens in the background, which is basically what you guys have implemented to make it work.
A: That's the great thing. I believe you have contributed to creating the pieces inside Tekton upstream that make such things work, of course with other people. So it's really cool to have engineers like you talk about how these things work in the background and how they evolve.
A: So, okay, let's get started. Who wants to go first, and maybe, you know, give a quick reminder about Tekton concepts?
A: Yeah, exactly. So Khurram, I'm sorry to say this, but would it be possible to maybe just disable the video stream, to make sure we have enough bandwidth?
A: Yeah, exactly. All right.
B: Yeah, so last time we already discussed what Tekton is, and I'm not sure whether we have gone through the various components of Tekton, which are the shared components to provide Kubernetes-native CI/CD pipelines.
B: The first component is Tekton Pipelines, which we have already covered; it is packaged in OpenShift Pipelines along with Triggers and the Operator. Next we have Tekton Triggers, which manages Tekton resources based on events, and these events can be CloudEvents or webhook events, like GitHub events. Then we have something called the TriggerTemplate CRD, which lets you define the resources to create, with some parameters, in a parameterized form, and these parameters are based on events. Then we have something called TriggerBinding, and the TriggerBinding CRD just extracts those payload fields to the corresponding variables which are referenced in the TriggerTemplate. The next CRD is the Trigger CRD, which is just a combination of TriggerTemplate and TriggerBinding, so that we know from where to take our variables. And along with that, we have something called an interceptor: it just takes payload fields and modifies them.
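For readers following along, here is a minimal sketch of the binding and template Khurram describes, assuming the v1beta1 Triggers API and a GitHub push payload; the names (github-push-binding, build-template, build-pipeline) are illustrative, not from the show:

    apiVersion: triggers.tekton.dev/v1beta1
    kind: TriggerBinding
    metadata:
      name: github-push-binding          # illustrative name
    spec:
      params:
        - name: git-repo-url
          value: $(body.repository.clone_url)   # extracted from the webhook payload
        - name: git-revision
          value: $(body.head_commit.id)
    ---
    apiVersion: triggers.tekton.dev/v1beta1
    kind: TriggerTemplate
    metadata:
      name: build-template               # illustrative name
    spec:
      params:
        - name: git-repo-url
        - name: git-revision
      resourcetemplates:
        - apiVersion: tekton.dev/v1beta1
          kind: PipelineRun
          metadata:
            generateName: build-run-     # each event stamps out a fresh run
          spec:
            pipelineRef:
              name: build-pipeline       # assumes this Pipeline already exists
            params:
              - name: git-url
                value: $(tt.params.git-repo-url)
              - name: git-revision
                value: $(tt.params.git-revision)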
A: Okay, thank you, Khurram. So let me try to demystify this a little bit. Or do you have some more? Go ahead, please show the next slide.
A: Okay, yeah, so let's pause for a few seconds on these concepts and try to explain why we are talking about this. Usually, as a developer, or even if you are implementing continuous delivery, you want to trigger your pipelines from a git event: you make a commit, or you merge a pull request, or something like that, and then you want some pipelines to automatically get executed.
A: So if we remember how Tekton works, you have this notion of a Pipeline. Basically, the Pipeline is the definition of everything that is going to run, like the tasks and the steps, etc. But the Pipeline itself doesn't run; it's not an instance of something that is running. It's just a static definition, like in Java: you have your class and then you have the instances of your class, so the class doesn't exist by itself.
A: What branch is it? What is my working context, etc.? It needs some dynamic information, because we don't want to have static information in the static pipeline definition. So we have what we call placeholders, or variables, that we define in the pipeline. We say, for example, this is $(git-url), this is $(git-branch), and so on. These are all going to be variables, and at runtime, when we want to trigger the PipelineRun, we want these things to be filled in, right? We want to get the proper values, and that's, I believe, where those triggers come into play. We have the Pipeline, we have the PipelineRun, and those triggers are going to say: okay, when I have this git event, please take this variable, this variable and this variable, and fill them into the PipelineRun, or rather create a PipelineRun. So what I'm showing here is a shortcut: I have the Pipeline on the left, I have a PipelineRun with all the data already filled in, and now we are trying to explain what happens in between. All right, so please go ahead again, Khurram, and explain.
B: Yeah, so if we have a webhook event from GitHub, generally the information will be there in the payload, or maybe some information might be there in the headers, but generally we use the payload fields. Those payload fields are extracted to the parameters defined in the TriggerBinding.
A: Okay, I'm not sure everyone can hear fine, because we have some audio issues; that's why I'm rephrasing. So basically, what you just explained is that we have something that says: here are the variables that are interesting to me. I want to get the git URL, the branch, et cetera, maybe the committer id or something like that, and those things we define in the TriggerBinding, correct? We say these are the things that I want. And you have the TriggerTemplate that says: here's the data that you need to fill in; the git repo, etc. are going to be variables. And basically the TriggerBinding says: you take this data from here, and you put it there. Is that correct?
A: Okay, cool, thanks. I hope this is clear for the people who are watching. If something is not clear enough, please don't hesitate to ask questions in the chat, and we will try to answer them as best we can.
B: Okay, then we have the EventListener CRD, which provides an endpoint. Basically, all these operations, the TriggerBinding and TriggerTemplate operations, are done by the EventListener: it extracts parameters via the TriggerBinding and creates the resources from the corresponding TriggerTemplate. You can also provide an interceptor in the EventListener to pre-process the event payload. And then we also have an advanced concept called ClusterInterceptor. A ClusterInterceptor provides a cluster-scoped interceptor, meaning you can have a Knative service which basically processes the payload.
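Tying the pieces together, an EventListener that connects the binding and template sketched earlier might look like this (a minimal sketch, assuming the v1beta1 API; "pipeline" stands for whatever service account is allowed to create PipelineRuns):

    apiVersion: triggers.tekton.dev/v1beta1
    kind: EventListener
    metadata:
      name: github-listener              # illustrative name
    spec:
      serviceAccountName: pipeline       # must be allowed to create PipelineRuns
      triggers:
        - name: github-push
          interceptors:
            - ref:
                name: github             # built-in GitHub cluster interceptor
              params:
                - name: eventTypes
                  value: ["push"]        # only react to push events
          bindings:
            - ref: github-push-binding
          template:
            ref: build-template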
A: Okay, this is funny, because it reminds me of something. A few years ago I was trying to implement this kind of CI/CD with GitHub, actually, and they used to provide a bot framework; it was called Probot. Probot was basically this type of framework where you can define these things. You say: if I have a push event, then do this. But the "do this" part means you basically have to write your own code to extract the information from the payload, to do your own custom logic, and then it can interact with something else. So basically, I was implementing some Node.js backend that waits for something to happen on the GitHub side and then does something, like, for instance, creating a project in OpenShift, deploying an application automatically, etc. But all of that you have to do by coding things yourself; it was just an empty shell. And what you guys did here, I think, with the EventListener, is that you have implemented this feature out of the box. So basically, when you create your pipeline and whatever you need to interact with git or GitHub, etc., you instantiate your own backend that listens to these events, and that already has all the logic to extract the correct information and to create whatever it needs to run the pipelines in OpenShift. Is that a correct way to phrase it?
B: Yes, and more complex things can be performed. I mean, let's say you have a pull request event and you want, let's say, an approval, or...
A: That would be interesting, I think, for maybe a next show, where we can go a little bit deeper into interceptors and explain how we can use them to implement what you said, like approvals and such things. Because I know that approvals are something that is still in the works upstream; the structure to define how we can do that is still being worked on. So that could be interesting for an upcoming show: to explain how we can build those custom interceptors and integrate them.
C: Yes, basically one more point I just want to add here: when we have Triggers installed, by default we will have four core interceptors, for GitHub, Bitbucket, GitLab and CEL, shipped as cluster interceptors. The main motive for introducing cluster interceptors is that earlier, having only a fixed set of interceptors was blocking Triggers from growing; I mean, it was not possible to make use of interceptors in a dynamic way.
A: That's really cool. So if you have your own workflows that you have already automated in some way, like if you are doing git flow or GitHub flow or things like that, where you automatically create projects or deploy applications, etc., that can maybe be a way to implement this type of behavior in a more dynamic way. So yeah, that's really interesting. Let's aim to have that dissected in another episode, because we like everything that can be automated.
A: All right, thanks a lot for the explanations. So this summarizes, I think, what you have said. Yeah, please go ahead.
B: Yeah, this summarizes what I've said. Let's say an event comes to the EventListener; then the TriggerBinding extracts those parameters and provides them to the corresponding TriggerTemplate, the Trigger being a combination of the two, and then in turn it will create a resource.
A: That instantiates something that we actually run, okay. So is it correct to say that the EventListener is an application itself that runs in a pod? It's an event-based architecture where you have a pod that listens for something to happen, and when it intercepts that event, it does something. Is that correct?
C: It runs as a pod and exposes a service; that's the reason we get a URL for the EventListener, so that we can use that URL and send events to it. And before any event comes, someone has to apply the TriggerBinding and TriggerTemplate; these are static templates, we can say.
A: Okay, yeah. Thank you very much for reminding us of this point, because it's very important: the URL is actually what we put in the webhook definition, correct, on the git repo side. I have a webhook, you select whatever events you want to intercept, and you put in that URL, which is generated by the OpenShift router, basically, to point to the EventListener for that specific repository.
D: Sorry, just as confirmation: the event is an HTTP POST from a webhook, right? It's always HTTP or HTTPS, a POST with JSON content or some other form. So this is the kind of event this component is listening to. And I have a question, I don't know if you are aware: for the next version of OpenShift, will there be an improvement in the pipeline UI? You know, the pipeline UI now helps you create a pipeline in the OpenShift web console.
C: Yeah, and one more thing we will see in the demo section. Right now the OpenShift UI has a basic template for the triggers, which will just give us the end-to-end flow; the advanced use cases are still under implementation. But to answer the question: right now the UI supports the trigger-adding part along with the pipeline.
A: Yeah, so if I'm not mistaken, that's what you mentioned about the pre-existing stuff for GitHub and GitLab, etc.: you can already say "I want to intercept this event from GitHub" or whatever, and it will create those EventListeners, etc. with the correct information. Because if you have to create everything via YAML, it's a bit complex.
A: So, since we are speaking about both Tekton and OpenShift Pipelines, and rightfully so, we have a foot in the community and a foot in the product: can these things also be done from the upstream UI, or is it something that we add in the OpenShift console as added value? I haven't played with the Tekton Dashboard; I don't know what that UI does.
A: Okay, but these things that Natale mentioned, like creating the EventListeners automatically from the UI, etc.: is that something you can also do upstream, or is it just in OpenShift Pipelines that we have this added value?
B: Oh, I...

C: Yeah, I can just tell from my experience: in the upstream dashboard, we need to paste the YAML files there.
A: Okay, cool. That was a very genuine question; I didn't know what the answer was. So this is basically something we do as part of the added value of the OpenShift console, where we make it even easier to use upstream features, but in a more productized way. I think we have other things too, like designing the pipeline.
B: Yeah, so here we have our Triggers EventListener CRD: we have kind EventListener, as well as a service account, which is used for creating the resources, and we have...
A: Yeah, sorry, just to make sure I understand it correctly: in your payload, when you have your git event, it has something called commit id, and it has something called repository URL, okay, in your JSON payload. And basically what this TriggerBinding says is: extract this thing that you have in the value and store it as the git revision. Is that correct? Like, this is going to be the parameter named revision that will be used somewhere else.
D: And I guess with this JSON you are kind of navigating the DOM of the JSON that came in from the webhook POST, I mean the HTTP request. And I guess those fields change, no, if you are using GitHub or Gitea or Gogs or GitLab? From experience I think they're pretty stable, but sometimes they change. So, if you want to...
A: It's funny that you mention this, Natale. At some point many years ago, I think it was eight or maybe ten years ago, there was a group of software vendors that said: okay, whenever I have an issue, like a Jira issue, if it's in Jira it's one thing, if it's in GitHub it's something else, if it's in GitLab it's something else again. Can we come up with some sort of standard to make these things able to integrate?
A
Can
we
define
pivot
formats
that
any
integration
tool
can
attached
to
and
then
every
tool
fills
this
information
and
there
was
a
standard
that
started
to
be
defined
called
oslc.
A
I
think
it
was
open
services,
life
cycle,
something
I
I
don't
remember
exactly,
but
basically
it's
exactly
what
you
are
pointing
to
like.
So
when
I
have
a
payload
from
git,
it's
gonna
be
a
different
json.
When
I
have
a
payload
from
gitlab,
it's
going
to
be
another
different
json.
A
So
thus
you
you
are,
you
have
a
specific
implementation
for
every
tool
and
the
goal
of
this
thing
was
to
have
one
standard
pivot
format
that
you
integrate
with.
So
it's
always
going
to
be
head
underscore,
commit
under
underscore
id
and
whatever
you
have
on
the
other
front
like
if
it's
git
it's
going
to
be
transformed
to
fit
in
that
field,
so
you
don't
have
to
implement
it
for
every
different
tool.
A: So maybe that's a conversation that can happen upstream, to see if there's a way to do this. It's basically the same thing that happened with Tekton, right? Before, every vendor had its own definition of a pipeline, of what a step is and what a task is, etc., and it was not compatible. We no longer have each one with its own definition, or its own DSL to define a pipeline, like the Groovy for Jenkins and the YAML for GitLab runners, etc. Now, if we are using Tekton, it's going to be the same YAML, it's going to be standardized. So maybe it can make sense to go even further and say: okay, now let's standardize on those bits as well.
A: All right, all right. Just closing the parenthesis here, Natale: this was a project that I had in mind for a long time, to have this unique pivot standard. I even wanted to do it with YAML transformations, where you have, you know, the standard listener, and then you do some transformation to generate the correct payload that we expect.
D: Right, and you see, you have it here as an open source upstream project, and we have awesome engineers talking about it today, which is very cool. And you know, this is another example of Kubernetes as a standardization platform: everyone is converging on the same standard, the same open source, the same project, which is very cool. I think we can go to the next step, because otherwise we're going to run out of time.
D: There is also some interaction in the chat; some people would like to know more about Tekton. We will share some resources to get started with Tekton after this live demo, and also, if you have any useful links, Savita, please share them with us, so everyone can start learning Tekton.
B: So here we have defined a TriggerTemplate. We can see some params that are available from the TriggerBinding: the git revision, the git repository URL, the message, the content type. And in the resource template we are using them like $(tt.params.message), $(tt.params.contenttype).
B
But
just
what
it
does
is
it's
modify
the
payload.
That's
what
already
said-
and
this
is
how
we
define
the
landfill.
You
have
a
service,
it
creates
service
deployment
for
running
which
modifies
it
and
give
it
200
response.
So
what
you
have
is
name
of
the
deployment
name,
space
and
path
we'll
have
soon.
A
So
so
for
another
episode,
I
don't
know
when:
let's
try
to
have
this,
let's
try
to
implement
a
custom
interceptor
that
integrates
with
servicenow
to
do
an
approval,
for
instance,
let's,
let's
have
that
as
a
background
id
that's.
A: We have audio issues, it's dropping a lot, so please excuse me if I rephrase what you said; basically, people who are watching can read this. The most important or interesting thing I see here is the serverless part. Is it correct to say so? Savita, please confirm. So when we spoke about the EventListener, we said there's a pod that is always running to intercept whatever happens.
A: So that's cool, but I'm imagining: if we have hundreds of integrations, like webhooks, that means we have hundreds of pods that are running, but they're just waiting for something to happen, and they are consuming resources. What I understand now is that this can also be serverless: we can have something that doesn't exist, no pods, just a URL, and once the webhook fires, it actually creates the pod, the pod does its thing, and then it shuts down.
C: Basically, until now, whenever we created an EventListener, it used to create a Kubernetes pod, right? Now we have integrated with Knative service, so with the help of Knative service we are able to achieve this serverless behavior. There is another option too: we have custom resources in the EventListener, so we can specify a Kubernetes resource or a custom resource.
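The serverless variant Savita describes is configured through the EventListener's resources field; a sketch, assuming Knative Serving is installed and using the v1beta1 API (names illustrative):

    apiVersion: triggers.tekton.dev/v1beta1
    kind: EventListener
    metadata:
      name: github-listener-serverless   # illustrative name
    spec:
      serviceAccountName: pipeline
      triggers:
        - triggerRef: github-push        # a Trigger resource defined elsewhere
      resources:
        customResource:                  # any WithPod-style custom resource
          apiVersion: serving.knative.dev/v1
          kind: Service                  # Knative scales the listener to zero between events
          spec:
            template:
              spec:
                serviceAccountName: pipeline
                containers:
                  - resources:
                      requests:
                        memory: "64Mi"
                        cpu: "250m"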
A: Okay, that's cool, so they can create their own operator to do something and then integrate with it. So let's go ahead. Thank you very much, Khurram; it was very interesting ground for future episodes. Thanks a lot.
D: Fine, okay, just please, if you can, increase the font so we can see better when you open your demo.
C: Yeah, I see something: the click is not working on my mouse. Okay, yeah.
A: Yeah, no worries, no worries.
C: It's fine now, okay. So everything is installed in this cluster, all of OpenShift Pipelines and so on. One way to make sure that OpenShift Pipelines is installed: we can see a Pipelines section here, with pipelines and triggers, so it shows okay.
C: Okay, so let me create a new namespace called demo; so, project... yep, the project got created. I'm going to use this "From Git" option so that I can specify my own GitHub repo. This is the one I have created for this demo. By default it selects the builder as Go, and I am not going to touch any of the other things; I will just add a pipeline. So what will it do while creating this form?
C: If I click this "Add pipeline" button, it will create a pipeline template for me; I don't need to create the pipeline template manually. The OpenShift Pipelines Operator has integrated all these things, so this is functionality of the OpenShift Pipelines Operator.
C: Yeah, so in this dev console, as part of the UI, in order to check whether my application, or my task, my pipeline, is working or not from GitHub, what we have done is: initially, when the user creates it for the first time, we trigger the pipeline run automatically as well. But later, when we do some edits to this pipeline...
C: Okay, before that, I just want to show that a pipeline run has been triggered already. Just as a refresher: this pipeline run contains three tasks, fetching the repository, building, and deploying, and all three tasks have several steps; those steps actually run the containers and do the operations. So now, how to move on from this one: I want to edit it.
C: Okay, so if I go to the pipeline, I have an edit option, right? Here, "Edit Pipeline" is there. So I can do some edit operation, I mean I can add some task or edit something; and once I do this edit operation, to pick up those changes I need to rerun it here, I mean I need to start it again.
C: But we don't want to do that start manually, right? As we discussed while presenting, we don't want to start any of the pipelines manually. Instead, whenever an event occurs, it should rerun, based on the events. To do that we have the trigger concept, as we discussed. So until now we just created a sample pipeline, and a pipeline run automatically; but later on, whenever there is some change to my repo... so basically my repo is this one.
C: Earlier, there was no "Add Trigger" form here. We have added this "Add Trigger" option, and here we have the option to select which provider we want, whether it is Bitbucket, GitHub or GitLab. Basically, we have supported all of these, and it is built into the operator, so that by default all these things are created already and we can make use of them directly.
C: I can choose the Triggers tab, and you can see an EventListener got created just now; you can see the time as well. And one thing I want to show: I have not created any TriggerBindings, but the OpenShift Pipelines Operator ships all these ClusterTriggerBindings by default, so they're available at the cluster level. Basically, in Triggers we have two CRDs, called TriggerBinding and ClusterTriggerBinding.
C: All these are shipped as ClusterTriggerBindings because everyone can use them across the cluster, and if someone wants them at the namespace level, they can directly come and create a TriggerBinding in the same way, using YAML. We can also see a TriggerTemplate, but before that, among the ClusterTriggerBindings, I just want to show what this GitHub pull request binding looks like. If you see the YAML here, we can see the different parameters it's actually watching:
C: The revision, the action, the pull request number, the full name, and so on, whichever information is required. The same things are used inside the TriggerTemplate, so I'll go back to the TriggerTemplate now. We can see here in the YAML section that we use all those parameters from the TriggerBinding, and in the spec we are making use of which repo it comes from and what the name is and so on, and we are finally using the pipeline which we created initially; we are not creating any new pipeline here.
C: Instead, we are just making use of the already existing pipeline, because we did "Add Trigger" from the existing pipeline, right? So it automatically created this TriggerTemplate and added a resource template of kind PipelineRun, and while adding the PipelineRun, it referenced the pipeline which already existed.
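For reference, a ClusterTriggerBinding for GitHub pull requests along the lines Savita shows might look like this (a sketch against the GitHub webhook payload; the shipped binding's exact name and parameter names may differ):

    apiVersion: triggers.tekton.dev/v1beta1
    kind: ClusterTriggerBinding
    metadata:
      name: github-pullreq                       # illustrative name
    spec:
      params:
        - name: git-revision
          value: $(body.pull_request.head.sha)   # commit at the head of the PR
        - name: git-pullreq-action
          value: $(body.action)                  # opened, closed, reopened, ...
        - name: git-pullreq-number
          value: $(body.number)
        - name: git-repo-name
          value: $(body.repository.full_name)
        - name: git-pullreq-url
          value: $(body.pull_request.html_url)   # handy for tracing runs back to PRs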
C: Yes, but before that, one more thing: I just quickly want to show what this EventListener looks like. This is how the EventListener looks, where we have the triggers with the template and binding.
C: These things are clubbed together so that the information can be shared. So now, as I have created everything, this EventListener internally creates a Kubernetes pod, as we discussed. It keeps on running, since it's Kubernetes-based; it keeps watching for the event on that port, so it's continuously running. Another thing is that when we create this one, it automatically creates a route as well. Oh, okay.
C: Why did it give the error? Anyway, I will make use of this URL and go back to my repo. This is the simple Go app which I used for creating the pipeline. I will go to Settings and add it to the webhooks.
C: Here is the webhook; I have already set it up for testing purposes, so I will just straight away clear everything. Here we specify the payload URL of our EventListener, right, and the content type should always be application/json; this is the one it expects. Then we can select whichever events we want. Right now I am interested in pull requests, so I just click on this one. Let me select individually; I just selected the pull request. For pull requests, we have all these events:
C: Whenever a PR is opened, closed, reopened, assigned, and so on; whenever anything happens on the pull request, an event should be sent to my pipeline.
C: I will just update this webhook; the webhook is updated. I have already created a few PRs for testing purposes. So what I will do: this is an existing PR, right, and I can do any action on this pull request; if I do some action, it should automatically re-trigger for me. Before doing anything, I will just go back here and show that currently we have only one pipeline run in this demo namespace, as you can see. Now I will just close this pull request, and right after this...
C: If I go and see, a new pipeline run is up and running, because an event came to this EventListener. It recognized that pull request event, fetched all the information into the TriggerBinding, gave it to the TriggerTemplate, and finally it created the PipelineRun for me, I mean this pipeline run, from the TriggerTemplate.
C: If I go back here, this is the one I was mentioning: in the TriggerTemplate we have specified a PipelineRun, and you can see here that this PipelineRun is executing the pipeline, which is the original one.
A: Cool, yeah, that's really cool, and we see that it works just fine; great to see that working live. I have a question on the pipeline run. Can you please go back to the UI, to the pipeline run? Can you click on it, on the pipeline run there?
A: Is there something in the metadata? Okay, a trigger event id. Say now I want to understand what triggered the execution of my pipeline, like there was a commit id or a pull request id or whatever: is this stored somewhere in the metadata of the pipeline run, or not, for the moment?
A: Yeah, so maybe that's something we can talk about offline, but what would be very cool is to be able to trace, from the pipeline execution, what started it on the git repo side. Like, if it's a PR, I click on the PipelineRun and I have a direct link to the PR, to understand what code has been merged, these kinds of things.
A: All we have to do, maybe, is to add a label, as Khurram said, that links to the URL of the PR.
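A hedged sketch of that idea: since the TriggerTemplate stamps out the PipelineRun, it could copy a bound parameter into the run's metadata. The keys below are hypothetical, and since URLs are not valid label values, the PR link goes in an annotation:

    # inside a TriggerTemplate's resourcetemplates entry (hypothetical keys)
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: build-run-
        labels:
          demo.example.com/pr-number: $(tt.params.git-pullreq-number)
        annotations:
          demo.example.com/pr-url: $(tt.params.git-pullreq-url)   # direct link back to the PR
      spec:
        pipelineRef:
          name: build-pipeline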
A: All right, all right. That's why we bring our engineers here, so we can also have this type of exchange, even as people are asking questions. So thank you very much. I believe we are already past the hour.
D: Yeah, we are a little bit out of time, you know, on OpenShift TV in the morning, yeah.
D: I put in the chat the link to our learning portal, because someone asked how to start learning about Tekton. We also have a scenario where you can start learning about OpenShift Pipelines on OpenShift.
D: Then, if you like, we have the Tekton Deep Dives, a series of online events where an instructor does a deep dive on Tekton, and the material for learning Tekton is the other link I also put in the chat. Those are two valuable assets that Red Hat Developer offers to start learning OpenShift Pipelines and Tekton on top of OpenShift and Kubernetes. I just wanted to share our resources.
A: Yeah, very good, very good. Okay, so thank you very much again to everyone, Savita, Khurram; it was a great session. Thanks, Natale, for always being such an awesome host and asking great questions. In the next sessions I hope we will be able to cover these things we discussed, like maybe this suggestion of how to trace back to the code. If it comes up in one of the upcoming sprints, I will speak about it also with Samara.
D: I have a suggestion, Jaafar, for our next show: this is on you.
A: Yeah, perfect. So I was going to say, again as a wrap-up, thanks a lot. We already have good topics to cover for the upcoming sessions. We will see for August 11: maybe we can speak about the new feature called Pipelines as Code, where we embed the Tekton definitions in the git repository and then automatically trigger them based on those events. That's another step further, but we first wanted to show this thing, because this thing runs now.
A: Pipelines as Code is in dev preview, but we will definitely speak about it. And I'm very interested in the custom interceptor thing, because I like to tweak things, and I have some very good ideas. You know, the custom interceptor can do what we said: it can pull the data from the pull request and add a label to the PipelineRun to carry the URL. That's a very simple use case we can do already, and then we can look at integration with ServiceNow or whatever.
A: So thanks a lot, thanks again, thank you very much. Thanks to everyone who connected to the stream; thank you very much, Natale, for setting this up and making sure everything runs smoothly; and thank you, of course, Savita and Khurram, and we hope to have you back in other episodes. I wish everyone a great day. We are going to end the session now. Thanks, and see you on August 11. Thank you.