From YouTube: OKD Streams Operator Build Pipeline using Tekton — Luigi Mario Zuccarelli, Sherine Khoury, Diane Mueller

Description: OKD Streams Operator Build Pipeline using Tekton
Guest Speakers: Luigi Mario Zuccarelli & Sherine Khoury
Moderator: Diane Mueller
https://okd.io
Diane: A briefing on the newest and latest initiatives that are taking place in the OKD working group. Today we're going to talk about the operator build pipeline using Tekton. If you don't know what OKD is, it's the community open source distribution of Kubernetes that powers Red Hat's OpenShift, and OKD Streams is an initiative within the OKD working group that we're doing in conjunction with Red Hat engineering, to build pipelines for building OKD. This has been ongoing for the past…
Luigi: Okay, so by way of introduction, my name is Luigi. As Diane pointed out, I'm working on the CFE team. I'll just let Sherine introduce herself quickly as well.
Sherine: Yeah, so I'm Sherine. I'm also with the Red Hat OpenShift team, working on the customer-focused engineering team, so the same team as Luigi.
Luigi: Cool, okay. So we're going to be introducing the operator build pipeline using Tekton. As Diane pointed out, we have this notion of OKD Streams, and she put it really well: not only do we build FCOS (Fedora CoreOS) as the underlying operating system, but we also now have SCOS (CentOS Stream CoreOS), which is also coming out in the MVP, and so we've coined it OKD Streams.
Tekton is a Kubernetes open source project for CI/CD pipeline builds. It's really container based, it's flexible, and it's extremely easy to use — in my opinion; maybe someone might not agree, but it's very easy to use and, as I've mentioned, extremely flexible. At the end of the slideshow we'll have a lot of links.
We'll have a lot of links for you to go have a look at, but the real power of Tekton is the Tekton Hub, where you've got a plethora of tasks that you can use and implement out of the box without even changing a line of code — but you can obviously customize them and use them as needed. So, just a brief architecture overview of what we're trying to build here, why we've used Tekton, and what we're actually trying to do.
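As an illustration of reusing Hub tasks unchanged — a minimal sketch, not the project's actual pipeline definition — a Pipeline can reference the Hub's `git-clone` task as-is:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-pipeline        # hypothetical name
spec:
  workspaces:
    - name: shared
  params:
    - name: repo-url
      type: string
  tasks:
    - name: clone
      taskRef:
        name: git-clone         # installed straight from the Tekton Hub
      workspaces:
        - name: output          # the workspace the Hub task expects
          workspace: shared
      params:
        - name: url
          value: $(params.repo-url)
```

Customizing a Hub task is then just a matter of copying its YAML and editing the steps.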
So at a high level, I've got an architecture diagram just to point out the different pieces that are used in the actual build pipeline. We have our pipeline runs over here — I don't know if you can see this; yeah, over there are the pipeline runs — we have pipeline objects, and we have task objects.
Well, the tasks really could be something like a git clone, or building a Golang program and putting it into a Docker image — that type of thing. This is what we mean by tasks. And then the supporting infrastructure: we have ConfigMaps — ConfigMaps are objects within OpenShift that store key-value pairs, and there we store the credentials used for pushing images. We use Kaniko, from Google's container tools project.
The tool is really cool and powerful: it builds and pushes the images. And then obviously we have a PV (persistent volume) and persistent volume claims, and then all the necessary RBAC to realize the different role-based access control permissions for the tasks.
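A Kaniko build-and-push step in a Tekton task could look roughly like this (a sketch with assumed names, not the pipeline's real task):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: image-build             # hypothetical task name
spec:
  workspaces:
    - name: source              # populated earlier by the git-clone task
  params:
    - name: IMAGE
      type: string
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --context=$(workspaces.source.path)
        - --dockerfile=$(workspaces.source.path)/Dockerfile
        - --destination=$(params.IMAGE)   # Kaniko builds and pushes in one step
```

Registry credentials would come from a mounted Secret or ConfigMap, as described above.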
These are just the different objects that we're using within the operator pipeline tree: you'll see at the top there are environments, overlays and CRDs, and we make heavy use of Kustomize. Kustomize is also an open source tool that really helps with deploying multiple objects or multiple manifests, and it is plugged into the kubectl command line.
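An overlay in that tree might be wired up like this (a hypothetical layout, not the repo's actual file):

```yaml
# overlays/dev/kustomization.yaml — illustrative only
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: okd-team
resources:
  - ../../base                # shared pipeline, task, and RBAC manifests
patches:
  - path: pipeline-params.yaml   # environment-specific tweaks
```

Because Kustomize is built into kubectl, the whole overlay deploys with a single `kubectl apply -k overlays/dev`.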
It is a sort of MVP, as Diane mentioned at the beginning, so it's very naive and very opinionated at this point in time. There are a lot of caveats in here, but what we wanted to show, and what we want to highlight, is that we're able to build operators and then deploy them into OperatorHub. So, more about operators, OLM, bundles and OperatorHub.
The operator really is a way of putting your SREs' — your site reliability engineers' — work into a programming language like Golang, where we can take all the stateful assets and the order in which we deploy things, deploy that programmatically, and also look at the desired state that we want
the deployment to be in. The operator will run in a reconcile loop and make sure that we have the desired state that we want. That's at a very high level. Again, there's lots of good information on operators that we'll point you to at the end of the slides.
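Concretely — an illustrative custom resource, not one from this demo — the desired state lives in the resource's `spec`, and the reconcile loop continuously drives the cluster toward it while recording what it observes in `status`:

```yaml
apiVersion: example.com/v1alpha1   # hypothetical API group
kind: Database
metadata:
  name: demo
spec:
  replicas: 3          # desired state declared by the user
status:
  readyReplicas: 3     # observed state maintained by the operator's reconcile loop
```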
OLM is really the Operator Lifecycle Manager, and that makes sure that the operator is able to have a lifecycle — in other words, if I bump the version of an operator, I need a way of seamlessly upgrading that operator, and we can do that via OLM and OLM bundles.
We'll talk later about the OLM bundles, and you'll see in the demo where we create the operator. The operator might have an agent that it needs — we call it the operand — and so we'll need to bundle the operator and the operand and then put them into a catalog. And the catalog is what we're talking about with OperatorHub.
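A catalog image is surfaced to OperatorHub through a CatalogSource — sketched here with placeholder names and image path:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-catalog                          # hypothetical name
  namespace: olm
spec:
  sourceType: grpc
  image: quay.io/example/catalog:latest     # the catalog image built by a pipeline (placeholder)
  displayName: Example Catalog
```

Once applied, the operators in that catalog appear in the cluster's OperatorHub UI.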
In OperatorHub there are operators that you can't access, because in OpenShift these are subscription based. So in the Red Hat Marketplace, for example, there could be Tekton and Argo CD or some really cool database, but you can't access it because it's subscription based and it's in OpenShift. So we want to build these pipelines to give the community access to build operators and catalogs and push them into the community OperatorHub, so that we have these operators available for OKD as well.
That leads me on, then, to handing over to Sherine to do a live demo. I believe, Sherine?
Sherine: Okay, so what I'm going to show you here is how to build an operator using the pipeline that we've built, on a kind cluster. To start off, what is kind? kind is Kubernetes in Docker: basically, it's using Docker containers as nodes and building a Kubernetes cluster on top.
You can find it at kind.sigs.k8s.io. It's very, very easy to install and it's extremely lightweight, so it's very well adapted to development. You can basically create a cluster by doing something like `kind create cluster` once you've installed it. I already have one, so `kind get clusters` — my cluster is right here. It doesn't want to detect my finger any more.
That's fine, okay — and `kind get clusters`, sorry. So we've got a cluster. As you can see, I'm using sudo here, and that's simply because I'm not using kind on Docker but kind on Podman. If you're interested in that, we can also send some links to tell you how to tweak it. Now, on top of kind, we've installed the Tekton controllers. Installing that is pretty simple: there's tekton.dev and a really nice YAML file
A
Where
you
Cube
CTL
apply
the
thing
and
it
installs
it
for
you,
so
I've
already
done
that
I
can
show
you
here.
I
have
a
namespace
called
tecton
Pipelines,
and
in
here
we
can
see
that
we
have
all
the
services
pods
deployments,
everything
that
is
needed,
including
our
back.
All
of
that
is
already
in
there.
So
it's
a
one-liner.
You
get
tacked
on
on
kind
running,
so
that's
basically
about
what
I
need
in
order
to
to
get
myself
started
up.
A
I've
just
pulled
the
okd
operator
pipeline
repo,
where
the
pipeline
code
lives
right
now.
Also
we
got
the
we'll
get
the
the
links
afterwards
and
what
I'm
going
to
do
is
simply
apply
well,
first,
before
applying
I'm
going
to
change
my
name
space
to
okay,
the
team.
A
This
is
also
in
the
readme.
It
tells
you
to
use
okd
team
and,
as
Luigi
said
a
bit
earlier,
we're
using
customize
in
order
to
be
able
to
apply
everything
so
so
I'm
gonna
apply
here.
I get the pipeline deployed, as well as three tasks. We've got one task, git-clone, to clone the repo from GitHub — this is basically one of the tasks that exist on the Tekton Hub; we didn't really create that. The two that we created are the container-all and the bundle-all. So now that we've applied this to our namespace, we can start the pipeline.
This is the name of the repo where I want to push my operator once all of my Docker images are built and ready.
So this volume claim is really so that we can share files between tasks and between steps of the pipeline — so that we can, for example, get the advantage of caching and not have a long pipeline. So let's — okay — let's start our pipeline. Our pipeline started; it's got this random name, and we can even use this line here to look at the logs and see what's happening.
A
So
here
the
pipeline
is
going
to
take
around
five
minutes,
so
this
is
going
to
take
a
little
bit
of
time
around
seven
minutes
more
or
less
so
it's
gonna
start
the
first
by
cloning,
the
repositories
as
you
see
here
and
then
it's
gonna
go
into
another
task
where
it's
going
to
build
the
containers
so
starting
by
a
big,
golang,
CI
step,
which
is
by
the
way
we've
made
it
not
to
fail
the
whole
pipeline
because
we
know
operator
to
operator
the
the
the
linking
rules
could
could
change
and
the
the
quality
standards
are
not
the
same
so
next
unit
tests
and
then
we
go
into
building
the
operator.
A
So
while
we
wait
probably
for
this
to
kind
of
finish
off,
I
can
show
you
a
little
bit
inside
the
code.
So
this
is
what
a
tecton
task
would
look
like.
There's
the
series
of
parameters,
it's
basically
exactly
the
same
parameters
that
we
passed
earlier
in
the
command
line.
Here
we
see
what
we've
been
doing
with
the
PVCs.
A
If
you
remember
earlier
so
this
is
where
we
load
the
PVCs
into
the
task
so
that
it
uses
the
cache
and
the
PKG
folder
from
the
PVC
instead
of
rebuilding
each
and
every
time
it's
going
to
start
a
new
step.
So
the
yellow
step,
the
goal
length,
CI,
lint
step
that
we
saw
earlier.
Is
this
verify
step?
So
here
you
can
see
that
it's
taking
an
image
that
we've
built
and
it's
using
that
container
in
order
to
run
a
golang
CI.
A
So
two
little
things
to
say
here:
the
go:
bundle
tools
is
a
container
that
has
basically
go
golang,
CI
operator
framework
SDK.
A
The
tools
that
you
might
need
to
get
that
operator
basically
built
and
run
and
be
able
to
to
kind
of
push
it
so
going
back
to
here.
A
We
see
that
our
build
step
is
done,
and
now
what
we're
doing
here
is
preparing
the
Docker
container
of
the
operator
to
be
pushed
to
to
a
registry
we're
using
canico
here,
which
is
very,
very
practical,
because
we
don't
need
to
use
a
Docker
engine.
It
runs
unkind
and
it
can
build
any
container
exactly
like
if
we
were
using
podman
or
if
we're
using
a
docker.
A
It
does
both
the
build
and
the
push
in
the
same
in
the
same
step.
Basically
so
here,
as
you
can
see,
we've
pushed
the
operator
you're
free
to
basically
add
another
step
in
here
to
build
an
agent
if
your
operator
is
using
an
agent
or
a
cube,
rbac
proxy,
for
example,
if
you're,
if
you're,
using
a
cube,
rbac
proxy
with
your
operator-
and
you
need
to
build
that
and
what
happens
next
here
is
a
this
second
task
starts.
A
So
the
bundle
all
is
where
we
start:
building
the
olm
artifacts
or
the
operator
life
cycle
management
artifacts.
That
are
needed
so
that
we
can
use
our
operators
from,
for
example,
operator,
Hub
or
a
community
Hub
and
install
them
on
an
openshift
cluster
on
okd
cluster
and
start
using
them.
So
first
thing
we're
building
a
bundle,
as
you
can
see,
we're
using
operator
SDK
things.
So
it's
it's
based
on
operator
SDK,
and
so
we
call
bundle
validate
once
that
is
done.
A
We
can
push
that
bundle
to
to
to
a
registry,
and
next
we
need
to
create
an
index
and
a
catalog.
So
these
might
look
like
a
fancy
fancy
words,
but
they're
they're
really
related
to
the
way
olm
handles
the
bundles.
So
what
olm
in
the
operator
Hub
shows
is
operator.
Catalogs
and
catalogs
contain
either
one
operator
like
what
we're
we're
creating
here,
we're
creating
a
catalog
containing
one
index
which
contains
one
operator,
but
if
you
take,
for
example,
the
the
red
hat
catalog
it
contains
for
a
release.
A
For
example,
it
contains
all
of
the
operators
that
this
release
has
so
here
to
make
things
simple
and
to
allow
you
to
use
your
operator
really
easily.
We've
created
the
catalog
image
for
you
and
the
index
image
for
you.
Based
on
that,
you
can
use,
for
example,
a
subscription
yaml
on
your
okd
cluster,
and
you
can
very
easily
deploy
the
your
operator
to
the
cluster
from
that
catalog
that
you've
built.
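Such a Subscription against the built catalog might look like this (placeholder names throughout):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: operators
spec:
  channel: alpha
  name: my-operator        # package name inside the catalog
  source: my-catalog       # the CatalogSource created from the catalog image
  sourceNamespace: olm
```

OLM resolves the package from the catalog and installs the operator into the cluster.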
Luigi: Finished! Would you mind just doing a `tkn` describe of the pipeline run that you've just made?

Sherine: Yes.
Sherine: So here is our last run of the pipeline. You can see that this time it even took a minute less than last time, which —
Luigi: — means the caching is working nicely. So, if you don't mind, just `tkn pr describe <name>` and it'll give you a nice fancy little…
A
So
should
we
look
at
the
query
I
o
to
to
see
what
bundles
got,
what
images
sorry
got
got
pushed
Maybe.
So
here
is
well
done.
I'm
just
gonna
share
this
here.
A
And
sorted
by
last
modified,
so
here
are
all
of
the
images
that
we've
built.
We've
built
the
operator
we've
built
the
bundle
that
basically
describes
the
operator
describes
all
the
related
images
there
is
it
describes
which
channel
it.
It
belongs
to
with
version
which
are
back
it
needs,
which
deployments
it
needs.
All
of
that
is
in
the
bundle.
A
It's
really
something
that
the
operator
framework
SDK
builds
for
you,
then
we
have
an
index
which
contains
the
operator
bundle
and
finally,
the
catalog,
which
contains
one
or
more
indexes.
According
to
your,
not
your
use
case.
Luigi: Yeah, we had it running previously on Operate First, but we had some credential issues, so unfortunately we couldn't show it today. But we will — together with the SCOS build pipeline, we'll show both of them in the near future, in the next couple of days really. So I'm going to share the last bit, just to show the links.
Great. So the GitHub is okd-project, and specifically okd-operator-pipeline. Please download it, fork it, play with it. As Sherine showed you, it's really easy to build locally, and I think this is the power of what we wanted: you can build these pipelines locally — you don't need a heavy Kubernetes or OpenShift infrastructure to build it. But, having said that, it also works really well on a heavy OpenShift or OKD Streams type of cluster. So PRs are welcome.
C
As
I've
mentioned
these
the
SDK
operator
framework.io,
there's
some
really
good
documentation
have
a
look
at
operatefirst.cloud
and
then
obviously
okd.io
for
information
on
our
okd
and
okd
streams.
So
that's
it
we'll
hand
over
to
Diane.
Diane: Well, thank you very much for that. It's great to see the progress we're making. I know operators — getting operators to work with OKD and creating the catalogs — has been one of the initiatives within the OKD working group that's been going on for a long time, so it's really amazing to see it in a pipeline.
I know these guys sound like, and are now, experts in this pipeline, but we know there are Tekton experts and gurus out there, and we'd love to have your feedback on this and see if we can't make the pipelines more efficient and effective — and, as always, probably better documented than we manage on our first passes. So if you have PRs or comments or any issues — if you try running this and you have some feedback, or something didn't work quite right —
please let us know. These things should run on any Kubernetes cluster — they're not totally specific to running on OpenShift or OKD — but if we've put anything in there that is, we want to hear about it. So again, thank you, Sherine and Luigi, for taking the time today. I'll add this to the playlist of OKD Streams videos on our YouTube channel, and we'll be following this up with more in the not-too-distant future.
B
So,
if
you're
interested
in
this
stuff,
please
go
to
okd
dot.
Io
come
to
one
of
the
working
group
meetings,
they
happen,
bi-weekly,
there's
a
Community
Development
one
on
the
other
week
and
there's
lots
of
things.
People
can
do
to
get
involved
and
we
just
would
love
to
hear
from
you.