Description
Want to learn Tekton? Get your espresso ready for the OpenShift Coffee Break together with Ian Lawson, Solution Architect at Red Hat, who will give us an introduction to Tekton so you can start creating Kubernetes-native pipelines in your clusters!
Twitch: https://red.ht/twitch
C
They'll say there are two explanations for it. The first one is, I grew up in a village with four Ians, so you could never tell which Ian was Ian. And secondly, I used to play a game called Quake back in 1996 and I had a clan, and that was my Quake name.
A
So thanks for joining us today. Today we have a very interesting topic. I know Jafar is also very interested in that, because he is taking care of a series about Tekton. So the topic of today is an absolute beginner's guide to Tekton, which is the Kubernetes-native pipeline system that is available in the community for Kubernetes and OpenShift. Before we go, I hope you had your coffee shot, as is tradition for the OpenShift TV Coffee Break.
C
Yeah, so Tekton is basically a cloud-native pipeline system that's built into Kubernetes, and it's actually part of OpenShift. It's kind of a replacement for the old Jenkins way of doing things. For people who have never had experience with Jenkins: Jenkins was a build tool.
C
We used to actually integrate Jenkins directly into OpenShift 3, but the problem was that Jenkins is a standalone engine, so whenever you kicked off a pipeline, OpenShift used to have to sit there and wait for Jenkins to complete or fail. So around about two years ago, Google and Red Hat got the idea of actually building a cloud-native, or Kubernetes-native, version of pipelines, so that you could have visibility into the atomic components of what makes up a pipeline.
B
Let's assume that our viewers know nothing and go from scratch; that's totally fine.
A
That's fantastic. I would like to remind the people attending: if you would like to put any question to Ian during the show, please write the question in the chat and we will bring it to the presenter. If you want to share any use case of Tekton, or if any one of you is already using Tekton, please write it in the chat. If you have any question about Tekton, we'll try to answer it today.
B
And so, just maybe one quick comment. What I like about this Tekton thing is that it basically extends Kubernetes with some native CI/CD concepts. So instead of relying on an external tool, like you said, Jenkins or other CI/CD tools, Kubernetes basically becomes the CI/CD engine, and it understands things like pipelines, tasks and so on.
C
We've pushed it as kind of a CI/CD tool for building pipelines: build pipelines, distribution pipelines and things like that. But you can automate anything within OpenShift, because Tekton itself works with images and works with containers. You're not limited to using Tekton to write pipelines that do builds. You can use Tekton to do things like automating adding users, adding projects, adding network policies; any object that you have access to within OpenShift, you can actually use as part of a task within Tekton.
B
Yeah, so I don't know if you're going to explain in detail how it works with images and so on, but that's definitely one of the, I would say, big advantages: the extensibility, and how easy it is. You know, as long as you can create a container image with whatever you need in there.
C
Now, if you keep that in mind: what Tekton actually does is extend that object model to include things like pipelines and tasks. They are the atomic components that make up pipelines. Once you understand that, under the covers it's just object state.
B
Thanks. And so, where did Tekton come from? I mean, are we open to questions now, or do you want to present whatever you wanted first?

C
Absolutely.
C
There was an extension to Jenkins that allowed Jenkins build files to call into OpenShift and actually interact with the object model, so people could build Jenkins build files that contained things like: do an OpenShift build, wait for the build to finish; do an OpenShift deployment, wait for the deployment to finish. And that was fantastic. It gave you that kind of atomic interaction from a Jenkins perspective. But the problem was always that if you wanted to run a build process or a CI/CD process using Jenkins, OpenShift would have to hand over control and then just sit there.
C
So you could be running tasks within Tekton that actually interact with the metrics of OpenShift to decide what they do next. If your build is taking too long, you could have another task spin off that does another build somewhere else, or pushes the build to another labeled node. You've got that fantastic fine-grained control over what you can actually do within a pipeline, as well as visibility into the pipelines themselves.
B
Yes, yeah, absolutely. And what I also like about it is that there were many, many CI/CD tools spread across the ecosystem, and most of the time they had very specific, as you said, DSLs, their own ways of describing the pipelines once they adopted pipelines as code. So what I like with Tekton is that it acts as a sort of standardized way of describing the pipelines, of course in the Kubernetes landscape. But this also comes from the Continuous Delivery Foundation, where all of those CI/CD vendors came together and said: okay, let's now come up with a more standardized approach. What are the main concepts? What are the constructs that we need? And instead of relying on each of our specific engines, let's push that into Kubernetes and standardize on how we describe pipelines as code, and then anyone can leverage, I would say, Tekton in their own tooling afterwards. Yeah.
C
Yeah. So the other thing I'd add, because I didn't really do an introduction: I only pretend to work at Red Hat; I am a software developer. I was a software developer for 25 years. I was a developer before Agile, and I was a developer during Agile, or "fragile" as we used to call it, and I used to spend a huge amount of my time having to write and develop the pipelines. It used to burn a lot of my development time as a developer.
C
I don't mind the YAML. I spent a huge amount of my career debugging and decoding XML, so that was one of the key things I did, and writing it, because I used to write browsers and I used to write search engines, but...
B
It can be very tricky, but hopefully there are some, you know, some linting tools that help with that. But yeah, let's see how we can start from scratch and see how easy or hard it is to create YAML files.
C
So, can I share the screen? Is the screen there? Yeah. So this is my screen. First thing before we start: Tekton has the best logo. I absolutely love that logo; that logo is super. But I showed it to a customer, and he said, basically, "why is that cat wearing a suit?", at which point I could never look at that logo the same way again. But no, it's an absolutely fantastic logo. That's the Tekton web page, basically; it's a very good resource for actually learning Tekton.
C
Tekton, as we said, is a standalone project, an upstream project. Someone was asking about how you actually install it within OpenShift: it is available through the OperatorHub. So what we've done is we've written an operator that will actually install the objects you need, and it also installs additional components within the OpenShift user interface for actually doing builds of pipelines themselves. But we'll start with the very, very basics of what Tekton is, and I want to show you basically a screen, my user interface.
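For reference, installing the operator outside the web console boils down to creating an OLM Subscription. This is a sketch of what that manifest typically looks like for the Red Hat OpenShift Pipelines operator; the channel name varies by cluster release, so check the OperatorHub entry on your cluster before applying:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator-rh
  namespace: openshift-operators      # cluster-wide operators live here
spec:
  channel: latest                     # assumption: channel name depends on your release
  name: openshift-pipelines-operator-rh
  source: redhat-operators            # the Red Hat-supported catalog
  sourceNamespace: openshift-marketplace
```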
C
You know, I don't know if anyone else has got a Lego Red Hat, but they are really cool, and I'm a huge, huge Lego fan. To be honest, pipelines remind me of Lego, because when you're building pipelines it's all about the combination of atomic components. You have what's called a task, and a task is basically one or more steps that can be executed in different container images.
C
I say I've forgotten all this, so I was playing with it yesterday and I created a complex pipeline made of a number of different tasks. But if I show you the example of an atomic task: we've got something here called c-task-1, here's the YAML, and you'll see that it's an incredibly simple way of defining this.
C
It is a kind: Task, which is defined by the tekton.dev API. We're currently running it at v1beta1, so this is an older version I've actually been using. You've got a name for the task, and then you've got a specification that defines what that task does. Now, the tasks are the atomic components and they can return true or false, so basically the output of the task determines whether or not a pipeline progresses from that point. But you can see that this task is actually made up of two steps.
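The on-screen YAML isn't captured in the transcript, but a minimal two-step task of the shape being described might look like this (the name and images are illustrative, not the ones from the demo):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: c-task-1
spec:
  steps:
    # step one: print the OS release of the step's image
    - name: show-version
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["cat", "/etc/redhat-release"]
    # step two: a simple echo
    - name: say-hello
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["echo", "hello from c-task-1"]
```

Each step runs in its own container image, which is what makes a task the atomic unit of a pipeline.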
C
The second step is just basically doing an echo, so these tasks aren't very exciting, because what I want to do is build a sort of component pipeline based on these tasks. But you can see that the actual definition for how you build these tasks is incredibly simple, as is the fact that you can actually include images and then run the commands on the images themselves. There is a bit of a cheat that was kind of used in the old days.
C
What you would do is create a command and then pass the arguments in. That became very complicated if you had multiple commands you were trying to run, so you do have the ability to put a script in there and actually run an executable within the actual image itself. So you can look at things like running a bash script. I thought I might have an example; let me look at ubi-task-1.
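The script field being described replaces the command-and-args pair with an inline executable, which is much easier to read once a step does more than one thing. A hypothetical sketch (task name and image are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: ubi-task-1
spec:
  steps:
    - name: run-script
      image: registry.access.redhat.com/ubi8/ubi-minimal
      # an inline script instead of command + args
      script: |
        #!/bin/bash
        cat /etc/redhat-release
        echo "running several commands in one step is easy here"
```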
C
Exactly, here it works as well, yeah. One thing to say is that these tasks are extremely easy to write, and people have been writing them, people have been working on them. There's actually a hub on tekton.dev where people submit individual tasks for doing things like a source-to-image build, doing a build, doing a git pull, doing a git clone, all those kinds of things.
C
So you could write a task within an OpenShift pipeline that actually uses the oc command to talk to OpenShift and run oc commands itself. Now, the task definitions themselves are not runnable; they're just cookie cutters. They define what a task does and how it behaves. When the task is actually executed by Tekton, it creates what's called a TaskRun object, and that TaskRun object is a temporary, stamped instance of the task being executed. And this is the difference; this is where Tekton is very clever.
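In other words, the Task is the template and the TaskRun is one execution of it. A TaskRun that stamps out an instance of a task could look like this (the task name is assumed for illustration):

```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  # generateName gives each execution its own unique, stamped name
  generateName: c-task-1-run-
spec:
  taskRef:
    name: c-task-1   # the cookie-cutter Task this run instantiates
```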
C
It has pipelines, and the pipelines are similar to tasks: they're basically a cookie cutter that links all the tasks together like pieces of Lego, and it defines how you run the pipeline. A PipelineRun is an execution, a temporal execution, of that pipeline instance. Once you get it clear in your head that the task and the pipeline are the definitions, the templates for how you execute these things, and the TaskRuns and the PipelineRuns are the individual instances of executing them, it all clicks. Does that make sense?
B
Yeah, that totally makes sense. And maybe, to explain that differently: so you're saying we can have a sort of blueprint, a template, and that's like the pipeline and the task, where we'll have some templated fields, and then when you create a TaskRun or a PipelineRun, you say "this is the value I want to use, for example, for this field in the pipeline template or in the task template". Is that it?
C
Yes, yes. That's one of the things I missed: you can actually put parameters into the tasks and parameters into the pipeline itself. And that's the beautiful thing about defining the tasks and pipelines as cookie cutters: when you execute, for example, the TaskRun, you provide the parameters that can parameterize and change the configuration of the task you're running. Yeah.
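As a sketch of that parameterization: a Task declares params and references them with the $(params.<name>) variable, and the TaskRun supplies the values (all names here are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: greet
spec:
  params:
    - name: message
      type: string
      default: "hello"          # used when a run supplies nothing
  steps:
    - name: print
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["echo", "$(params.message)"]
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: greet-run-
spec:
  taskRef:
    name: greet
  params:
    - name: message
      value: "hello from this particular run"
```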
C
We use that a lot, and one of the nice things about OpenShift is that OpenShift comes with a pipeline builder, which allows you to actually physically build this, and it will actually force you to fill in the non-optional parameters before you execute the task or before you execute the pipeline.
A
...in the terminal, so it's more visible. And then during the show we can also share: if you have this example somewhere on GitHub, we can share the link so that people can have a look.
C
I do, and it ties in nicely with a question that was asked earlier, because I actually built this. If we do a pwd: this is actually a pipeline that's part of an Argo CD demo. Argo CD is that wonderful state engine that will pick YAML definitions out of a GitHub repo and enforce them on a cluster. This pipeline is actually part of a GitOps demo, and I'll show you where it lives in the outside world, so people can have a look if they want.
C
There's a wonderful level of complexity you can get to. We'll talk about the basics and the simplicity of Tekton, but the sky's the limit in how you can use it. Combining it with Argo CD, combining it with the GitOps side of OpenShift, means that you can pre-define things like all of your pipelines, and you can pre-define your pipelines to create users, create network policies, all...
C
...objects within OpenShift itself. And you can store all those in a GitHub repo and then actually enforce them using Argo CD. So you get that wonderful config-as-code and that wonderful kind of automation, which takes you back to my point that in the old days I used to spend too much time writing these pipelines and not enough time writing code. By having these pipelines defined in YAML, and also having them enforced via Argo CD...
C
...it means I can spend 95, 96, 97 percent of my time writing actual code and not worrying about the process of actually deploying it, building it, doing all those kinds of things. And as I said, when I talk to customers about this I tend to get a bit over-excited, because you can automate everything within OpenShift. You can automate all your day-two operations: not just building your applications, deploying your applications and staging your applications, but every aspect of the day-to-day operations of OpenShift itself.
C
For me, the big thing, to be honest, going back to the basics, is the fact that you can execute a container in a task. So anything you can put in a container image that can be executed within OpenShift can form a task within a pipeline, and that's amazing. And going back to that thing we were talking about with the basics, where I'd actually defined the task and all those kinds of things: what I did was create a pipeline a bit like the Mouse Trap game.
C
If you remember Mouse Trap, where you used to build that complicated contraption and then kick a ball off and everything would move through it: what I was doing was trying to make as complicated a pipeline as possible. So what this shows you is that when you're building pipelines, just by using a tag called runAfter, you...
C
...define the sequence and the dependencies of the actual pipeline itself. There's another tag within pipelines called "from", which allows you to define which resources come from which tasks, so you can define the actual way in which all these tasks live together. So what you're looking at here is a pipeline that says: I'm going to run task 1, task 2 and task 3 simultaneously. When task 1 and task 2 are completed, task 4 will start. When task 2 and task 3 are completed, task...
C
...task 5 will start. Task 6 waits for task 4 and task 5; task 7 and task 8 wait for task 6; and then we have what's called a finalizer. They're all basic, they're all pithy: these tasks are just printing things out, I'm not doing anything exciting, but I wanted to see how I could actually create that complexity, in its simplicity, using the YAML. And if we look at the actual pipeline itself for this, again, one of the nice things about the OpenShift system...
C
...is that you have this fantastic editor within OpenShift that allows you to actually edit it. You can see the actual specification is incredibly simple. It's things like: task five runs after task two and task three. The taskRef defines what makes up task five; in this case it's a kind: Task and the name is c-task-5. That c-task-5 defines the task object that's actually used as part of task five.
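A cut-down sketch of that fan-out and fan-in wiring, using runAfter (task names are illustrative, not the exact demo pipeline):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: mousetrap-pipeline
spec:
  tasks:
    # task-1, task-2 and task-3 have no runAfter, so they start simultaneously
    - name: task-1
      taskRef:
        name: c-task-1
    - name: task-2
      taskRef:
        name: c-task-2
    - name: task-3
      taskRef:
        name: c-task-3
    # task-4 waits for task-1 and task-2
    - name: task-4
      runAfter: ["task-1", "task-2"]
      taskRef:
        name: c-task-4
    # task-5 waits for task-2 and task-3
    - name: task-5
      runAfter: ["task-2", "task-3"]
      taskRef:
        name: c-task-5
  # the finalizer runs once everything else has completed
  finally:
    - name: cleanup
      taskRef:
        name: c-cleanup
```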
C
So it's like this kind of very, very cool Lego where you put all these things together, and it is just way too much fun. I should be doing other things, like writing software, but no, this is way too much fun. What I'll do is basically set that going, just for a giggle, and you can watch. One of the nice things about it is that you can actually watch the individual steps running. So, if you remember, each of these tasks is two steps.
C
What I've done with the tasks is get them to output the version of RHEL before they echo the actual thing they're doing. So those tasks are running; these tasks are waiting; this task has now kicked off because task 1 and task 2 have finished; this task has kicked off because task 2 and task 3...
C
...have finished. Task 6 has kicked off because these two have finished. So you can see you can actually build a huge level of complexity into the pipelines themselves using a very, very simple setup, a very, very simple approach. And I'd say that's a very pithy example. This finalizer is nice because it allows you to clean up.
C
One thing I haven't talked about, because it's not part of this pipeline, is another aspect of Tekton called workspaces. What workspaces allow you to define is persisted storage that exists beyond the bounds of the tasks. Because the tasks are actually containers, they execute, and when they're finished they go away, the way all containers do. So what you do for persistent state is you have this concept of what's called a workspace, and a workspace defines a file system...
C
...that's exported into the task. Now, the lovely thing about it is that you define the workspace as a component within the pipeline, but it's not instantiated until you run the PipelineRun. What that means in English is that you're not hard-bound to the definition of the workspace as part of the definition of the pipeline. The pipeline just says: I need a workspace.
C
The PipelineRun allows you to define it as, for example, a persisted volume, which retains its state between runs, or an on-host version, which will only run on the node itself. So you can define the actual nature of the workspace itself. If you want to have, for example, a temporary workspace that's just a scratch pad for that container to use, something that's not part of the container image, you can define it as a workspace and then just define it as an ephemeral component. And that's the beauty of it.
C
So if we look at the YAML for this, you'll find it's slightly simpler than the previous one, but in the tasks we have the concept of a workspace, which defines the name of the workspace within the task and which workspace defined within the pipeline to use. In this case we have one workspace, but you'll see there's no definition of what that workspace physically is, because that's not part of the pipeline definition. You only have to instantiate that when you physically execute the pipeline using a PipelineRun, and I'll show you this in action.
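A sketch of that separation: the pipeline declares a workspace by name only, and each task maps its own internal workspace name onto the pipeline-level one (all names are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: ubi-file-pipeline
spec:
  workspaces:
    - name: shared-data          # declared here, but no storage is defined yet
  tasks:
    - name: create-file
      taskRef:
        name: create-file-task
      workspaces:
        - name: output           # the name the task uses internally
          workspace: shared-data # the pipeline-level workspace it maps to
```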
C
So if we go to Pipelines, ubi-file-pipeline: what I'm going to do is start it, and what OpenShift will do is scan the pipeline and say: well, I need a workspace, so please tell me what this workspace has to be. I've got a number of different choices: I can choose an empty directory, or I can bind it to a ConfigMap, to a Secret, to a VolumeClaimTemplate or to a PersistentVolumeClaim.
C
So what I'm going to do is bind it to a PersistentVolumeClaim, and I previously created a PVC in here. A PVC is just basically a piece of storage that's locked into that namespace. So I'm going to give it that piece of storage, and when this pipeline is actually executed as a PipelineRun, that PVC is incorporated into each of the tasks that state it's required within the actual pipeline definition.
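The binding itself happens in the PipelineRun, which is roughly what the console generates when you pick a PersistentVolumeClaim in that dialog (the claim name is assumed for illustration):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: ubi-file-pipeline-run-
spec:
  pipelineRef:
    name: ubi-file-pipeline
  workspaces:
    - name: shared-data
      persistentVolumeClaim:
        claimName: my-pvc        # the PVC created beforehand in this namespace
```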
C
So when I execute it, what you'll see is that task is now running. If we go in, what it's doing is actually pulling down the image, executing that container within the actual space itself, and attaching that PVC. So as part of the create it's pulling down the Universal Base Image; the step ID tells me what version it is. It then goes to the next step within the task, which is the check; this is the second component.
C
If you go to the pipeline, you'll see that it's actually running. What this one did was actually create that physical file within the PVC; the check one is basically just doing an ls to see that it's there; and the remove is doing a delete. So if you go to the check, you'll see that what the check actually did was output what version of RHEL it was running, echo that it was checking the file had been created, and do an ls within that file system.
C
What I did was an ls -alZ, to show you the SELinux constraints. So what it's doing, when it actually creates this PVC, is applying SELinux constraints which bind it to this namespace. And you can see that the pipeline's finished. So what that pipeline was doing was effectively three independent tasks that were using a physical workspace to persist state between the tasks themselves. Again, another pithy example, but a more real-world use case for this is that you could have one task that does a git clone.
B
Yeah, I was going to ask a question. I don't know if you are using results and passing values from one task to the other, but...
C
I wasn't; these are pithy examples. You can use the "from" clause to actually push resources from a task to a task. What I tend to do, and this is the kind of warts-and-all discussion, so I'm going to tell you about the bad bits and the good bits: we had the concept of what was called pipeline resources, which was the idea of another type of object which allowed you to define physical resources that could be consumed as part of a pipeline.
C
It became too difficult to actually build, so we've actually deprecated the concept of pipeline resources. What we do now...
C
...the way to do a good pipeline is to think about the actual tasks themselves as sausages in a sausage machine, and then store the state externally to the tasks. So if I'm doing, for example, a git clone, I will actually create a workspace and I'll make sure the task git-clones into that workspace.
B
Yeah, sorry, I was actually referring to results, not resources. So, for instance, when you git clone to a specific path, or when you touch a file within a specific path, is there a way to say: okay, this first task has created this output, and I want another task that doesn't know about that task but needs that output from there?
B
I think there's this notion of results, where basically the initial task will output some values somewhere in the results, and afterwards in the pipeline you can reference tasks-dot-results-dot-whatever-you-put-in-there to use that value elsewhere.

C
Absolutely, yeah.
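As a sketch of that mechanism: the producing task writes to $(results.<name>.path), and a later pipeline task consumes the value with the $(tasks.<task>.results.<name>) variable, without needing a shared workspace (all names are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: pick-version
spec:
  results:
    - name: version
      description: version string produced by this task
  steps:
    - name: write-result
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        #!/bin/bash
        # whatever is written to this file becomes the result value
        printf '1.2.3' > $(results.version.path)
---
# In the consuming pipeline, a later task references it without
# knowing anything else about pick-version, e.g.:
#
#   - name: use-version
#     runAfter: ["pick-version"]
#     params:
#       - name: version
#         value: "$(tasks.pick-version.results.version)"
#     taskRef:
#       name: print-param
```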
B
Yeah, I mean, yeah, PVCs are definitely useful, because, for instance, you are going to clone the code into a folder in the workspace.
C
Yeah, yeah, that's doable. I see someone just asked a question about whether there are practical examples for tasks beyond the simple ones. Yeah, there are. There's a fantastic... I think someone put up the link to hub.tekton.dev, which is basically an open source repository where people can just push these kinds of tasks.
C
So what I've done here is an oc get clustertasks within the namespace that I'm in, so I can see things like all the source-to-image components that have been expressed as ClusterTasks, and I...
C
The complexity of a task can be immense. We're thinking about tasks as atomic components, but you can do a huge amount in one. I've seen customers who have actually produced a composite image that contains, for example, Maven and SonarQube and other things like Siege in a single container, and they've got a single task that does the whole thing: build the actual software, run a SonarQube set of tests on it, run a Siege against a hosted version of it to see how it performs under load, and they do all that as a single task.
C
I've seen other customers that have very, very small atomic tasks and bigger pipelines with multiple tasks, very complicated pipelines. There's no sort of common guide on which is better or which way to do it. That's the joy of having this kind of box of technical Lego, which allows you to build these things. Does that make sense?
A
Yeah, thanks, yeah. And there was also a question from Badujan: would you mind talking about Tekton bundles?
C
Tekton bundles, yeah. There's a lot of new stuff coming along on the Tekton side. What we talked about today is the absolute, absolute basics, which are the atomic components, but we've got this concept, with another cool image down here called Tekton Friends: basically what they're doing is producing much bigger versions of the tasks, the bundles, which allow you to put things together, pipelines together, and all those kinds of things. It is a growing project.
C
That's the other thing to be aware of: Tekton is a reasonably new project; it's only been around a couple of years. I don't think we're at version one, and correct me if I'm wrong, guys, but I think the operator... let's have a look. So the operator version, I think, is 0.7 off the top of my head; let me have a quick look.
A
Now that you are talking about the operator, there was a question; let me go there. So: can Tekton be installed from the operator catalog? It looks good. Yes.
C
It is one of the Red Hat supported operators. For those who don't know, OperatorHub has three levels of operators. We have community operators, which are the upstream ones; you can execute those within an OpenShift cluster, but they're not supported, in the sense that the behavior is not supported. The execution of the operator is actually supported by OpenShift, but the behavior of the operator itself...
C
...isn't. You have another one, which is called a marketplace operator, and that's really cool: you can install the operator and then buy an external license from a third-party vendor that provides support. And then you have the Red Hat operators. OpenShift Pipelines is a Red Hat operator.
C
What that means is that when you actually install it as part of OpenShift, Red Hat, assuming you've got the correct support levels, will support every aspect of its execution: not just the operator, but the behavior of every single one of the instances of the objects that the operator manages. I saw another question that talked about disconnected environments. I do a lot of work, yeah, I do a lot of work with disconnected customers.
C
You actually put them into a certain defined tar file, and you co-locate that on your disconnected system with a proxy, which can then use it to serve an operator hub within your disconnected environment. So anything you can do within OperatorHub, you can do in a disconnected environment. You just have to do some work up front to actually produce those tar files, which contain the manifests that allow the actual hub itself to express it into the OpenShift cluster. Does that make sense?
A
Yeah, yeah, thanks. And this is for the installation of the operator; I think then there might be some consideration about: hey, if you need to do a build, then you need a proxy to, I don't know, Maven or the Node.js package manager. So it's a journey where you have to proxy or mirror the OperatorHub, but also the dependencies of your software. So you need kind of a mirroring system.
C
Yeah. So one of the nice things about the latest version of OpenShift is that, as part of a disconnected environment, we allow you to install and use Quay. Quay is our enterprise registry system, and it now comes as part of the bundle for a disconnected install. So when you do a disconnected install, it will stand up an instance of Quay for you. What you need to do to actually install in the disconnected environment is simply follow the instructions for downloading the appropriate images.
C
That's all the images you need to actually do an installation, which is the stuff I was talking about, plus the images for the actual operators and the manifest definitions for the operators, and then you just basically host them on the internalized version of Quay. Then what happens is OpenShift will install itself from that instance of Quay, and it will use that local version of Quay as its point of reference for all its images. So if you have to do an update, or I have to do a patch...
A
Yeah, no, we have a lot of questions. For instance, this Rock Count, who looks very active in the Tekton and Argo CD chat, is wondering if you have any experience with the kam client, if you have any opinion of the kam CLI.
C
I've never used it. As I said, I've never used it. One thing I would say, and I've done this, and this is a pitch for OpenShift, so I apologize: previously I've been banging my head on doing demos for years, and basically, for example, in this demo here I've used my command line, so I'm always flicking backwards and forwards between the command line. What...
C
...we've now got, and this does tie in somewhat to the question about kam, is a new operator you can see up here, which was released only a couple of days ago, called the command line operator, and what it does is actually allow you to spin up a command line within OpenShift itself. Now, the reason I get excited about this is that, out of the box...
C
...if I type help: it has installed the latest versions of all the CLIs that you really need to interact directly with the system, and one of them is tkn. So this has the actual tkn command-line client installed within it, so you can make direct Tekton requests into the system from this client. I haven't had experience with the kam side; what I tend to do is use this. I haven't really had a good play with it.
C
The only thing I've noticed, and I've raised this as an RFE to the guy that wrote it, is that it drops you into the openshift-terminal project by default. So even if I'm in, for example, the pipelines demo project, it will drop me into this. But if I change, for example, my project to demo, I can use tkn directly...
C
...to actually interact with the Tekton components. So I can build pipelines; I can interact with the tasks, the TaskRuns, the pipelines, the PipelineRuns. Because, well, I love the OpenShift user interface, but that's because I used to design user interfaces and I think it's a very clever user interface. But I know a lot of people are command-line lovers, so this gives you the ability to use the command line from within the OpenShift user interface itself. So that was me not answering that question at all.
A
No, thanks for showing it; I think it's really cool, you know, this web terminal. You don't have to install anything. You just click over there and it starts this container, this pod, containing the Tekton CLI and kubectl. With kubectl and the Tekton CLI you don't need anything else to get started with Tekton.
C
It is nice. As I said, the other thing is that, because this is run by an operator, it will update those command lines itself when they're released.

C
So you don't have to worry about having the latest version of the command line that's applicable to the actual cluster itself. Because this is running on a 4.10 cluster, it's pulling down the 4.10 operator for that terminal, which has the latest versions of the command lines. If a new version of a command line is released, the operator is updated; that operator update is shipped as part of a patch, and you'll get it automatically within the command line.
C
So it means you don't have to worry about keeping these things in sync, because we have a lot of command lines; there's a lot more that I use above and beyond these as well. And the nice thing about this is that, because it's an open source project, you can actually interact with the team that writes it and raise RFEs, to say, for example, "oh, I need the latest command line for interacting with my PlayStation 5", and they'll decide whether or not they want to incorporate it.
B

A

A

C

C
You could have another task that's waiting for a git commit, or an update to the git repo itself, and by having this trigger, what will happen is: it registers a webhook, and when the actual git repo is updated, it will automatically kick off a task which, for example in the pipeline we're talking about, does the git clone again and repeats the pipeline. So you can automate a lot of the eventing within Tekton; there's a number of different triggers you can actually put in there.
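As a rough sketch of the webhook mechanism Ian describes, a Tekton Triggers EventListener exposes an endpoint that a git host can call, and a TriggerTemplate instantiates a new PipelineRun for each event. All the names here (the listener, binding, and template) are hypothetical placeholders, not objects from the demo:

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: github-listener        # placeholder name
spec:
  serviceAccountName: pipeline
  triggers:
    - name: on-push
      interceptors:
        - ref:
            name: github             # validates the GitHub webhook payload
      bindings:
        - ref: github-push-binding   # maps payload fields (repo URL, revision) to params
      template:
        ref: rerun-pipeline-template # creates a PipelineRun per event
```

Pointing the repository's webhook at the route exposed for this listener is what makes a push "repeat the pipeline" automatically.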
A
Yeah, of course, and your example is pretty cool, because as a parallel task, then there's, you know, some order; it's nice also to do it after the example, yeah. I think it's worth doing a comparison with Jenkins, no? Most people know Jenkins. So, comparing task and step and stage?
C
Yeah, I'd say it's kind of a one-to-one map between the stages: a stage within Jenkins is equal to a task within Tekton. However, and I have to say this to a lot of customers, we haven't discontinued Jenkins.
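The stage-to-task mapping can be sketched like this; the task name, image, and the Jenkinsfile snippet in the comment are made-up illustrations, not taken from the demo:

```yaml
# Roughly equivalent to a Jenkinsfile stage such as:
#   stage('Build') { steps { sh 'mvn package' } }
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build                       # one Jenkins stage ~= one Tekton Task
spec:
  steps:                            # each sh step in the stage ~= a step here
    - name: mvn-package
      image: maven:3-openjdk-11     # every step runs in its own container image
      script: mvn package
```

The key structural difference is that each Tekton step runs in its own container, rather than on a shared Jenkins agent.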
C
It doesn't have to be a big bang to move to Tekton. You can install the Jenkins operator, you can still use Jenkins, you can still use the OpenShift DSL. So if you've got five, six, seven years of experience with Jenkins build files, and you're looking at Tekton and thinking "well, I don't really understand this yet, and I don't want to throw away the work we've done"...
C
You
can
install
jenkins,
you
can
still
use
jenkins
and
then
in
time
you
can
move
tasks
over
one
by
one
and
build
small
pipelines,
larger
pipelines,
test
them
and
do
all
those
kind
of
things,
and
that's
one
of
the
beautiful
things
about
this
I
did
see
and
I
the
reason
I'm
smiling
is.
I
did
see
one
customer
bless,
what
they'd
done
was
they
created
some
techton
tasks
and
one
of
the
techton
tasks
ran
jenkins,
which
was
great.
B
And
so,
if
that
helps
there's,
I
think
a
migration
guide
from
on
on
the
openshift
documentation
that
natalya
has
just
pasted.
Thanks
basically
gives
an
overview
of
here's.
The
terminology.
C
Yeah
yeah,
as
I
said
it's
I
I
can't
say
how
much
I
enjoy
playing
with
tech
on
it.
It's
a
fantastic
technology
and
because
of
the
fine-grained
access
to
the
kubernetes
object
model,
the
sky's,
the
limit.
I
think
you
have
to
sort
of
be
very
careful
about
getting
a
little
bit
over
excited
with
tactile.
I've
seen
some
customers
that
have
basically
gone
to
the
nth
degree
and
they
produce
pipelines
for
literally
everything
and
then
what
you
get.
C

C
It has to be complex enough to do the tasks you need without being over-complex. There is the capability to go absolutely insane with this, and I mean, this pipeline here is an example of it. You know, that's way over-complicated for what I was trying to do; it's just basically 80 decades, yeah.
A
Yeah
thanks
for
sharing
the
link
jafar,
I
think
it's
a
very
cool
I
didn't
know
they
had
had
to
the
documentation.
It's
it's
cool
to
do
this
comparison,
this
mapping,
yes
again,
and
also
what
do
you
think
about
the
difference
like
jenkins?
Having
one
agent
running
in
some
hosts
can
be
also
a
kubernetes
pod,
but
this
single
agent.
This
is
doing
everything
and
no
persistent
volume,
just
one
persistent
volume
max
no
and
one
agent
running
in
a
pod
or
in
a
virtual
machine
here,
the
content.
A
As
you
said,
you
can
have
a
persistent
volume
shared
among
different
pods
right.
How
do
you
see
this
architecture
comparing
to
jenkins.
C

C
On the Tekton side, because everything's atomic, because the tasks are running as containers themselves, you can treat them as instances within OpenShift. So if you want to scale them up, you would simply scale them up; you wouldn't have to create additional instances of Tekton, or additional instances of the task. You can just change the deployment definitions to make these things scale up and scale down. One thing I haven't seen yet, which I really want to investigate, and this is why I get excited about something else...
C
I
love
k-natives.
I
could
talk
for
hours
about
k-native
kennedy
was
the
coolest
one
of
the
coolest
technologies.
We've
got
and
kennedy
put
very,
very
simply
allows
you
to
scale
an
application
down
to
zero.
Now
that
sounds
simple,
but
what
it
does
is
it
allows
you
to
basically
take
an
application
offline.
So
it's
not
consuming
any
resources,
but
it
still
has
an
end
point.
So
if
you
push
traffic
into
it,
it
spins
up
again
I've
yet
to
see
people
using
techton
with
k
native
now,
with
k
native.
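A minimal sketch of the scale-to-zero behaviour described above, assuming a placeholder image; Knative Serving scales the pods down to zero when the service is idle, but keeps the endpoint live so incoming traffic spins it back up:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                  # placeholder service name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"  # allow scale-to-zero (the default)
    spec:
      containers:
        - image: ghcr.io/example/hello:latest   # placeholder application image
```

Nothing else is needed: the scale-to-zero and scale-from-zero behaviour comes with the Service resource itself.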
C
So
I
would
like
to
see
potentially
the
capability
of
having
tecton
pipelines
that
have
components
that
are
k
native
and
also
have
canadian
components
that
are
kicked
off
by
events.
So
you
have
the
concept
of
k-native
with
what's
called
a
cloud
event.
A
cloud
event
is
an
abstracted
form
of
messaging
that
allows
to
throw
an
event
at
it
and
it
triggers
off
different
applications.
C
I'd
love
to
see
a
number
of
different
type
of
pipelines
being
kicked
off
by
cloud
events
using
k-native,
which
means
you
could
have
thousands
of
pipelines,
taking
no
resources
until
they're,
actually
called
and
again
it's
it's
one
that
one
of
the
lovely
things
about
the
the
nature
of
it
being
kubernetes
native
is
that
you
get
the
advantage
of
using
k
native
to
get
the
advantage
of
scaling.
Get
the
advantage
of
metric
control.
C
group
controls
all
those
things
you
get
out
of
the
box
with
open
shift.
B

C

B

C
Tekton will run pipelines wherever it can, if you're not controlling it by node labeling, by putting Tekton into a project, or by putting Tekton into a pod that's got a limit range. It means that Tekton can execute and not be a noisy neighbor for your critical production applications, and that's the advantage of this compared to something like Jenkins: because Tekton is Kubernetes-native, its basic components live within OpenShift itself. Does that make sense?
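One way to sketch the guardrail Ian mentions is a LimitRange in the project where the PipelineRuns execute, so every step container gets bounded resources by default; the namespace and values here are illustrative, not from the demo:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: pipeline-limits
  namespace: pipelines-demo     # placeholder: the project the pipelines run in
spec:
  limits:
    - type: Container
      default:                  # applied when a step doesn't set its own limits
        cpu: 500m
        memory: 512Mi
```

Combined with node selectors or taints, this keeps build workloads from competing with production applications for resources.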
A

A
You can do that with Knative Serving and the functions support, which is being added now to Knative. So Tekton can build that container representing that function; in this case, in Functions as a Service, it's a Buildpacks build. But Tekton can do the build, and build with anything: Buildpacks directly, or Shipwright, which is an abstraction around builds. So Shipwright can do that Buildpacks build, for instance.
A
So
it's
it's
cool
that
you
can
connect
multiple
things
and
have
your
have
your
cloud,
no
you're,
building
kind
of
a
cloud
services
here.
C

C
A Knative function running in another project in that same cluster. So yeah, functions, basically, are what we've got on the Knative side now, which allow you to basically shortcut the creation of Knative services. It's basically like a source-to-image. We've got this concept called func, which is part of the Knative command line. As you said, what it does is it simulates the way in which Lambda kind of works. The nice thing about functions is that you don't need to understand anything about Knative.
C

A

A

C
I've got a Quarkus function (Quarkus being this fantastic new take on Java) which is waiting for an event to arrive at the broker. When the event arrives at the broker, this Quarkus function will actually kick off another event that kicks off a Camel K function; so this Camel K function is waiting for that event. I know we got off the concept of Tekton, but you can actually use this as part of Tekton, to trigger various tasks, if that makes sense, I'd say.
C
I'll have to come back and talk about Knative, because it's my second favorite thing, behind Tekton.

A
You have to now.
C

A
Kind of cool demos, yeah. You know, we have five minutes, yeah. Do you want to say anything else about this absolute beginner's guide to Tekton? Do you want to finalize the session with some other topic that people should remember, should know, about Tekton?
C
Yeah, I mean, the key thing to remember about Tekton is that even though it sounds complicated, and the ideas behind it are complicated, and the whole Kubernetes infrastructure is complicated, the actual components that make up the pipelines, and make up Tekton itself, are incredibly simple.
C
You've seen how easy it is to build a task; you've seen how easy it is to build a pipeline based on the tasks. And it's that abstraction, between the atomic building of a task and the creation of a pipeline by just chaining the tasks together, that makes it so easy to use. And I'd say all you have to do is install OpenShift, install the operator itself, and bang, you've got access to it. We've provided, basically, the GitHub repo for this; there's...
A
C
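The task-and-pipeline shapes discussed in the session can be sketched roughly like this; the names and image are placeholders, but the structure (an atomic Task, then a Pipeline that just chains tasks together) is the point being made:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: say-hello               # placeholder task name
spec:
  steps:
    - name: echo
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: echo "hello from a task"
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-pipeline          # placeholder pipeline name
spec:
  tasks:                        # a pipeline is just tasks chained together
    - name: first
      taskRef:
        name: say-hello         # reuse the atomic task defined above
```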
B
Yeah,
I
think
this
is
a
an
awesome
introduction,
because
it
really
explains
the
the
building
blocks
and,
as
I
said
once,
you
get
a
good
grasp
of
how
those
things
work
together.
It's
really
much
easier
to
then
create
more
complex
stuff.
I
would
just
end
up
with
two
comments
regarding
new
features
that
we
added,
which
makes
it
even
easier
to
adapt,
which
the
first
one
is.
So
you
mentioned
that
tasks
can
be
shared
and
can
be
created
like
cluster
tasks.
So
you
don't
have
to
reinvent
the
wheel.
B
Whenever
you
want
to
create
a
pipeline,
you
can
go
and
search
for
whatever
is
in
there
and
a
very
cool
feature
that
we
just
added
is
the
local
tecton
hub,
which
can
be
enabled
by
administrators
on
openshift.
So
you
can
have
a
local
text
on
hub
on
your
openshift
cluster
and
publish
your
own
tasks
in
there.
So
developers
can
more
easily
go
and
find
tasks
that
they
can
reuse
within
their
pipelines.
So
that's
a
first
very
cool
feature.
B

B
You know, making tasks available, and making the templated pipelines available to developers as well.
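Reusing a shared task rather than rewriting it might look like the fragment below; the pipeline name and repository URL are placeholders, while git-clone is the kind of ClusterTask that ships with OpenShift Pipelines:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: reuse-demo              # placeholder pipeline name
spec:
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone
        kind: ClusterTask       # resolve from the cluster-wide catalog, not this namespace
      params:
        - name: url
          value: https://github.com/example/app.git   # placeholder repository
```

Tasks published to a local Tekton Hub can be discovered and referenced in the same reuse-first spirit.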
C
Yeah, it sounds cool. Whilst you were talking, I put up a couple of links on the actual shared screen. We've got this site called rookie.dev, and another site called epiphany.org, where I tend to try and simplify these things. A friend of mine just put up this "triggering Tekton pipelines for specific branches" piece, which goes in and talks about Tekton triggers linked into branches on a GitHub repo. There's lots and lots of information out there.
C
So
if
anyone
wants
to
go
and
learn
the
complexities
or
simplicities
of
techton,
it's
literally
just
a
couple
of
clicks.
B
That's
very
nice,
and-
and
since
you
are
mentioning
it,
the
last
feature
that
we
added
a
stick
preview
is
what
we
call
pipelines
as
code,
which
also
allows
you
to
to
to
do
that
in
a
I
would
say,
simpler
way,
because
you
don't
even
have
to
get.
You
know,
take
care
of
the
triggers
because
they
are
created
by
openshift
itself,
so
that
new
feature
yeah.
I
think
we
can
have
another
show
explaining
that.
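A hedged sketch of what Pipelines as Code looks like in practice: a PipelineRun stored in the application repository (typically under .tekton/), with annotations that tell the controller when to run it, so no EventListener has to be managed by hand. The pipeline body is a made-up minimal example:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: on-push                 # placeholder run name
  annotations:
    pipelinesascode.tekton.dev/on-event: "[push]"          # run on pushes...
    pipelinesascode.tekton.dev/on-target-branch: "[main]"  # ...to the main branch
spec:
  pipelineSpec:
    tasks:
      - name: noop
        taskSpec:
          steps:
            - image: registry.access.redhat.com/ubi8/ubi-minimal
              script: echo "triggered by Pipelines as Code"
```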
A
B
A
A
Yeah, I'm sending a lot of links in the chat, where there are lots of resources, like demos and labs, online labs, a blog article about what a pipeline is, one called "Interaction with Argo CD"... I think I put up the GitOps ebook, which is a book published by our colleague Fania, talking about how to mix Tekton and Argo CD, with practical examples. And also I'm going to send now in the chat the link that Joffrey shared, so pipelinesascode.com; you can...
A
You
can
see
example
from
there,
so
it
looks
like
folks,
so
we
are
ending
our
session.
We
are
we
handed
our
time
today,
but
it
was
really
interesting.
The
session
today,
this
demo
session
lots
of
lots
of
questions
lots
of
interaction
thanks
everyone
for
joining
today.
Of
course,
jan
at
this
point,
you
have
to
come
back
to
follow
up
the
other
session
that
you
mentioned.
Let's
say,
k
native
the
next
or
no.
What
do
you
think.
A
Yeah, and if you want to see Ian, and if you are in London next week, you can go to the AI Summit London; he is going to do a keynote there, talking about an AI/ML demo, and I'm sure, yeah, there's going to be Tekton there.
C
Yeah, yeah, I'm doing a demo on building neural nets with Knative, which is making my head hurt at the moment.
A

B
Yeah, sure, so thanks for bringing that up. So yeah, this afternoon I'll be running a presentation about OpenShift Platform Plus, which is basically OpenShift plus the Quay registry, ACM, ACS, GitOps and Pipelines. So I'll be showing a demo of, you know, basically how to handle a pipeline in a multi-cluster environment, and have security gates acting as, you know, pipeline gates; so as to say, if there are security issues with the application, we stop the pipeline execution and don't proceed with the deployment, and if it succeeds, then ACM and Argo CD can deploy the application to other clusters, and such things.
B

A

A
3 p.m. European time, let's say CEST time, 2 p.m. UK time, yeah. And there's a final question; it's a tricky one: why is Tekton adoption quite reluctant?
C
It's... that was the point of today's session. Tekton appears to be complicated; it seems to have quite a steep learning curve, when in actuality it doesn't. One of the side effects and problems we have with open source projects is that, because we all get very excited about the technology, we tend to forget that people don't have the basics in mind. Once you understand the basics that underpin Tekton, and understand what it actually does and how it actually does it...
C
...it's simple. But it's that understanding of how simple it is; we don't do as good a job of making it simple enough for people to understand how to use it, and that's why people are reluctant to use it. Once you have been on a session like this, and you understand the ins and outs and the basic approach that underpins Tekton, it should click, and then more people will actually do it. I think Tekton is going to ramp up in terms of adoption.
C
A
Thanks, Ian, for this answer; it really helped to understand the context. And there in the chat they are asking for the link, Jafar, for the session. I put the link to OpenShift TV; if you go there, you'll find all the information, all the links to attend. The Level Up Hour is today; we have also Ask an Admin. Folks, we come back next Wednesday, always 10 a.m. CEST time, or 9 a.m. BST time, here, for OpenShift: OpenShift sandboxed containers, another cool topic. And that's it, thanks very much!