From YouTube: Modern CI/CD with Tekton, Kaniko, and Kustomize
Libby: And off we go. I want to thank everyone for joining us today. Welcome to CNCF's live webinar, Modern CI/CD with Tekton, Kaniko, and Kustomize. I'm Libby Schultz and I'll be moderating the webinar. Today we want to welcome our presenter, Jason Smith, an app modernization specialist at Google. A few housekeeping items before we get started: during the webinar you will not be able to talk as an attendee, but there is a chat box on the right-hand side where you can drop your questions. Please also note that the recording and slides will be posted later today to the CNCF Online Programs page at community.cncf.io, under Online Programs. They are also available via your registration link, and the recording is also available on our Online Programs YouTube playlist on the CNCF channel. With that, I'll hand it over to Jason to kick it off.
Jason: Thank you very much, Libby, and thank you everybody for joining on an early Tuesday morning. Well, not too early, but we'll learn some interesting things today. Let's jump right into this.
So today we're going to talk about modern CI/CD with Tekton, Kaniko, and Kustomize, as the title suggests. My name is Jason Smith; some people know me as Jay, and I respond to either. As mentioned, I'm an app modernization specialist over at Google Cloud. That's where you can find me on Twitter, and you can see a little picture of me and my dog. Just putting it out there: forgive me if you hear barking in the background. She's still young, and random noises cause her to go crazy, so we'll power through it.
But let's first talk about this so-called perfect tool. We are all looking for the best tools for our job. If you're a mechanic or a contractor, something to that effect, doing some kind of construction, that perfect tool may be a hammer, a screwdriver, a saw, any myriad of things. If you're a chef, the perfect tool may be a whisk or some kind of futuristic technology.
Like you see in Back to the Future, where you just pop something in this microwave and it comes out a fully cooked meal; maybe that'll be the perfect tool for a dream chef. But we're all looking for the perfect tool to do the job. One thing I see a lot of people talking about, when we're talking about cloud native, moving to the cloud, moving our apps to the cloud, is how great it is to be on the cloud.
Well, the best tool for application design doesn't exist. So I'm going to give you guys back about 55 minutes; thank you for your time. Not really. What I usually get, a lot of people asking me either in my role or just in general, is that everybody seems to want a solution, kind of a bundled solution, for building code and deploying code: one-click install, everything's good to go.
What I usually find with a lot of these use cases is that there's a lot of customization that tends to happen after the fact. So there really is no such thing as a perfect tool, because with every tool you deploy, you are going to have to do some, let's call it after-market configuration. You deploy it, and now you're loading it up with shell scripts or just different types of commands. And sometimes I've seen people, myself included, I'm just as guilty as anybody else, who are just like, yeah.
So what I usually say is: instead of looking for the best tool, look for the best components in the best platform. What does this mean? Well, it means look for something that gives you the building blocks you need to build the best tool, something you can iterate on top of, rather than trying to find the best tool and then trying to tweak it. Because if you're going to wind up tweaking and customizing anyway, from my perspective, it's easier to start lower level than to try to customize on top of an opinionated system.
Now, one great example of this whole platform-component idea, I like to say, is Kubernetes. Kubernetes is a platform for building platforms; that's how I've always chosen to see it. You hear a lot of people talk about how Kubernetes is the future, Kubernetes is the cloud, Kubernetes is great; everybody wants to containerize their apps, microservices, yada yada. That's all true, but if somebody asked me to define what Kubernetes is in one phrase, it would be: it's a platform for building platforms.
We just abstracted it away: you're able to declare it in the YAML files and then Kubernetes does its thing to make it work for you. So now I can build a lot of different things on top of Kubernetes, in the cloud or using traditional VMs, traditional objects really. I've seen people do machine learning, I've seen people do sentiment analysis, just a variety of different things on Kubernetes, doing additional customizations, because the API is declarative.
As we all know, here we are all Kubernetes users, and it's extensible, so you can create your own controllers, create your own objects, create your own CRDs. A lot of people have done that. If we just look at the ecosystem (and apologies that this is an eye test so early in the morning; I'm not going to ask you to read the smallest line), this is just a landscape of everything that is CNCF-related as of a few days ago, when I added the slide.
I know it probably changed, but this is just a collection of all the partners, projects, etc. that are part of the Kubernetes ecosystem, and a lot of it is because they were able to iterate on top of it. You'll see some companies here that have been around for years and years and years, but they've been able to turn their product into something cloud native, because Kubernetes gave them the API to essentially extend their application to be more cloud-ready.
Oh, let's move to the cloud. Well, we want to containerize our application, microservices. And then you start asking questions, and then it's like: oh, okay, let's take a step back, I don't think we've thought this plan through yet. So Kubernetes isn't a magic bullet, and you really shouldn't be thinking about your CI/CD pipeline as a magic bullet either. You shouldn't be thinking, okay, I want this one solution that's just going to make everything happy, make all of my developers happy and whatnot.
So, now that we've covered a kind of a primer, if you will, talking about Kubernetes, talking about best practices, talking about how Kubernetes is a platform for building platforms rather than just being the perfect platform: how do we build applications on it? Well, the way we build applications on it is that we have to look at them the same way we look at Kubernetes, as a platform for building platforms. So we want to build a code pipeline, but we want it to be declarative.
We want to be able to iterate on it. We want to be able to templatize it. We want to be able to expand upon it as needed and make changes with the least amount of friction possible.
So let's look at maybe just a very basic diagram of what our data center could look like, what the cloud could look like. We have the infrastructure, our servers, our nodes; we have Kubernetes, which is abstracting that away. We need a tool to abstract away the code deployment portion too, and we can build that on top of Kubernetes. So we have a tool called Tekton. It is open source.
It is part of the CD Foundation, which you could probably call a partner foundation to CNCF, as they are both kind of subsidiaries of the Linux Foundation. It uses Kubernetes-native components that are declarative, reproducible, and composable.
Basically, everything is an extension of Kubernetes, so everything you declare in Tekton is creating pods, creating containers; it is using Kubernetes components to do it. So if you know Kubernetes, you can figure out Tekton. There are event triggers for automating build processes. Let's say you want to create a trigger: if something is pushed to a specific Git repository, with maybe a specific tag or on a specific branch, it needs to do XYZ.
But if it's on a different branch, do this instead. It also comes with a concept called the Catalog, which is a bunch of reusable tasks and pipelines. We're going to dive into a little bit of what tasks and pipelines are, but basically there are a lot of components that are going to be very similar regardless of whether it's your pipeline, another company's pipeline, or some project's pipeline: something like, say, deploy to Docker Hub, or deploy to whatever your repository is.
Those will probably be pretty similar, and there's no point reinventing the wheel, so there's a catalog of common build tasks where you can just plug in your variables and your different parameters. And there are a lot of products that are starting to integrate it: Jenkins X comes to mind, Knative as well. A fun fact about that: Tekton actually used to be part of Knative, as a product called Knative Build, and then, I guess a quick primer on that story, over time people realized it was useful well beyond Knative, so it became its own project.
Let's talk a little bit about what makes Tekton work. Now, Tekton isn't a single product: if you went to GitHub and looked at the Tekton project, you're not going to see just one repository that has everything you need; there are a bunch of different ones. Obviously some of them are things like the website and community and whatnot, but there are, I would say, two major components, and those are the pipelines and the triggers. There are a few other ones, like a dashboard and a CLI.
Those are kind of self-explanatory, but let's talk a little bit about these other ones. A pipeline has three primary components (there are obviously others, but let's talk about the primary ones). You have a step, which is a single operation in a CI/CD workflow; that could be something like running pytest on a Python application, running a build, or some side effect. A task is a collection of those steps, and these are instantiated in a Kubernetes pod.
So whenever a task is executed, it spins up a pod, runs said task, and then spins that pod down. Now, a pipeline is a collection of tasks in order: once task A is complete, do task B; once task B is complete, and so on and so forth.
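To make the step/task hierarchy concrete, here is a minimal sketch of a Tekton Task; the task name, workspace name, and image tag are illustrative assumptions, not taken from the demo:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: run-tests            # hypothetical name
spec:
  workspaces:
    - name: source           # where the cloned code lives
  steps:
    # Each step runs as its own container inside the Task's pod.
    - name: pytest
      image: python:3.9-slim
      workingDir: $(workspaces.source.path)
      script: |
        pip install pytest -r requirements.txt
        pytest -v
```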
Triggers are the other component. Now, a trigger is the component for eventing, as I like to call it: it's responding to an event in the world. You have multiple pieces here, though. There's the EventListener, which is essentially a CRD that enables a declarative way to collect HTTP events with a JSON payload. So let's talk about GitLab, let's talk about GitHub.
Out on the web, you set up a webhook with a password or a security key, and whenever a certain event happens in your Git repository, the webhook within that Git repository can then trigger an event. And for what it's worth, it's not limited to Git repositories. Obviously that's the most common iteration, because that's how we code: we push code to our Git repo and then do our CI/CD. But there are other things you can use to trigger builds. Then there's the TriggerTemplate.
Now, this is where you declare the resource for the trigger. So an event happens: a git push happens, and there's new code on the main branch. The TriggerTemplate is: okay, great, what are we going to do with that new code? What is the action that I want to take place? Run a task, run a pipeline? Do we want to push this code?
Do we want to containerize it? And then the TriggerBinding essentially binds the TriggerTemplate to the EventListener, and it also can pass parameters from the JSON payload. So things like the Git repository URL or the branch, things that are going to show up in that JSON payload: you can pull them out, essentially turn them into variables, and pass them along to the TriggerTemplate.
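A TriggerBinding that pulls fields out of a webhook payload might look roughly like this; the binding name, parameter names, and payload paths are assumptions based on GitHub's push-event JSON format:

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: github-push-binding
spec:
  params:
    # $(body.…) paths index into the webhook's JSON payload.
    - name: git-repo-url
      value: $(body.repository.clone_url)
    - name: git-revision
      value: $(body.head_commit.id)
```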
You'll get access to the slides later, so if you can't see this, I apologize, but basically here's an idea of what a task would look like. This is a build task. As you can see, there are two main steps here: the Kaniko one and the pytest one. We'll dive a little deeper into what happens with the Kaniko one; it's actually pretty cool and we'll talk about it. But as you can see here, it looks like your standard Kubernetes object.
I'm calling the specific API, the kind is Task, and I give it a name, the parameters, resources. The resources are essentially inputs and outputs. So in this example, the input is the Git repository that it's going to be pulling, and the image is the name of the image that I want built, that I'm going to be pushing to my container registry. And of course, here you declare the steps, and you give each one an image, which is very important to point out:
Every step is its own container, and because of that, you can actually create incredibly intense steps. If you have a very common thing, like you can see here, I'm using a Python container; I can pull that from anywhere, run pytest, good to go. And then you're also able to pass along arguments, as you can see, just standard pytest arguments.
But let's say I have a very niche use case where there isn't a current container that exists, or perhaps there is a container that exists, but it's only 80% of the way there and I need to add a few extra lines, a few extra features, for it to work. That's fine.
You can put whatever container you want in the image field there, and that container can be its own step. So when I say having the components to build what you need, this is exactly what I'm talking about: you're able to actually build your own step, your own job. If you want to do a specific type of analysis as part of the pipeline, you're able to do that. And then here is a pipeline. As you can see, it takes in the resources.
So it passes along git, it passes along the image variables and parameters. Then we list the different tasks that are part of the pipeline, in order, and that is what it does. Here we have our build test, and I have a separate task called deploy, which does the push to Kubernetes.
I mentioned Kaniko a little bit. Now, one thing a lot of us have probably had to deal with is building Docker images, and how do we do that? Most of us probably do docker build or podman build or whatever tool it is we want to use, but ultimately it boils down to: I need to have some kind of CLI on a machine that is running some kind of Docker daemon, run that command, and then do the push. So we're spinning up VMs just to do that.
So Kaniko is like a container image for building container images. There's an actual Kaniko image, and what happens is that the image will execute the docker builds, docker pushes and whatnot, within your Kubernetes cluster: it builds an image and deploys it, and so you don't need to worry about a specific type of Docker daemon or anything like that. It supports your standard Dockerfile format, so no surprises there. At this point in time, it does not support Windows containers.
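As a sketch, a Kaniko build step inside a Task's `steps:` list tends to look like this; the `gcr.io/kaniko-project/executor` image is real, while the step name, parameter, and workspace names are assumptions:

```yaml
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=$(workspaces.source.path)/Dockerfile
        - --context=$(workspaces.source.path)   # build context
        - --destination=$(params.IMAGE)         # registry/image to push to
```

No Docker daemon is involved: the executor builds the image layers in userspace inside the pod and pushes the result straight to the registry.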
I can't say that it never will; I'm sure as demand goes up, it might. It is also open source, so if anybody has extra cycles to help develop it, please, by all means, join in. Now, one thing we also have to think about is iterating on top of our code, on top of our Kubernetes deployments. On day one, the application is going to look one way; by day 500, the application is going to have gone through a lot of different changes, a lot of different patches. Heck, there might even be some changes within the actual Kubernetes API over the course of two years that you might benefit from. You know how we just had the Gateway API alpha the other day, or last month: maybe you want to take advantage of that for the new version of the application.
How do we actually iterate our application so it can constantly change without just becoming a mess of YAML? Well, that's where Kustomize comes in. Kustomize is a Kubernetes SIG project. It's kind of templating, but kind of not: it's essentially creating configs and then building on top of them, so you can do a dynamic resource build. It's actually built into kubectl now; in the past, you had to download the kustomize program and run kustomize build, yada yada yada.
Now, as of, I want to say, 1.14, it is actually a part of kubectl ("cube C-T-L", "cube cuddle", "cube control", let's not get into that), and kubectl apply -k will do the build. And it is not a traditional packaging tool in the same way you'd probably think of, say, Helm. It's more of a way to organize your configurations and make it easier to iterate on new versions, create different patches, and so on.
I create an overlays directory, and now I have a development application and a production application on top of it. For the overlays, if I want to do a development or a production push, the kustomization.yaml will take the default base applications, the base deployment and the base service, and then on top of that it will deploy a different CPU count or replica count for development than it will for production.
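A minimal base/overlay layout along those lines could look like this; the file names and the replica patch are illustrative, not the demo repo's contents:

```yaml
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patchesStrategicMerge:
  - replica-patch.yaml
---
# overlays/prod/replica-patch.yaml: override replicas for production only
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 5
```

Then `kubectl apply -k overlays/prod` builds and applies the production variant, while the base files stay untouched.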
A lot of times, people find this easier. I'm not necessarily trying to say that it is a replacement for Helm or Jsonnet or Skaffold or whatever tool you may be using today; it's really just a different way of thinking about things. I know charts can sometimes become difficult as time progresses and you start adding more to them. This makes it a little easier to iterate on top of, but then again, it is just a tool.
It's not the tool; it's one that I like to use, though, because I find it easier to manage the code long term. So, building a pipeline: as we mentioned, we have the reusable tasks, we have the reusable pipelines. For this example we're doing a Go app, so I can run a go test when I push the code. Here's my pipeline, and here are all the tasks: I push my code here, run a go test, build the image.
And of course, once the analysis is done, do some kind of deploy. As you can see, it's not a singular line; there's some branching that takes place here, so you are able to program some responses in there, some kind of intelligence where if X happens, do this, do that, so on and so forth, which is awesome.
And basically, this is what it can also look like. So here I am: I'm writing code, I push to the Git repo, I have my Tekton pipeline, Kaniko is building the actual container, turning the code into a container and pushing it to our container registry, whatever it is you want to use, whether it's Artifactory, Docker Hub, GitHub; there are too many to name. Then Kustomize deploys, and hey, I have a happy application.
Yes, I am going to share a repo in the slides. I'm working backwards here: in the slides, I am putting the bit.ly link to my repo.
Can I increase the font size? Sorry, yeah, in the URL you'll have the slides. So, all right, let's see.
And then also, just as a side note, somebody mentioned integrating Helm with Kustomize. I've heard of people doing that; as I mentioned, I've never personally done it. That being said, because what we're talking about is components to build a pipeline, I don't see any reason why you couldn't do that, realistically.
On top of that, I've seen people use Tekton with other CD tools to do specific tasks, such as people going from Tekton to Argo to push Helm charts and whatnot.
Resources are the items that we're going to pass along in the pipeline. You can have multiple resources, but in this example, I'm wanting to tell it: okay, this is where the Git repository is, where my code lives, that I want you to build. And then this one is saying: this is what the resulting image should look like. Here, obviously, I can replace that with whatever, but yeah, here's what the image should look like when all is said and done.
When I jump back to tasks, I have a build task, which is the one I showed earlier, and I set some parameters such as the Dockerfile path. Also, workspaces: I should have mentioned this earlier. A workspace is essentially a part of a volume, mounted while the container is running in the pod, where it will store the code temporarily while it is doing the build. Obviously, when it's pulling code down and trying to do a build, the code needs to live somewhere, so that will be a workspace here.
I have an app directory, a Dockerfile, a source path (where's the source code), the Kaniko context, and then here are the inputs. So as input, I'm telling it: okay, go to this Git repository, that resource I showed you earlier; and as output, create this image.
It's going to run a test, and then it's going to use, as you can see, the Kaniko project image called executor (executor, executor, I guess; tomato, tomato), and it will run this command to build and push the container to the specific registry where I listed the image.
Deploy is pretty much the same thing, only it's deploying code. As you can see, I have the apply -k there, and this is using a kubectl image. So basically, this is a Docker image, or container image, that exists purely for the purpose of executing kubectl commands. Google offers a lot of different ones, there are just a lot of different ones out there, and of course, as I mentioned, you can create your own.
If you don't like what this kubectl one does, you can customize it and put it in your own registry, or create something entirely different. Now, deploying this way is not necessarily best practice, largely because it's hard to know who owns what, but this is just for demo purposes, so it doesn't matter.
Which I showed earlier. Now, an interesting thing here that I didn't talk about: we have this concept called a run. You have TaskRuns and PipelineRuns, and then there are some new versions being tested now to kind of replace some of the resources, but that's not important at this point in time. This is basically what tells the pipeline to execute, because we don't want the pipeline to just randomly run. A PipelineRun is a file here, a single file.
So we have a listener, and here's our EventListener. As I mentioned, I gave it a secret, because I don't want just anybody sending a payload to my event listener and getting it to do whatever. I put in a secret, and I give it the value for what kind of event type to accept. You can use any event type that is provided by your Git repository.
You have to read their documentation, though, because I know GitLab and GitHub name them differently. Anyway, there's the binding and the template that it's going to use, a service account for security purposes, and the resources: you can actually set resource limits for the containers.
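An EventListener wiring a secret-checked GitHub interceptor to a binding and a template might look roughly like this; all of the names are assumptions, and the interceptor syntax shown is the Triggers v1beta1 form (older releases used a slightly different shape):

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: github-listener
spec:
  serviceAccountName: tekton-triggers-sa
  triggers:
    - name: github-push
      interceptors:
        - ref:
            name: github          # built-in GitHub interceptor
          params:
            - name: secretRef     # validates the webhook signature
              value:
                secretName: github-webhook-secret
                secretKey: token
            - name: eventTypes
              value: ["push"]     # only react to push events
      bindings:
        - ref: github-push-binding
      template:
        ref: github-push-template
```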
This is the binding, and what it's going to grab from the JSON payload of the trigger event: it's going to pull the Git revision and the Git repository URL and pass them along to the TriggerTemplate. The TriggerTemplate, then, is essentially a PipelineRun or a TaskRun: you're essentially saying, okay, the event happened; what do I want it to do next?
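Since a TriggerTemplate essentially wraps a PipelineRun, a hedged sketch could be (all names and the registry are hypothetical):

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: github-push-template
spec:
  params:
    - name: git-repo-url
    - name: git-revision
  resourcetemplates:
    # Each incoming event stamps out a fresh PipelineRun from this template.
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: build-and-deploy-run-   # unique name per event
      spec:
        pipelineRef:
          name: build-and-deploy
        params:
          - name: image
            value: registry.example.com/hello-world:$(tt.params.git-revision)
```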
It's pretty straightforward, but it's pretty nice too, and it's nice because you can just build on top of things. And real quick: you might have noticed that I had the app on there. It's just a simple app. As you can see, I have a Dockerfile; it's just going to look at the Git repository, which is this app, and then do the build.
Let's see here, and then there are some manifests, so nothing too exciting. But as you can see, here's a kustomization.yaml file, very basic, but it's declaring the resources and matching them with a specific application, and these are the simple resources for a hello world application. And, hey, I do need to update the readme. There's actually a script that I wrote that automates it; I'm essentially just going to decompose it into the readme, so bear with me on that one.
So, your question about Tekton being positioned as a tool to build other CI/CD tools:
Yes and no. I would say that, yes, it still fits that category in the sense that you can build on it: if you look at Tekton as being a platform for CI/CD, then you can build your own CI/CD tools on top of it, such as Jenkins X and whatnot. However, the individual building blocks of Tekton can also be used to build your own pipeline directly.
I've seen both use cases. I've seen people extend their Jenkins using the JX plugin to do specific use cases, but I've also seen some people who have just used straight Tekton to do all their deployments. There's no right or wrong way, really.
The point here is that we're giving you the basic components to build what you need. There are some people who only use Tekton for the CI component and then use Argo or Flux for CD; there are some people who use Tekton for both CI and CD; there are some people who just use it for testing. There's no right or wrong answer for it. It's supposed to be the components, and yeah, a lot of people just build on top of that.
From the chat: "I see Tekton mainly as CI. Jenkins X is based on good ideas, but it's never been stable for me, and for CD I use Argo CD." Yeah, as I mentioned, there are people who actually do that; it's a very common use case. And from a security perspective, there's definitely a benefit to using Tekton with Argo CD for the kubectl apply of the Kustomize output.
"So how would you bootstrap Tekton without external CI/CD solutions?" If you can give me more information on that, I would be able to give you an answer. But yes, let's see here, let's jump into this screen, and then I'll show you what I've got.
One of the greatest use cases of Tekton I've seen, just in my field and whatnot, is people who want to build on-cluster: people who don't want to have to reach out to a third party to do their CI/CD. They're already having to reach out to, say, a Git repository (granted, you can deploy GitLab or Gitea or whatnot in a cluster), but there are a lot of people who want to have everything inside the cluster.
"If you don't have a system like, say, GitLab to deploy to K8s, because I do it with Tekton, how do you deploy it initially?" If you could elaborate on that a little bit. "I've used Spinnaker, but managed the pipeline specs through Terraform." Yep, that's cool. All right, so let's jump to this real quick. Now, I'm using GKE because, you know, I work for Google and I have access to my Google Cloud Platform and whatnot, but I want you to be clear: it doesn't matter.
Kubernetes is Kubernetes; you can use Tekton on anything. You can use my Git repository on pretty much anything, and if you find that there is some kind of weird feature that I'm not noticing, just create a GitHub issue and we'll make it work. All right, so what do I have here? I'm going to go ahead and actually show you the Tekton cluster. Nothing interesting to see here today, of course, until I actually go through and set up Tekton, which you can do by running a couple of simple commands.
Tekton has also evolved, or at least the trigger portion has evolved a little bit, to where there are now some built-in interceptors for very common event types: GitHub, GitLab, all of those.
I don't know if anybody's ever had it where you just have a random epiphany of what to do to improve a demo, and then you just do it, but then it didn't actually make anything better. It's always a fun experience. So, there's this Tekton CLI, which is, oops. Oh, I installed the wrong one. I installed the Mac version on a Linux machine.
All right, so let's go ahead and do this. I usually like to just build it one piece at a time; it makes it a little easier. So let's jump, oh yeah, I want to be in the tekton namespace. So let's do some kubectl; let's apply the resources first.
If there is a bug, I'll fix it, and that way you guys will have it available for actually trying in real time later today. I'm also going to continue iterating on this, so hopefully in the future we'll have stuff about, you know, how to do canary analysis and whatnot.
Oh, so as far as all the resources: you can do a "tek", what's the word, actually it's tkn, and then it lists a bunch of different things that you can list. So you can list resources, you can list tasks, you can list TaskRuns. In fact, let me go ahead and just show you. I'm still learning this whole screen share on this platform, so bear with me.
Let's see here. "You need to deploy both the Tekton operator and a pipeline; it simplifies management." That's good. "Is there a debug functionality that allows you to connect and execute steps inside the build container? I know CircleCI has one." I do not have an answer for that; I don't know if it exists today. If it doesn't, I would be willing to bet that there are some tools, or there's probably a pull request or something for it. "Does Tekton run on minikube and kind, to test locally?"
You can. I've never tried it on minikube; I've tried it on kind. At the end of the day, Kubernetes is Kubernetes, so it doesn't matter.
I just run on GKE because, you know, I work for Google. I mean, one, I like GKE, but I work for Google, so it's easy for me to get access to it. But this can just as easily be anything else. I actually have a Kubernetes cluster running on, like, eight different Raspberry Pis, and I've run it there using kubeadm. So, you know, Kubernetes is Kubernetes.
Not off the top of my head. So in theory you could even use Kustomize, ironically: you could probably use Kustomize to define all the resources to deploy Tekton, and to manage the pipeline files.
"Tekton on ARM?" I'm pretty confident it's possible to do that, yep. "What's the pattern people follow for the setup where GitHub is accessible only on-prem and you need to deploy to the cloud?" Usually service account keys, things like that, just to authenticate and give the right permissions. "Any recommended way to store Kubernetes secrets in the Git repo?"
So, I don't know if there's one best practice for storing secrets. Obviously I'm not following best practices, because in my version, if you actually look at my secret file, the password is right there; but I don't really care, it's "howdy y'all". There are a few different ways to manage secrets, though: you can use a secret manager, such as what you might see from different vendors, and there are also other Kubernetes-native ways.
Yes, there is a dashboard. It is still relatively new, but it is being built upon, and it didn't exist a while ago, so it's kind of nice to see that it is going in that direction. It is an open source project, so obviously it goes through the same open source struggles that a lot of projects go through, like just getting people to commit time. That being said, Google, IBM, Salesforce,
a lot of different companies are contributing to Tekton and, quite frankly, are using it internally, so I can only expect to see better things coming down the way. Here is where the Tekton dashboard is; I dropped it in the chat.
Best practices for organizing it: it really depends. What I do is, I tend to live in the world of folders and subfolders, so I might have a subfolder called tasks, and that's every task; but then within that folder I'll have subfolders, like: these are the tasks related to building on my development branch, these are the tasks related to building on production, and so on and so forth.
So the very basic Tekton is Tekton Pipelines; that was the first component to be deployed as part of Tekton, and everything is built on top of it. What I usually recommend really depends on what you're going for. You only need Tekton Triggers if you want automated triggers to happen, so somebody pushes to a Git repository or something like that; then you can install Triggers. That being said, you can also just use manual PipelineRuns or TaskRuns to trigger things.
I do recommend using the CLI tool, because it just makes it easier to view logs and see what's going on. As far as the dashboard, that's purely a personal preference thing; I honestly barely use it, but if you want to, go for it. That would probably be the vast majority of all the tools that you would actually need to deploy to use Tekton.
That being said, I think there are some new features coming out. There's the Tekton Catalog, as I mentioned, and I think Tekton Chains, which is security related. But yeah, there are a lot of different things coming down the pipeline as well.
I do not have the... oh, did I deploy it? I don't think I deployed the dashboard; it's a separate component, so I can't really share that, because it's not installed.
No minutes left? Okay, well, I want to thank you all for your time. Follow me on Twitter, follow me on LinkedIn. As you can see here, the slide deck does have the bit.ly link to my GitHub repo. I'm going to constantly iterate on top of it, because I just like sharing, you know, in the name of open source.