Description
KCD 2022 CZ & SK virtual - the first CNCF cloud native conference in the Czech and Slovak region. kcd.live
A: And now let me introduce our speakers from India: hello, Savita and Shivam.
B: Thank you, thanks a lot for the nice introduction. I'll quickly share my screen. Hello everyone, welcome to KCD.
B: We are mainly focusing on the Pipelines and Triggers projects, with a live demo. Then we will see a high-level overview of Argo CD and one of its important custom resources, Application. And finally we have an end-to-end demo, where we will show how we can integrate both technologies and what we can build with the help of that integration.
B: Tekton is part of the CD Foundation. Tekton is basically used by developers who write CI/CD for their daily work, and also by platform administrators who develop CI/CD systems; anyone in the organization can use it. So that is a quick overview of Tekton.
B: Among those projects, we are focusing on a few major ones which are actually used by many users. So let's understand each of them one by one. First we have Pipelines. Basically, it provides a declarative way of declaring Kubernetes resources to write CI/CD pipelines, and then we have Triggers. Pipelines is the core project of Tekton; most of the APIs reside under the Pipelines project.
B: If you want to do automation, and you want to deploy the pipeline resources dynamically based on some events, in that case we make use of the Triggers project. Moving on, we have a CLI, a command line interface which helps us to interact with Tekton, and similarly we have a Dashboard; it is a web-based UI.
B: Just as the CLI lets us interact from the command line, the Dashboard helps us to interact with Tekton through the browser. From Pipelines and Triggers to any of the components, we can interact via it.
B: If you take the example of Jenkins, Jenkins has many plugins which we can use, right? Similarly, Tekton has started a Catalog project, where a number of users from different companies, with different scenarios, write tasks and put them in this Catalog repository, and anyone can use them from there; mostly single-purpose tasks are added.
B: Then we have the Hub. It's again a web-based platform which shows the information for the tasks which are present in the Catalog repository. Basically, that helps us to decide which task is more popular, which task is used, and how many stars are given; based on that information, we can pick a task. How we can use tasks from this Catalog repository, we will show in the live demo; in fact, in today's demo we are using most of the tasks from the Catalog repository.
B: Now, a few of the concepts and core resources of the Tekton Pipelines project itself. Similar to how Kubernetes has Container, Deployment, and Service, Tekton Pipelines also has its own concepts and a few resources. We have Task, Pipeline, TaskRun, and PipelineRun.
B: These are the custom resources provided by Tekton Pipelines, and we also have Step, Workspace, and Result; these are the concepts which we use when we write these resources. We will show how they are interrelated in the live demo, as well as on the next slide, where I will explain these concepts at a high level.
B: Moving on, I want to explain those concepts in a diagrammatic view, so that it will be easy to understand how they are interconnected with each other. I'll take a scenario of cloning the code, building the code into an image, pushing that image to the registry, and then getting the digest, in order to make sure the push succeeded; and then, finally, I deploy that image using kubectl.
B: You can see that these are the multiple stages I go through when I want to do this kind of operation. Just as in Kubernetes a container is an entity which does some operation and terminates, similarly in Tekton Pipelines we call it a Step. It is not a resource, but a kind of concept or object which we use inside one bigger resource, which I'll come to in a minute.
B: A Step is an entity which does some operation based on the provided information. That information could be environment information, image information, arguments, volumes; it could be anything. As I said, a Step is exactly similar to a container, so it accepts all the information a container expects. Now, as you can see, I have one step kept here, three steps here, and one step here. We need to club these steps together to form an entity, and that we call a Task.
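The "steps clubbed into a Task" idea can be sketched as a minimal Task manifest; the task name, image, and script here are illustrative placeholders, not from the talk:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-and-push        # hypothetical name
spec:
  params:
    - name: IMAGE
      type: string
  steps:
    # Each step is container-like: it takes an image, script/command,
    # args, env, volumes ... exactly what a Kubernetes container takes.
    - name: build
      image: quay.io/buildah/stable
      script: buildah bud -t $(params.IMAGE) .
    - name: push
      image: quay.io/buildah/stable
      script: buildah push $(params.IMAGE)
```

The steps run sequentially inside one pod, in the order they are listed.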
B: So basically a Task is a custom resource of Tekton Pipelines which helps to manage multiple steps together and execute those steps sequentially: if I have multiple steps, it executes build first, then push, then digest. That is managed by the Task itself. Then I have a task called deploy, where we do the deployment, and the output of one task can be given as an input to another task.
B: If you remember, on the previous slide I explained Workspaces and Results. These are nothing but the concepts which we use for input/output operations. A Workspace is a kind of storage: we store the output of one task in the workspace.
B: That workspace is then given as an input to another task; to handle input/output operations between multiple tasks we use Workspaces as well as Results. Those are the two entities in Tekton Pipelines for that. Now you might wonder why I have kept this as three different tasks. I could write it as a single task with all these steps clubbed together; there is no issue with that. But what I want to show with single-purpose tasks is reusability. Let's suppose I have a scenario where I just want to clone the code.
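As a rough sketch of the Results mechanism (task and result names are illustrative): a task publishes a value by writing to its result file, and a later pipeline task consumes it by variable reference.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: image-digest          # hypothetical
spec:
  results:
    - name: DIGEST
      description: digest of the pushed image
  steps:
    - name: read-digest
      image: registry.access.redhat.com/ubi8/ubi-minimal
      # A task emits a result by writing to $(results.<name>.path)
      script: cat ./image-digest > $(results.DIGEST.path)
---
# A later task in the same Pipeline can then consume the value as
#   $(tasks.image-digest.results.DIGEST)
```

Results suit small values like a digest or commit SHA; bulkier data (the cloned sources, build artifacts) goes through a workspace instead.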
B: Okay, so in that case I can just straight away use the clone task, which was checked in somewhere, and I can use it. That is one of the benefits of a Task: you write one which does one particular job, so that multiple users can use it. Let's suppose I write a task called clone; after writing it I push that task to the Catalog repository, and later, after a month or a few days, some other user can easily use that task.
B: That's the reason I always prefer to write a single task which does one well-defined piece of work, so that it can be reused; that's one of the benefits. Now, to run this task: up to now, when I apply this Task YAML on the Kubernetes cluster, it's just a static entity; it won't run. To run this Task resource we have another custom resource, the TaskRun.
B: Maybe in the live demo we will show how the tasks execute one by one and on what basis. And again, to run a Pipeline we have a resource called PipelineRun, a runtime entity which instantiates the pipeline. So up to now we have seen the theoretical part of these concepts and resources and how they are integrated.
C: Yeah, thank you, Savita. I will just share my screen.
C: Okay, so we saw the Tekton Pipelines concepts; now we will just try them out and see how the pipeline runs. For the demo we are using a news application. It's a simple application in which you can search for news; nothing complicated.
C: I have deployed this application on the cluster, and we have another repo where we keep the configuration for the CI/CD. It is not required to have two repos, but we like having the code in one place and the configuration in another. About the cluster: it's an OpenShift cluster, but it is not required for the demo to be performed on OpenShift; you can use any Kubernetes cluster. The only thing which would be different is how we expose the app: on OpenShift
C: We are creating a Route because it is easy to expose. You just need a Kubernetes cluster where you can expose something outside, so basically you just need an Ingress; for this demo we are using OpenShift. So this is the configuration repo; in this we have the application configuration: the namespace, the deployment, the app. I have applied this on the cluster, as you can see here: there is a deployment, there is a route, and the app is working fine.
C: Now we want to set up a pipeline which would fetch the code from this code repo, build a new image, push it to our image registry, and update our deployment. Basically, we want to build that pipeline.
C: There are params; a Pipeline is just a template you can share with other people, and they can run it too. We have multiple tasks, so let's see the first one, which is the fetch task, the git clone. We want to clone the repo, so we are using git-clone. Where is this task coming from? As Savita explained, we have the Hub. If you go there and search for git clone, you will find the task.
C: You can see the YAML and how it works, and if you want to install it, you can just run this install command; then in your pipeline you just need to reference the task. That's all. Tasks are like a template where you can pass params depending on your project. So here we are passing the URL of the repo, and the revision, meaning the branch we want to clone. That's all we need to do to use an existing task.
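Referencing the Hub's git-clone task from a pipeline with those two params looks roughly like this; the pipeline name is a placeholder:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: ci-pipeline           # hypothetical
spec:
  params:
    - name: repo-url
      type: string
    - name: revision
      type: string
      default: main
  tasks:
    - name: fetch-repository
      taskRef:
        name: git-clone       # installed from the Tekton Hub
      params:
        - name: url           # param names defined by the git-clone task
          value: $(params.repo-url)
        - name: revision
          value: $(params.revision)
```

The pipeline-level params are what a PipelineRun (or a trigger) supplies at execution time.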
C: Similarly, we have build-and-push for building the image, which is using Buildah; that is also from the Hub, so we are just reusing it. One thing to notice is that there is a runAfter. What does that mean? We want to build the image only after we have cloned the repo; we don't want to execute this task first. So we mention here: run this task after we have completed fetch-repository successfully.
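The ordering constraint is expressed on the task entry itself; a minimal fragment (task names as in the sketch above are illustrative):

```yaml
  tasks:
    - name: build-and-push
      taskRef:
        name: buildah            # from the Tekton Hub
      runAfter:
        - fetch-repository       # only start once the clone task succeeded
```

Without runAfter, tasks with no data dependency run in parallel.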
C: If it fails, then the complete pipeline will fail. Another thing to notice is the workspace.
C: If you notice here, this workspace says output: the fetch task, the git-clone from the Hub, will clone the code into this workspace, and this workspace will be used by the other tasks in the pipeline to execute their logic. So we have build-and-push, and we have check-deployment: once we have built the image and pushed it to the registry, we want to create the deployment or update it, depending; so we first check whether the deployment exists on the cluster.
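Sharing the cloned sources between tasks via a workspace can be sketched as follows (workspace and task names are illustrative):

```yaml
spec:
  workspaces:
    - name: shared-workspace     # backed by a volume in the PipelineRun
  tasks:
    - name: fetch-repository
      taskRef:
        name: git-clone
      workspaces:
        - name: output           # git-clone writes the sources here
          workspace: shared-workspace
    - name: build-and-push
      runAfter: [fetch-repository]
      taskRef:
        name: buildah
      workspaces:
        - name: source           # buildah reads from the same volume
          workspace: shared-workspace
```

Each task declares a workspace under its own local name (output, source); the pipeline maps both onto one shared volume.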
C: If it exists, then we just update it with the new image which we have built, and if it doesn't exist, we create a deployment. So the first thing is cloning, the second is building and pushing to the registry, and the third is creating a deployment or updating a deployment; and there are two more tasks which I will skip for now, but we will come back to them later.
C: So I will just run this script. This script is nothing but: it will install the tasks, create a namespace, and create secrets, because we want to push our image to a registry, so we need authentication. So I will just execute this script, which will create some secrets and a service account; the pipeline needs some access rights to create or update the deployment. The script will create everything we require, and it will start a pipeline.
C: A Pipeline is a template, and a PipelineRun is the instance of that pipeline running. These are the params which we will be passing to the pipeline, and they will be used to execute it. Okay, so let's see it in action.
C: Okay, so the pipeline should have started, because we started it manually. tkn is the CLI provided by Tekton which interacts with the Tekton resources; pr is nothing but pipelinerun, and ls is listing. So we can see that the pipeline has started. This is one of the ways we can see or access our pipelines or tasks.
C: Another is the web view. If we go to the Dashboard, which is also provided by Tekton, we see the pipeline; we can see the pipeline has started. The first task, the fetch (the clone), has finished, so it is now building and pushing the image. Once that's done it will check the deployment, update or create it, and so on.
C: So what we did here: we created the tasks which are required for the pipeline, we created some RBAC, and we then started a pipeline using a PipelineRun. But this was started manually, right? We can't do this every time; we can't create a PipelineRun by hand each time. We need something automated: whenever code is updated in our git repo, we want a pipeline run to start. For that we have Tekton Triggers. So far, we can see everything is templated.
C: Okay, so this is the deployment for the application; you can see it is 44 seconds old, so it was recently updated with the new image.
B: Yeah, could you share the presentation, Shivam?
B: Yeah, so we saw till now how we set up the pipeline manually. But when we talk about CI/CD, there should not be any manual operation; I mean, what is CI/CD? It should be continuous integration and continuous deployment without manual intervention. But what we saw was us manually running the pipeline, and that is something we don't want. To avoid that kind of thing, we have another project which does this for us, which is Tekton Triggers, and these are a few of the core concepts Triggers provides today.
B: We have an EventListener object. It is a custom resource which runs a Kubernetes pod and a Kubernetes service and keeps watching for incoming HTTP requests. We have another custom resource called Trigger; this Trigger basically decides what to do with the incoming event. And then we have an Interceptor.
B
It
is
again
a
continuous
running
for
which,
which
does
some
operations,
like
validation
of
the
validation
of
this,
to
know
that
whether
the
events
are
coming
from
the
authorized
sources
or
not,
then
we
have
a
trigger
template.
It
is
kind
of
a
blue,
it
actually
contains
the
information
of
pipeline
run.
Okay
or
it
could
be
task
run.
Then
we
have
a
trigger
binding.
So
it
is
another
custom
resource
which
actually
does
some
operation
on
the
incoming
request.
B
So
what
it
will
do
is
like
once
after
the
validation
is
success,
the
data
come
so
it
takes
the
values
from
the
incoming
event
and
extract
values
from
that
request
and
pass
it
to
some
variables
and
those
values,
or
we
can
say
those
variables
will
be
used
further
inside
a
trigger
template,
and
then
we
have
a
clustered
trigger
binding.
It
is
similar
to
trigger
binding
but
runs
at
the
cluster
level.
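These pieces wire together roughly as follows; the listener name, binding and template names, and the webhook secret are placeholders:

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: news-app-listener        # hypothetical
spec:
  serviceAccountName: pipeline   # RBAC for creating PipelineRuns
  triggers:
    - name: on-push
      interceptors:
        # Validate the payload signature and keep only push events
        - ref:
            name: github
          params:
            - name: secretRef
              value:
                secretName: github-webhook-secret   # placeholder
                secretKey: token
            - name: eventTypes
              value: ["push"]
      bindings:
        - ref: news-app-binding    # a TriggerBinding
      template:
        ref: news-app-template     # a TriggerTemplate
```

Applying this creates the listener pod and service that sit waiting for webhook requests.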
B: Yeah, so those are the different concepts, but I just want to show how they are interconnected with each other in a diagrammatic view. As I said, when I apply the EventListener YAML, it creates a pod and a Kubernetes service for me, and that pod continuously watches for incoming events. So you can see that an event is coming to the EventListener.
B
For
once
event,
once
event
comes
to
the
event
listener
for
trigger
does
some
operations
to
what
to
do
with
that
event,
so
what
it
will
do,
it
will
passes
that
event.
Information
to
the
interceptor
interceptor
does
the
validation
and
once
validation
is
success.
It
will
pass
the
incoming
request
to
this
trigger
binding.
So
what
does
this
trigger
binding
do?
B
As
I
explained,
it
will
extract
the
data
or
we
can
say
body
from
the
event
and
do
some
operations
and
pass
it
pass
it
to
the
variable
and
those
variables
will
be
used
inside
trigger
template
and,
as
we
discussed
trigger
template
contains
the
information
of
pipeline
run.
So
my
point
here
is
the
event
information
coming
from
the
any
of
the
sources
like
github
bitbucket
gitlab.
B: I mean, these are the event sources which Triggers supports, but one of the functionalities of Tekton Triggers is also to write your own custom interceptor. I mean, you can add your own event source: you can run your own event source and integrate it with Tekton Triggers, which will easily use it. This is how Tekton Triggers works. Now I would like Shivam to take these concepts and show how Tekton Triggers dynamically schedules those pipeline runs which he has already written.
C: Sorry for that. Okay, so we saw we ran a pipeline; now let's see how to set up triggers.
C: We want to set up our triggers on this repo, because we will be doing some code changes in this repo and the pipeline should then start, right? So let's go to the files and see what the different resources are. If I go to triggers, we can see an EventListener here. This is the EventListener which will be watching for the events, basically.
C: In this we have a binding, a template, and an interceptor. The interceptor is nothing but what helps you filter the events. For this demo we will be using the push event. Let's say we configured our webhook for all the events, but we want to process only the push event; so we add a filter here which filters out the other events, basically. There are many more interceptors, GitLab and others, or we can write our own interceptor as well.
C: The second thing is the binding. If you look at the binding: the event which is coming from GitHub, or any other event source, will have some payload, right? If we want to use some data from that payload, then we use a binding. So we say: from the event which is coming, get the head commit, the repo URL, and the pushed-at time, which is nothing but when the event was pushed.
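Pulling those fields out of a GitHub push payload looks roughly like this; the binding name is a placeholder:

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: news-app-binding       # hypothetical
spec:
  params:
    # JSONPath-style expressions evaluated against the webhook payload
    - name: git-repo-url
      value: $(body.repository.url)   # GitHub push payloads also carry clone_url/html_url
    - name: git-revision
      value: $(body.head_commit.id)
```

Whatever the binding extracts becomes a named parameter available to the TriggerTemplate.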
C: It will get that data from the payload and then create the resource which is mentioned here, which is nothing but a PipelineRun. So in this PipelineRun we say: you need to start this pipeline, and these are the params you need to pass to it while creating the pipeline run. Basically, so far we saw the EventListener, which is watching for events, and the interceptor, which is filtering the events.
C: Then the TriggerBinding, which gets some data from the event payload, and the template, which is nothing but what we want to create; currently only PipelineRun and TaskRun are supported. And there is a route: we need to configure this EventListener with GitHub, so we want something publicly accessible.
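The template side, which stamps out a PipelineRun from the bound values, can be sketched as follows (names continue the earlier placeholders):

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: news-app-template      # hypothetical
spec:
  params:
    - name: git-repo-url
    - name: git-revision
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: news-app-ci-
      spec:
        pipelineRef:
          name: ci-pipeline    # hypothetical pipeline name
        params:
          - name: repo-url
            value: $(tt.params.git-repo-url)
          - name: revision
            value: $(tt.params.git-revision)
```

Note the `$(tt.params.…)` syntax: inside a TriggerTemplate, bound values are referenced through the `tt` prefix.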
C: We take the URL here, in the webhook form we select just the push event, and we just add the webhook. You will get a success tick here if your event listener is up and running; GitHub will just ping it and tell you it's okay. So we have set up our triggers.
C: We are ready to do some code changes, to see whether the triggers are working or not. I have the code; the code repo is already cloned here, so I will just do a trivial edit commit and just push it.
C: We just want to check whether the pipeline run has started or not; we will do real code changes and an end-to-end run later. For now I will just create a push event.
C: The event was not delivered for some reason; if we go to the dashboard, we should see a new pipeline. So what I did was: the event was unable to be delivered, so I just re-delivered it. Ideally this doesn't happen, but I don't know why it happened this time. So we have set up a webhook, so whenever there is a new push event, this event will be delivered to the EventListener.
C
The
event
listener
will
process
the
event
pick
some
data
from
that
payload,
which
is
define
the
binding,
then
add
it
to
the
pipeline
run
because
we
have
added
here
right.
You
can
see
here
like
add
the
git
triple
url,
which
is
coming
from
the
event,
the
revision
and
then
start
this
pipeline
run
and
this
python
we
can
see
here,
which
is
started
so
which
will
execute
all
the
things
so
yeah.
So
this
is
how
we
can
set
up
the
end
to
end
ci
cd.
C: We can see that this is also updating our dev instance. The application which I deployed here, let's call it the dev instance, because we are going to set up another instance, and then we will integrate Argo CD and Tekton together. So this is one instance where we have end-to-end CI/CD.
B: Okay, so before that: Argo CD is also a Kubernetes controller, something similar to how we have Tekton, and it's a declarative, GitOps continuous delivery tool for Kubernetes. Argo CD can be used as a standalone tool, or it can be used as part of a CI/CD workflow, and the single source of truth for Argo CD is the git repository where the Kubernetes manifests or Helm charts reside. And why should we use Argo CD? Basically, Argo CD is helpful for application deployment and lifecycle management, everything in an automated fashion.
B: Let's take a scenario where I want to deploy 10 applications to 5 clusters. I can make use of Jenkins, or use Helm or Kustomize, to deploy my applications, and I can even do updates if there is any change in the image information or something else. But here I would need to configure things in such a way that, for every change,
B: I would have to go into five different clusters and make the changes every time. But what if I have ten thousand applications and thousands of clusters? In that case it will be very difficult to do this manually; there is a lot of chance that we forget something or make some mistakes. So here comes the GitOps way of operating, which is the best solution to this kind of multi-cluster operation, and we have Argo, which does it easily for us.
B: Now, how does Argo work for us? Shivam, can you go to the next slide, please? Yeah. So here we can see that, similar to Tekton Pipelines and Tekton Triggers, Argo also has many custom resources, but for our demo we are just making use of the Application custom resource, and that's the reason we took this as the example to explain. Basically, it's a custom resource definition which has two important concepts, called destination and source.
B: Basically, this Application contains the information of your destination cluster, where the application should go and be deployed. As I said, if you have thousands of clusters and ten thousand applications, how will Argo CD get to know where it should go and deploy? That is where the destination plays an important role: here we specify in which namespace and on which cluster I want to go and deploy my application. And then we have the source.
B: The source contains a path, which is nothing but the path of the Kubernetes manifests, and if you have Helm charts you can specify the chart name; and then we have a repoURL, given below, where you specify the URL of the GitHub repository where the data, or we can say the manifests, actually exist. So with the help of this Application we are able to achieve that work. And we have another concept of Argo called sync.
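A minimal Application with the destination and source fields described here might look like this; the repo URL, names, and path are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: news-app-stage           # hypothetical
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc    # target cluster
    namespace: news-app-stage                 # target namespace
  source:
    repoURL: https://github.com/example/news-app-config   # placeholder
    targetRevision: main
    path: k8s/stage              # directory holding the manifests
```

One Application per app-and-cluster pair is what lets Argo CD fan the same git state out to many clusters.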
B: How does that happen? A sync process comes into the picture: it always tries to make sure that the resources present in the git repository and the resources present in the cluster are in sync; they should never be out of sync. And this we can do in two ways: it can be automatic or it can be manual.
B: Most of the time we prefer automatic, but there are a few scenarios where we do it manually, because before upgrading the cluster we want to make sure the updated changes in the GitHub repository are proper. To have that gate, we configure a manual sync, so that the administrator or end user can first verify the PR and then do a manual sync.
B: Now I would like Shivam to take it from here and explain how we integrate both Argo and Tekton, with the pipeline which we deployed previously, and how he integrates it with Argo. Over to you, Shivam. Thank you.
C: Yes, thank you, Savita. So we saw that we have deployed our instance using Tekton. Let's deploy another instance using Argo CD, then we will integrate both of them together and see how beneficial that is. Argo CD is quite big, but we are using only one concept of it, which is the Application. Here we are telling it: you need to deploy this application, which is stored here, in a git repo directory, and you just need to go and install that on our cluster.
C: There is a k8s stage path, which is nothing but the staging configuration, and the dev one is what we use for the other instance.
C: If I go to the cluster: this script is nothing but it will apply the Application CR of Argo CD, and it will also create a ConfigMap which has an API key. The application which we are using needs an API key, and I don't want Argo to manage that configuration, which may be secret.
C: So I will create the ConfigMap myself, and I will just tell Argo CD: you can just ignore changes to this ConfigMap, because I will handle that myself. You can do this for Secrets or any other resource.
C: There are two ways: one is automatic and one is manual, and we have kept it manual for now; we will say when it can sync, it's like I haven't given it permission to sync, basically. If we go to the cluster and do kubectl get pods, we will see another namespace, the stage namespace, where Argo CD will deploy the new instance. We can see the pod, and if we look at the routes here, okay, we have a new route.
C: So the application will be up. Okay, so this is our staging instance, and we have our dev instance, right? The dev instance is deployed by Tekton end to end from the code, and the staging instance is deployed by Argo CD by watching this manifest.
C: Why, we will see in a bit. The last two tasks are used to create a pull request to our staging configuration with the updated image; why that is done, we will see in the end-to-end demo. So yeah, the last two tasks are for that.
C: I am just going to push this change to the main branch, hoping the pipeline will get the event. Okay, so the new change is pushed; if we go to our code repo, which is here, we see a new commit has appeared, and our pipeline should have started.
C: Okay, yeah, so this time the event listener got the event; last time it may have been a networking issue. Let's go to the Tekton dashboard. If we go to PipelineRuns, okay, the new pipeline started with the new change. It will get the new code, build a new image, and update our dev instance, so we'll see the changes there. Earlier we saw that we had the last two tasks; this time we will see what they are.
C: Let it finish; till then I will just go to the image. Okay, so how does it relate? We have a listener; we got the event; we started a pipeline; we cloned, built, and pushed; we updated our dev instance, which is running here; and we created a pull request to our configuration repo with the updated image, which is watched by Argo CD, which is nothing but handling our staging instance.
C: So we can see a change here, right, and this change is in our dev instance; we can notice this from the URL. If we go to our staging instance, there is no change; it's the old one. Okay, so in our pipeline we created a pull request, right? Let's go to that. So this is the last task, which is creating the pull request, and in the result we have the pull request URL.
C: This pull request is on our staging configuration; what it is doing is changing the image. Why? Let's say we merged a pull request in our code, or we added some new code, which will deploy to our dev instance. Now we have seen the changes, we saw that the changes are good, and we want these changes to be applied to the staging configuration.
C: Argo CD is already watching this, right; we have configured it so. If we go to our Argo CD and look, okay, it says out of sync, because the image is updated in our configuration. Here you can see this is a deployment; it says out of sync because the image is updated. So now we have seen the changes here, and the changes are fine, so we can promote them to our staging instance.
C: So if we go back, we'll just press sync. This can be done automatically; for the demo purposes we just kept it manual. So it will fetch the new image and update our deployment.
C: So, in short, we saw how an end-to-end Tekton pipeline handled our dev instance, and how we used Argo CD to manage the other instance. You will find all the code in this repo.
C: You will find all the installation steps and explanations there. If you face any issue, you can just create an issue; we will help you out. Please do try it out and let us know your feedback; we'll be happy to help you. Thank you.