From YouTube: OpenShift Coffee Break: DevOps with OpenShift
Description
Get your espresso ready for the OpenShift Coffee Break as we talk about everything DevOps with OpenShift! Our special guest is Wanja Pernath, Solution Architect at Red Hat, and author of the new "Getting GitOps" e-book from Red Hat Developer. Join the discussion to share architectures on OpenShift with Tekton, Argo CD and all Kubernetes automation!
Twitch: https://red.ht/twitch
A
Here we are, welcome everybody, good morning! Welcome back to the OpenShift TV Coffee Break. Today we have a special guest, but before introducing him, let me introduce everyone here. My name is Natale Vinto, I'm a Product Marketing Manager here at OpenShift. And hey, Jafar, Andrea, how are you? Good morning.
A
Finally! Thank you, my wonderful co-hosts here on OpenShift TV. And we have a super special guest, a book author. We're also going to talk about this awesome book that came out a few days ago. Welcome! And so, first thing: "Vanya" is the correct pronunciation, right? Yes, that's right! Okay. Wanja, do you want to introduce yourself?
C
Of course. I'm working as a partner enablement manager for technical topics on OpenShift, from the development perspective, at Red Hat.
A
Awesome. And the reason why we invited you, Wanja, is not only because you are cool and do lots of stuff. We know you do lots of cool demos and workshops, but also because your book is out, and I think Andrea has a copy.
A
First, I would like to say hello to the chat. Welcome, everybody! If you have any questions, please post them in the chat and we will answer them during the show. The topic of today is DevOps with OpenShift, and we'll take this opportunity to also talk about the book that Wanja wrote. Let me also share the link in the chat, so people can download it for free, right, Wanja?
C
I would like to, yeah. It is about understanding and using GitOps, but it's not purely theoretical.
C
From my perspective, it was very important to have something like a practical guide, a practical blueprint, so to say, to GitOps. That means it discusses a complete use case: a REST service based on Quarkus at the beginning, and of course the REST service is using a database. Then you go first through the development of that REST service with Quarkus, and then down the complete road until you are using Tekton for your development and staging pipelines, and then at the end, setting up Argo CD to make sure you have something which automatically deploys the complete environment you are using.
C
Well, at the beginning I didn't want to write a book. This is quite interesting, because I do all this in my day-to-day job when I'm enabling partners for Red Hat from a technical perspective, and the only thing I wanted to do at the beginning was just take notes. I talk a lot when I'm doing my enablement sessions, so I needed something to keep my notes and to prepare myself.
C
And then, after a session, I had so many questions coming from the audience.
C
People said, "I'm overwhelmed with the news around OpenShift and Kubernetes, and there are so many tools like Tekton and Argo CD, it's way too much." So I thought it might make sense to start writing a blog entry about this, just one small one, where I could point my attendees afterwards, so that they are able to read it. Well, then it became two blog entries.
C
Then I decided to write three, because I also wanted to talk about Kustomize and things like Helm charts, because this is also part of the CI/CD chain. And then I thought, well, if I'm doing all this, I also have to talk about Tekton pipelines, so I wrote a complete article on that as well. Between the third or fourth and the fifth chapter, or blog entry, my management came to me asking, "Well, why are you not writing a book out of this, stupid?", they said.
C
Yeah, well, and then I got in touch with other people inside Red Hat who were also writing books. I think you were one of them, Natale. So I asked them, and they were so friendly.
C
I worked professionally with them, with the editors at Red Hat, and it was so nice. And I introduced GitOps to them, in a way. I did all my writing in Markdown, of course, because it's a technical book. The initial process was to generate a Word document out of it to send to them, and then I got it back completely red because of all the mistakes I made, and so on. And then I decided: no, this can't be.
C
I didn't want to do exports and imports and stuff like that, so I asked them if they knew how to use Git, and they said, "Yeah, of course I know Git", because the editor is a technical writer as well. And then we decided to use Git completely. So if you want to have a look at the Git repository of the complete book with all the sources, you can find it publicly available on GitHub, under my account.
A
Oh wow, so you did.
C
No, unfortunately not. This was just using Git as the central repository to write all the chapters and so on, and to get the changes back.
B
Yeah, I think it could be good if we share the link at some point.
A
Yeah, it's already shared. And look, this is the page; I was already sharing it on the screen.
A
We put up the link for the book, but now I think we also have this one, yeah.
A
Very nice, very cool. And you mentioned, if you can go back to the cover of the book, this is the landing page. We put the link in the chat; you can download it for free. It's an e-book, so you get a PDF, just register for free at Red Hat Developer, which is the publisher. And look at this cover, you see the monkey.
A
The design is from our awesome colleague, shout out to Kalik, who made a very nice graphic. And the content is very cool too. But Wanja, do you want to talk about the contents? The title of the show today is DevOps with OpenShift, Tekton and Argo CD. Can you explain to us how you compose your DevOps on OpenShift with those two open source projects?
C
Yeah, well, the discussion with all the internal people was quite nice, and getting to the title of the book was also quite interesting. At the beginning I wanted to have something like "Understanding GitOps" or "GitOps from Scratch", or something like that, but there were other voices.
C
They thought "Getting GitOps" was way better for this. And one thing which is quite interesting is the monkey on the cover, which you can see there. I wanted to have a head-scratching monkey sitting in front of a jungle with signs saying: this is Tekton, go there; this is Argo CD, go there; and so on. And then you have a monkey which is like, "Okay, where should I go now?", right?
A
Nice, nice. And I'm looking at the book's cover. Can you show the list of topics of the book? For instance, besides basic Kubernetes, there's a bit of OpenShift, then we have Kustomize, Tekton, Argo CD, Helm. So there are lots of cool tools, right?
C
Yes, there are, because it's natural: when you would like to set up a new project, or reuse an existing one, you will be thinking first of all about how to do CI/CD with your new project. Would you like to still use Jenkins? Would you like to use Tekton for this? Would you like to use anything else out there?
C
So I was thinking about answering some of those questions and giving a demo on how to use this. Then, naturally, you're also thinking about distribution of your applications, for example if you're selling your app to your customers or something like that.
C
So I'm also discussing the differences between Helm charts and operators, and when to use which. And of course, every chapter of the book, every topic, also has a practical example, and all those examples, by the way, can also be found publicly on GitHub.
C
Just have a look at the book example repository, where you can find more or less everything. We start simple: when you are new to OpenShift, you can use, for example, the Source-to-Image mechanism to build your application inside OpenShift. Then the next question is: okay, now I have my development environment.
C
How can I go over to the test environment, so that I can stage my development? For this I was thinking: either we just export the Deployment, the ImageStream and so on, or we use OpenShift Templates. So I'm also discussing Templates, and thinking about when to use them. If you're in a mixed environment you can't use Templates, because OpenShift Templates are unfortunately OpenShift-specific.
C
You can't just use those templates on plain Kubernetes, so there must be something else, and this was Kustomize. Kustomize, in my opinion, is one of the core technologies when you are thinking about CI/CD in a Kubernetes environment.
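[Editor's note] As a rough sketch of the base/overlay idea Kustomize brings to this, following standard Kustomize conventions (directory and resource names are illustrative, not necessarily the book's exact layout):

```yaml
# overlays/dev/kustomization.yaml -- a dev-specific view over a shared base
resources:
  - ../../base        # base/ would hold deployment.yaml, service.yaml, etc.
namespace: book-dev   # everything from the base is re-targeted to this namespace
```

Rendering it with `oc apply -k overlays/dev` (or `kubectl kustomize overlays/dev`) merges the shared base with the environment-specific changes.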
C
So I have a complete chapter on Kustomize files and Kustomize concepts, to make sure that you're able to deploy your application from scratch at the end. And then, of course, the distribution part: if I would like to make my application publicly available, then I need Helm, I need Kubernetes Operators, and so on.
C
Then, of course, I'm digging into CI/CD, using Tekton for this. And the final part is: well, if I already have everything, if I'm using Kustomize, if I have all my metadata files for my application already available in the Git repository...
C
...wouldn't it make sense to have a tool in between, with my Git repository on the one side and the Kubernetes or OpenShift cluster on the other side? And this is then naturally Argo CD, which is watching one side and deploying all the changes to Kubernetes, or OpenShift in this case.
D
So, in a sense, you analyzed various options, but in the end you also provided a recipe, from scratch, to do DevOps, from the beginning to the end. Obviously you took an example application, you chose Quarkus, but it's applicable to many other cases.
C
Exactly, yeah. The book example repository could easily be used as a blueprint for this. Of course, if you have a larger application landscape, not just one microservice connected to a PostgreSQL database, then you need to think differently from time to time. But for each of those microservices which have one or many relationships to external tools, like a PostgreSQL database, like SSO or Keycloak, whatever, you could easily use this as a blueprint.
C
You can go through the complete example, and at the end you have pipelines, you have a basic understanding of using Argo CD with all this, and you are able to easily deploy your complete environment.
A
That's pretty cool. And I'm sure the audience would also like to see some of those examples. But before that, Wanja, I have a question. I see you mentioned OpenShift Templates in the book as well. Do you think that Templates can be, at least partially, substituted by Helm charts, or do you think they are still relevant for the DevOps story, at least on OpenShift?
C
From an OpenShift perspective, I personally would not use Templates for doing CI/CD or pipelining things. I would use OpenShift Templates for setting up samples for my junior developers, for example. When you are going to create a new application, most of the steps are always the same.
C
You need to find a baseline, you need to connect it to a database or to an SSO thingy or whatever, and all that can easily be created as an OpenShift Template, like all the examples which are available inside OpenShift by default. The nice thing is that Templates are integrated into the UI of OpenShift, which means any developer is able to easily use them.
C
There's a description, there's an icon, there's a list of parameters which you need to define or change, of course, and then you can easily start a new application from scratch. So I would say: if you're an architect or a senior developer and would like to create some templates for your junior developers, I would use those.
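[Editor's note] A sketch of the kind of starter Template described here; all names and values are illustrative, not taken from the book:

```yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: rest-service-starter
  annotations:
    description: "Starter REST service, pre-wired to a database"
    iconClass: icon-java       # icon shown in the OpenShift console catalog
parameters:
  - name: APP_NAME
    description: Name of the new application
    required: true
  - name: DB_HOST
    description: Hostname of the database service
    value: postgres
objects:
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ${APP_NAME}
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ${APP_NAME}
      template:
        metadata:
          labels:
            app: ${APP_NAME}
        spec:
          containers:
            - name: ${APP_NAME}
              image: quay.io/example/rest-service:latest   # placeholder image
              env:
                - name: DB_HOST
                  value: ${DB_HOST}
```

A junior developer instantiates it from the console, filling in the parameters the UI lists from the `parameters:` section.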
B
So yeah, that's actually a very cool feature that we added recently in OpenShift 4.10. It's something that we wanted to have for a while: the ability to add your own pages to the OpenShift console. So you can provide a very customized experience, and also maybe bring data from other tools into the OpenShift console. I'm actually going to have a show on it this afternoon, with Ask an Admin, in a couple of hours.
B
At 3 PM CET. We call that console dynamic plugins, and it allows you to write your own plugin in React. We provide all the framework, and you can then have your own things appear in the console. So, for example, for what you mentioned about the Templates, you could also create your own experience: if you have a developer workflow that you want as a routine to onboard new developers...
B
...you can create like a three- or four-step wizard that is really customized for your developers, and under the hood it can use whatever you want, like Templates or Helm charts or something like that. That's actually one of the plugins I'll be working on: basically a simple steps wizard, but something very customized.
B
Depending on what you want to do. So yeah, I do agree: Templates are still something to kick off the experience for people that are not very familiar with the platform, whereas with Helm charts you can do slightly more sophisticated things, and they are probably a bit better adapted to a CI/CD pipeline because of everything you can do with injecting values, customizing things, etc.
A
That's cool. It looks like a very cool day for OpenShift TV, Andrea. We have Wanja today talking about Getting GitOps, then we have Jafar in a couple of hours doing a live demo of customizable web consoles, so stay tuned on OpenShift TV. But still, Wanja, today we are here for DevOps with OpenShift. Can you please show us the code, as we say here? Can you do some live demos for us? We would like to see that in action.
C
Yes, of course. If you would like to share the screen... thank you so much. What you can see here right now is nothing: just the default projects, which have nothing to do with what I'm going to show you. So I need to provision my demo environment now. By the way, this OpenShift is running on my single-node cluster, running on what in German you would call a "Brüllwürfel".
C
Let me go to the chat. It's called...
C
Because this is a very small Intel NUC environment, and it's quite loud, but it's nice: it has 48 GB of RAM or something like that. So that's my OpenShift single node. Now let's install my CI environment first.
C
Let me go to this one. This is coming from my example, so just download or clone the book example repository, and then in the gitops directory of the repository you can see the Argo CD and Tekton parts.
C
If you go to the book-ci application now, you can see, first of all, that a Nexus environment is being installed as well, because it's a Java application, and when you're building it on OpenShift you need to find a way to store, well, not the complete internet, but parts of it, right?
C
So now we have two pipelines being installed here: the dev pipeline and the stage pipeline. Let's have a look at how those two look. The dev pipeline is cloning the source code, the book example repository, and cloning the config, the person-service-config repository, which is the GitOps repository, so to say. This one is very important: everything we need in order to install our application in a new namespace is stored here.
C
That's one thing. And then, of course, we are packaging the application, we are building the image, we are pushing the image to a Quay registry, quay.io in this case; it could be any registry, of course, but I'm using the public Quay because I'm used to it. Then we are extracting the digest coming from the build of the image, and we are updating the Git config repository with that image digest.
C
When we go back to person-service-config, we are changing the dev overlay. That's the difference between the base and the dev environment: in this case, in the kustomization file, you will see the new image digest of the current build.
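[Editor's note] The digest bump the pipeline writes into the overlay typically lands as an `images:` entry in the kustomization file; a sketch (registry, repository, and digest value are placeholders):

```yaml
# overlays/dev/kustomization.yaml (excerpt)
resources:
  - ../../base
images:
  - name: quay.io/example/person-service
    # The pipeline rewrites this on every successful build;
    # the value below is a placeholder, not a real digest.
    digest: sha256:0000000000000000000000000000000000000000000000000000000000000000
```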
A
So this is the point, sorry to interrupt, that's the point of contact between the CI world, with Tekton, and the CD world, with Argo CD, right? Exactly, exactly, yeah.
B
Yeah, that's all good. Sorry, go ahead, but I'll have a question just afterwards.
C
Okay. So the difference is: we are used to having a pipeline which does everything, a pipeline which downloads everything, builds everything, and then deploys everything at the end. This is the way we are used to doing CI/CD. And GitOps is a little bit different.
C
With GitOps you have to think about who is responsible for deploying the actual application at the end, and in my opinion this is Argo CD, because everything we need is stored in Git. So Argo CD is reading everything from Git, merging the config, and deploying it into a new namespace.
C
So I'm not actively deploying anything with the Tekton pipeline in this case; I'm just instructing it to update the image digest, and then Argo CD takes care of everything else. So, you had a question?
B
Yeah, sure. Can you please go back to the screen you were just showing, and maybe show the last commit, the change where it says "updated the image digest"? Just so we see exactly what changed, because I think this is very important. Okay, so that's basically where you are going to create the new tag.
B
So
is
this
based
so
do
you
have,
for
example,
a
track
to
the
commit
id
or
something
like
that,
or
is
it
just
like
the
the
latest
tag
of
the
image
that
has
been
created?
This.
C
It's just the latest digest of the image which has been created. The build pipeline is building the image and using the image ID, or the image digest so to say, to update the kustomization.yaml file in GitHub, right.
B
Yeah, thanks. It might seem too detailed, but I think this is a key point, because that's actually what makes it work. It's like: I have a sort of template, and whenever you change that value in that specific repo, it's going to trigger the deployment by Argo CD, what we call the synchronization loop. And something else, I don't know if you mentioned it, but you basically have two repos: you have one for the application source code...
B
...and you have another one for the Git config that is used by the GitOps tool, Argo CD. This is a common approach nowadays: you separate the code. So you don't have a deployment that gets triggered whenever you make a code change; the deployment of the application gets triggered when you make the change in the config repo, and not necessarily in the application repo.
C
Exactly, yeah. Right now just the CI environment is set up, so it does nothing. What I'm not doing right now, and this is just because I have my OpenShift cluster running inside my own network, is using a trigger.
C
I've tested this, and it works when I'm running OpenShift not inside my own network but somewhere on Amazon or something like that. But for this environment, and because I am living in an area where a good network connection is sometimes a matter of luck, I'm trying to do everything internally.
C
Cool, okay. One thing more I need to do in order to set up the Tekton environment, the CI environment: I've created a bash script, which is also creating a service account.
C
When Tekton starts to build something, it always runs in the security context of a service account. And because the build pipeline also does a docker push, writing to an image registry, Quay in my case, and it is also writing to the GitHub repository, I need to somehow provide the credentials for those two, the secrets, exactly.
C
By the way, there are two passwords in this script, but I change them frequently. So you can copy and paste them if you want, but well, you shouldn't.
C
So this script, as mentioned, is just doing the same as what I did with `oc apply -k` already, but it's also creating the service account and two secrets: one to access the image registry, quay.io, and one secret for GitHub in this case. So now everything is in here, and the next thing I need to create is the Argo CD environment.
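[Editor's note] A sketch of what such a service account with linked credentials could look like, using Tekton's annotation-based credential mapping; all names are assumptions, not the book's literal files:

```yaml
# Registry credentials, mapped to quay.io via the tekton.dev/docker-0 annotation
apiVersion: v1
kind: Secret
metadata:
  name: quay-credentials
  annotations:
    tekton.dev/docker-0: https://quay.io
type: kubernetes.io/basic-auth
stringData:
  username: <quay-user>
  password: <quay-token>
---
# Git credentials, mapped to github.com via the tekton.dev/git-0 annotation
apiVersion: v1
kind: Secret
metadata:
  name: github-credentials
  annotations:
    tekton.dev/git-0: https://github.com
type: kubernetes.io/basic-auth
stringData:
  username: <github-user>
  password: <github-token>
---
# The pipeline runs as this service account, so both secrets are available
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipeline-bot
secrets:
  - name: quay-credentials
  - name: github-credentials
```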
C
If I go back to OpenShift and have a look at the Argo CD environment here, you can see there is nothing right now. This is the Argo CD which comes with the OpenShift GitOps operator, which you can install, but then, of course, you need to create Applications, which are the mapping between the Git repository and the Kubernetes instance, OpenShift in this case. I haven't created anything yet, but I'll do it right now. So let's go back again.
C
I do an `oc apply -k` on my argocd folder. It's creating two namespaces, book-dev and book-stage, and two role bindings for the Argo CD application controller, the service account in there, so that this service account is allowed to write to and update book-dev and book-stage. And then, of course, it's creating the two Argo CD Applications, book-dev and book-stage. Let's have a look back here: those are now created in Argo CD.
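[Editor's note] An Argo CD `Application` mapping a Git path to a target namespace can be sketched like this; the repo URL and path are placeholders, and the sync policy shown is what enables the automatic deployment described here:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: book-dev
  namespace: openshift-gitops   # where the OpenShift GitOps Argo CD runs
spec:
  project: default
  source:
    repoURL: https://github.com/<your-account>/person-service-config  # placeholder
    targetRevision: main
    path: config/overlays/dev   # hypothetical path to the dev overlay
  destination:
    server: https://kubernetes.default.svc
    namespace: book-dev
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from Git
      selfHeal: true  # revert manual drift on the cluster
```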
B
Sorry, just one question about the role binding. Is it a role that you have created that allows write access to the namespaces, or is it a default role?
C
There is a role: the internal service account of Argo CD needs to get access to those two namespaces, and this is what I did here. Okay, now you can see that the book-dev application is already synced successfully with my OpenShift environment.
C
And the same is also true for the book-stage environment, so it's all already done. You can see the synchronization was done, and there is no problem, as far as I can see. So let's go back to OpenShift, have a look at the topology view of the book-dev namespace, and you can see we now have the person-service.
C
Here we have a database, which is, by the way, based on the Crunchy Data PostgreSQL operator, and both are already connected to each other. And if everything has worked correctly, you can see it now: we should have the person-service with content already in the database, because Argo CD also has a hooking mechanism.
C
When you are deploying a new application, you can create a hook which, for example, is called after the synchronization happens, and I used that post-sync hook to fill the database with some life. So we have four entries in there, four singers.
C
Otherwise it would work, of course, but it's my internal cluster running here, so that doesn't work, unfortunately.
C
Every time there's a sync, it is going to run. Okay, just a second, that was the wrong one, this one, sorry. But it has a check in there. If you have a look at this here, you see this is the post-sync hook.
C
The sync hook is nothing more than a typical Kubernetes Job in this case, but it could be anything. For example, you could also use a Tekton pipeline which is executed as a post-sync hook; that would all work.
C
So what does this thing do? Well, it's taking the service URL, which is the internal URL, and first of all it's checking if there are already some persons in the database. If not, we are going to fill some data into the database, and if yes, it's simply ignored.
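[Editor's note] Such a post-sync seeding job can be sketched as a plain Kubernetes `Job` with Argo CD's hook annotations; service name, port, and payload here are illustrative guesses, not the book's exact code:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: seed-database
  annotations:
    argocd.argoproj.io/hook: PostSync            # run after a successful sync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: seed
          image: registry.access.redhat.com/ubi9/ubi-minimal  # any image with curl
          command: ["/bin/sh", "-c"]
          args:
            - |
              # Only seed when the service reports an (almost) empty person list
              if [ "$(curl -s http://person-service:8080/person | wc -c)" -le 2 ]; then
                curl -s -X POST -H 'Content-Type: application/json' \
                  -d '{"name":"Example Singer"}' http://person-service:8080/person
              fi
```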
A
Nice, that's pretty cool and very useful, Wanja. When you work with databases, if you think about the PreSync hook, now we're seeing the PostSync, but the PreSync can be some database schema upgrade or something like that, and the PostSync is something like you showed. So it's very important that GitOps also provides those kinds of mechanisms.
A
We
we
put
in
the
chat
the
link
to
the
documentation
are
go
cd,
documentation
on
hooks
if
you
like
to
check
out,
but
this
is
a
real
example
on
how
it
works
on
a
real
use
case,
right
application,
database
and
and
also
like
available
databases
since
you're
using
the
operator
right.
It's
kind
of
a
I
see
a
stateful
set
and
there
are
multiple
pods.
C
Yes, exactly. Here you see the person-service; it's really a real REST CRUD service. I think I implemented all the methods which are necessary: creating a new person, updating that person, deleting a person, and so on. So it's a real REST service, and it is connected to that database over there. If you are interested in how I'm provisioning the database, it's this one, postgres.yaml.
C
It's coming from an operator, the Crunchy Data operator I'm using here, and the nice thing is that I'm installing the database in a declarative way. Just as I install my application declaratively, I'm also installing the database declaratively, and this is quite nice.
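[Editor's note] With the Crunchy Data operator (v5 API), such a declarative database can be sketched roughly like this; field values are illustrative, not the book's exact manifest:

```yaml
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: person-db
spec:
  postgresVersion: 14
  instances:
    - name: instance1
      replicas: 2                      # highly available: more than one pod
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 1Gi
```

Applying this one resource is enough; the operator reconciles the StatefulSet, secrets, and services behind the scenes.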
C
This is everything I need in order to deploy the database, and the rest is done behind the scenes by the corresponding operator. There are several operators out there, but from my own youth, when I was still blond and not gray, I'm used to doing PostgreSQL stuff, and Oracle of course, but mostly PostgreSQL. So I was using PostgreSQL for this environment. And by the way, when we want to build the environment now: we have person-service 1.6.0 or whatever, based on Quarkus.
C
I'm now changing this a little bit. Let's go to the index.html file, which you can find here, and just change it; you can of course make any change. This is just something so that you are able to see that something has been changed.
C
Exactly, it's a non-breaking change. Because, as you might know, when you are working with customers and you're making breaking changes, and you do not have any pipelines, your customer ends up sitting behind your back with a gun in his hand, so to speak, saying "fix this now", right? Then you know what I mean.
C
Let's look at what has changed: I've just changed the index.html file. So let's do a git commit now, "bump version" or whatever.
C
Now I do a git push, and then, of course, I need to start the pipeline. This is also something I do with my script, which is part of the book example repository.
C
Why do I use a script here? I can show you in a second: again, it runs in the context of the service account, and what it's doing here is executing the build pipeline.
C
It runs the pipeline under the service account I created, and unfortunately, right now the UI of OpenShift doesn't support choosing or changing the service account, at least it didn't when I was building all the demos, so maybe that has changed by now. Now let's go to the book-ci environment, where my pipelines are.
C
Exactly. I have chosen to create my own service account just to show that it's working, and then you have to deal with the other challenges, so I'm not able to just hit "start pipeline" here. But anyway, in a real environment I would say you would not use "start pipeline" here either; you would execute it in a different way.
C
Now we have the pipeline run here, which is doing exactly what I described already: we are cloning the source, we are cloning the config, we are packaging everything, and we are building the image now. And by the way, I'm a lazy guy, so instead of using a Dockerfile behind the scenes here, I'm using what Quarkus provides us.
A
And you extract it from the output. I think this is important: can you show us the source code, the YAML file of the pipeline? How do you extract, from the output of the Quarkus build and push, the hash of the container image, and give it as a parameter to the next task?
C
Yeah, this is the Tekton part here. What I'm doing is: the extract-digest task is not doing anything else than taking the jib-image.digest file, which was created during the build, and then taking the content of that file and putting it into the Tekton task results.
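[Editor's note] A sketch of an `extract-digest` Task along those lines; image and workspace names are assumptions, while `jib-image.digest` is the file the Quarkus Jib extension writes:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: extract-digest
spec:
  workspaces:
    - name: source            # the workspace where the build ran
  results:
    - name: DIGEST
      description: Image digest produced by the Quarkus/Jib build
  steps:
    - name: extract
      image: registry.access.redhat.com/ubi9/ubi-minimal
      script: |
        #!/bin/sh
        # Copy the digest written by the Jib build into the task result
        cat $(workspaces.source.path)/target/jib-image.digest \
          > $(results.DIGEST.path)
```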
A
So basically, you have a file with the digest, and then you are sending it into a result. Results are the way Tekton gives you an output that you can also use as an input elsewhere. This is the powerful thing, no?
B
Yeah, so how do you use it in the next step? This would be...
C
The next step, yep. Here I'm using update-digest, which is using the Kustomize tool to do an "edit set image" with the new digest. And the digest is coming from... now I need to go to the pipeline, of course, the dev pipeline... let me check here, where is it... yeah, it's the new-digest parameter. Here we go, exactly.
C
Can you see it? Yeah, you can. So this is also automatically being extracted from this. So providing parameters — input and output variables — from one task to the other in a pipeline, this is quite nice.
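Wiring a task result into the next task's parameter could look roughly like this (task, parameter, and image names are assumptions for illustration, not the exact ones from the demo):

```yaml
# Sketch of the hand-off inside a Pipeline definition: the DIGEST result
# of extract-digest feeds the NEW_DIGEST parameter of update-digest.
tasks:
  - name: extract-digest
    taskRef:
      name: extract-digest
    workspaces:
      - name: source
        workspace: shared-workspace
  - name: update-digest
    runAfter:
      - extract-digest
    taskRef:
      name: update-digest
    params:
      - name: NEW_DIGEST
        value: $(tasks.extract-digest.results.DIGEST)
```

Inside update-digest, the parameter would then be used for something like `kustomize edit set image person-service=quay.io/example/person-service@$(params.NEW_DIGEST)`.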
A
Indeed, indeed — and this is in the source code, right? We put it in the caption; everyone can also look at the whole source code in your repository that we put here, yeah.
B
Right — and I think the image digest is something in the Quarkus properties, or something like that, if I'm not —
C
Correct, yeah. So when you're using jib-container-build, the Quarkus extension — when you use this and you're pushing the image to an external registry, then that file is being created. So jib-image.digest, I think, is the name.
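A hedged sketch of the relevant `application.properties` entries for the `quarkus-container-image-jib` extension (registry, group and name below are placeholders):

```properties
# Build the container image with Jib as part of the Maven build,
# and push it to an external registry.
quarkus.container-image.build=true
quarkus.container-image.push=true
quarkus.container-image.registry=quay.io
quarkus.container-image.group=example
quarkus.container-image.name=person-service
```

With these set, a plain `./mvnw package` builds and pushes the image — the "just another Maven goal" mentioned later — and the digest file shows up under `target/`.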
A
I know Jib, you know — and there's another tool called JKube that can do the same thing and also uses Jib. I know Jib is using a kind of distroless container image, no? The container image will not be something like from Fedora or from CentOS; it's going to be kind of distroless. I'm wondering if you can also control the layers — like, if you want to start from a certain base, let's say Fedora, can you still do that with Jib? Do you know if that is possible?
C
It is, yeah. So this is all done within Quarkus. If you want to have a different base image to build your image from, then just have a look at the Quarkus container image guide — there's a guide on the quarkus.io website where you are able to see: okay, this is the way I am able to do it.
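For example, the Jib extension exposes a property for the base image; a hypothetical override (image reference chosen for illustration) might be:

```properties
# Sketch: replace the default base image of the Jib build
quarkus.jib.base-jvm-image=registry.access.redhat.com/ubi8/openjdk-17
```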
A
Wonderful, wonderful. So yeah, if you are using Quarkus this is very lightweight, no? You don't need to use any kind of docker build or container build on the node; it can be just Quarkus. In general, the agnostic way would be, you know, using — in our case — Buildah, because it's dockerless, daemonless. But you can also use docker build if you have that; there are multiple ways. But this is very, very cool.
C
No, exactly. So I just wanted to make sure that everybody is able to understand that demo, right? So this is the reason I'm using Jib for this — because it's so easy to create your pipeline then, because, as mentioned, it's just another Maven goal which you're calling, and that's it. So yeah, everything should already be updated, which means, if I'm now going back to the book-dev environment — okay, well, everything is already updated.
C
So if I now reload, you see — cool, yes, wow — everything is done. So our new application is up and running, and of course in the staging environment we still have the old one. So if I go back to staging, if I open this one — then of course we still have — no, that's wrong, that's book-dev; this is stage. So this one is still 1.6, and now we would like to use the staging pipeline, and again, for this, I prepared something based on that.
B
This part can be a bit tricky, so yeah, let's see how we do it, and maybe we can have a conversation on different ways this can be achieved, especially with Tekton. Okay.
C
Yeah, and now that is starting the Tekton pipeline.
C
So if I go back to OpenShift now, into the CI environment pipelines, you can see that now the staging pipeline is running, and that staging pipeline is doing the following: first of all, it's cloning the config — so the person-service-config repository.
C
Then we are creating a branch in this person-service-config repository, and again we are now extracting the digest. But this time the digest is coming from the development config, so I would like to use the current state of the development environment — I would like to use it for staging. Of course, you could also do something like reuse that digest, for example.
C
A
Yes — skopeo is a tool to, you know, copy container images, right? Instead of doing a kind of docker pull or podman pull and then a podman push or docker push, you can use `skopeo copy` to copy and push, right?
C
Exactly, exactly. So skopeo has now used the digest of the last build to create a tag, which is called 1.7.0-dev, and this should also be reflected in my GitHub repository here. So you can see that there is a new release branch with that tag I've created, and, of course, right now nothing has happened with Argo CD, because Argo CD is watching, in this case, the main branch. So, of course, in a real environment —
C
So this is — yeah — but in this case it's the main branch we are using here. So if we go back here to the staging environment — we entered the book-stage — you can see that right now nothing has changed. So if we are refreshing this, the synchronization is still the same, so it's still 1.6.0 live, as you can see here quickly. So what we need to do now is, of course — in a real environment —
C
You would have a DevOps engineer or an architect or whoever — somebody — and that somebody is now going to Git, saying: well, let's compare what was changed. Now we are creating a pull request. Now, creating the pull request could also be automated — there is a `gh` tool available which you could use for this, to directly interact with the GitHub or GitLab APIs — but I haven't used it so far. Maybe in a later release I will do this, yeah.
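The GitHub CLI mentioned here could automate that step; a hypothetical invocation (repository and branch names are made up for illustration) might look like:

```shell
# Sketch: open a pull request from the release branch created by the
# staging pipeline, using the gh CLI.
gh pr create \
  --repo example/person-service-config \
  --base main \
  --head release-1.7.0 \
  --title "Promote 1.7.0 to staging" \
  --body "Updates the stage config to the image digest built by the dev pipeline."
```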
B
C
Awesome, yeah — and here you can see now that there is a change, and this is exactly what has changed. And this is, again, something that is really important for the GitOps paradigm: that you always know what is live and what has changed from the older versions, yeah.
C
So now I can see exactly what has changed: the red one was the old version, and the green one is now the new version. And if I now, of course, go back to that one, I say: well, seems I'm able to merge it. So I'm going to merge my pull request.
C
There was a change now — please — let's go back, do a git clone, and now let's publish the changes to the staging environment. And if we go back to the staging environment, which is this one, after a while you see: yay!
C
You'd do exactly — more or less — the same with a prod pipeline or a pre-prod pipeline or something like this, and yeah, just use a different branch name or whatever. But this is completely up to how you define it. And yeah, Argo CD is the guy in the middle, which is doing the actual CD part — the deployment part.
A
And this CD part could also be multi-cluster, right? You can have one Argo CD controlling multiple clusters, or one Argo CD per cluster. That's powerful. In that case, if we have multiple clusters, how do you see this structure? Would you have a central hub Argo CD pushing to multiple clusters — is the architecture you showed us still valid?
C
So I can't show it, because I just have one Intel NUC system, but yeah, this would be exactly the same. Instead of using — just a second, let's go here — instead of using the in-cluster one — I hope you can see it, yeah, this URL — you would use a different URL, wherever your production or testing OpenShift cluster or Kubernetes cluster or whatever is residing. And then, instead of using this one, you're using that other one, and Argo CD is taking care of the deployment.
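A sketch of how the destination of an Argo CD `Application` selects the target cluster — the repository URL, paths, and server URL below are placeholders:

```yaml
# Hypothetical Application: the destination.server field is what points
# the deployment at a remote cluster instead of the in-cluster API.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: book-stage
spec:
  project: default
  source:
    repoURL: https://github.com/example/person-service-config
    targetRevision: main
    path: stage
  destination:
    # the in-cluster destination would be https://kubernetes.default.svc
    server: https://api.prod-cluster.example.com:6443
    namespace: book-stage
  syncPolicy:
    automated: {}
```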
A
Cool, pretty cool. Well, wow, that was great — and people can see it's live, because we shared in the chat the repository that you were using, so you can see the pull request you made, the merge. That was all live, and it's impressive, amazing. So what is the next step? Of course, downloading the book, because you find all those examples there, right? Maybe, Andrea —
A
We can share again the link to the book so that people can see it. So yeah, really, our —
A
Our suggestion is to download the book and find this example, and also more content. I've seen in the list there was also a part about Tekton security, no? I think there's more.
C
Yeah, so I'm talking briefly about Tekton security too — especially in the context of: okay, how can I provide my security credentials for GitHub or for quay.io, for example? So this is what I'm going to discuss in there, really briefly. If you need more, then there's very good documentation.
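For reference, Tekton's annotation-based credentials work roughly like this: a basic-auth `Secret` annotated with the target host, attached to the pipeline's `ServiceAccount` (all names and values below are placeholders):

```yaml
# Sketch: registry push credentials for a Tekton pipeline.
apiVersion: v1
kind: Secret
metadata:
  name: quay-push-secret
  annotations:
    # tekton.dev/git-0 would be used analogously for a Git host
    tekton.dev/docker-0: https://quay.io
type: kubernetes.io/basic-auth
stringData:
  username: myuser
  password: mytoken
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipeline
secrets:
  - name: quay-push-secret
```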
C
A
Well, fantastic. Do you have any final question, Andrea, Jafar, before we close up?
D
B
C
No, no, no, no, no — I'm sorry, you are absolutely right. So upstream Tekton, I think, has a dashboard UI, but I'm not sure — I've never used it or had a look at the Tekton dashboard. But when you're using the OpenShift Pipelines operator, which you can see — or, sorry —
C
So, OpenShift Pipelines — this one provides an OpenShift developer console integration, and this is exactly what you see now. So when I'm going back to my book-ci environment here and having a look at the developer view, the pipelines, then you see this one is completely coming from the OpenShift Pipelines operator.
B
Yeah, so I believe the Tekton dashboard — like the upstream one — shows you the tasks and the logs and results and stuff like that, but it doesn't give you this visual graph. And the other thing is the editor: if you click on the pipeline itself and then you go to edit — yeah, one second —
B
Actions, Edit — that's one of the great values you get in here: you are able to use the tasks that you already have in there and visually create your pipeline, which is, I mean, very, very useful, because pipelines in Tekton can tend to become very lengthy — you know, hundreds of lines of YAML files, yeah. So, exactly.
C
I would also suggest that when you're starting with Tekton, you use the OpenShift Pipelines editor for this: just design your pipeline, as you can see it here, and then you can always easily go to the YAML view and, for example, just take that file and store it locally within your project, yeah.
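Exporting the visually designed pipeline to a local file could be as simple as the following (pipeline and namespace names are placeholders):

```shell
# Sketch: dump the pipeline designed in the console to a file so it can
# be versioned alongside the project sources.
oc get pipeline build-and-push -n book-ci -o yaml > tekton/build-and-push.yaml
```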
C
So, exactly — because, as you can see here, it is a little bit lengthy. And honestly, who said that YAML is a great configuration technology? I can't understand this. Okay, well, what's the difference between YAML and XML? XML is also quite lengthy, but yeah —
A
That's an interesting one, but yeah, I think it's very cool what we've seen. And Wanja, if people want to try this stuff while reading the book, I think they can download CodeReady Containers — a local instance of OpenShift, for free — that you can download to your workstation, start OpenShift locally, and start using Tekton, Argo CD and OpenShift, and do the same example you did today, with the nice pipeline UI builder and all the stuff, right?
A
Everything. Fantastic, fantastic. We put the link in the chat — please download CodeReady Containers to try it out if you don't have an OpenShift. Just download the book, read the book, send feedback to Wanja, and follow Wanja on Twitter. We have your Twitter handle — let me — I already shared it in the chat. Oh yeah, here we go, I'm ready to share it — shared it in the chat.
A
Thank you very much, and we close up, folks. Thank you, Wanja, for this — all really awesome, an awesome live demo. Let's close up today's OpenShift TV. We have leveled up — next we have Jafar doing the live demo.
D
Yes, so next week we're still going to be talking about CI/CD, but this time we'll be talking about CI/CD pipelines for AI — for AI-based applications that use Red Hat OpenShift Data Science — and we'll have the one and only Max Murakami.