From YouTube: OCB: What's new in OpenShift Pipelines and OpenShift GitOps - Jaafar Chraibi/Christian Hernandez
Description
What's new in OpenShift Pipelines and OpenShift GitOps in OpenShift 4.8 with Jaafar Chraibi and Christian Hernandez (Red Hat)
A: Hello, everyone, my name is Karina Angel and I'm one of the OpenShift product managers. Welcome back to OpenShift Commons and this series of briefings we're kicking off — it's a deeper dive into what's new in OpenShift 4.8, which is coming out really shortly. As part of my role, I help cover products that sit on top of OpenShift.
B: Thank you, Karina, and thank you, everyone, for giving us the opportunity today. My name is Jaafar and I work as a technical marketing manager for OpenShift, focusing on Pipelines. Prior to that I worked as a solution architect for many, many years on OpenShift itself, so I'm really glad to now be part of the cooking area.
C
You
yeah
cool,
thank
you
christian,
hernandez,
a
similar
background
to
jafar
technical
marketing.
I
focus
on
get
ups
and
also
again,
like
jafar
said.
I
was
also
an
sa
for
a
long
time
on
the
openshift
product,
so
I've
seen
this
product
grow
so
much
and
now
excited
to
be
part,
as
jafar
said,
as
be
part
of
the
kitchen
and
on
the
on
the
back
side.
B: All right, so in this first section we are going to speak about OpenShift Pipelines. As most of you OpenShift lovers already know, we have been providing CI/CD capabilities on top of the platform for a long time, but we've had a really interesting feature cooking up for a while on the platform, and we were happy to announce it as GA in early May. This is what we are going to cover in more detail, but before diving into the specifics of what it does, let's have a quick refresher on what happened in the DevOps space over the last few years.

We all know that DevOps is a key approach to being able to deliver high-quality applications into production at a very fast pace, and to cope with customers' demands and requirements. Part of the DevOps approach was a pillar around continuous integration and continuous delivery. Both of those processes, or methodologies, were at the heart of making everything as automated as possible: from building the application, running the different integration tests and security checks, and so on, all the way to releasing through the pipeline.

Traditionally, there have been separate tools that do continuous integration — things like Jenkins — and other tools specializing in the later phase, which is the actual deployment of the application into different environments. When we go from building the application to deploying it in an automated way, that's what we call continuous delivery. And if we look at how things have been done for the past maybe 10 to 15 years, there were some long-existing solutions around continuous integration.
B: With things like Jenkins — and as you all know, OpenShift has provided those capabilities for a long time. But as we saw the space evolve in different directions, we also wanted to make OpenShift the perfect platform to run not only the traditional CI, or CI/CD, workloads, but also to embrace new, Kubernetes-native ways of doing CI/CD, with things like Tekton, or OpenShift Pipelines, that we will be speaking about today, or even embracing the GitOps approach, as Christian will be showing later in this presentation. So OpenShift has evolved to embrace a new way of doing these things on Kubernetes itself, and that's what we call OpenShift Pipelines — the product that we provide on the platform.

Let's go back over what happened in the last, I would say, 10 years of doing CI/CD and trying to enhance not only the tooling, but also the way of designing and thinking about CI/CD pipelines. If we look at the traditional way, things were designed for a different space. Containers did not exist yet, and CI/CD basically revolved around having a dedicated set of tooling that you installed, that you deployed in different environments, and that you sized to be able to have multiple pipelines running at the same time. But those solutions themselves were monolithic applications for the most part, and they were not really, I would say, designed to fit with cloud scale — having thousands of pipelines running at the same time in different environments, and so on.
B
So,
of
course
you
have
this
ability
to
deploy
what
we
call
agents
that
are
capable
of
doing
on-demand
tasks,
and
then
you
know
scaling
up
in
sort
of
in
some
sort
of
capabilities
the
the
ci
pipeline,
but
they
also
had
some
some
limitations
and
there
were
some
some
drawbacks
when
starting
to
open
up
those
platforms
for
the
whole
company,
then
you
started
to
have
things
like
colliding
versions
of
pipelines
or
plugins
or
agent
versions
and
stuff
like
that,
where
some
projects
needed
some
capabilities
and
some
other
pipeline
projects
needed
some
newer
capabilities,
for
instance,
but
they
couldn't
upgrade
because
they
were
facing
some
collisions
or
incompatibilities
between
all
the
plugins
or
extensions
that
they
needed
to
run
their
applications.
B
So
basically,
what
happened?
Is
those
people
started
to
instantiate
dedicated
versions
or
dedicated
instances
of
those
ci
or
cd
tools
to
be
able
to
cope
with
those
different
problems?
So
then
things
started
to
evolve.
Something
very
big
happened
in
the
iq
world
and
it
was
the
emergence
of
containers
and
of
kubernetes
as
a
an
orchestrator,
and
then
those
people
working
in
the
cicd
space
started
to
think
about
how
we
could
evolve.
The
way
that
pipelines
are
being
defined
and
been
managed
and
run
by
trying
to
leverage
kubernetes
native
capabilities.
B
So
many
actors,
big
names,
started
or
got
involved
in
it.
Of
course
you
have
red
hat,
which
is
a
big
player
in
it,
and
the
goal
was
to
basically
come
up
with
a
new
standard
that
everyone
would
then
implement
in
its
own
set
of
capabilities
or
tooling,
but
at
least
it
will
be
on
a
set
of
standard,
whereas,
prior
to
that
everybody
had
his
own
way
of
describing
what
a
pipeline
would
would
look
like.
B
So
that's
where
the
upstream
project
tecton
came
up
and
red
hat
is
a
big
contributor
to
the
upstream
project
itself.
But
of
course,
as
we
do
with
anything
that
we
run
on
open
shift,
that's
you
have
the
upstream
projects,
but
you
have
then
what
we
call
the
product
size
downstream
distribution,
which
is
called
openshift
pipeline
and
openshift
pipeline,
comes
as
a
an
operator
that
you
can
install
on
the
platform
itself.
It's
it
comes
out
of
the
box
with
the
platform
it
doesn't
require
additional
subscriptions.
B
So
it's
part
of
the
core,
I
would
say
capabilities
of
the
platform,
so
the
goal
here
is
that
it's
built
for
kubernetes.
It
scales
very
well
because
we
know
that
we
have
proof
now
that
kubernetes
is
a
very
scalable
orchestrator.
B
So
it's
it's
very
appropriate
to
cope
with
those
cloud
scale
requirements
and
the
the
cool
thing
is
that
it
also
plugs
into
the
security
mechanism
of
kubernetes.
So,
for
instance,
when
you
want
to
define
who
can
learn
what
on
any
ci
tool.
Traditionally
it's
something
that
lives
out
of
the
lives
in
the
ci
platform,
and
it
doesn't
really
integrate
out
of
the
box
with
the
security
authorizations
or
or
our
back
capabilities
of
the
target
deployment
platform
itself.
B
But
now
with
openshift
pipelines
and
with
clickton,
it
all
relies
on
the
security
capabilities
of
kubernetes.
So
you
don't
have
to
duplicate.
Who
can
do
what
in
different
solutions
to
to
to
be
able
to
deploy
your
applications
on
different
environments?
And, of course, one of the great features that Tekton — and thus OpenShift Pipelines — comes with is this notion of extensibility. Traditionally, for instance, if you wanted to deploy to a cloud environment from a Jenkins pipeline or things like that, you would have to install plugins or extensions that understand how to interact with the target environment. That would rely on plugins developed and maintained by some third-party providers, and then you would have to understand how to programmatically define the logic in your pipeline to interact with those target environments.

But what we wanted to do with Tekton was provide, as an out-of-the-box Kubernetes-style mechanism, the ability to add more tasks or more features into the platform itself, without having to develop or write custom plugins to be able to interact with third-party solutions. We will be speaking about that later on in the presentation. So, what we did with OpenShift Pipelines: of course, you have Tekton, which is at the heart of the platform itself.
B: What we did is productize it in OpenShift Pipelines, so we have a native integration with the OpenShift console, where you can not only run and see your pipelines being executed, and check the logs graphically from this console pipelines view, but you can also design the pipelines from the OpenShift console. If you have never played with Tekton yet, it's very YAML-heavy: you have to write your pipelines and tasks in YAML to describe what you want to do — what task comes after which one, or runs in parallel, and so on. You basically need to be a YAML black belt to be fluent with it. So what we wanted to do is give our users the capability to create and design their own pipelines without worrying about the YAML back-office things, and the OpenShift console then generates whatever YAML resources are needed.
B: So it gives you all this flexibility to start with something easy and neat from the UI, and then, if you want to go into more sophisticated modifications, you can switch to the YAML view and add whatever you want from the YAML perspective. So everyone can find something interesting for them.

One of the major improvements, then, is that we don't need a separate CI or CD solution anymore. If you've been looking at how things evolved over the last maybe four years, some vendors started to provide containerized versions of their solutions, and there's nothing wrong with that, but it required some administration: you had some overhead in the way that you had to deploy the solution, upgrade it, maintain it, and cope with all of that.
B: So that's one of the, I would say, great improvements: you don't need to install any CI or CD solutions anymore. It becomes part of the OpenShift platform itself.

There's a lot of interest in the communities and the outside world, and people have started to create their own tasks for this Tekton ecosystem — basically: I'm a third-party provider and I want to integrate with pipelines.
B: So let's have a very high-level overview of what it brings to Kubernetes and how it all works together. Basically, what it does is enrich Kubernetes with a few new concepts. The main ones are a pipeline, then a task, and then a step. A pipeline is basically a graph that describes the overall workflow that you want to achieve, and it's comprised of different tasks that are run either in sequence or in parallel — and now Kubernetes starts to understand this notion of a pipeline. If you're familiar with Kubernetes, you know that the kind keyword describes the type of resource that you are handling, with things like pods or services.
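As a minimal sketch of what that looks like once the operator is installed — the pipeline and task names here are purely illustrative, not from the talk:

```yaml
# A hypothetical Tekton Pipeline: the cluster now understands
# kind: Pipeline just as it understands kind: Pod or kind: Service.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy        # illustrative name
spec:
  tasks:
    - name: build
      taskRef:
        name: maven-build       # a Task defined elsewhere
    - name: deploy
      taskRef:
        name: deploy-app        # illustrative name
      runAfter:
        - build                 # runs in sequence; omit runAfter to run in parallel
```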
B: These are native Kubernetes types, but now, when you install OpenShift Pipelines, Kubernetes starts to understand: OK, I have a new resource type called Pipeline, and I know exactly what to do with it. Basically, when you run your pipeline, it's going to run pods and containers that execute everything defined within that pipeline, to do whatever CI or CD tasks you need.

The next essential concept is this notion of a task. A task basically does something specific. I can use a container image that I have that has all the binaries that I need — for example, something like Maven, if I want to do a Maven build of my application. Now, if I want to build a container image from a Dockerfile, for instance, then I can use the Buildah image, and I can provide some parameters to do whatever I want: I can do a build, I can do a push to an external registry, and such things. That's the beauty of it: everything is defined in container images, and as long as you can reference a container image, you can reuse it in your pipeline.

The final component, or notion, is this notion of a step. Inside a task, you can have different steps that are performed. For instance, I'm going to first do a Maven install of my dependencies, then I can do a Maven package, or whatever I want to do within my specific task. And all of the steps happen within the same pod, so they can share resources.
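The Maven example can be sketched as a Tekton Task — the image tag, step names, and workspace name here are illustrative assumptions:

```yaml
# A Task whose steps all run as containers inside the same pod,
# so they can share resources such as a common workspace.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: maven-build             # illustrative name
spec:
  workspaces:
    - name: source              # shared storage the steps can all see
  steps:
    - name: install-deps
      image: maven:3.8-openjdk-11   # any image with the binaries you need
      workingDir: $(workspaces.source.path)
      script: mvn install
    - name: package
      image: maven:3.8-openjdk-11
      workingDir: $(workspaces.source.path)
      script: mvn package
```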
B: You can, for instance, have a step that will clone the Git code and store it somewhere in your Kubernetes environment, in a persistent volume, and then another step can reuse that storage, find the code, and build the application. Once that's done, you can run some tests, then you can package your application into a container image in a different step and push it to your container registry, and so on. So these are the main essential concepts: you have the notion of a pipeline, a task, and a step.

All right, so OpenShift Pipelines comes out of the box with a lot of what we call cluster tasks — reference tasks that can be used by anyone that has access to the capability. But if you wanted to extend your pipeline environment with some new tasks — for instance, you have a very specific solution that you want to integrate with — then as long as you can create a container image for it, with, for example, a CLI embedded in it, or a script that does whatever you need to do, that's something that you can share and reuse at the enterprise level. And to make that even simpler, there's a marketplace that has been created. It's called the Tekton Hub, and basically it's a marketplace where people create those container images and tasks that can perform specific items. For instance, Christian will be speaking about OpenShift GitOps, which relies on a solution called Argo CD, and here we have an example of a task that can interact with Argo CD directly from an OpenShift pipeline.
B
Yep
sorry,
so
we
also
have
created
a
very
nice
plugin
for
vs
code.
So
basically
you
can
visualize
and
run
or
troubleshoot
your
pipeline
executions
directly
from
your
vs
code
environment.
So
you
don't
have
to
switch
back
and
forth
from
your
developer,
your
development,
environment
and
and
open
shift
if
you
wanted
to
to
remain
focused
on
your
development
tasks.
B
So
what
it
does
is
it
gives
you
the
ability
to
trigger
your
pipelines
from
either
the
command
line,
there's
a
tick,
tk
and
cli
that
you
can
use
or
there's
this
plugin
that
you
can
install
on
vs
code
and
I
believe,
on
color
coded
workspaces
too,
where
you
can
then
visualize
your
pipelines
and
interact
with
them
graphically
directly
from
your
development
environment.
B
So
that's
about
you
know
the
ga
feature
that
we
brought
to
openshift
in
early
may.
I
believe
it
was
on
may
3rd,
so
that
was
the
first
ga
release
of
the
openshift
pipelines
operator.
But
now
there
are
also
some
new
improvements
coming
up
with
openshift4h,
and
this
is
a
feature
that
I'm
really
excited
about.
It's
called
open,
it's
called
pipelines
as
code
and,
basically
what
it
does
is.
B
It
allows
you
to
define
your
pipeline
as
code
inside
your
reposit,
your
source
code
repository
and
once
you
have
an
event
that
gets
triggered
on
the
repository,
for
instance,
a
pool
request
or
something
it's
going
to
look
for
the
pipeline
definition
within
the
the
repository
of
the
application,
and
then
it
will
trigger
the
pipeline
automatically
on
the
platform.
So
that's
a
very
nice
feature
and
I
have
a
couple
of
slides
or
one
slide
that
goes
into
more
details
about
it.
B
So
that's
a
very
cool
feature:
it's
in
depth
preview
for
the
moment,
but
it's
going
to
be
very
interesting
and
there's
a
lot
of,
I
would
say
interest
to
to
make
it
go
to
tech
preview
and
then
ga
in
the
next
releases.
B
So
it
takes
some
things
like
github
actions
where
you
basically
define
whatever
logic
you
want
within
your
same
repository
of
your
application
and
then
it
gets
triggered
automatically
when
specific
events
are
triggered
so
yeah.
This
slide
explains
into
more
details
how
it's
structured
and
how
it
works.
So,
basically
you
will
have
your
application
git
repo,
where
you
create
a
dot
text
and
folder,
where
you
define
the
pipeline
that
you
want
to
be
executed
and
you
will
define
the
pipeline.
What
type
of
events
are
going
to
trigger
the
pipeline?
B: So basically, I have this source repo, and in there there's a .tekton folder, and basically what it says is: whenever I have a pull request, here's the pipeline definition that needs to run. So it says: whenever there's a pull request on the main branch, run this specific pipeline.
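As a sketch of what such a definition might look like — the annotation names follow the upstream Pipelines-as-Code project, and the file name and pipeline reference are illustrative assumptions:

```yaml
# .tekton/pull-request.yaml (illustrative path inside the app repo)
# Annotations tell Pipelines-as-Code which events trigger this run.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: app-on-pull-request
  annotations:
    pipelinesascode.tekton.dev/on-event: "[pull_request]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
spec:
  pipelineRef:
    name: build-and-test        # illustrative pipeline, defined alongside
```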
So now, as a developer, I don't have to set up anything in my OpenShift namespace, because once I trigger that specific event — it can be a tag push, it can be a release, it can be a pull request, as mentioned here — this is going to automatically create everything that we need on OpenShift, run the pipeline, and report the results back in the GitHub checks. So there's a bi-directional interaction, where the stuff runs on OpenShift, but at the same time it updates the GitHub status, so we can see what steps are being run.
B: It comes up with the logs too, directly in GitHub, and so on. So it's a very neat upcoming feature, and there will actually be a talk about it in one of the OpenShift coffee break sessions that we run in EMEA every other Wednesday.
B: I will keep you posted whenever we have new material on that. And before handing the presentation over to Christian, I wanted to invite you to check out some learning material that we have. You can go to learn.openshift.com, and you will find a tutorial on using the OpenShift Pipelines feature that we just spoke about — a hands-on scenario where you can play with it.

So, thanks again. I hope this gave you an overview of all of the nice capabilities that we are adding to OpenShift in terms of Kubernetes-native CI/CD, and one of the key aspects of it is how we can have that continuous feedback.
B
When
we
deploy
applications
into
production,
we
don't
want
to
see
it
as
something
that
is
done,
one
shot
and
then
your
application
lives
on
its
own
there's
a
more
sophisticated
way
of
getting
information
from
what's
actually
running
in
production
and
making
sure
that
the
lights
stay
green
and
that's
what
christian
will
be
speaking
about
now
in
the
openshift
get
up
section.
So
thank
you
very
much
and
now
it's
off
to
you
christian.
C: Thanks, Jaafar. Let me share my screen — let me know when you can see it; just give me a holler here.

Sounds good? All right, cool, thank you, Jaafar. So yeah, again, my name is Christian Hernandez, technical marketing, one of Jaafar's counterparts here, and I'm going to be talking about OpenShift GitOps: what we've been doing, what's new, and then what's up and coming. I'm going to be going pretty fast to try to power through some of these things so we can get to your questions, so again, feel free to drop questions in the chat and we'll get to them at the end.
C: Before I actually talk about OpenShift GitOps, I want to talk about GitOps itself, because I want to abstract away what GitOps is as a practice first, before we dive into the tool and what it provides for you. So what is GitOps? GitOps is a purposely bad term, right? It's supposed to be kind of an earworm term; it's meant to catch your attention. But there is an actual practice behind it. From a 10,000-foot view — and in the next few slides we'll dive a little deeper — you're using Git as the source of truth, meaning that not only are you storing your application data there.
C
Not
only
are
your
you
storing
your
infrastructure
right,
kubernetes
manifests
in
there,
but
you're
just
infrastructure
in
general
right,
the
entire
platform
is
described
in
a
git
repo
and
you
treat
everything
as
code,
meaning.
That
brings
me
to
like
this
last
point
here.
Is
that
you're
doing
everything
via
get
workflows
right
so
currently
developers
right,
they're,
they're
doing
things
like
I
want
to
make
a
change
to
some
code.
C
Let
me
do
a
pr
well,
now,
operations
right,
the
operation,
folks,
their
experience
is
going
to
be
the
exact
same
way
right,
I
want
to
add
a
new
kubernetes
node.
That's
a
pr!
I
want
to
scale
the
the
infrastructure,
that's
a
pr!
I
want
to
build
a
new
cluster,
it's
a
pr
right
and
so
you're
doing
everything
via
a
get
workflows
and
something
that
is
understood
generically
in
the
industry,
so
but
diving
a
little
deeper
right.
C: I want to talk about the GitOps principles themselves — again, not talking about any tool or any specific implementation of GitOps, just the principles and what GitOps is. Someone asked me: is GitOps just a buzzword, or is it an actual thing? The answer to that question is yes to both — the two things can be true at the same time. It is purposely a buzzword-y word, but it's also an actual thing, and I'm a member of the CNCF OpenGitOps SIG — it's kind of like a subcommittee, or a sub-SIG.
C: I guess it sits in the Application Delivery SIG in the CNCF, as a vendor-neutral group comprised of people from Red Hat — myself — Weaveworks, Amazon, Microsoft, Codefresh. We got together because we wanted driving principles for what GitOps is. Putting this really simply: one, a system's desired state must be declarative. We're talking about a declarative state, which fits really nicely into the cloud-native architecture of Kubernetes. So, in order to be doing GitOps, you have to have a system whose desired state is declarative. This goes to the idea of infrastructure as code — so if you're doing infrastructure as code, congratulations, you've already hit that first pillar, the first principle here.

Number two: your definitions — everything that you're storing — have to be immutable, also known as "keep it in SCM", aka Git.
C: This is where the "Git" in GitOps comes from, but it doesn't have to be Git — it can be any SCM. The idea is that you want to be able to keep track of things using Git workflows, and those versions have to be immutable, which is why Git fits really nicely here.

Number three — and I believe this is key, the most important principle, and what differentiates it from things like an event-driven architecture: you can have infrastructure as code, but with things that are event-driven. What separates GitOps from a more traditional DevOps practice is that reconciliation of the state must be continuous. You have a software agent that's sitting on your cluster.
C: That agent is always running, always reconciling the cluster: you're taking the desired state and the running state — which I'll go over in a second — and making sure that's reconciled, and making sure that's continuous.

And then number four: declarative operations. This is what we like to call "yes, we really do mean it", meaning that operations should be done by mutating that declaration. It's essentially a PR, which is what I explained to you before: operations must be done via mutation of that declaration, or aka "operations as a pull request". So one, two, and four are, I want to say, kind of self-explanatory, but number three...
C: Since I said that was important, I want to spend a little bit more time on it before I dive into OpenShift GitOps. You have your desired state in Git — that makes sense — and you have what you're currently running, and the differentiating factor of GitOps is the CD part: this continuous delivery, continuous reconciliation, continuous check. Take how Kubernetes works, just primitive Kubernetes: you have your declared state, which is your deployment manifest, and your running state, which is how many pods you have. Let's just say you have a deployment that says "I want two pods running", but you have one pod running. The ReplicaSet controller sees the difference and will reconcile it.
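The two-pod example in manifest form — a minimal sketch with an illustrative app name and image:

```yaml
# Declared state: the ReplicaSet controller continuously reconciles
# the number of running pods toward spec.replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app             # illustrative name
spec:
  replicas: 2                   # "I want two pods running"
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: web
          image: quay.io/example/app:latest   # illustrative image
```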
C: It makes sure that your desired state matches the current running state, and when it runs again and you have two replicas, then it does nothing — it says, OK, your desired state and your current state match, so I don't have to reconcile. Now take that up a level — not only to your infrastructure, but to your application delivery. It's that same approach: we're taking that idea from Kubernetes, but using it to operate not only the deployment, but your entire system, all on that same principle.

So, some of the things that you get with GitOps: it's a standard workflow, meaning that everyone can understand it. I come from an operations background — I'm an ops guy — and even for me, Git is something that I've used.
C: Git is something that even operations folks use nowadays, and it's just a standard workflow, so it's familiar to everyone. I like to glom points number two and three together: the visibility and audit, and the enhanced security. You get enhanced security from GitOps because you have that visibility and audit — everyone sees what's going on in your system, everyone sees the changes: who did it, who approved it. If you're using protected branches, you have that process in place; the security folks can take a look at it. It's out there for everyone to use and for everyone to see, so you get to catch some of those things ahead of time. And if you're deploying and taking care of many clusters, you have that consistency across all your clusters. If you're on multiple clusters and you want to make sure they're all in sync — say you have a dev environment with maybe five clusters, and you want to make sure that's always in sync...
C: ...you can use these practices to make sure they're all in sync. And here's kind of a high-level workflow — this will seem familiar; most people that are doing DevOps are doing this thing here — where you have some sort of source code repository, you end up with an image in an image registry, and you may have the CI system pushing it there. The CI system may make a pull request to the configuration repository, and some CD system — whether it's event-driven, or the same CI system, either via push or pull — takes that new image from the image registry and gets it deployed on the cluster. And again, one last time, hammering home the differentiating aspect of it...
C: What makes GitOps GitOps, and what makes it different from a traditional event-driven DevOps workflow, is that this software agent is always just running there, always continuously delivering. Someone makes a PR, and as soon as someone merges that PR, your declared state changes, and so the software agent detects that drift — it goes, "Oh hey, there's a difference there" — and it'll take action. It'll always keep your cluster completely in sync.
C: So now that we have that baseline of what GitOps is, and kind of what it gets you, let's talk about OpenShift GitOps specifically. As Jaafar explained a little bit, OpenShift GitOps is really the downstream version of what's upstream. It's powered by Argo CD — OpenShift GitOps is based on Argo CD — and what you get with that is, just like everything in OpenShift v4...
C: ...it's operator-driven, so you subscribe, and you get to enjoy everything that comes with operators: you get the automated upgrades for the operator, multi-cluster config management, and you get this opinionated GitOps bootstrapping with this downstream version.
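Installing it follows the usual OLM subscription pattern. A minimal sketch — the channel name here is an assumption and may differ between releases, so check the current catalog:

```yaml
# Subscribe to the OpenShift GitOps operator via OLM.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: stable               # assumed channel name
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```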
C: It's a productized version of Argo CD. Argo CD is, as the name implies, a continuous deployment, continuous delivery tool, and it's really built on the fact that it always keeps your cluster in sync with what your configuration is in Git. It is that tool, that CD part I described earlier: the agent that sits on your cluster to make sure that everything is always constantly in sync. That's Argo, and that's what Argo does.
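In Argo CD, that sync relationship is expressed as an Application resource. A minimal sketch, with an illustrative repo URL, path, and namespace:

```yaml
# An Argo CD Application: keep what's in this Git path continuously
# in sync with this cluster and namespace.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/config-repo.git   # illustrative
    targetRevision: main        # track a branch, tag, or commit
    path: environments/dev      # illustrative path within the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: example-app
  syncPolicy:
    automated:
      selfHeal: true            # reconcile drift automatically
```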
C: You can track different branches, different paths — you have granular control over deployment. It works not only for your stateless applications, but also your stateful applications. I guess most people are running stateful applications — stateful applications aren't going away — and Argo CD gives you the power to control sync order for more complex rollouts than just something stateless. And since you're using Git, you get to leverage all those Git workflows: if you need to roll back, you can roll back just by changing the Git commit.
C: You can roll forward, that sort of thing. Argo CD has built-in templating. I'm a fan of DRY — don't repeat yourself — and since you're syncing YAML, you don't want to copy that same YAML over and over and over again. So there's templating support: Kustomize and Helm are the two big ones, and another one is Jsonnet, but I think most people use either Kustomize or Helm.
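A sketch of the Kustomize side of that — the directory layout and patch target are illustrative assumptions:

```yaml
# kustomization.yaml: reuse one set of base manifests per environment
# instead of copying the same YAML into every cluster's repo path.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                  # shared manifests (illustrative layout)
patches:
  - target:
      kind: Deployment
      name: example-app         # illustrative name
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 2                # environment-specific override
```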
C: I'm a big fan of Kustomize — I use it all the time. And you get to visualize: Argo CD comes with a nice UI that lets you visualize how your application is spread out throughout your environment. And speaking of how you're deploying your applications: with OpenShift GitOps you've got flexible deployment strategies to fit whatever needs you have. You have a sort of central, push-style, hub-and-spoke design, where you have Argo CD sitting on a cluster somewhere, managing multiple Git repos and deploying those out to multiple clusters, whether OpenShift or Kubernetes itself. You have the cluster scope, which is probably what most people use — it's the one I use a lot — where, essentially, you have an Argo CD installed per cluster.
C
If you have, you know, five different clusters, you'll have five Argo CDs, and essentially the scope of the Argo CD deployment is that cluster itself, so it takes care of that entire cluster. With OpenShift GitOps you also have what we call the application scope, which is the multi-tenant deployment method of Argo CD, where you have, you know, team A and team B managing an application stack that may spread across multiple namespaces on a single cluster, or you can use Argo CD for that deployment.
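Whatever the scope, the unit Argo CD works with is an `Application` CR that points a Git path at a destination cluster and namespace. A minimal sketch (the repo URL, names, and namespaces are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: team-a-app
  namespace: openshift-gitops        # where Argo CD itself runs
spec:
  project: default
  source:
    repoURL: https://github.com/example/team-a-config   # placeholder repo
    targetRevision: main
    path: overlays/dev
  destination:
    server: https://kubernetes.default.svc   # the local cluster
    namespace: team-a-dev
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual drift on the cluster
```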
C
So this is that CI/CD, that last-mile CD aspect of Argo CD, where you have Argo, you know, deploying to a few namespaces and controlling a few namespaces, and team A and team B may or may not know about each other, right? So, you know, they may not necessarily care about each other. This is the multi-tenant deployment as well. So what we have here is the opinionated bootstrapping for Argo CD, right. So this is dev preview.
C
What we call the GitOps Application Manager, or kam, right, kam. So the idea here is that it's supposed to take you from zero to GitOps, right? So if you can imagine: I'm a developer, I'm starting a new project, but, you know, it's a greenfield, and I want to be able to do things in a GitOps-friendly way. Where do I start, right?
C
What are the best practices? And the idea is that this bootstrapping gives you those best practices out of the box. So, you know, it's an opinionated way, right.
C
You do a kam bootstrap and it'll build out the directory structure and the configuration. You provide information about your deployment, and it'll build out all that directory structure and configuration for you. So it'll configure webhooks, it'll configure Argo CD, it'll use Kustomize to templatize everything for you. You can integrate with secret managers, right: whether you're using Sealed Secrets, whether you're using HashiCorp Vault or External Secrets, right, you can integrate with External Secrets as well.
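From memory of the dev-preview kam CLI, a bootstrap run looks roughly like the following. Flag names may differ between kam versions, and every URL and token here is a placeholder:

```shell
# Bootstrap a GitOps repo layout from an existing app repo (sketch).
kam bootstrap \
  --service-repo-url https://github.com/example/my-app \
  --gitops-repo-url  https://github.com/example/my-app-gitops \
  --image-repo quay.io/example/my-app \
  --git-host-access-token "$GIT_TOKEN" \
  --output my-app-gitops \
  --push-to-git=true
```

This is what generates the environments/apps directory tree, the Tekton pipeline, the webhook, and the Argo CD configuration described above.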
C
And any time you want to progress your application, right, you can do a kam environment bootstrap, right: add a staging environment, add production. And then, you know, coming up next is that you'll get this view in the developer perspective in the UI, what we call the environment view, where you can actually see your environments, you can see the dev stage, right.
C
You can see your environments spread out, fully integrated into the UI. So this is kind of what you get with kam bootstrap: you do a bootstrap and it gives you this whole pipeline from beginning to end, right, end to end, where you get that Tekton pipeline configured for you, right. As Jaafar said, you don't have to be a YAML expert, right?
C
You'll get a known-good template and you'll get a known-good Argo CD directory structure, and you get all that set up for you. So again, this is dev preview. If you look up GitOps Application Manager on GitHub, you can see all the changes we've been making. Like the famous saying goes, you know, feedback is welcome, PRs are welcome. It's something that's ever evolving and currently in dev preview.
C
So with that being said, I do want to talk about what's new in 4.8, right. We released OpenShift GitOps at the tail end of 4.6... 4.7, I think it's 4.7, sorry, 4.7. We're moving to 1.2, right: so we did 1.1 in 4.7, we're going to 1.2 in 4.8. And some of the things that we're adding, well, what's going to come out of the box?
C
It's the integration with Red Hat SSO, Keycloak. For those using the upstream, essentially right now all of that is a manual process: you have to spin up Argo CD, you have to spin up the SSO, you have to actually do the connection yourself and make sure everything's all set up correctly. Now that's out of the box; the operator does that for you. Again, OpenShift GitOps, as before, is operator-based, so we do that out of the box via the operator.
C
Now in 1.2, the Argo CD privilege configuration has been simplified, where you can actually kind of hand the keys to your namespace over, right. So if Jaafar has Argo CD set up for his CD and I don't really want to manage the CD part of it, I just want, you know, someone to do it for me, I can actually annotate my namespace to let Jaafar's Argo CD manage my namespace.
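In the released 1.2 feature this "hand over the keys" step is, to my recollection, done with a label rather than an annotation: you label your namespace with the namespace of the Argo CD instance that should manage it. A sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-team-ns
  labels:
    # Let the Argo CD instance running in "jaafar-argocd" manage this namespace
    argocd.argoproj.io/managed-by: jaafar-argocd
```

Equivalently, on an existing namespace: `oc label namespace my-team-ns argocd.argoproj.io/managed-by=jaafar-argocd`.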
C
It simplifies the privilege configuration for Argo CD. And 1.2 again enhances the environment view in the dev perspective on the OpenShift UI, so you can actually see your application as an application, versus namespace-driven. So if you have an application that spans many namespaces, you can actually manage that application without having to switch contexts that way.
C
One of the big things that's coming in 1.2, one of my favorite things, is ACM and Argo CD integration: there's going to be tighter integration between ACM and Argo CD. So, for instance, ACM will now recognize that you're using Argo CD and will pull that topology view into its UI, and it'll also have native support for things like ApplicationSets, where you can actually define ApplicationSets at the ACM level and have that bleed down into your managed clusters there.
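An ApplicationSet is a template plus a generator that stamps out one Argo CD Application per target. A minimal list-generator sketch (cluster URLs and the repo are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: openshift-gitops
spec:
  generators:
    - list:
        elements:
          - cluster: dev
            url: https://api.dev.example.com:6443
          - cluster: prod
            url: https://api.prod.example.com:6443
  template:
    metadata:
      name: 'guestbook-{{cluster}}'   # one Application per list element
    spec:
      project: default
      source:
        repoURL: https://github.com/example/gitops-config  # placeholder
        targetRevision: main
        path: 'overlays/{{cluster}}'
      destination:
        server: '{{url}}'
        namespace: guestbook
```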
C
So it's going to be a really, really tight integration. So, some of the things that are up and coming, right. Some of the things that we've done: you know, we went GA in the second half of this year, right, and again, I just talked about what's coming in 1.2. In 1.3, it gets even better, right.
C
At the end of this year, beginning of next year, we're going to have the namespaced Argo CD, so remember that deployment mechanism, the namespace version of Argo CD: you can use OpenShift authentication with that as well. We're going to have OpenShift GitOps on Dedicated, right, so we're making updates to Argo CD so that you can run it on OpenShift Dedicated, and we'll be able to build on that.
C
Also, one of the cool things is that Helm charts will be built into the application delivery manager, kam, right, where you can actually specify Helm charts. So kam now kind of becomes a centralized tool for managing your CI/CD process. So, I do want to leave some time for questions, right, and we're approaching the top of the hour. So I do want to say thank you.
C
If you want to learn more about GitOps, there is learn.openshift.com, what Jaafar said, right: slash gitops. You'll learn about Tekton, you'll learn about Argo CD, you'll see how all those pieces fit together. Catch my, I guess, bi-weekly show, GitOps Guide to the Galaxy: if you missed past shows, or if you want to see what we've been talking about with GitOps, it's red.ht/gitops. Go there, you'll see the playlist; there's tons of content for you to watch over the weekend.
A
B
Oh yeah, so I said, yeah, yeah, so I pointed, same as you, at the learning resources. But I think the question was: if you have some resources, actual pods or services, that are stuck and you want to troubleshoot what's going on. So, Christian, did you see the question, or... I did, I did not.
A
C
Yeah, well, this is, yeah, that's a troubleshooting question. So I'll be the first to admit, and we are working on this, that the documentation is kind of lacking, both upstream and downstream. We did hire a lot of people, and they're ramping up to do documentation, both upstream and downstream, so that's coming. For some of the things that Argo CD provides, you really have to go down deep into the weeds, meaning that you have to look at the controllers.
C
There are a lot of controllers in Argo CD: there's a controller for the Git repo, there's a separate controller that actually does the sync, and there's actually a separate controller that does ApplicationSets. So you have to kind of try to figure out where the problem is, first of all. If it's Argo CD specific, it'll be in one of those operator pods, and you'll get a log; the logs are one part of it.
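Concretely, "going into the weeds" usually means reading the individual controller logs. The pod and deployment names below are from my recollection of a default OpenShift GitOps install and may differ in your cluster:

```shell
oc get pods -n openshift-gitops
# repo-server: cloning and rendering (Kustomize/Helm) problems
oc logs deploy/openshift-gitops-repo-server -n openshift-gitops
# application-controller: sync/apply problems against the cluster
oc logs statefulset/openshift-gitops-application-controller -n openshift-gitops
# applicationset-controller: problems generating Applications
oc logs deploy/openshift-gitops-applicationset-controller -n openshift-gitops
```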
A
C
That's one aspect of it. And then the last thing is that Argo CD sometimes doesn't understand your CRDs. So if you're doing things with CRDs, you have to just update the configuration to make Argo CD a little smarter about the CRD. So, without knowing the specific issue, those are some of the things that I'd advise, I think.
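"Making Argo CD smarter about a CRD" is done with a resource health customization: a small Lua script that tells Argo CD how to read the CR's status. Upstream this lives in the `argocd-cm` ConfigMap (with the operator it is configured on the ArgoCD CR instead); the CRD and its status fields below are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: openshift-gitops
data:
  resource.customizations: |
    example.com/Widget:            # hypothetical group/kind
      health.lua: |
        hs = {}
        hs.status = "Progressing"
        hs.message = "Waiting for Widget"
        if obj.status ~= nil and obj.status.ready == true then
          hs.status = "Healthy"
          hs.message = "Widget is ready"
        end
        return hs
```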
B
Yes, so maybe to add just a few comments to that. As Christian said, if the information is not showing up in the Argo UI itself, you'll probably have to look at the infra level, like the pods, at what's happening within the Argo application itself. And one tip would be to try to use the OpenShift logging UI, where you can aggregate whatever comes from argo-*, so you don't have to switch back and forth between different pods to try and understand things.
B
If something has happened in the ten pods that are, you know, contributing to Argo CD working in OpenShift, you could try to look in the aggregated logs view and build some specific dashboards to capture those types of events. And you can also... so, what was it...
B
I was going to suggest something else, but I forgot what it was, so yeah. This is just one way of doing it: trying to aggregate logs. Oh yeah, and the second one, as Christian said: it could be related to the events, so something misconfigured with the service account or something like that. So check the Kubernetes or OpenShift events stream and see if something is prohibiting the tasks from happening, or whatever.
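Checking the event stream for RBAC or service-account problems can be as simple as the following (namespace names are placeholders):

```shell
# Warnings in the app namespace (e.g. forbidden creates by Argo CD's SA)
oc get events -n my-team-ns --field-selector type=Warning --sort-by=.lastTimestamp
# And in the GitOps namespace itself
oc get events -n openshift-gitops --sort-by=.lastTimestamp
```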
A
We'll take that back to product management and say, hey, we shouldn't be asking people to build their own dashboards to troubleshoot, and what are we doing right now? And then, as part of the GitOps working group in the CNCF that you discussed earlier, Christian: is this an area that the working group is looking at, making it easier to troubleshoot? Because the whole point, right, of GitOps is making life easier.
C
Yeah, yeah, and it's part of the best practices, right, or the implementation, like almost a reference-architecture aspect of it, which is coming down later. We still need to figure out the principles and firm those up, but yeah, definitely, that's something we're working on upstream: like, hey, these are the best practices, these are some of the things to look out for. So yeah.
A
B
Yeah, so I think, and there is nothing better than learning by experimenting with things, so I would say, as a next call to action: please go and try those learning resources that we have set up for you, so you can get familiar with it. And if you wanted to... so, I don't know if there's a way to provide feedback other than giving out our, I would say, emails. Oh yeah, we can provide the Slack channels that you have there for the Commons briefing.
B
If you wanted to provide some feedback, then we can have a look at it there. So yeah, I would say that's the call to action. And check Christian's regular GitOps show, and I'm still setting up a new show that we just started, like two sessions ago, on Tekton. So you can also subscribe to it and learn new things about Tekton and OpenShift Pipelines.
C
Yeah, so just kind of echoing what Jaafar said: start off slow, because as you progress, you'll find, especially with GitOps, for me, my opinion changes as I learn more and more and more, and so it's an evolving thing. So definitely try it out yourself, along with some of the things that we put out for you guys.
A
Awesome, thank you. Thank you, everybody, for joining us, and next Tuesday is another deep dive into what's new in OpenShift 4.8, so please join us again, same time. Thank you both, and Chris will see us out.