From YouTube: OpenShift 4 Red Hat Operators
Description
Through custom definitions, OpenShift can be enhanced with first-class resources that can be manipulated through the OpenShift API. Operators use this technique to allow users to deploy and manage applications using their domain-specific concepts at the OpenShift level. This video demonstrates the AMQ Streams operator and its use of resources to deploy and use a Kafka cluster.
The Operator Hub is a new tab in the UI and gives us access to all of the operators that can be installed into our cluster. Once they're installed, we can use them to provision custom resources, depending on what the operator provides for us. For this example, the operator will only run inside of a specific namespace, as opposed to being installed on the cluster itself, so we're going to install it into a demo namespace we've created.
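Installing a namespace-scoped operator through the Operator Hub UI boils down to a couple of Operator Lifecycle Manager objects behind the scenes. As a rough sketch (the namespace, group name, and channel here are assumptions, not taken from the video):

```yaml
# OperatorGroup scoping the operator to the "demo" namespace (names assumed)
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: demo-group
  namespace: demo
spec:
  targetNamespaces:
    - demo
---
# Subscription pulling the AMQ Streams operator from the Red Hat catalog
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams
  namespace: demo
spec:
  channel: stable          # channel name is an assumption; check the catalog
  name: amq-streams
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```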
The AMQ Streams operator is going to give us access to the data streaming platform provided by Red Hat. It's based on the Apache Kafka project, and it's going to give us access to a number of services and resources related to managing not just the creation and running of a Kafka server, but all of the extra pieces surrounding it: the configuration, the supporting services, and the resources that actually run on the Kafka server itself.
If we switch over to our demo operator project, you'll see the operator has been successfully installed, so we'll take a look inside and see the variety of resources that are available to us now. For the purposes of this demo, we're going to set up just a basic Kafka cluster, and it's going to set up all of the resources necessary to run it. In addition to standing up the services themselves, you'll see the basic OpenShift YAML that you're normally used to seeing.
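The Kafka custom resource the console pre-fills is plain YAML like any other OpenShift object. A minimal sketch (the cluster name, API version, and replica counts are assumptions; the exact schema depends on the AMQ Streams release):

```yaml
apiVersion: kafka.strimzi.io/v1beta2   # API version varies by operator release
kind: Kafka
metadata:
  name: my-cluster                     # placeholder name
  namespace: demo
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral                  # fine for a demo; use persistent storage otherwise
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}                  # lets the operator manage KafkaTopic resources
```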
One of the benefits of operators is the use of custom resources to talk in terms of the business logic being deployed, instead of having to think "I have a pod with a particular image on it." I can tell OpenShift to give me all of my installed Kafka installations and let me work in terms of the actual object itself.
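Because Kafka is now a first-class API resource, querying it works like any built-in kind. A sketch of what that looks like from the CLI ("my-cluster" is a placeholder name, not from the video):

```shell
# List all Kafka clusters the operator manages in the current project
oc get kafka

# Drill into a single cluster as an API object, not as pods and images
oc describe kafka my-cluster
```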
Now, you'll notice it's kicked off the creation of a number of supporting resource types like secrets and configuration maps. We also notice that the ZooKeeper nodes have begun to start. If we stuck around, we would wait for all three of those nodes to come up before it actually begins the creation of the Kafka brokers. Operators give us access to this level of advanced installation logic, where we can stand up one service before standing up another, or trade data back and forth as necessary. Hopping out to a shell very quickly,
we can see that the majority of these servers have begun, so we'll hop back into the UI and take a look at the resources themselves. If we jump back to the installed operators section, we're also going to create a Kafka topic inside of our newly created cluster. Again, it's going to prompt us with the traditional OpenShift YAML; we're just going to rename the topic itself to hello-world.
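Behind the console form, the topic is just another custom resource that the operator keeps in sync with the broker. A sketch (names and counts assumed; the cluster label must match the Kafka resource's name):

```yaml
apiVersion: kafka.strimzi.io/v1beta2     # API version varies by operator release
kind: KafkaTopic
metadata:
  name: hello-world
  namespace: demo
  labels:
    strimzi.io/cluster: my-cluster       # ties the topic to the Kafka cluster
spec:
  partitions: 3
  replicas: 3
```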
Once inside the cluster, we use the tools built into the image itself to take a look at all of our topics. Now, we used localhost, but obviously we could have gone to the service level itself and had the load balancer pick one of these instances. For simplicity, though, we're just going to query the local Kafka server itself and make sure that the topic that we created is there as expected.
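The check described above can be sketched from the CLI as well; the pod name and script path here are assumptions based on how AMQ Streams typically names broker pods, not commands shown in the video:

```shell
# Open a shell in the first broker pod ("my-cluster" is a placeholder)
# and use the Kafka tools bundled in the image to list topics
oc exec -it my-cluster-kafka-0 -- \
  bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
# The hello-world topic should appear in the listing
```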