From YouTube: Sponsor Demo: Knative, Kafka, Tekton, KubeVirt via OpenShift 4.5 w/ Red Hat Advanced Cluster Mngmnt
Description
An all-demo, live, non-stop, whirlwind tour of OpenShift 4.5 based on Kubernetes 1.18. Touching on multi-cloud management, Knative (OpenShift Serverless), Kafka (Red Hat AMQ Streams), Tekton (OpenShift Pipelines) and KubeVirt (OpenShift Virtualization) where VMs and Pods live side by side.
A: Hello, my name is Burr Sutter from Red Hat, and we've got a lot of fun things to show you in a very short period of time, so we're going to dive right in and get right to it. You can see there are a lot of different ways to get OpenShift 4.5, based on Kubernetes 1.18. I could start here, but I want to show you how I installed multiple clusters around the globe.
A: I came into this thing called Advanced Cluster Management, and you're going to hear more about this in a second, but it was super simple for me to set up a cluster here in Dublin, based on Amazon, or in Sydney, based on Google, and of course even the cluster that I'm looking at right now is sitting in Texas. So I have my three clusters around the globe. I can hit this Add Cluster button and say Create Cluster, and this is exactly what I did. I can call this the burr cluster if I want, and then pick any of my public cloud providers. For instance, I can pick Google Cloud here, pick what version of OpenShift I want to lay down there, and pick my connection. It knows what DNS name to use.
A: Now I want to make this point very clear: the ability to set up a new cluster is super easy, and that's across all three public cloud providers, including bare metal, on-premise, or even things like vSphere and other solutions. But look here, I have this thing running on Amazon, as I mentioned; there's my Amazon user interface. You might love it, you might hate it, but there it is. I have the Google one running here also, and that's running down in Sydney, in Australia, and I have Azure running over in Texas.
B: Right, thank you very much, Burr. Let's take a look at what all we can do with this really powerful capability that Red Hat has recently made available. So in your environment we saw a couple of clusters created. I've got an ACM cluster, a hub in particular, that has several different OpenShift clusters from all three of the major cloud vendors, as well as one running in a data center environment.
B: So each of these clusters runs an agent that allows us to see what's going on in that cluster. It allows me to drive configuration changes and deliver applications. We'll see all of those examples here in just a moment. So, for example, if I wanted to trigger an upgrade, I can see what versions are available to every cluster that I have under management.
B: If I want to simplify upgrading several clusters at once, and upgrades are available, I can trigger an upgrade in a batch, and all of them will immediately go off and start that upgrade process. So it simplifies the job of an administrator who's trying to manage many different clusters, understand the inventory, and drive changes in behavior. Now, if I have clusters attached and under management, what are the things I might like to do?
B: In particular, we often find users want to understand what's going on in their clusters, so our search capability actually indexes everything, any API or CRD, that's available in that cluster. You can now search against RHACM's database and understand what's going on, even if I have an app that is spread across many different clusters.
B: Now I could do something like find everything that is part of a particular namespace. So let's look at a namespace called wordpress-app, and I can see that it has lots of different parts: pods, replicas, secrets, deployments, etc. I might want to drill into what clusters it's currently running on and see whether it's healthy or not, so search becomes a really powerful way to understand the state of those clusters. Now, understanding what's going on is really only part of the problem.
B: We use a very simple concept of a placement rule that matches against the labels that I've assigned to the cluster. It will record the decision and tell me whether or not I'm compliant, and this particular example will push an OAuth configuration directly into any cluster that needs to have this particular policy applied. Let's take a quick look at one example here: I've actually got a policy for image manifest vulnerabilities, and it's already deployed against two clusters.
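Under the hood, a placement rule and its binding to a policy are just small custom resources on the hub. A minimal sketch of that pairing follows; the names, namespace, and the `environment: dev` label are illustrative placeholders, not taken from the demo:

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-image-vuln-policy   # hypothetical name
  namespace: default
spec:
  clusterSelector:
    matchExpressions:
      - key: environment               # matches labels assigned to clusters
        operator: In
        values:
          - dev
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-image-vuln-policy      # hypothetical name
  namespace: default
placementRef:
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
  name: placement-image-vuln-policy
subjects:
  - apiGroup: policy.open-cluster-management.io
    kind: Policy
    name: policy-imagemanifestvuln     # the image manifest vulnerability policy
```

Relabeling a cluster so it matches the selector is enough to make RHACM record a new placement decision and push the policy out, which is what happens to the charlie cluster in a moment.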
B: Done, and then over here on the right we're very quickly going to see it pop up; I'm going to see if I can catch it in the act. So here we'll look: I now have a new decision recorded, and then over the next few seconds we'll actually see that Container Security Operator automatically get pushed out to charlie and bring it into compliance. Now, the other aspect of managing config is really about applications, so when we think about applications, we're delivering any kind of deployment.
B: Typically these are managed in GitHub, they're managed in Helm repositories, object stores, etc., and the notion of RHACM's delivery model allows it to pull directly from those sources and syndicate the app across your environment. So here again, I'm still driving them with placement rules. I have a set of cluster selectors that define what labels clusters need to have to run part of that application, and I'm defining subscriptions that link back to some source, as in all the examples that we see here.
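The channel-plus-subscription model described above can be sketched roughly as follows; the repo URL, names, and namespace are hypothetical stand-ins for the demo's actual application:

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: sample-channel                 # hypothetical source channel
  namespace: sample-app
spec:
  type: Git
  pathname: https://github.com/example/sample-app.git   # placeholder repo
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: sample-subscription            # links the channel to a placement
  namespace: sample-app
spec:
  channel: sample-app/sample-channel
  placement:
    placementRef:
      kind: PlacementRule
      name: sample-placement           # cluster selectors live in this rule
```

The subscription pulls manifests from the Git channel and delivers them to every cluster that the referenced placement rule selects.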
B: These are actually driven from a public GitHub repo; that same application is actually available here, and it actually points to the Git repo and then does what's needed to configure that application on Kubernetes. So this is just a whirlwind tour. What we went through is how we can drive upgrades of clusters, how we can manage configuration policies across those clusters, how we can deliver applications, and how we can search. So with that, Burr, I'd like to pass it back to you.
A: That is absolutely amazing. I love what you did there with the search, I love what you did there with the policy management, and of course the application management; really cool stuff, to see all those components in that centralized location. So at this point we've got to dive in and do something else. We're going to see some Knative, some Tekton, some Kafka, and the magic of how that's been done now in OpenShift, with William Markito Oliveira.
C: Thank you, Burr. Here's what I've got for you. So I have an OpenShift cluster here with some operators already installed. I have AMQ Streams, which is based on the Strimzi project that we donated to the CNCF; that's our Kafka operator. And we have Pipelines, based on Tekton, and Serverless as well, based on Knative.
C: So now what I'm going to do is deploy a serverless container, a serverless application. I'm going to start here from a container image that I've built before. I'm selecting the image here, the application name comes up, and here with a single click I can make that container run as a serverless application; it's very straightforward. And by selecting that, I can also configure a couple of other things for that serverless application.
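That single click in the console effectively generates a Knative Service resource. A rough sketch of what such a manifest looks like; the service name, image, and the scaling cap here are placeholders, not the demo's actual values:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display                  # hypothetical app name
spec:
  template:
    metadata:
      annotations:
        # cap the number of pods Knative will scale up to for incoming events
        autoscaling.knative.dev/maxScale: "5"
    spec:
      containers:
        - image: quay.io/example/event-display:latest   # placeholder image
```

Knative scales the pods behind this service from zero up to the configured maximum based on incoming traffic, which is exactly the behavior shown later when Kafka events arrive.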
C: So, for example, I'm just limiting the number of concurrent pods running to handle those events. So while the application is coming along here, I can also repeat that same process if I want to import a project from Git. It's that same experience again: I can start from a Git URL, which is going to trigger a build, and I also have the serverless option here, the Knative Service option, to run it that way as well. And I can do that with a Dockerfile too.
C: If I have a Dockerfile here, or in a Git repo, I can also deploy that application as a serverless application as well. So our application is already running. While I'm doing that, let me also demonstrate how you can create your own Kafka cluster using the AMQ Streams operator. So I'm just going to go back here to the admin console, I'm going to transition to a different project, and select Kafka. I have one Kafka cluster running already; I'm going to start a new one.
C: Let's give this one a name: my-kafka-cluster, if I could type; my-cluster, there you go. I can configure a couple of things for the Kafka cluster as well: the number of brokers, details about security, and whatnot. I can also use YAML if I want, but I'm going to stick to the user interface for now and hit Create.
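Filling in the same form in the YAML view would produce something like the following Strimzi resource, which is what the AMQ Streams operator consumes. The API version matches the Strimzi generation shipped around OpenShift 4.5, and the sizing and ephemeral storage are assumptions for a throwaway demo cluster:

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-kafka-cluster
spec:
  kafka:
    replicas: 3                 # number of brokers, as set in the form
    listeners:
      plain: {}                 # plaintext listener on 9092
      tls: {}                   # TLS listener on 9093
    storage:
      type: ephemeral           # demo only; use persistent-claim in production
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}           # lets you manage topics as KafkaTopic CRs
    userOperator: {}
```

Once applied, the operator spins up the broker and ZooKeeper pods that appear in the resource list in the next step.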
C: Let's take a quick look at all the resources coming up. You can see that this is a real Kafka cluster; again, all the brokers, ZooKeeper, and everything else are starting up, and you see all the details about configuration and security as well. So that's pretty cool, but I'm not going to wait for that; again, we don't have enough time. Let's go back to our serverless project and our serverless application.
C: The application, as you can see, scaled down to zero, because of course no events were sent to that app. But that's going to change now, because I'm going to add an event source to that application, and here you can see the list of event sources available in the system. I'm going to use Kafka, and I'm going to select here the broker URL, and pick a topic and a consumer group.
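Wiring the broker URL, topic, and consumer group to the app corresponds to a KafkaSource resource, roughly as below. All the names and the bootstrap address are placeholders following common Strimzi service-naming conventions, not the demo's actual values:

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source                   # hypothetical name
spec:
  bootstrapServers:
    # in-cluster bootstrap service for the Kafka cluster created above
    - my-kafka-cluster-kafka-bootstrap.kafka.svc:9092
  topics:
    - my-topic                         # placeholder topic
  consumerGroup: my-group              # placeholder consumer group
  sink:
    ref:                               # deliver events to the Knative Service
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display              # the serverless app deployed earlier
```

Each Kafka message is wrapped as a CloudEvent and delivered to the sink, which is why the app scales up from zero as soon as messages arrive.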
C: So I'm going to start a new process here; there's essentially a Kafka producer running inside OpenShift, and I'll send a couple of messages, and you see that application coming up, scaling up, as I send those messages. So: test message, there you go; test message two. Let's take a quick look at the logs of our pod.
C: So you see here the text that I just sent. Let me send one more: test three, there you go. And this is a CloudEvent, so even though I sent a string, that's going to be converted to a CloudEvent here. I can also start this other application, which is essentially going to post a couple of different JSON objects to that application as well.
C: You can see that the event here matches an order, just like you would get from pretty much any e-commerce system: just an object with nested objects and whatnot. So this is all great; I have my application running. But what you can do as well with OpenShift is also create a pipeline, so that I can automate the CI and CD for my application. I'm going to start here with the Pipeline Builder, which is based on Tekton.
C: I can select again a number of tasks. So the first thing here for my pipeline is going to be a git clone. The next thing is going to be a Jib build, because that's what I'm using for my app, and the next thing here is going to be a kn create. Then I can come here and configure specifics and whatnot, but again, we don't have enough time.
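The three-task pipeline sketched in the builder (git clone, Jib build, kn deploy) corresponds roughly to a Tekton Pipeline like the one below. The task and workspace names follow Tekton catalog conventions, but they, along with the parameter wiring, are assumptions rather than the demo's actual YAML:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: deploy-serverless-app          # hypothetical name
spec:
  workspaces:
    - name: shared-workspace           # backed by the PVC chosen at run time
  params:
    - name: git-url
      type: string
  tasks:
    - name: fetch-repository
      taskRef:
        name: git-clone                # catalog task
        kind: ClusterTask
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: shared-workspace
    - name: build
      taskRef:
        name: jib-maven                # Jib build, as mentioned in the demo
        kind: ClusterTask
      runAfter:
        - fetch-repository
      workspaces:
        - name: source
          workspace: shared-workspace
    - name: deploy
      taskRef:
        name: kn                       # runs a kn service create/update
        kind: ClusterTask
      runAfter:
        - build
```

The shared workspace is what lets the build task see the sources the clone task fetched, which is the role of the PVC picked when the pipeline is started.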
C: So what I did: I already have a YAML for that whole pipeline that I built before, so I'm just going to copy and paste that here and hit Create. That's the same pipeline that we did before, but now I can start that pipeline, again with specifics for where that container image is going to land, and here I'm just picking the PVC that is going to be used as a workspace to share the resources. So now the git clone has started.
C: Essentially, once this is done, you have a serverless application deployed, and that's pretty much it. That's all I have for you today. Thanks.
A: That is totally awesome stuff, I love it. There's one more thing that I want to make sure we show, because we want to show people this concept of the virtual machine. I mentioned that we now have virtual machines as first-class citizens in OpenShift. If I come down here and click on Workloads, you'll see Virtualization right here.
A: Why might you have a virtual machine in your OpenShift environment, in your Kubernetes cluster, as a first-class citizen? Because as a developer, and I'm a developer, I want to make sure that I have access to my legacy application infrastructure as well as the new cloud-native systems that I'm working on. So I might have a simple virtual machine; in this case, let's go ahead and build one from this wizard. I'll call this my accounting application, let's say, and then I can load it up from a certain source.
A: This is, of course, the virtual machine disk image. I can pick one from a container, or maybe a URL, or an actual disk image; in this case I'll pick the container image, because I already have that in my copy-and-paste buffer. This one is based on Fedora, but you could have CentOS, you could have Windows Server, you could have a RHEL system. I'll go and pick the Fedora one, and go ahead and say that this is a t-shirt size, small, medium, or large; we'll just pick one.
A: If there's special networking configuration, special storage configuration, or other aspects of it, I could set those, but in this case I'll just say Review and Create. And if we look at our list, you'll see that this is loading in that image, and all I would have to do is hit Start Virtual Machine.
So it'll take a few seconds to start up that virtual machine in my overall cluster. So let me just go and show you the one I have running right here, called my fedora, which I already launched earlier, and the cool thing about this.
A: If I can type that correctly, you can then of course interact with systemd, and I've already started launching processes in there for httpd, FTP, and other components that I have prepped inside that application, because it does have a bunch of transactional data here. Let me show you that real quick; let me show you my transactions over here, right there. So there's a bunch of XML files for my legacy application.
A
One
last
thing
I'll
show
you
to
prove
that
all
this
is
working
I'll,
come
back
over
here
to
advanced
cluster
manager
for
kubernetes
and
let's
see
if
we
can
actually
find
those
guys,
those
virtual
machines-
and
I
can
type
virtual
machine
here
and
you
can
see
there
is
in
fact,
my
accounting
virtual
machine
and
my
fedora
running
out
there
so
again,
advanced
cluster
management
sees
all
around
all
these
clusters
and
gives
you
one
complete
experience
to
manage
all
of
it
across
the
open,
hybrid
cloud.
If
you'd
like
to
learn
more
visit
us
at
openshift.com,
you.