From YouTube: OpenShift 4.6 and Beyond: New Release Deep Dives and Road Ahead, OpenShift Commons Gathering KubeCon NA
Description
OpenShift 4.6 and Beyond: New Release Deep Dives and Road Ahead
Mike Barrett, Clayton Coleman and Derek Carr (Red Hat)
with Annette Clewett, Paul Morie, Rhys Oxenham (Red Hat)
OpenShift Commons Gathering KubeCon NA
November 17th, 2020
In this session, we will examine the trends we are seeing in the CNCF and discuss how they are affecting OpenShift in 2020 and 2021. We will include three forward-looking product demonstrations during the session to help illustrate where the product is moving in these areas.
OpenShift 4.6 Update
A: Welcome to the session, everyone. It's become a bit of a tradition for me to sit down with Derek Carr and Clayton Coleman around Thanksgiving time, at the KubeCon North America Commons, and really dig into what the year ahead looks like: why we're doing the things we're doing, and how it's connected to you. And with that, gentlemen, introduce yourselves.
B: Thanks, Mike. It's always a pleasure to get together with you, in person or virtually, here at KubeCon in the Commons. My name is Derek Carr, I'm one of the architects on the OpenShift engineering team, and I've had the pleasure of talking about Kubernetes and OpenShift with this audience for many years now, and I'm excited to be here again.
C: It's also great to be here, Mike and Derek. I love having this conversation with y'all, and it's just a great time of year to gather together virtually and talk about our favorite subject, which is OpenShift. As everyone knows, we can talk about that forever.

C: I've worked on OpenShift for almost eight years now, and every year we come up with something new because the industry is changing. And so, as both OpenShift architect and hybrid cloud architect at Red Hat, I'm totally committed to seeing this awesome experiment and this great ecosystem continue, and I love being here to talk about it.
A: All right, thanks, gentlemen, and it's not just the two of you. We have Rhys Oxenham, Annette Clewett and Paul Morie popping in and out to show some product demonstrations and concepts of what we're going to be discussing. So with that, let's dig in.

A: I think it's best if we start our discussion around all the innovation that's happening around bringing Kubernetes out to the edge and, at the same time, closer to the hardware or infrastructure, whatever word you want to use. I think it's probably fair to say that you two have spent the bulk of 2018 and 2019, and truthfully even some of 2020, moving Kubernetes into this self-managed control plane, and a lot of people don't understand what that necessarily means. Maybe they're just using Kubernetes, or OpenShift 4 and Kubernetes, and they're not necessarily picking up on the complexity of the use case that we solved for them with that 2018-2019-2020 investment. So, Derek, why don't you talk about that?
B: Yeah, sure, thanks, Mike. It's always good to come back to Commons and reevaluate the progress we've made, and you allude to the big pivot we made in OpenShift 4 around providing a self-managed and self-updating Kubernetes platform. When we met and talked last year, we always talked about how that first cluster is the most important cluster to get into a customer's environment.

B: But once that cluster is in your environment, you want to ensure that all the software used to support and run that cluster upgrades together and continues to work as a full platform. And so one of the unique capabilities we have at Red Hat is being able to provide a platform from the hardware out. So within OpenShift 4 today, you install your cluster and you get an immutable version of RHEL,

B: which we call RHEL CoreOS, and it provides a platform not just for the cluster infrastructure services themselves but also for your end-user workload applications. And with OpenShift 4 you can install on multiple cloud platforms or on-premise in your environment, both virtualized and on bare metal. We like to talk about how just taking Kubernetes isn't enough to build a viable container platform: you have to surround it with supporting infrastructure services, and so that might be your ingress,
B: your DNS, your monitoring stack, your logging stack: a whole host of applications are needed to support that. And so, when you deploy OpenShift 4 today, we have an opinionated set of core services we deliver with every cluster, and we test them together as a stack in what we call a release. And so across our 4.1, 4.2, 4.3, 4.4 and 4.5 releases, and hopefully now, by the time we're meeting here, our 4.6 release, we've been able to install that stack into our users' environments and upgrade the platform as a whole: not just the Kubernetes assets, but the underlying operating system and all those supporting services.
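For readers following along, a rough sketch of what that whole-platform update looks like from the CLI; the target version simply comes from whichever update channel the cluster is subscribed to:

    oc get clusterversion          # current version and update progress
    oc adm upgrade                 # list the updates available in your channel
    oc adm upgrade --to-latest     # move the entire stack to the newest available update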
B: So last year, when we met, we talked about how a lot of innovation in the Kubernetes ecosystem is happening outside the core of Kubernetes itself. Clayton and I always like to talk about how we want to make Kubernetes boring and allow the innovation to happen above and around us.

B: And I think 2019 and 2020 are evidence of that happening. One of the things that we've put into the center of OpenShift 4 is what we call our OperatorHub, the whole notion of operators: how you can deliver extensions or add-on capabilities on top of, or around, that cluster to get incremental value. And I know today we're going to talk about a lot of those things, but I think it's worth taking a step back and just saying: man, there's a lot there. Whether, in the last year, it was OpenShift Virtualization, so you can manage VMs on top of bare-metal OpenShift: who could have thought of that?
B: Containers and VMs together, managed by the same common orchestration framework; it's crazy. If you were to talk to Clayton and me six years ago, we'd have said it's amazing that OpenShift has grown to take on that use case. Or a service mesh: a lot of people are doing interesting things around the Istio community to monitor traffic happening across their apps, or in some cases across clusters. And functions, which we'll talk about later with serverless.

B: These are all components that we see evolving and iterating above and beyond that core stack, and today within OpenShift 4, when you install your cluster, aside from our opinionated set of core services, our users are able to pick and choose the individual elements they want to augment their platform with. We provide a very elegant lifecycling solution for those additional solutions, to ensure that not just the core platform is lifecycled, kept up to date and secured,
B
But
the
whole
software
stack,
whether
that's
the
core
add-ons,
the
core
os
or
the
core
kubernetes
orchestration
layer.
So
I
think
it's
fair
to
say:
openshift4
has
grown
a
lot.
We've
proven
its
ability
to
upgrade
and
maintain
stability
in
our
user
community
today,
and
we
look
forward
to
doing
that
as
we
continue
of
all
the
platform
moving
forward.
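As a concrete illustration of that pick-and-choose model, here is a hedged sketch of how an add-on is typically subscribed to from OperatorHub so that OLM installs it and keeps it lifecycled alongside the cluster; the operator name, channel and catalog source below are illustrative:

    # subscription.yaml (operator name, channel and catalog are illustrative)
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: my-addon
      namespace: openshift-operators
    spec:
      channel: stable
      name: my-addon
      source: redhat-operators
      sourceNamespace: openshift-marketplace

    # apply it, and OLM installs the operator and keeps it upgraded with the cluster
    oc apply -f subscription.yaml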
A: Yeah, thanks for that, guys. What I really like about that concept is that all those features you had to build around the edges of Kubernetes to give somebody a valuable platform, you've made them self-aware. You've moved them into CRDs, and you leverage the basic functionality of Kubernetes to maintain them, to tell them when they're out of sync and need to come back into sync and find consistency. That was just a really great way to have a sort of self-managed platform that's leveraging its own technologies. Really great stuff.

A: So, Clayton, now, starting in 2020, now that we've conquered that world, we move into this explosion of people who want the benefits of Kubernetes, but they maybe want a smaller footprint, or a more compact cluster. They want to cram it into places that Kubernetes maybe had never been before. Can you talk about some of the things we're seeing on that front?
C: So, over the last few years, Kubernetes itself has grown. Everything that we talked about that's a core fundamental requirement of a Kubernetes cluster is something that consumes resources and, if you're bringing it along, you need to watch it and make sure it's still healthy, and we've also added extensibility to Kubernetes. We went from a really simple initial state, where we just had a few binaries that we ran and everything was fine, to a world where Kubernetes is actually managing a lot of the components on the cluster.

C: We need to spend just as much time optimizing the platform and the components of the ecosystem around the platform, and looking for new, efficient footprints that keep all the benefits that Derek mentioned while still giving us the flexibility to deliver that consistent lifecycle, because the worst thing that could happen is that we over-optimize for a particular scenario and break a use case that we don't want to break. So this has been a pretty long journey.
C: I've been involved with performance in Kubernetes since the very beginning; I'm guilty of some of the early projects in Kubernetes that made the control plane more efficient, and in a lot of the work done at Red Hat and by others in the ecosystem we looked for ways to slim down the footprint of the core Kubernetes control plane.

C: As we went, we built a lot of infrastructure in the Kubernetes project, and in the OpenShift engineering ecosystem around it, to make sure that regressions were caught and captured. We watch how many resources are used by the core platform and its components as every release goes out. Red Hat picks up new versions of Kubernetes very early compared to many of the distributions in the ecosystem,

C: so we often find and catch those regressions and then make sure they're fixed upstream in the proper way, and as ecosystem components are pulled in, we're always looking for ways of making sure that we can deliver a reliable, stable, solid platform. So, as we've reached the limits of what you can do by optimizing Kubernetes itself, we started looking at additional footprints.
C: One of the first steps that we took in OpenShift 4 was allowing you to run a fully highly available Kubernetes cluster that had three control plane instances but no workers, and actually letting people schedule workloads onto those control plane nodes. That's a fairly common configuration in a lot of retail and edge deployments, where power or space is at a premium and you want no more hardware than is necessary. And obviously, as we've all learned over the years as we've gotten better with distributed systems, there are really just three topologies that you can run.
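For reference, a hedged sketch of how that compact "three node" topology is expressed at install time: three control-plane machines and zero dedicated workers, so the masters also schedule workloads. Everything else required in install-config.yaml (platform, networking, pull secret and so on) is omitted here:

    # fragment of install-config.yaml for a compact three-node cluster
    controlPlane:
      name: master
      replicas: 3
    compute:
    - name: worker
      replicas: 0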
C: So, if you started with three, you could add more, and a lot of that work ties into things that Derek mentioned: building a control plane and an API system that made it easy and natural to start simple, in your data center or on the cloud, and grow your cluster. There are two simpler configurations than the three-node control plane: having a single node, and having two nodes in an active-passive kind of setup.

C: Our focus since OpenShift 4 has been ensuring that CodeReady Containers, which is our single-node, development-focused experience, works really well in a single-node environment. There's a lot of optimization that goes along with that, not just installing the cluster, which sometimes takes time because we have to make sure everything's set up; in the CodeReady Containers scenario we can optimize for starting that VM image as quickly as possible and simplifying some of the steps.
C: It's not a real production environment, but it actually sets the stage for some work that we're doing, both in the ecosystem and in moving to full single-node support in OpenShift at a future time. In addition, we've been working with groups within Red Hat and the ecosystem on active-passive setups. This is actually a pretty common concept for administrators of Enterprise Linux; we've been doing this for quite a long time with some of the components of Linux that do active-passive failover setups, and we think there's a lot of interesting stuff there. It's not our primary focus right now, but it is something you can accomplish today using technology supported by Red Hat. So we're trying to balance the needs of resource-constrained environments while keeping all of the flexibility and power of the platform, the configurability, and the self-management and self-monitoring.
A: All right, that is absolutely correct. We do have some customers that have ventured forward and are doing OpenShift in remote locations, and there are really some exciting requirements popping up that, personally, I didn't think of when they first brought them to us. It really is around: hey, I don't even have the staff in those locations, or it's a different staff in that location with a different sort of charter, or maybe infrastructure services as simple as DNS or NTP aren't there. Derek?
B: Yeah, sure, thanks, Mike. I think it's been an interesting learning experience as we gather more feedback from customers on the unique constraints in which they may choose to install OpenShift. And of course, as we go further out to the edge, or as you deal with the realities of putting clusters in many locations around the world, you end up with ways of looking at problems that are different from the ones you previously held.

B: One of the things I'm happy about that we're doing at Red Hat is we're looking to see if we can blend connected services with that broader installation story, so that, for example, the administrator who is putting a piece of hardware into an environment may only have to boot an ISO, just power it on, and then we can start to explore interesting concepts about, well, what happens when that machine powers on? Can we have it connect to a different endpoint and allow a different persona, or a different actor within the overall enterprise solution, to stitch these pieces of compute together to form a cluster? So I think that's a really interesting, unforeseen new evolution around the whole installation, the day-zero and day-one activities, that we see more and more as we get to the edge, and so there is some exciting stuff that we'll look to show. One of these things is our assisted installer. A lot of times today, when folks think about installing OpenShift on metal or in a virtualized environment,
B: they know that you not only pull down our installation binary, but you also pull down the ISO to boot RHEL CoreOS. One of the things you'll see that we're exploring in our assisted installer program is that, rather than pulling down both the installer and an ISO, you pull down just a bootable artifact that knows how to connect to a remote cloud service; today that's a service we're offering on cloud.redhat.com. And in the future, you just deliver the hardware, turn it on and know that it will phone home to a location that allows a different person, a different actor, to stitch it together into a larger solution.

B: This idea of blending together a pool of compute, so that a second individual can then come in and say, I want to stitch this machine and this machine and this machine together into a cluster, is super powerful. What's also interesting is that we can minimize the errors that can occur as users are coming to understand their environment, by making this bootable artifact able to inspect and understand its environment: what is the nature of the host itself? Does it meet our minimum requirements for CPU, memory and storage?

B: So we can do a lot of validations and verifications that the cluster will be successful before it's ever installed, and we think this is a really exciting emerging pattern that we'll see more of in 2021. And so with that, Mike, I think it'd be great to go to the demo.
D: Thank you, Mike. My name is Rhys Oxenham, I run the field product management team here at Red Hat, and it's my pleasure to introduce the new OpenShift assisted installer to you all. We really wanted to make the deployment experience of OpenShift even easier, and whilst we recognize that there are many deployment options and target installation platforms for OpenShift, we wanted to provide our customers with a more streamlined and guided workflow for deploying clusters.

D: Now we're into the main workflow. We have to provide a cluster name, which is important, as it forms part of the DNS domain name for the cluster, so I'll choose a name that matches the environment I want to deploy into. Next, it asks which version of OpenShift I want to deploy, and for this I'll stick with the default 4.6 pre-release.

D: Finally, it provides an option for me to input my pull secret, an authentication key for pulling the container images required for both installing and operating any OpenShift cluster. This has been pre-populated for me, based on my cloud.redhat.com login. The next page is where the bulk of the configuration will take place, and the cluster name has been carried forward from the previous section.
D: Next, it's going to give us the option to download a discovery ISO, and this represents one of the most important design principles of the assisted installer. Every node that we want to become part of the cluster needs to be initially provisioned via a discovery ISO, one that has been dynamically generated for us by the assisted installer.

D: This was chosen for its simplicity. We need only boot the target machine with the discovery ISO, and it has a phone-home mechanism where it can receive all of its instruction from the assisted installer, bypassing any manual configuration for the administrators. So let's go ahead and select the download discovery ISO button. For troubleshooting, it asks us to provide a secure shell public key, so in the event of the nodes not appearing as expected, we can do some troubleshooting.

D: This section also confirms whether we want to use an HTTP proxy and, if so, we can ensure that it gets injected into the ISO, but in our case we don't need one. Behind the scenes, the assisted installer platform will now generate our custom ISO, already pre-configured for the cluster we're creating, and will make it available for download.
D: I'm using a jump host that I'm accessing over VNC, primarily because I need to attach this discovery ISO to my metal machines over a virtual media interface. A brief overview of the setup I'm using here: I've got three Dell FC430 blades, and I've opened up a virtual console to each of them that will allow us to monitor the progress and also attach the discovery ISO directly. We're only using three bare metal machines here to demonstrate the converged master-and-worker configuration, but it would be absolutely possible to have additional nodes in this configuration.

D: We have the ability to use automatic role assignment, where there's logic built into the platform to help with best-fit role placement based on the available nodes and their respective configuration. Here we're going to leave it on automatic: as we only have three nodes, they'll default to being both master and worker, but we can override them if we need to. There are also some additional options for each of the hosts on the right-hand side of the pane; here you can override the hostname if it's not provided via DHCP.

D: There are a few networking configurations available for us to choose from. We can either go with basic default networking or a more advanced configuration where we can override the default subnet allocations, but here I'll stick with the basic option. There's only one network subnet that's been discovered, and that's what's showing here. My bare metal machines have multiple interfaces, but only one with DHCP; I'll use this as the base network for the whole cluster, where my API and ingress services will listen.
D: I have the option of entering the API and ingress virtual IPs if I had already reserved addresses pre-populated in my DNS infrastructure, but if there are no reserved IPs, we also have the ability to automatically allocate virtual IPs from the DHCP service, if permitted to do so. The DHCP server in our lab will happily allocate IP addresses to any device on the network.

D: Now I'm ready to install the cluster. You'll see that when I previously selected validate and save changes, this checked that everything is in order and that the configuration I've requested is valid; the machines move into a known state when they're ready. Let's proceed and select install cluster.

D: As you can see, the nodes have now progressed into a starting-installation phase, and you'll also notice that one of the nodes has been selected as the bootstrap node. This is another incredibly important design principle: we wanted to minimize the hardware footprint to reduce complexity, so we do not require an additional, separate bootstrap machine for installation.
D: Throughout all of this, the cluster events pane can give a much more in-depth view of what's going on and can be filtered if required. Here you can see the disk write process and the state changes. Like before, if we look at the console log, you'll see the detailed list of steps that the provisioning process has taken.

D: The username is the standard kubeadmin, and I can copy the password directly from the assisted installer page. As you can see, the cluster is still coming up and starting all of the required pods, so in the meantime let's quickly jump over to the CLI and ensure that the kubeconfig is working properly. I'll use the file that we just downloaded, and we'll ask the cluster for a list of nodes, clearly showing that we have three nodes, each of which is both a master and a worker node.

D: We can also verify the version here, being a 4.6 nightly or pre-release version. From here, the cluster is fully operational and ready to serve workloads, and we can go on to deploy any other operator. As an example, because this is a real bare metal cluster, we can exploit the nature of bare metal performance and deploy OpenShift Virtualization.
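That CLI check looks roughly like this, assuming the kubeconfig downloaded from the assisted installer has been saved locally as ./kubeconfig:

    export KUBECONFIG=./kubeconfig
    oc get nodes            # three nodes, each carrying both master and worker roles
    oc get clusterversion   # confirms the 4.6 nightly or pre-release build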
C: The journey we've been on from the beginning of Kubernetes: Kubernetes was initially successful because it was a set of concepts in application infrastructure that made it easy for you to run applications.

C: It helped you deploy a container: your code inside of it, mixed with your application runtime environment, whether that was Java, a compiled binary from Go or C++ or Rust, or source code paired with a runtime environment like JavaScript or Python; and that image had a set of criteria for how it ran, whether that's environment variables or volumes, matched up to it. Kubernetes was kind of that first system that gave you just enough of an abstraction that you could write and run almost any kind of containerized application on top of it.
C: With that came services. We added concepts like deployments and ingress around those two simple ideas: deployments let you manage the rollout of your code and pause things from rolling out further if stuff stopped working; ingress allowed us to get traffic into the system. Alongside that, we then added stateful sets, which allowed you to run the stateful parts of applications, and we added jobs and cron jobs.

C: And as we went through this process, we started to recognize that not every simple abstraction was going to be great for all users, and so we had already, from the beginning of Kubernetes, thought about extensibility and how we could broaden the reach of Kubernetes to new types of concepts, whether that was how you run the app, like a workload, or whether that's an integration, like the way that service mesh provides high-level primitives for traffic splitting and matching on parts of URLs or matching the body of a request.
C: All of these concepts, we knew they'd be really powerful. We started working on extensibility in part to provide new ways to run workloads, in part to provide new ways of binding workloads together, and then, on top of that, for concepts we really hadn't anticipated: policy, and new types of integrations into both the nodes and the cloud environments that run around Kubernetes. And so, over the course of years, as we've watched applications: a lot of people started with really simple, 12-factor-style applications on top of Kubernetes, and they got very big.

C: They were able to run from the very early days of Kubernetes 1.0 and OpenShift 3.0, because we focused on that core problem. It wasn't the highest-scale system in the world; I talked earlier about the improvements we've made over the lifetime of Kubernetes.
C: So today, in OpenShift 4, we bring in a bunch of extensions and standard parts of the Kubernetes ecosystem, and we're up to about 150 individual Kubernetes resources. I think we had 15 at the very start of Kubernetes, so there's been almost a 10-fold increase in five years in the number of concepts that even a fairly standard Kubernetes distribution would have, and that doesn't even get into the complexity that people build on top of Kubernetes in their pipelines, in how they build and deploy applications.
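A quick way to see that growth on your own cluster is to count the resource types the API server exposes, built-in types plus every CRD the platform and its operators add:

    oc api-resources --no-headers | wc -l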
C: So I actually think we're right at the edge of a really important transition, where we start thinking beyond the cluster. What are the policies and patterns and integration points that will let us run applications naturally across clusters? Everyone can do that today; nothing prevents an organization from building and running parts of their application in different places.
B: Yeah, sure. So I think it's fair to say that there's no one-size-fits-all solution to every problem. Even in our dialogue so far, we've talked about how we can support running large-scale Kubernetes clusters or needing to fit within resource-constrained environments, and I would say, from my experience thus far, it comes down to probably three things: one, what is the required or acceptable failure domain for your deployment model; second, in the case of failure, what is the overall blast radius that you feel is impacted; and maybe the third, which we don't always give enough attention to but which is very important, is what are the security boundaries, where the cluster meets your app, that need to be taken into account.

B: And so, on that first question: if you start out with one cluster, you're going to put that cluster in a given data center, or you might put it in a given region in a particular public cloud, and even within one cluster we get questions like, well, should I run that cluster, and should I run node pools, within each availability zone within that public cloud, or within racks in my data center? So it's probably important first to recognize that we have ways of managing failure domains within the cluster itself before you even decide to run more than one. But oftentimes you end up having to run more than one. You might run an application that's globally replicated,
B: so you need to run that cluster in more than one region, or more than one data center, because you want to bring your workload closest to your end user. And so that first real-world recognition typically makes users go from one cluster to two, because they want to deliver that app globally, or as close to their users as possible.

B: When folks run on-prem, they might have a contract to have two data centers, an east data center and a west data center, and sometimes we run into other variations of that situation, where they might have two data centers that are co-located within particular latency windows, where we run into the question of, should I do a stretch cluster? So there are a lot of variables that come into this, but from a simple perspective: if you have an east and a west data center, you probably do these things. You have separate failure domains so you can control your blast radius, so you might choose to run two clusters, and then the nature of that is you've got to put your app on both clusters, and your apps will then probably run in either an active-active or an active-passive setup, and then a lot of real-world ramifications come out of that.
B: Does your app need data, and do I need to make sure that data is replicated to both environments? Today, in Kubernetes, you have primitives to handle things like persistent volumes and persistent volume claims, but there's nothing innate in Kubernetes itself that says how storage is replicated. We have to look at a layer below the orchestration platform and say, how can we do that, how can we handle that problem? Similarly, above the orchestration level, we have to say, how do I load balance to my app?

B: How do I handle getting traffic into one cluster versus another? And so a lot of the unique situations around individual use cases will motivate how people choose to do things. As Clayton talked about earlier, we have evolved OpenShift over the years to include a multi-cluster management experience, which we call Red Hat Advanced Cluster Management, and no matter which choice a customer may make, whether they're running one cluster or many clusters, there are a lot of exciting primitives introduced in that solution to both lifecycle clusters and, like, provision and deprovision them.
B: So it's really interesting when you work through the details of why and how you end up with either multiple availability zones within your cluster or a decision to run multiple clusters, but no matter your choice, I think we have tools today to meet users where they are. And that last point is really key, I think.
A: Yeah, it is, and you mentioned some east-west, north-south topics, and some customers get a little confused when they hear us throwing that jargon around. So, Clayton, when a customer decides that they want to deploy an application, maybe a replica of itself, into completely different clusters that don't know about each other, sometimes we end up talking a lot about north-south, leaving the cluster and coming back into the cluster, and then there's also some research being done about east-west, staying within the same network, or the pod network.
C: Absolutely. These two phrases, north-south and east-west: if you imagine a map where your data centers or your clouds are laid out horizontally, north is traffic coming into a cluster, and south is usually a way of orienting for it going back out, or going to another cluster or another data center, whereas east-west is typically the term we would use for traffic inside of a data center. When we talk about east-west, we're kind of thinking about Kubernetes as, as I like to call it, a virtual computing plane, where all clusters, no matter where they are, might have partitions to separate them out into different security levels, or different geographic regions because of latency, but they're all kind of peers of each other. So east-west is cluster to cluster, and north-south is coming into clusters and leaving clusters.
C: So there's a bunch of ways that people solve this problem today in Kubernetes. As Derek talked about, there are a lot of standard patterns that have carried forward into Kubernetes. One of our very early OpenShift customers actually ran a geographically distributed set of clusters all around the world on Kubernetes 1.1, and they did a networking configuration within their enterprise that ensured that each cluster had a unique set of pod addresses and a unique set of service addresses, and every cluster could reach every other service.

C: So that's maybe the simplest level of east-west, network-level configuration, but it requires a lot of pre-planning in an organization. There's another level above that, which might be specific kinds of integrations you can do either with your network stack, your software-defined networking, or at the level above that, something we might refer to as VPN or tunneling. The Submariner project, which Red Hatters have been working on, helps you build VPN tunnels from cluster to cluster. That kind of sits a level above the network, and usually a level below something else.
C: A service mesh, federated, can of course sit on top of that as the next level up and hide the details of where a service runs, whether parts of it are in one cluster or another. And then finally, at the top layer of east-west, you have what I might call virtual application networks. Red Hat has been exploring this space for a while through a project called Skupper, and Skupper lets you connect up individual application components without controlling the clusters underneath. So each of these layers offers options and different performance trade-offs.
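To make the VPN and tunneling layer concrete, here is a hedged sketch of the Submariner workflow mentioned above: one cluster hosts the broker, then each cluster is joined so pods and services can reach one another across clusters. The kubeconfig paths and cluster IDs are illustrative, and exact subctl flags can vary between releases:

    # on the cluster chosen to host the broker
    KUBECONFIG=east/kubeconfig subctl deploy-broker
    # join each cluster using the broker-info.subm file produced by the step above
    KUBECONFIG=east/kubeconfig subctl join broker-info.subm --clusterid east
    KUBECONFIG=west/kubeconfig subctl join broker-info.subm --clusterid west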
C: One of the things that, as Derek alluded to, Red Hat would like to do is work within these ecosystems and within the Kube community so that, no matter what abstraction you use, it can be standard across all of those layers. So if you define a service for your Kubernetes application and you want that service to be accessible on another cluster, there should be a simple way to do that, both at an administrative level and at an application level,

C: to ensure that, whether you're on the development side of the house or the operations side of the house, you can get your applications delivered, depending on how much control you have. And likewise for north-south: whether it involves cloud load balancers or on-premise data centers, we'd like to help standardize some of those central mechanisms, so that for an application where part of it is running on one cluster, if it gets moved to another cluster, traffic follows the application rather than being tied to a location; that's what we'd like to standardize.
A: Yeah, those are, I mean, unbelievable points. And with that, let's bring up the second demo. What the second demo does is take the concept of the east-west network and show the performance differences you would gain going east-west, as opposed to ingressing and egressing a cluster through a routing tier, and what they've done is they've taken the replication of Ceph, the back end...
E: Thank you, Mike. So today we're going to talk about OpenShift and how you could do disaster recovery, either for a particular project or namespace, or for an entire cluster. On the left, we have the current active cluster, region one, and on the right we have a second region, or a second data center.

E: So, just to test that I can recover, I will go ahead and put in a comment, and that comment, once we're on the other side, should still be there.

E: We'll now look at the two sites. On the top left is site one and on the bottom left is site two; then on the right, top right I have logged into the Ceph cluster on site one, and bottom right I have logged into the Ceph cluster for site two.
E: So if we look now at the information on what we would call an image, it will show us how the mirroring is set up. The mirroring in this case is enabled, and it is set up to use something called snapshot mode, which means it is asynchronous; and on the primary side, site one, it is currently true, meaning the storage is being used on site one. Site two is where the data is being replicated to; that's the difference.
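For readers who want to poke at this themselves, a rough sketch of the checks and failover steps being described, run with the rbd tool from the Ceph toolbox on each site; the pool and image names here are illustrative:

    rbd info replicapool/wordpress-vol | grep -A4 mirroring   # shows enabled, mode: snapshot, primary: true/false
    rbd mirror image status replicapool/wordpress-vol         # replication health

    # failover, roughly what the webhook-driven pipeline performs:
    rbd mirror image demote replicapool/wordpress-vol         # on the old primary (site one)
    rbd mirror image promote replicapool/wordpress-vol        # on the new primary (site two)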
E: We can see these webhooks, which I configured in my multi-site application repo, and there is one for site one and one for site two. So now let's go ahead and promote the storage on site two: we took down site one for the application, demoted the storage, and now we're promoting it on site two. When we do that, again a webhook fires and a Tekton pipeline is run to do that task.

E: So let's go ahead and scale site two up by increasing the replica count. We can watch the pods coming up on site two and, as we see there, MySQL is already up and WordPress is almost online. On the storage side, it has now flipped over and the primary mirror is now site two, so the storage and the application are now being accessed from site two. Now, if we go back to look at our application, we should find that it actually is back online.
E: Posting the comment shows us that the storage is working on site two. So now that we've successfully failed over to site two, we want to see how we would fail back to site one. Right now the storage is primary on site two, so we'll again run a script that will trigger the webhook to run a pipeline, and in doing that we can watch the pods on site two, and soon they will start to terminate; we can see they are terminating.
A: Annette, that was incredible. I cannot wait until that makes it into the product. Let's go, 2021; I'm rooting for 2021. So let's switch gears for the last topic. The last topic is going to be around workloads and developer experience, and an interesting part of this is that the CNCF has really exploded in 2020 around pipelines and GitOps and build techniques.

A: But, you know, Clayton, we've been there for quite some time. I think we got involved in 2015, towards the end of 2015 and into 2016, and from day one we felt it was imperative that we'd be able to build applications inside of the cluster, and when we were looking around, there weren't too many people thinking that that was the right way to go. Why don't you take us through some of our journey and how we got to where we are today?
C: Sure, Mike, and this is a great topic, because it talks about how software evolves over time, and some of Red Hat's commitment is trying to help people evolve their organizations over time and still benefit from the latest and greatest open source; but to take that into account, there have to be solutions for these problems. So, in the very early days of Kubernetes:

C: the development team needs to use a standard runtime environment that's properly patched, that has the right security rules around it, and that might be scanned at an interval determined by the operations team; combine those two together and run it, and we wanted that to be easy and reproducible. And so we worked on a number of technologies that both helped you do this on a cluster and used Kubernetes as a jobs engine, and again, in the early days of Kubernetes,
C: the jobs concept was very new, and co-developing the build feature in OpenShift alongside Kubernetes the platform actually helped us identify places where we needed to improve Kubernetes security. Over the years, as different ways of combining these technologies have emerged, the Linux kernel has been improved to offer new capabilities that make building images in user space much more achievable, because you can reuse, securely, for an end user, those same fast primitives that the kernel offers for the container runtime.

C: We've really evolved this story, and I'm looking forward to the very wide range of ecosystem components out there that meet different requirements, and we'll continue to evolve OpenShift and support the build API. So you really get the best of both worlds: you get choice and flexibility, and you've got the option of new technologies that we'll bring in and support alongside those existing concepts.
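As a small, hedged illustration of that on-cluster build flow via the OpenShift build API (the builder image and git repository here are purely illustrative):

    oc new-build nodejs~https://github.com/example/my-app.git --name=my-app   # creates a source-to-image BuildConfig
    oc start-build my-app --follow                                            # run a build and stream its logs
    oc get builds                                                             # builds are ordinary API objects on the cluster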
A: Yeah, and, I mean, at the end of the day, it's really just an application service on the platform. They wanted an API endpoint; they want their application served to the world. And in that regard, Derek, we have, in this last year, exploded in the number of services that we are offering as a company, as Red Hat, out on cloud.redhat.com, on the Kubernetes platform, on the OpenShift platform. Can you talk to us a little bit about that experience?
B: Yeah, sure, Mike. So we often like to talk about how OpenShift depends on OpenShift: there are a lot of back-end services that we've had to build at Red Hat just to support delivery of our product out to our users, that whole supply chain of building our own software, packaging it, putting it into an image registry and making it available to the world. A lot of folks might not appreciate that that's actually all running on OpenShift behind the scenes.

B: So today, if you go to cloud.redhat.com, you'll see an ever-growing list of SaaS-style services that our Red Hat SRE teams are managing. Some of the key ones are things like the OpenShift Cluster Manager and the OpenShift Dedicated service, as well as some of our supporting infrastructure, like the Quay image distribution system itself, and our remote health monitoring system, which runs a really large metrics data store that lets us know whether everything in the platform is running well together, so we can make the upstream communities and the projects we consume better, and also make our users happier.

B: To run all these back-end services: when we talked about failure domains and blast radius, those things apply to Red Hat just as much as to our end users, and so we've naturally been having to build multi-cluster services to support just the business of OpenShift.
B: What's interesting, and what I admire about Red Hat, is that, you know, things do go wrong. We try to minimize our outages like every other production system in the world, but when a single cluster goes down, we can rally together the kernel engineers, the Kubernetes engineers and the networking engineers and try to solve that problem at a very deep level, to ensure that those who run that cluster on premise themselves don't have that issue. But naturally, we don't want a single point of failure to take down our production system.

B: It starts with deciding whether your service is global or regional, but at some level you put a regional microservice out there that says, this is how you interact with your solution, and then, along with that regional microservice, you need some persistence. We're running that on Kubernetes, so you might be using OpenShift Container Storage, and just like Annette's earlier demo, where she showed data going active-passive across locations,

B: you need to make sure your data is not homed to any one individual cluster. Then you have to tie load balancing into your clusters, as we talked about earlier, but at some level, depending on the workload and how you navigate users to clusters, you're ultimately going to figure out a way to pin your workload, that instance of your workload, to a particular cluster where the job is actually done. And so, if you're running more than one cluster,
B: eventually you have a way of pinning a request, or pinning a user's desired state, to an individual cluster and acting on it afterwards, on each of those clusters. One of the reasons that Red Hat was so deeply invested in the operator pattern is that we want to keep the intelligence for how to run that application, that concept, within the cluster itself, so that when it's replicated, that same logic applies everywhere.

B: We replicate it and we know it works consistently, and so today, within OpenShift, you see tons of operators appearing in the OperatorHub that represent content we actually run in production. So if you get Quay through the OperatorHub, you're starting to learn the patterns that we use to run Quay live ourselves, and those inherently span clusters.

B: Putting all these things together, I would say that, in the end, we realized that we need multiple clusters to run reliable services, because our services are globally distributed, and we don't want any individual cluster failure, failover demand or accident to take them down. What's exciting is that we're starting to learn a lot of patterns; we can start to codify these things as we bring them out in the future and, at the same time, Kube keeps evolving.
B: So, when we talked earlier about having an active-passive setup of your app: in your passive data center, you might not want to dedicate all of your compute to the thing that's not yet being run. These things take power and cycles, and you ultimately want to drive costs down reliably, and so some of the exciting stuff I see going on is everything around serverless and eventing: how can we scale things down to zero, even when they're replicated across clusters?

B: I think a lot of unique innovation is going to come out when we even look at what it means to host Kubernetes itself as these things evolve. It's very meta, but for sure we're always looking, here at Red Hat, at how we can run the services that support OpenShift on more than one cluster, reliably and at good cost.
A: But then the flip side becomes: how do I consume that? I don't want to talk containers all the time, I just want to talk API. I just want to have a serverless experience where I'm just talking about functions, if you will, and functions without eventing are kind of boring, and we've got eventing in serverless now. So let's bring up Paul Morie. Paul, can you take us into the serverless world and show us what's going on?
F: I'm really excited today, because eventing is joining serving as GA in OpenShift Serverless, so we're going to concentrate today on some concepts from eventing. Another exciting piece of news is that we now have a developer preview of functions, so we will be working that into the mix today as well.

F: As I said, we're going to explore some key concepts in Knative eventing. Knative eventing is about addressing common needs in cloud-native development, and it provides composable primitives to enable late binding between producers and consumers of events. Let's talk about the central goals of eventing.

F: We want other services to be able to be connected to the eventing system to create new applications without modifying existing producers and consumers, and we want to enable cross-service interoperability using the CloudEvents specification developed by the CNCF serverless working group. So we're looking at the topology view in the OpenShift developer console, and this thing called "default" that we're looking at is a broker, an eventing broker, and the broker resource is a powerful tool for achieving loose coupling and independence between producers and consumers of events.
F: Brokers basically provide buckets of events that can be selected, via their attributes, to send to consumers. The trigger resource is what we're going to use to express selection criteria for which events a consumer wants to see. But to start, we're going to make a service that consumes and logs events, so that we can illustrate how these concepts work. We're going to do this by writing a very simple function; let's check this out. So we're doing a kn func create for a function called display, and we're going to use the event type to activate it.

F: What this is going to do is make a template function definition, which we'll look at in just a moment, build an image for it, push the image to Quay, and deploy the function wrapped in a Knative service, so that we get all the awesomeness of autoscaling from zero to n and back to zero again, and the power of immutable application revisions.
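A hedged sketch of that developer-preview workflow from the CLI follows; the registry and exact flags are illustrative and the developer-preview syntax may differ slightly:

    kn func create display                         # scaffold a function project in ./display
    kn func deploy --registry quay.io/<your-org>   # build, push, and deploy it as a Knative Service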
F: Let's take a look at this back in the developer console, and we can see here that we've got a Knative service for display. Just a quick refresher: Service is a very high-level concept from Knative serving. It's a type of controller, similar to a replica set or a deployment, that creates other resources to do its work. Service allows us to specify the spec for what our pods, when run, should look like; it creates a configuration resource that produces revisions, which are immutable snapshots of an application.

F: So now our display function is deployed, and we can start pumping some events into it. Remember how I said that we can register an interest in events before there is a producer that makes them? I'm going to do that right now, by creating a trigger that will give us all of the events that go into the default broker.
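Roughly the trigger being created here: it subscribes the display service to everything that lands on the default broker (adding a --filter on CloudEvent attributes would narrow the selection):

    kn trigger create display-all --broker default --sink ksvc:display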
F
So
what
I'm
going
to
do
next
is
I'm
going
to
create
a
type
of
event,
source
called
a
ping
source,
and
that's
going
to
give
us
some
that's
going
to
give
us
some
events
to
display
what
this
is
basically
doing
is
every
minute
it's
going
to
pump
out
this
hello,
openshift
comments.
Hey
there,
everyone
event
and
we're
gonna
see
that
going
into
the
logs
of
the
display
function
so
check
that
out.
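Approximately the PingSource being described: every minute it posts a small CloudEvent into the default broker, which the trigger above routes to the display function. The source name and message text are illustrative:

    kn source ping create hello-ping \
      --schedule "* * * * *" \
      --data '{"message": "Hello OpenShift, hey there everyone"}' \
      --sink broker:default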
F: So hopefully this gives you a fairly good idea of how these broker and trigger primitives work in practice. They're very powerful tools for achieving loose coupling between producers and consumers of events. We hope that you will check out eventing, now GA in OpenShift Serverless, and check out our developer preview of functions. Thanks a lot; back to you, Mike.
A: All right, Paul, thank you for that demonstration. You had Annette with the DR of the application; can you imagine connecting that as a back-end component of a larger serverless framework? It's really coming together; I can't wait to see this all in action. And that really brings us to the end of our conversation. It's been a really up-and-down 2020.
B: Well, hopefully, Mike, you can hear me, and I'll say it again: I think at Red Hat we're all passionate open source engineers, we work in a variety of upstream communities, and I think we're all just proud, on a human level, that during the COVID crisis we've been able to stay productive as a community and get new capabilities into all of our upstreams.

B: And so, as we look ahead to 2021, I hope that we can keep sustainably working within our upstream communities to bring innovation to all of our users together. But I think, even with all the things that are happening in the world, innovation never truly stops. Stuff is always happening around us and things are evolving.
B
So
I
would
say,
like
I'm
particularly
interested
in
a
number
of
the
hardware
innovations
we
see
happening
across
the
data
center
today,
whether
that's
like
every
everything
you
could
attach
to
a
computer
is
getting
some
level
of
intelligence
associated
with
it,
and
we
want
to
be
able
to
take
advantage
of
that
both
inside
the
cluster,
with
our
workloads
and
potentially
outside
the
cluster,
to
orchestrate
these
systems.
So
there's
a
lot
of
interesting
excitements
innovations.
B
I
see
come
in
that
space
that
I
think
will
drive
change
in
cube
linux
and
the
overall
openshift
distribution
itself.
So
I
think
that's
one
of
the
areas
that
I'm
particularly
interested
in
seeing
evolve
in
2021
in.
C: Yeah, and I agree with that, Derek. There are a lot of areas in the ecosystem that are going to grow heavily over the next few years. People have made huge investments in Kubernetes and in building out this cloud-native ecosystem, and there are always new capabilities that people are dreaming up to better connect their apps or to connect to data. I think, for us, a focus, where we're going to place some of our investment and our bets over the next few years, really comes down to dealing with the complexity. None of this stuff is getting any simpler, and I think it's on the open source community, the Kubernetes community and the folks who every day go and build these applications to build the simplifications that make building reliable services at scale easy.
C
It's
just
getting
the
needs
on
us,
the
requirements
and
whether
it's
in
industries
that
are
heavily
regulated
or
industries
where
people's
lives
are
at
risk.
You
know
we
need
to
think
about
the
design
and
you
know
the
reliability
of
everything
that
we
help
people
build.
So
I'm
super
excited
about
the
work
that
we're
doing
around.
You
know
multi-cluster
resiliency.
C
I
talked
about
the
interconnections
between
clusters,
I'm
hoping
you
know
in
another
year
or
two,
I'm
gonna
be
standing
up
here
and
you'll
be
programming
applications
to
the
kubernetes
model,
but
you
won't
think
about
where
they're
going,
because
your
operations
team,
your
cloud
provider,
your
service
provider,
red
hat
as
a
provider
we're
all
going
to
be
working
together
to
make
your
applications
run
where
you
need
and
they
will
stay
running
you
know
throughout
you
know,
screw-ups
of
config
changes
or
application
disasters,
or
you
know
covid19,
no
matter
what
it
is.