From YouTube: OpenShift 4 Road Map Update 2021 - Rob Szumski, Michael Elder, Naina Singh, Jamie Scott (Red Hat)

Description
OpenShift Commons Gathering at KubeCon/EU 2021
May 4th, 2021
Enabling Innovation Everywhere: Red Hat's Hybrid Multi-Cloud Road Map and OpenShift Release Update
Rob Szumski (Red Hat) | Michael Elder (Red Hat) | Naina Singh (Red Hat) | Jamie Scott (Red Hat)
Rob Szumski: All right, hi everyone, welcome to the OpenShift roadmap update for 2021. Thank you for joining us at OpenShift Commons. I'm Rob Szumski, and I'm joined here by Michael Elder. You want to say hey?

Michael Elder: Hey, everybody.

Rob: Today we're going to talk about what's going on in the OpenShift universe this year. We're also joined by Jamie and Naina, who are going to pop in and out and give us some demos of some cool stuff.
While we talk about that universe, I want to roughly frame our conversation around something we just announced in May of this year: OpenShift Platform Plus. That really brings the OpenShift platform up a level into the multi-cluster arena, recognizing that you need multi-cluster security, a global registry, and multi-cluster management to be successful in today's hybrid cloud world. That said, we're not going to focus on the actual product today.
I want to talk about this idea of standardized tools, because you're not just running one or two clusters; you might run a hundred. Getting tools, policy, and management down to those clusters is extremely important, because today's applications are more spread out and distributed than ever, and clusters are more connected than ever. That means you've got to get that fabric orchestrated correctly. So we're going to talk about all of this, including some of the tools that are available upstream in both Kubernetes and other ecosystems.
Then we're going to zoom all the way down into a single cluster, because what's happening on the disk of an operating system, or a policy that makes its way down into the nitty-gritty, is just as important as zooming back up into that multi-cluster world. And the work that we do for fast packets, for something like a telco workload, is beneficial to your application too.
So let's jump into it: security. What's happening upstream? Two main things. First, with version 1.21 of Kubernetes, Pod Security Policies are deprecated. That doesn't mean they're going to be removed yet; I think that's going to happen in 1.25, but a few things to note here. The replacement is probably going to be a bit simpler than what you've been used to, so users with complex requirements might like some of the features of the Open Policy Agent. We're going to talk about how that fits into some of our tools later today, and you can run it on OpenShift today just fine. And then, of course, OpenShift SCCs (Security Context Constraints) are unaffected by this; SCCs are what Pod Security Policies were modeled after in the beginning. So we kind of had your back before, we've got your back now, and we'll have your back in the future too.
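For context on the simpler direction the replacement took: the successor that later shipped upstream, Pod Security admission, is driven entirely by namespace labels rather than a dedicated policy object. A hedged sketch (this post-dates the talk, and the namespace name is made up):

```yaml
# Illustrative namespace opting into the "restricted" profile via
# Pod Security admission, the upstream replacement for Pod Security Policies.
apiVersion: v1
kind: Namespace
metadata:
  name: payments                 # example namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: baseline
```

Compared with a PodSecurityPolicy object plus RBAC bindings, the entire opt-in is two labels, which is the simplification being described here.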
Now, I don't want to get into too many of the technical specifics, but this is now in the CRI, the runtime interface for Kubernetes, so it's the default path for talking to runtimes. And CRI-O, which is what we use in OpenShift, can do the user ID mapping in and out of the container, which is what actually does that user namespacing. So we're waiting on Kubernetes to roll that out, and then that's going to make its way down into OpenShift as well.
The proposal there, the KEP in upstream parlance, is still moving forward, and some of our engineers are pushing it, so that's one to keep an eye on. All right, the other big news in the upstream arena and the Red Hat world is Advanced Cluster Security. This is based on our acquisition of StackRox.
We're super excited to have these folks join the Red Hat family and build this into OpenShift and OpenShift Platform Plus. We're starting the open source process for all of the StackRox tools right now, and we're really excited about this because it's the most Kubernetes-native security product out there. We're going to talk about some of the cool features and see a quick demo of it in action. But it really speaks to this idea that you've got to secure your entire supply chain.
That means shifting left into where you're actually building code and building containers, through to when they're actually deployed, and then after they're running, because malicious things can still happen even if you pass a container scan, for example. What this does is combine watching host and cluster state through some cool technologies. If you've heard of eBPF, it's kind of the Swiss Army knife for probing into what the kernel is doing on a host.
We take advantage of that, plus, at the Kubernetes layer, admission controllers, the audit log, and some other cluster events, as well as your typical image scanning. We're going to see a really cool demo of that in a second. So let's parachute down: what does this look like in a single cluster? I talked about runtime security as one part of your threat matrix, and this is a completely new capability for OpenShift. Here's what it looks like.
Let's say we've got a pod with Python in it and an application, and there's a vulnerability that gives you some local execution rights inside of that pod. If you're going to move laterally around this environment, you need some tools, so you might pull down a netcat binary and then start using it to wreak havoc.
So this means that that binary can't be run if it matches your policy. But it's also going to protect against network traffic moving laterally, as somebody tries to exploit other applications that might be in this environment. So we're going to look at a demo; I'm going to hand it over to Jamie for more.
Jamie Scott: Hi team, welcome to Red Hat Advanced Cluster Security, or, as we call it, ACS. To set some context, this is the cyber kill chain. It ranges from reconnaissance, which is understanding your victim and the system architecture, through to what you do when you attack based on your objectives. In the world of security, there's always a reason for an attack. The reason can be anything from denial of service, stealing data, or installing crypto miners for financial gain, to really just because you can.
First in the cyber kill chain is reconnaissance: we're going to find out everything we need to know about the application to start to exploit it. Then we're going to search for vulnerabilities to weaponize, create a back door, and deliver our exploit to get it up and running. Afterwards, attackers will frequently look to establish a foothold by installing malware, attempting to escalate privileges, and establishing a command and control server.
This is ultimately going to get them to their core objective, which can really vary based on their needs. Containerization as a movement has really helped the security team: it has isolated our attack surface, and it's taken post-exploitation activity and made it harder to move through the environment, because our containers are self-isolated.
This has only helped the security community, and now it's time for this community to take advantage of it. So today I want to walk you through the anatomy of an attack from a defender's perspective and from an attacker's, as the defender looks to investigate an issue, protect against reconnaissance through delivery, and monitor for exploitation.
I also want to show you an attack as it could happen in the wild. So let's get started. I'm going to transition over to Advanced Cluster Security here, and as an incident responder, I want to check the violations. I'm going to search right away for visa-processor, and as I look at visa-processor, I start to get concerned.
I can immediately see that a shell was spawned by a Java application in the environment, and this concerns me, so I want to investigate further. I click on it, and I can see the commands that were executed, including bash, with several different arguments. I go down and start looking at those arguments, and I can see really clearly the container ID, the user ID, and that this user is escalating their privileges.
Well, at this point I'm reasonably confident that we've been attacked. I'm starting to initiate my incident response procedures and I've lit up the SOC; for those of you who don't know, the SOC is the security operations center that responds to these types of incidents. Now that I'm initiating this procedure, I want to get more context, so I want to see: how is it that someone has a reverse shell in my environment?
So I need to address this. Let me switch my persona to the developer for a second. I scroll down and I have a ton of image findings. I can see the CVE, how long it's been in my environment, whether it's fixable, and its CVSS score. This is really useful information, but as a developer, it isn't enough for me. So I'm going to go over to my Dockerfile and check out where these CVEs are.
As I look here, I can see the components associated with my image, and I can see that there are several CVEs. There are 137 CVEs in what appears to be my base image, and in order to address those I'm going to have to upgrade my base image; whatever CVEs are addressed in that upgrade will be resolved here.
I'm also going to check out my RUN instructions, because this is where my application is, and I can see that if I were to upgrade curl, I could address several CVEs in curl, and all I need to do is modify this RUN instruction. So this is really cool: it helps me target where in my Dockerfile I need to address these issues. But it's still not enough. What do I upgrade my component to?
One of the cool things about this view is that you can see Struts is a Java vulnerability, so that's a language-level indication, and the remainder are packages installed on the host operating system. That's really cool: I know exactly how to address this, but it's not in my traditional manner of addressing it. So Red Hat Advanced Cluster Security provides developers an easy tool to look at this information in their CI or on their local host, and understand that this is something they need to address.
So here I have a website running. Now, it's not always going to be as easy as saying "this is an image that's vulnerable to Shellshock, please exploit it," but sometimes it really can get that easy. I'm going to conduct some reconnaissance: I go over to the developer tools, check out this website a little more, and refresh the page. I can see that my Server header says this is based on Apache 2.2.22. Easy fix, guys.
If I can spell, we're going to look through all of the vulnerabilities known to be associated with Apache, and now I can start selectively looking at things that are known to be vulnerable in this environment. If I click on the top one, you can see it's available in Metasploit, which is a tool that attackers use. I can easily download this exploit and start to test whether it functions in my environment.
Now, I'm a good attacker; I need to let these users know that their site is vulnerable. So I'm going to go deep into some subreddits, find a good meme, and exploit them to let them know that I care. I could have installed a crypto miner, or I could start to escalate my privileges, but I'm going to be kind: I'm just going to deface their site today.
Now, if we go to ACS, we can see really easily that a violation occurred by clicking on Violations, and the top one here is what I just did: an unauthorized process execution running in the shellshock deployment. If I look here, I can start to see that this person is putting cat pictures from Reddit in my environment.
That's really cool. Now I can initiate a response, but you know what? If I had this in blocking mode, it would have been even easier. Advanced Cluster Security takes a different approach to blocking: we can kill the pod. So if this deployment is running as a replicated deployment, then immediately, once this activity occurs, I can kill the pod. My attacker's defacement of my website is then reverted back to normal, and I have alerted my security team.
So let's pretend for a second that I wasn't just defacing a site. A lot of you in the audience right now are saying, "Shellshock is old news." Someone out there is saying, "Jamie, that came out in 2014. This is super old. Why are you even showing us this?" Well, to me, this came out three months after I started in cybersecurity. It was my introduction to major vulnerabilities, so I figured, why not make it your own? This is a real-world example of something that happened to me.
I'm going to try to escalate privileges from there, and maybe I can even get my hands on a nice old kubeconfig. If I get my hands on a kubeconfig, then I've got authenticated access to the Kubernetes API, and I can exec into a pod to start moving laterally, and maybe even install a backdoor or some malware. So let's try that here.
As you look at visa-processor, you can see it's vulnerable to Apache Struts. It's probably been deployed in an emergency; it's got secrets in its environment variables and a ton of vulnerabilities, and if I scroll down, you can see it's privileged. So I can go use this container, get to the host operating system, and see how I can begin to move laterally. I really hope you learned something about the attacker's perspective and the defender's. Back to you.

Rob: All right, thanks Jamie, that was great. So let's zoom back up into the multi-cluster arena for security. Here's our diagram again: you can see that we've got our multi-cluster tools, and of course the policies that we just took a look at. We don't want them on just one cluster; we want to get them down to all of our clusters. And it's not just blocking-execution policy: it's network policy, it's CIS compliance. We're going to hear a lot more about this in the open cluster management arena.
They've got a bunch of tools for this too. And then we're talking about hundreds of clusters here; they might be out on telephone poles and other places like that. These are remote sites, and they can be physically attacked, so we've got our File Integrity Operator and some other things to look at host state and make sure that someone hasn't jumped on there and done something malicious.
And then, of course, all the way down at the single-cluster layer, there is the node layer, where we're installing the sensors, agents, and probes that actually make all of this happen. So it's really exciting to have one place to enforce all of this across all of your clusters.
Here's what it looks like in the user interface when you're looking at network traffic. This is a key part: obviously you're going to have applications that talk to each other, so you can get a good idea of who should be talking to whom, who shouldn't be, and then enforce policies around that.
Another cool thing the team has built is a recommendation engine: "we're looking at your traffic, and this is a policy that we think you should have, based on how these applications normally function," and then it blocks things when they go out of those bounds. On the right-hand side you can see a list of some of the policies; a bunch of stuff. Maybe you decide that you never want folks to run curl in an image, because that's how you can pull down content.
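A recommendation of this kind ultimately lands as an ordinary Kubernetes NetworkPolicy object. As a hedged sketch (all names, labels, and ports here are invented for illustration), a generated allow-list might look like:

```yaml
# Illustrative NetworkPolicy of the kind a traffic-based recommendation
# engine could generate: only the frontend may reach visa-processor on 8080;
# all other ingress to the pod is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: visa-processor-allow-frontend
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: visa-processor
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Because the output is plain NetworkPolicy, it can be reviewed, versioned, and distributed to other clusters like any other manifest.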
So you could block that. There's a mix of best practices, some widespread things like looking for Heartbleed and other exploits like that, as well as thresholds: maybe you just care about any CVE over a certain severity, and you want to block on that or do something else. Sometimes you just want to audit it instead. So a bunch of really cool stuff; look for that coming in the open source, as well as in OpenShift. All right, let's jump over to applications, and I'm going to hand it over to Michael.
Michael Elder: All right, thank you, Rob. I want to talk about what's going on in the upstream around general fleet management. About a year ago, we brought in Red Hat Advanced Cluster Management, and we've been on an effort to open source all the components of that. So there's a relatively new, relatively young project, open-cluster-management.io. This is where you'll find all of the open source capabilities from the Red Hat Advanced Cluster Management product.
This is really focused on bringing together technologies that are helpful when managing a fleet, and also creating some novel new technologies that help glue all the parts together: in particular, simplifying the lifecycle of provisioning OpenShift clusters running on hyperscaler clouds, in the data center, on virtualization, or on bare metal; simplifying how we deliver and configure the fleet; and then auditing for compliance. Does it meet all of our expectations?
It does provide some integrated capability with GitOps, but we've also been working on bringing in Argo as a GitOps provider as well, and then focusing on an inventory of what clusters are in the fleet: how do I dynamically place policies or application content across them, and validate that it's running correctly?
If you look on operatorhub.io, you'll find the cluster-manager and klusterlet operators. These are the core building blocks: they enable a cluster to become a hub (we'll see a picture of what that looks like on the next slide) and an agent, all of which runs as Kubernetes-native pods. As I said, this all comes together under open-cluster-management.io. Now, when we think about cluster lifecycle capabilities, a lot of the time we're still thinking about a bare metal host in my data center, a virtual machine, or something in the cloud. As we see more edge scenarios, where computing capacity is pushed further away from the data center, we have to think about how we lifecycle the clusters on those machines.
This is really going to be a powerful way to deliver computing capacity wherever it's needed, and each of these clusters again connects back into that control plane provided by Advanced Cluster Management for Kubernetes, which is backed by the open cluster management project I spoke of. Now, when we think about delivering configuration, on the next slide, we really think about how we express this, again with a Kubernetes-native CRD, and here's an example.
That's really important if you're doing any kind of networking-sensitive capability, particularly for things like a 5G-style workload: you want a particular operator to be deployed on a cluster, and you want a particular configuration for that operator to be made available. SR-IOV enables some really powerful capabilities from a networking aspect within the cluster.
On the right is a set of YAML and a policy definition that says: I want a certain configuration to be enforced; validate whether it's present, and if it's not present, automatically create it. This allows you not only to work with the SR-IOV example, but to deliver any of the operators, like the File Integrity Operator or the CIS compliance operator, along with your own configuration, such as certain roles or role bindings, OAuth providers, identity providers, networking, storage, etc. And so this is what you see as a policy.
It communicates with multiple clusters in the fleet and assigns configuration. In this case there's a concept that we won't really talk about much here, a placement rule, that says: match this policy to these members of the fleet. That control plane is actually delivering those policies and configuration down and validating: do they meet my desired state?
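The policy-plus-placement pairing described here can be sketched with the open-cluster-management.io APIs roughly as they looked around this release. This is a hedged, abbreviated example: all names and labels are invented, and in practice a PlacementBinding object (omitted here for brevity) ties the Policy to the PlacementRule:

```yaml
# Illustrative Policy: ensure the File Integrity Operator subscription
# exists on matched clusters, remediating (not just auditing) drift.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-file-integrity-operator
  namespace: rhacm-policies
spec:
  remediationAction: enforce          # "inform" would only audit
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: file-integrity-operator-present
        spec:
          severity: high
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: file-integrity-operator
                  namespace: openshift-file-integrity
---
# Illustrative PlacementRule: "match this policy to these fleet members."
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-edge-clusters
  namespace: rhacm-policies
spec:
  clusterSelector:
    matchLabels:
      environment: edge
```

The hub evaluates the placement, pushes the policy to matching klusterlet agents, and reports compliance back centrally.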
It allows us to manage clusters that are running on hyperscalers, whether I provision a Kubernetes or an OpenShift, along with allowing us to actually provision and create OpenShift on the hyperscaler clouds, and even on bare metal and virtualized platforms as well. What this really means is that this hub gives us one central view of our cluster inventory, regardless of where each OpenShift cluster is running. This is a key aspect of the broader vision around open hybrid cloud.
Now, what can I do once I have a cluster under management? That means I've got the cluster-manager operator deployed and running on one of my clusters, and the klusterlet agent operator running on any cluster that I've either imported or provisioned. On this slide, what we see is that I can deliver a set of governance and compliance capability across any member of the fleet.
So I can link a particular technical control and say that it's relevant for data standard XYZ, whether that's something like PCI DSS, NIST 800-53, or even an internal data standard that is specific to your organization. For each of these, you can see examples where I'm simply auditing: does this policy exist on a target cluster? In other words, is this operator deployed and running? Is this operator configured?
Rob: So there's a bunch of cool stuff happening in the upstream networking arena, and we're going to look at three things that come together to make some next-generation capabilities happen for us. The first is Submariner. This is a project for cross-cluster connectivity: basically IPsec tunnels and other stitching-together of a bunch of clusters, so they can talk to each other. What's cool about this is that you'll be able to do service discovery and other kube-isms directly across that boundary.
So it's going to be really easy to stretch applications across two clusters, maybe do some failover, because the first step is being able to talk to the rest of those members, to sync data or anything else you need to do. ACM is going to actually orchestrate this for us in the OpenShift world, and that's going to be a really great capability when used with our next one, which is Istio federation.
This is the ability to connect multiple service meshes together. Now, remember, in the OpenShift world, not everybody has to use a service mesh; it's opt-in on a per-namespace basis. So if you've got namespaces running on multiple clusters, you can connect those together, again over that same bridge.
This is a little bit more powerful, because you have more control over exactly how things are federated, and then you'll be able to stretch identity between pods across it, which is what everybody knows and loves about the service mesh, among a bunch of other things. And then, last, we've got a new API for ingress: the Gateway API. You might have known this upstream as Ingress v2; that name has switched to Gateway, and this is a more expressive API than what we have today.
If you think about Ingress today, it's a very coarse-grained rule-matching thing, and it's up to each ingress implementation to decide how it handles sticky sessions, cookies, and things like that. The Gateway API is going to have a much more expressive rule set, and that means that OpenShift in particular will have a swappable way to fulfill it.
Say you want to use MetalLB in a bare metal environment to fulfill this need: you can do that, or use some of the other OpenShift router components in other environments, paired with maybe an Amazon ELB or a load balancer in Azure. So it's really exciting to be able to meet everybody's needs, and we'll be tracking that upstream as it gets underway.
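To make "more expressive" concrete, here is a hedged sketch of an HTTPRoute using the Gateway API as it later stabilized upstream (the API group and version evolved after this talk; the gateway, hostname, and backend names are invented). Note the header match, which plain Ingress cannot express:

```yaml
# Illustrative HTTPRoute: route /checkout traffic carrying a canary header
# to a canary backend, via a Gateway that could be backed by MetalLB,
# an OpenShift router, or a cloud load balancer.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: shop-route
spec:
  parentRefs:
    - name: external-gateway
  hostnames:
    - shop.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /checkout
          headers:
            - name: x-canary
              value: "true"
      backendRefs:
        - name: checkout-canary
          port: 8080
```

The swappable part is the Gateway's implementation class: the same HTTPRoute works unchanged whichever data plane fulfills it.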
Remember some of the use cases for this: it's kind of better than a stretch cluster. We've got some customers today that like to have a cluster stretched between two data centers that are maybe a few milliseconds apart. This is just a better scenario, because you'll have more failure domains instead of stretching one cluster across, and it's just easier if everybody can talk to each other and share identity. It means you can move dependencies from one cluster to another.
So let's parachute down into what networking on a single cluster looks like. If you'll remember, we have our Multus CNI; this is what allows you to map multiple network interfaces into a single pod, and the cool thing about it is that it's all driven by Kubernetes.
We're going to build on that when we talk about our networking story. So let's take a look at another, more specific SR-IOV example, which we talked about earlier, and I want to talk about it in terms of two personas. Cluster admins are going to be able to configure this hardware; it moves packet processing into userland, which is really cool because you basically get line-rate speed into a pod, and this is useful for things like decoding.
Maybe you use a certain node pool, and then, coming in with that ingress implementation for the Gateway API, have it hooked up really fast and specify a bunch of config. But as a developer who maybe just wants to start processing packets, you're opting into this with an annotation: you're saying, hey, I want to get a new interface inside of my pod; I don't really care about all the other stuff.
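The developer-side opt-in described here is a single Multus annotation on the pod. A hedged sketch (the network attachment name "sriov-fast" is made up; a cluster admin defines the corresponding NetworkAttachmentDefinition and SR-IOV hardware config separately):

```yaml
# Illustrative pod requesting an extra SR-IOV interface via a Multus
# network attachment annotation; the pod gets a second NIC alongside
# its normal cluster network interface.
apiVersion: v1
kind: Pod
metadata:
  name: packet-processor
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-fast
spec:
  containers:
    - name: app
      image: registry.example.com/telco/packet-processor:latest
```

This is the two-persona split in practice: the admin owns the attachment definition and hardware policy, the developer owns one line of metadata.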
A namespace on cluster one is the same as that namespace on cluster two; it's owned by the same folks. What that allows you to do is start exporting services between namespaces on different clusters, and there are two CRDs for this. Again, we're programming the Kubernetes layer, instead of anything higher or lower, which is great, and it means you're opting in to sharing something specific; you're not going to share everything. And then with the ServiceImport CRD, this is where it gets really powerful: you can consume that via the Kubernetes API.
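The export side of the pair can be sketched with the upstream multi-cluster services API that Submariner follows (hedged: the API group is from the upstream proposal, and the service and namespace names are invented):

```yaml
# Illustrative ServiceExport: opts the Service of the same name in this
# namespace into cross-cluster sharing. Nothing else is shared.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: visa-processor
  namespace: payments
```

On consuming clusters, a matching ServiceImport appears, and the service resolves under a cluster-set DNS name (in Submariner's implementation, something like `visa-processor.payments.svc.clusterset.local`), so applications consume it through ordinary Kubernetes service discovery.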
You can imagine this becomes extremely useful for building your applications, so that's just one more powerful thing you can do when you orchestrate all your clusters together. And then, last, we talked about service mesh running over those same IPsec tunnels. This is kind of what that looks like in practice, all the way down on your single cluster.
Michael: This slide is sort of the "where do you come from, where do you go?" In this case, Argo is focused on the ability to deliver applications and other objects into the cluster. One of the neat new things that's come out is the operator-centric deployment; it makes it really simple to get started with this in OpenShift. That's the OpenShift GitOps Operator, and it's also something that can be configured and driven across the fleet using Advanced Cluster Management for Kubernetes. With Quay, we're really looking at how we make it easy to source the images that are going to run in the environment: a central place for image scanning, and integration with scanning from Advanced Cluster Security, to ensure that the images which make up the applications running on the fleet are properly secured. An upgrade from Python 2 to 3 was also just recently completed for that project.
So if we think about applications within a single cluster: I can get started, particularly as a developer or a small development team that wants a GitOps-centric approach, by installing the OpenShift GitOps Operator, and then from there I can see my repo getting delivered into that particular cluster.
Thank you. So in this view, what we're seeing is an inventory of clusters provided by ACM, or open cluster management. We have been building integration to feed that understanding of the cluster inventory to Argo, and then we can use the Application object from Argo to deliver applications and content across clusters in the fleet.
Now, within Argo I can see a view of this. We can also zoom out a little bit further and think about the view across the fleet within Advanced Cluster Management for Kubernetes. Here what I'm looking at is actually a view that represents a topology of the objects in my world. In the middle I've got the set of clusters, and I can see a list: foxtrot ap-northeast, foxtrot gcp, foxtrot us-west.
So that's an OpenShift cluster that was provisioned in the Tokyo data center for AWS, another one provisioned in the northern Europe region for GCP, and another one provisioned in northern California on AWS us-west-1. But my application is getting delivered to all three clusters, and now I can get a central view of what makes up the application.
I can see things like the deployments, the routes, the services, et cetera. And what's also kind of neat, something that we're not going to go into in a lot of detail here: where we have steps that aren't programmable in Kubernetes, that require some additional automation outside of the cluster, we can even bring in Ansible to drive that capability.
A typical use case we find is: when I deliver a change through GitOps or otherwise, I want to automatically open a service ticket. So we can have an Ansible job drive the creation of the service ticket, which lets us track that the change occurred. Or maybe, as in this example, I want to drive the configuration of a load balancer that sits in front of the clusters, and maybe that load balancer isn't programmable through an operator just yet, so I can use that capability.
So here's what we've talked about: how we use GitOps as a methodology; how we ensure the security of the images that we're delivering for applications; how we can simplify getting started on a single cluster using that OpenShift GitOps capability; and how we can scale that out to many clusters, integrating what we do for ACM, for Argo, and for Quay. But let's talk about the next layer of abstraction. So, Rob, can you tell us a little bit about what's going on with functions upstream?
Rob: Yes, absolutely. As Michael mentioned, let's talk about getting even one layer higher. You've heard the word "serverless" before, and then maybe "functions as a service"; let's talk about both of those and what they mean. Knative is the serverless platform for Kubernetes, and it's the one that we're backing upstream. It's got a few components: Serving, Eventing, and, in parentheses, Builds, because that's not part of the upstream, but it is part of OpenShift and a big part of using a serverless platform.
A
You know, that's only really possible in this model. And instead of just scaling to zero, you can scale up on certain events, or trigger different types of applications to do things based on events. That might be: when a new image is uploaded, go resize it and store it here, that kind of thing; or serve web requests; it doesn't matter what it is. Embracing Knative is a new way of gaining infrastructure scaling properties, but it's using all the same primitives that you're already building.
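The image-resize scenario above would look roughly like a Knative Eventing Trigger that routes events of one type to a service; the event type and service name here are made up for illustration.

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: resize-on-upload
spec:
  broker: default
  filter:
    attributes:
      type: com.example.image.uploaded   # hypothetical CloudEvent type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: image-resizer                # hypothetical Knative Service
```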
A
You've got, you know, an application with traffic funneling into it somehow, and maybe reacting to a few things. But that differs from functions as a service, and this is what I'll say, in quotes, is the "true serverless." This is the developer-focused model: I want to just write a function of code, and I want it to run and do this thing. I don't really care about how it's scheduled or how it runs; I just want someone else to figure that out, and I just want to run my code.
A
It really is a different way of programming. And so here at Red Hat, we think about the developer workflow as an inner loop and an outer loop. The inner loop is you on a laptop, writing code, writing some tests, and then checking that into source control; then you enter the outer loop, which is all your integration testing, getting it into the wider app and platform, and then ultimately deployed.
A
We
need
new
tools
that
function
intended
in
this
environment,
which
is
like
clis
that
are
terminal
friendly,
but
for
building
functions.
So
k
native
has
this
kn
concept
and
that's
a
really
great
cli
to
do
this.
We
also
need
new,
ci
cd
tools.
You
know
you
need
to
be
able
to
test
and
make
these
synthetic
events
around
the
things
that
your
application
is
programmed
against,
and
that
can
be
things
that
are
hard
to
kind
of
simulate
with
a
you
know,
a
jenkins
job
or
something
like
that.
A
It
just
doesn't
work
and
then,
because
these
functions
are
kind
of
running
all
over
the
place,
you
need
ways
to
aggregate
the
logs
look
at
errors.
So
it
starts
to
more
kind
of
resemble
like
a
mobile
application
that
you
have
deployed
out
on.
You
know,
hundreds
of
phones.
You
know
the
error
cases
they're
kind
of
roughly
the
same,
and
so
we
just
need
new
tools
for
this,
and
so
nina
is
going
to
talk
us
through
kind
of
the
the
current
state
for
functions
as
a
service
and
where
that's
going
in
a
quick
demo.
D
Thank you, Rob. Hello, everyone, this is Naina, and in this demo I'll explain what it means when we say that we see serverless as a deployment model and serverless functions as the programming model. We will also see how we can leverage the Knative Eventing component. For writing the serverless functions in the local environment, we would need kn and Docker installed.
D
It's asking me which project path I want; I'm going to keep the same. Notice the default runtime: this comes with prepackaged runtimes like Node and Quarkus, and we want to use Quarkus. The default invocation for these serverless functions is HTTP, but we want to use CloudEvents, so I am going to choose that.
D
Let's
see
the
function
has
come
up
with,
so
the
function
construct
has
created
the
directory
structure
for
me
and
some
files.
So
if
we
can
go
there,
you
can
see,
we
have
three
classes
and
function.java
is
where
we
would
be
writing
our
code.
The
business
value
that
we
want
to
do
now,
let's
make
this
function,
do
something
such
as
translating
an
english
word
into
spanish.
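The demo's business logic isn't shown in detail on screen, but a translation function of the kind described could be as simple as the following sketch, with a tiny hard-coded dictionary standing in for a real translation call:

```java
import java.util.Map;

public class Translator {

    // Hypothetical mini-dictionary; a real function might call a translation API.
    private static final Map<String, String> EN_TO_ES = Map.of(
            "hello", "hola",
            "world", "mundo",
            "thank you", "gracias");

    /** Returns the Spanish translation, or the input unchanged if unknown. */
    public static String translate(String english) {
        return EN_TO_ES.getOrDefault(english.toLowerCase(), english);
    }

    public static void main(String[] args) {
        System.out.println(translate("hello"));
    }
}
```

In the Quarkus function template, logic like this would live in the handler method of the generated Function.java.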
D
This is our OCP console. If I look in our topology view, our function is coming up here; there we go. Now, if you go in and add an event source...
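One common event source used in demos like this is a PingSource, which emits a CloudEvent on a cron schedule to a sink; the schedule, payload, and target name below are illustrative, and the API version may differ by Knative release.

```yaml
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-every-minute
spec:
  schedule: "*/1 * * * *"              # fire once a minute
  contentType: application/json
  data: '{"word": "hello"}'            # illustrative payload
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: translator-function       # hypothetical function service
```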
D
There were no events received, so it was terminating; but then an event came in, so it's starting again. As you can see, it does the autoscaling for you. And here we go: we are connected to the Knative event source, we are getting a CloudEvent, and our result is the translation of the word "hello." That is all for today; I will stop sharing, and back to Rob.
A
All right, thank you, Naina, that was awesome. So, wrapping it up: we looked at functions and how they work, but now let's zoom back up into the multi-cluster layer. As I mentioned, you'll see that we've got some layers here, but they're blank, because you really don't care about them. At a functions-as-a-service user experience, you're going to be maybe in an IDE, you're working with your CLI, and you don't really care about the concept of clusters. You just want it to work.
A
You
just
want
to
run
five
or
six
copies
of
your
thing
or
if
you've
received
five
or
six
events,
you
want
to
spin
up
some
copies
of
it.
You
don't
really
care
which
nodes
it's
running
on
all
that
stuff,
and
so
it's
a
really
a
different
and
more
powerful
way
of
thinking
about
your
infrastructure,
but,
as
we
talked
about
it,
does
require
new
tools.
A
Maybe
if
you've
never
heard
of
fips
mode
for
encryption.
If
you
haven't
that's
good
on
you,
but
this
is
the
type
of
stuff
that
it
matters
deep
inside
the
operating
system,
but
it
impacts
us
because
we
want
all
of
our
clusters
to
be
compliant
and
that's
just
one
examples
of
many.
We
talked
about
fast
packet
processing.