Description
Get your espresso ready for the EMEA OpenShift Coffee Break as we welcome our special guest Edoardo Schepis, Solutions Architect Manager at AWS, to share experiences and talk about how Red Hat OpenShift on AWS (ROSA) is helping customers in their modernization journey towards cloud models and cloud native patterns.
Twitch: https://red.ht/twitch
D: No, no, I would be very happy to introduce myself. First of all, thank you for having me. I've actually been working at AWS for two and a half years now, as a Solutions Architect Manager, managing a team of solutions architects here in Italy; I'm based in Milan.
D: Well, really briefly: I started my career at Sun Microsystems, where I met Natale at some user groups. He's missing in action at the moment, but we hope he'll join, yeah.
A: Okay, so you are now here in your new role at AWS. And what about your topic? The topic you're going to speak about today is something related to OpenShift, I suppose.
D: Yeah, indeed — it's related to both Red Hat and AWS. It's about ROSA, that is, Red Hat OpenShift Service on AWS. It's an interesting offering that Red Hat and AWS launched together more than one year ago. I will go through some slides — you'll lose my video, but I hope you can see everything.
D: Okay, so this is a really quick introduction to ROSA — to Red Hat OpenShift Service on AWS — and I will also go through some kind of demo; hopefully the demo gods are with me. We'll look at the CLI, the web console, and an application that I just deployed.
D: First of all, why Red Hat OpenShift? This is one of the first questions that I receive — that we receive from customers, and I think Red Hat receives it from customers too — and usually the question is about why we have so many options.
D: We have a lot of different options because we want to target different personas, with different users and knowledge levels. For example, in analytics, in databases, in machine learning, in IoT, we have different options, because one tool doesn't fit all — one service doesn't fit all. So we think it's better to have several options and let the user choose. Regarding containers, we have ECS — that was the first option, the first service that we provided.
D
D
So
it's
very
simple:
it's
for
users
and
companies
that
are
starting
to
approach
containers
they
probably
are
want
to
reduce
the
time
to
build,
deploy
and
really
having
the
focus
on
containers,
and
they
are
already
on
aws
in
so
it's
it's
the
first
step
that
we
see
or
for
for
users
and
companies
that
are
already
using
aws
to
approach
container
world.
Then,
of
course,
we
have
the
qnedis
offering
that
is
more
for
users
and
companies
that
are
starting
are
coming
from
the
community
from
the
open
source
world,
so
they're
already
using
kubernetes.
D: They want to use Kubernetes and they appreciate, of course, the flexibility that Kubernetes provides, and at the same time they would like to leverage the infrastructure and the options provided by AWS. There is actually another option, which has been available for, I think, more than two years now: deploying OpenShift on AWS in a self-managed way.
D: So the master nodes, the infra nodes, the compute nodes — both the control plane and the data plane — you can deploy on EC2, but you have to set up, of course, the Virtual Private Cloud — the VPCs, the gateways, the auto scaling groups, all the subnets, private and public — and then you have to set up all the storage behind that and the IAM policies.
D: So all the security layers. That is easier with the Quick Start, but of course it's not an immediate job; it's not "as a service". So the next step that we thought of — what the agreement and collaboration with Red Hat is bringing — is actually ROSA. ROSA is a managed service, first of all: a service that you can use while focusing on your containers and applications, letting AWS and Red Hat manage and support the underlying infrastructure.
D: So you can reuse your OpenShift skills. You can leverage what you already have on OpenShift — the tooling and the knowledge — but you can, let's say, hand all the heavy lifting, all the infrastructure-layer work, over to the support of Red Hat and AWS. We will see the details in a moment about what we mean by support and how we and Red Hat collaborate on that.
D: So it's really an easy way to continue using OpenShift while leveraging the cloud opportunity and the cloud offering — and we will see other advantages of this offering as well. And we couldn't have a call and an event like this without a diagram: this is how customers usually mix and match, or use, our offering. As you can see, you can integrate GitHub and Jenkins, for instance.
D: We have our own container registry, you can integrate CloudWatch for monitoring, but you can also use Fargate. Fargate is a technology that you can use with ECS and EKS to, again, let AWS manage the service, or the clusters — the EC2 instances — without the need to patch or monitor the underlying infrastructure layer.
D: So, focusing on that: if you use ROSA, you have everything from OpenShift — the Operators, the built-in monitoring, source-to-image, Jenkins or Pipelines, the built-in registry, of course the OpenShift ingress and all this stuff — and underneath you simply use EC2 and, of course, EBS or other options on the storage layer, plus the load balancing, as we will see in detail.
D: So these are the two main differences between what we offer otherwise and the ROSA offering, and I think you can see the value if we compare with OpenShift Container Platform by itself.
D: That's something customers use in their own environment. They install it in a customized way, they can do a lot of integrations, they can really customize anything on-premise.
D: But of course they are not leveraging the cloud operating model, let's say. Then we have OpenShift Dedicated, which is an offering by Red Hat, and Red Hat OpenShift Service on AWS — so, ROSA — where you leverage the cloud, the cloud computing model in particular. If you use ROSA, you have what I think is one of the main advantages.
D: That is pay-as-you-go. You have, also for OpenShift, the model of paying as you consume, as you go, and this is really interesting for many customers we talk with: you don't need to buy a yearly — or, let's say, a multi-year — subscription.
D: Okay, thank you. So, as I told you, the support and the management are handled by Red Hat and AWS. What does that mean exactly? This is a really simple table, a picture of how it is composed — I mean, how the support is split between us and Red Hat, as you can see, with ROSA.
D: For the support of the control plane and the compute data plane — for support in general — you can open a ticket with Red Hat or with AWS, and then we manage it with joint support to understand where, let's say, the issue is. And the billing: that is very interesting for those that already have billing and workloads on AWS. They keep the same billing mechanism with ROSA too, so you see your consumption and you can control the cost.
A: There is a question from Phil — first of all, Phil, thanks for joining; Phil King. The question is: in an on-demand situation, how agile is suspend and resume for ROSA?
D: You know — once you have ROSA installed, of course you have all the master and infra nodes always running; then you can, of course, hibernate, let's say, or terminate the worker nodes.
D: That's where the pay-as-you-go mechanism applies. Regarding the cluster itself and the master and infra nodes — as far as I know, and maybe I'm wrong here, there's no way to suspend them; or maybe there is some kind of snapshot you can take and then restart from. But okay, we can check.
C: The next question is about spot instances — spot pricing. How would these three options on screen handle spot pricing? I know you can handle it with the machine autoscaler on some, but I think it was last release that ROSA added support for spot instances when creating a machine set.
C: So all the extra machines that can be set to auto-scale can use spot pricing. But I tested that for, let's say, two weeks on a pre-production environment, and the workloads really need to take care of the nodes coming and going: during those weeks I had over 300 instances used as nodes, so the nodes really come and go. They are cheaper — around 60% cheaper — but there might be pain for developers who try to use the environment, so that really requires extra work from the deployment point of view.
D: When you consider spot instances — yeah, of course it's not for all workloads. We usually position spot instances for HPC, for test and dev, but also for stateless applications.
D: Anyway, because of the layers that you have in between — I mean, between EC2 and the application — the application should be, I mean, safe. Spot instances are one option; the other one is Graviton, Graviton2: you can leverage the price/performance advantage of Graviton2, which is the Arm offering for CPUs. That's another option for saving, and it's available as well.
D: Well — first of all, ROSA has been available for, I think, one year now; a bit more.
D: We have a roadmap, and we are releasing more and more features that are already available in OpenShift Container Platform, which many customers have on-premise — OCS, or what do you call it now? It's Data... let me remember.
D: That is something that is different, and there are others. We will see in the console that you have the add-ons — you know them very well — and of course we will have more add-ons, but what we mainly have on the roadmap is more integrations with AWS. At the moment you have the integration with CloudWatch for monitoring, but we don't have a full integration for S3. You can integrate S3, but it's a job you have to do — it's not automated or something that you get in the console from OpenShift itself.
D: I remember — it's ODF. I don't know if, Fabio, you have other differences?
A: No — I was thinking that probably Tero can testify more, but the more you automate the application lifecycle in terms of CI/CD pipelines or GitOps best practices, the fewer issues you will encounter. So if you standardize the application development lifecycle, it will be less challenging to bring the same application across, beyond the differences you already mentioned before — some specific implementation on storage, or OpenShift Data Foundation, or some specific application release that is not supported, or is supported on only one of the two. Otherwise, the more challenging and risky the migration from one of the two towards the other will be.
C: Yeah, that's a really good question, but I think that from a developer-workload point of view there are not that big challenges, if you have a modern workload, since it's still the same Kubernetes — it doesn't matter whether it's ROSA or OCP.
C: I think the biggest — I would say cultural — change might be to understand networking and regions: how do you deploy, how do you replicate the data centers that you maybe have on-prem onto the public cloud? Do you have single-availability-zone clusters, or do you have multiple regions? How do you collect logs? This kind of overall architecture, and how you expose services, both on AWS and on-prem.
C: Do you go outside to reach something, or do you use PrivateLink or something, and cross-region — this kind of networking. And also, if you already use OCP on-prem, there are some features you cannot use: the networking is not the same — you have OpenShift SDN, you cannot select OVN for now — and you can only create machine sets using the ROSA CLI; you cannot use GitOps to manage that. So there are some differences, but for the main workloads, and exposing workloads from the cluster, it's still OpenShift.
D: Yeah — very complete, thank you, and thank you, Fabio. Actually, if you're already using AWS, you're used to doing an appropriate and accurate cost calculation for all the data transferred between regions, for example, or for the definition of subnets. You actually have three options for ROSA: public, private, and PrivateLink. So that's something that sits more on the operations side — I agree — including in day-two operations.
D: Also, you can leverage CloudWatch and the other tools — yeah, of course Prometheus, Grafana and other tools as well. On the developer side, I agree with Tero, and I see fewer issues in moving to the cloud. Of course, it's a managed service, so you can focus more on the application.
D: So, in one slide — as I said, we've already covered many of these. What is really interesting to me — and I think we will see a lot more value coming — is the integration with the native cloud services that we already provide.
D: So if, from the application, you want to use S3, if you want to use Lambda for serverless, if you want to use some database as a service or some machine learning tools and services, I think it's very powerful, because you are closer to those services and it's easier. But one advantage that I want to mention — and customers are really looking at this, maybe because in Europe, or in Italy, it's a particularly sensitive topic — is the regions. ROSA is available also in Italy, for example, in the Italian region in Milan, and in many other regions.
D: You know that we have a lot of regions — we will see how to check in which regions ROSA is available — and we are launching more; many others are coming soon. It's interesting because customers want to keep their data local in some cases, for the policies and regulations that exist in their countries, yeah.
D: And in certain respects the latency, in a way — in some cases it can also be helpful to have the application closer to the end user. On that side, of course, we leverage — sorry, that is more like CloudFront, so you have the edge.
D: You have the edge locations, so it's easier to have a CDN supporting you. But again, the regions — and the fact that you have multiple Availability Zones in one region, and one Availability Zone is actually multiple data centers — give you the reliability and high availability of the cloud, while at the same time the data stays in your region. That's the main advantage of ROSA at the moment.
D: Yeah, okay — just this slide, because this is what you see when you want to enable ROSA and start working with it in the AWS console. You access your console, you look for Red Hat OpenShift Service on AWS, and you have that button on the right that says "Enable OpenShift" — so you have to enable it.
D: Of course you need an account on the Red Hat side too, and then you have to download the rosa CLI and you can start configuring permissions. Okay — I won't do that from scratch, mainly to save time; I mainly want to show you the console.
D: I hope it's clear — big enough to see. Okay. Then, when you download it, you can start, and there is a really nice workshop that I suggest — we can go through it — called the ROSA workshop. This workshop guides you through the steps: how to set up, how to deploy the cluster, how to create the users and all the stuff, including deploying an application as an example.
D: Using the CLI, as I told you — let's make it bigger. Thank you. Okay, so: rosa list clusters — and we have the clusters that I already have at the moment; that is my rosa2. This is the ID, and it's in the ready state, so I already deployed the cluster.
D: We have the console URL and the region — it's in eu-west-1, so the Ireland region. We have three master nodes, two infra nodes and two compute nodes. We have the CIDRs of the service, machine and pod networks, and then we have the details page — the cluster manager detail page — that we can access. In order to access that, we have, of course, to create an account — sorry, a user — in the console, and I'll show you where and how.
D: So, if I go to the IAM service, where you can manage the resources, create the policies and create users, user groups and all the stuff, you see that I created a user here — the rosa user. That is what I created. Then there are many other users created by default by the installation, and they are used internally.
D: But there are also users that you need to create at the very beginning. It's very simple: you just need administrator access and, of course, you need to copy the security credentials and all this stuff, because you need them when you use the CLI. And then, just to show you what happened in the EC2 environment: you have really a decent number of EC2 instances that have been created by the installation and setup of the cluster.
D: Okay — I don't know why I'm on Virginia; let me switch to Ireland.
D: Okay, here, as you can see, there are the three masters, the two workers and the two infra nodes. By default you have a certain size for the machines, so we have r5.xlarge for the infra nodes, m5.xlarge for the workers, and m5.2xlarge for the masters. Here you can also double-check, if you will, the different instance types available, so you can see what's available and the details. Okay.
D: So, first of all, when we say r5.xlarge, that is the memory-optimized family: four vCPUs, 32 GiB of memory, and so on. And, as I mentioned before, you can also get the list of the regions where ROSA is available, and you can see there is Milan.
D: And many others — so it's interesting, with multi-AZ support for many of them.
D: So let's use this one. And here — you didn't see it, but I had to create (and you can see it in the workshop) a specific admin user for the cluster, and this is the OpenShift console. You see — I mean, if you're used to OpenShift, it's quite easy: you have the cluster utilization, the memory, the file system.
D: As I told you — for example, here you have the resource usage, you have details about the cluster, and, as I told you, there are the add-ons. So I installed the cluster logging operator, which connects the logs of the platform with CloudWatch.
D: If you're used to having CloudWatch as your single monitoring tool for all the resources you have in AWS, you know that you can do a lot of things there. You can access your log groups — you have the infrastructure logs, the application logs and some audit logs — and you can create your dashboards, you can search logs. I mean, if you're used to working with CloudWatch, it's pretty simple, and you create some dashboards.
D: Some metrics, monitoring the events, and also correlating those events with your logs on the application side — that's really powerful stuff.
B: Oh, Edo, that's cool. We've seen the Hybrid Cloud Console monitoring and controlling multiple OpenShift clusters — so you can control your ROSA cluster, and you can have, you know, multiple ROSA clusters connected to the Hybrid Cloud Console, so you have a kind of inventory of all your clusters. That is cool for, you know, a multi-cluster strategy, if you will, in a SaaS way. That's interesting, and yeah — for me it was pretty interesting. And how techie Edo is!
D: At the very beginning it was RHEL 7, still not very popular; Linux containers weren't available yet. And I remember that many customers told me, "but it's like ZFS with Solaris", and so I was back to my old times at Sun Microsystems. So I used to work with the CLI, and that's still my passion — just to answer your comment, Natale. But that's everything I can do at the moment for ROSA.
D: ...in Google — but you know, it's better to have a proper answer from the documentation. I think that there is a delay — and there will always be a delay — between the availability of a new release of OpenShift and ROSA updating to it.
D: Well, basically: first of all, as a user you can open a ticket with either Red Hat or with AWS, and we then manage the case, dive deep, and do the troubleshooting and analysis of the case, and we assign the right SREs and the right support to manage it. That's the main thing — I mean, we put the best of the two companies on the support side, and I think you know very well...
D: ...how good the support is — it is probably the main asset of Red Hat in providing enterprise subscriptions — and you know how we care about our SLAs and availability. I mean, here we are talking about 99.95 percent availability for this platform, so we are very careful about that. So, just to answer you: it's really something that we can manage on your behalf. And on the support side, of course, there are so many other things — I wanted to find...
D: Maybe it's not here. Okay — yeah, on the product page you will see the matrix details. Just let me bring back the slide where there was that link, so we can dive a little deeper here. Okay, yes, sure — this one. Because, I mean, saying "support" doesn't necessarily mean...
C: Let's say that for the last three months in production — and like six months total — all the support tickets that I have created have been related to running something on OpenShift. So I have created all the tickets directly with Red Hat, since I haven't encountered problems where it's easy to say "this is an AWS infra problem or a networking problem" — it has all been related to workloads on ROSA.
D: What is important to me is the shared responsibility approach. I don't know if you're aware, but we usually start talking about this with our customers from the beginning. There is a shared responsibility matrix that you need to take care of very carefully — just to let you understand the concept.
D: It's this one: we care about — we are in charge of — the security OF the cloud, and the customer is in charge of the security IN the cloud. So we are responsible for protecting the infrastructure, and the customer is responsible for how to protect their data and their applications — of course, using our tools; we provide a lot of tools for that. And that also applies to the abstracted services or, let's say, the managed services.
D: So you can see where our responsibility finishes — where the boundary is between us and the customer's responsibility for the data — and here, in the case of ROSA, Red Hat is of course part of it as well.
A: One more, related to the use cases, if you can provide some — obviously from your personal experience: are you seeing some industries or some specific use cases where ROSA fits better, compared to just using self-managed, on-premise OpenShift?
D: Yeah, okay — definitely many companies are already using OpenShift, and I see many companies in the FSI segment, in the FSI industry. I see companies that are already using OpenShift also in the public sector, in public administration, and I know very well that you see them as well.
D: Why they are considering ROSA is because they are embracing the cloud computing model and they want pay-as-you-go also for those workloads that they have on-premises on OpenShift, for example. So they like the fact that they can have a pay-as-you-go model for OpenShift. The other thing is the flexibility that they have in the cloud versus on-premise: to have an OpenShift environment in some companies, you need to request the virtual machines.
D: You need to prepare an internal budget proposal, you need to define a chargeback... I mean, it's hard. It's more an operational-model problem than a technology problem, of course, but in the cloud it's easier, it's more flexible: you can set up a cluster, you can start working on it, you can start testing — and it's still OpenShift. So for developers it's very powerful to have an easy way to set up a new cluster and start working in it.
D: Like in the data science environment — I saw that in the past you also had an event like this, a coffee break, about it.
A: Yeah, we spoke about Red Hat OpenShift Data Science, which is a managed service. On ROSA it's not there yet, but on the way — also for ROSA — is the support for GPUs. So that means ROSA will fit the AI/ML workloads more and more, as you mentioned.
C: You asked why you would use ROSA instead of on-prem: I would say that if you're already an OpenShift user, it's like a no-brainer if you need to run on AWS and don't have the resources to do the ops part, the day two. It is really easy to get started, and with the Security Token Service (STS) integration it's really easy to integrate with IAM, so that you can actually run containers with an IAM role and use it for access. So it's really good integration.
C: But I would say that the proper comparison is why you would run EKS instead of ROSA — because if you're going with ROSA, you're probably an AWS customer, and then you have to compare what you'd like on top of EKS. If you just run Kubernetes, and you are good with Kubernetes, you have the containers.
C: So I think that is the proper comparison — EKS versus ROSA. And when you need things like more developer access, control of production access to logs, probably cluster-reading access — ROSA has that built in. So you will save a lot of money by not maintaining those on top of EKS.
D: But now, you know, everybody's a developer, actually, because with infrastructure as code, even in operations I see people developing code so much. So actually — I mean, there's no "dev" and no "ops"; it's DevOps, Natale.
B: It's true, it's true. I like what you said — everyone is a developer. So yeah: infrastructure as code now, GitOps, and all of Kubernetes and all those.
B: An infinite discussion! Well, that's cool, and I like what Tero also said: you can choose what you need, and that's a valid use case — having ROSA helping you with multi-tenancy. If you have teams — developer teams, or people that need to join — you can filter, you can have your granular RBAC rules, and that helps a lot. Another cool thing — and I think it's coming more and more — is the integration with the AWS services, right?
D: Yeah, definitely. I mean, it's something that you can leverage more easily if you are on ROSA. You can do it anyway, but it's easier because you're already on AWS. For databases, for example, you can leverage several options like Aurora or DocumentDB, DynamoDB, or even the graph database Neptune, or the ledger database QLDB, or Timestream.
D: I mean, there are so many options. And the storage side is also interesting, because you're probably already familiar — maybe because you tried to install OpenShift on AWS — with EBS, S3 or EFS, but you have, of course, other options now, with the NetApp ONTAP integration and other offerings. So it's cool.
D: I mean, you can expand the art of the possible. It's not in the console — you don't have, in the OpenShift console, the ability to select those services, or maybe not yet; I don't know whether it's on the roadmap — I don't have a clue at the moment about all the details — but it's really easier, if you are on ROSA, to integrate them.
B: You mentioned we're adding more and more cloud services — for example, we have a new service for attaching databases called RHODA, so from the OpenShift console you can attach a database with a component called the Service Binding Operator. Basically, you integrate a library in your software, and that helps in automatically binding to a remote database — a SaaS database like RDS. At the moment it's not there yet, but we are building this ecosystem of databases, to name one, yeah.
B: Do you have additional slides you want to close up with?
D: Actually, that's everything I had prepared for the moment. I had the diagram with the details of how the worker nodes and the master and infra nodes are balanced, with the internal Elastic Load Balancer and the Network Load Balancer — those are a little bit the details, but I mean, it would literally take one hour just for that. One last thing that I'd really like to share, if I find the slide, is about the certifications — its compliance with PCI...
D: ...and that kind of thing; the partners that are already using and proposing ROSA — Accenture and IBM, of course, and others — and, last but not least, the Well-Architected Framework. I really encourage you — if I find this slide... okay, I don't find it. Okay: every time you set up an architecture on AWS — or anywhere in general, let me say, but mainly on AWS — take these pillars into account. We even have a tool that you can use to assess your application and your infrastructure.
D: It's based on those pillars: you can go to the console, you can define the workload, and there is an analysis of it. This is very important in general, and it's important to review it periodically, because across operational excellence, security, reliability and cost optimization, it's really interesting how many optimizations you can find for the future.
D: So that's the only thing I wanted to add.
B: Yeah — in fact, in our console, in the Hybrid Cloud Console, we have, you know, performance monitoring and cost optimization for all your ROSA clusters or other OpenShift clusters, so those are very valid pillars that anyone can take into account, yeah. So thanks, Edo. I would like to close up with some interesting follow-ups we have. I'm talking about books — Tero is not the author of this book, unfortunately, but we have an IT modernization e-book to share.
B: You can download it for free from this page. It's a story by Red Hat and AWS about how you can modernize your IT infrastructure with managed cloud services — so also talking about, of course, ROSA. This is interesting if you'd like to read it. I'm also sharing our landing page for ROSA which, by the way, links the ROSA console that we shared before, right?
B: So this is our landing page — here you find all the information about ROSA. And thanks to our Managed OpenShift Black Belt, Andrea, who shared the ROSA roadmap with us: we put the link to the ROSA roadmap in the chat. If you want to see the new features coming up, it's very interesting. That is the landing page for general knowledge. Another interesting one is this IDC report on how to accelerate agility with cloud services. It's interesting to...
B: ...you know, define KPIs and metrics for cloud adoption. And last, but not least...
B: Of course — the Developer Sandbox, which runs on AWS. It's not ROSA; it's a multi-tenant environment, free for developers, so to speak. I put a link in the chat; I don't know if that went through — let me check.
B: Yeah — so if you go to the Developer Sandbox for OpenShift, you get your free account to work with OpenShift on the cloud and you can start developing your application. As Tero mentioned, it's multi-tenant: you have your quota, you have your services, and you can now also bind databases with the Service Binding Operator. It's a really cool environment. Fabio, I don't see the chat — the link; I don't know if we missed it.
B: You have to come back! Of course, we have lots to say about cloud services — managed cloud services. As you know, modernization is in progress, so we have a lot to say about that. And to close up, Fabio: our appointment for next...
B: For next week we're going to have — we still have to confirm our guest — another "database as a service" show; we just had to wait for the final confirmation. But every Wednesday morning, here at OpenShift TV, we talk all about cloud native, OpenShift, architecture, cloud. We have fantastic guests, we have fantastic hosts and presenters. Thank you, Fabio; thank you, Tero; thanks everyone for joining today — and the appointment is: stay on OpenShift TV.