From YouTube: OpenShift AMA Panel Architects, Engineers and Product Managers at OpenShift Commons Gathering 2019
Red Hat Summit 2019
A
So the way this works is: you've been saving your questions all day, haven't you? All right, there are two microphones. If you want to get up, stand up, talk into the microphones, and ask your questions, do that. Otherwise I'm going to make every single one of these dudes and ladies introduce themselves and tell you what they're working on. So please don't be shy; come and ask any question. This is an AMA, so this is your opportunity here.
B
Right, so while our team's getting up here: my name is Joe Fernandez. I run the core platform team for OpenShift, and I'm also now in charge of OpenShift and Red Hat Virtualization. We have here a large group. These are OpenShift product managers, as well as members of our OpenShift engineering team, lead architects, and managers.
B
We're here to answer your questions, not only today but throughout the week. I know many of you have meetings scheduled in the Customer Briefing Center; we have a few hundred OpenShift customer meetings set up, and a lot of interesting things going on, so I obviously heard about some of it today. Before we do this, I wanted to actually thank all of our customer presenters today. That was amazing, hearing all these stories, so: a round of applause.
E
Earlier Mike talked about us being courageous enough to change things. We know that nobody ever installs a point-zero release, and so we had the courage to go straight to 4.1. Part of that was keeping the internal timeline moving: we actually had an internal 4.0, and 4.1 is a set of features that were always planned for 4.1. So we did a 4.0, and then we decided to soak and hold it longer. Yeah.
B
So the version number for the first GA will be 4.1. You should expect that to be available about two weeks from now in the channel. The beta that many of you participated in, that was 4.0. Then, after 4.1, we're going to get back onto our cadence of releasing every three to four months, so 4.2 should be end of August or September, we're trying to get a 4.3 out around the end of the year, and then you'll see that continue into the new year. All right.
B
And so Tekton is a new upstream project around building cloud-native, or Kubernetes-native, CI/CD capabilities right into the tool. That's a project that we're really excited about and really investing in. We've been shipping Jenkins since the start, and a lot of customers are using it. The other thing I remind customers is that there's a ton of choice in this area; there are so many different CI/CD tools and so forth, and so you're not limited.
B
Just to the tools that we ship, that is. Most customers are bringing their own tooling, whether that's Jenkins, or Bamboo, or TeamCity, or GitLab; all sorts of different tools. So what we're really trying to do is make sure that we can integrate nicely, regardless of what you're choosing for CI and CD services. But if you want to know the direction we're going, check out the Tekton project; I think you'll see a lot of where we're headed.
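As a rough illustration of what "Kubernetes-native CI/CD" means in Tekton: pipeline steps are expressed as ordinary Kubernetes objects. This is a hedged sketch; Tekton's API was still alpha at the time of this talk, and the exact API version and fields have changed across releases.

```yaml
# A minimal Tekton Task: CI steps defined as a Kubernetes resource.
# API version and fields are illustrative, not tied to a specific release.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-and-test        # hypothetical example name
spec:
  steps:
    - name: test
      image: golang:1.12      # each step runs in its own container
      script: |
        go test ./...
```

Because a Task is just another cluster resource, it can be created and versioned with the same tooling as any other Kubernetes object, which is the "built right into the tool" point made above.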
H
B
Will be supported by Red Hat, right. Okay, yeah. And so also, I know some folks lived through the v2-to-v3 migration. Obviously, in v2 to v3 we changed everything: we completely rebuilt the platform and moved everything to Kubernetes and containers, so there was no easy way to automate that migration. But v3 to v4, it's a Kubernetes platform.
B
It's the same containers; everything that you're doing on v3 should work in v4, because the innovation is all around how we operate the platform and the services on that platform. As Maria mentioned, there's an OpenShift migration tool, which Maria is the PM for, so feel free to ping her later. It's going to allow you to automate the migration of 3.x apps onto 4.x and so forth. It's not in-place; it's not an upgrade.
B
Right, actually, there's a roadmap session. Yeah, so we're actually repeating that session twice; I know last year some people couldn't get into the OpenShift roadmap session. So the OpenShift roadmap session should be on the agenda two times, and we'll be demoing the migration tools in that session. All right.
K
I'll actually jump back to that last question just a little. Hi, I'm Eric Paris, one of the architects. We discussed the tool that's going to help migrate applications from 3 to 4, but why did we do it that way? I'm sure a lot of people are wondering: why did we not provide that upgrade path? And it's because of what Joe mentioned: the operational characteristics of everything are completely different. It's completely different, right.
K
The way that you configure the cluster and run the cluster is different, and we had more concern about trying to create an upgrade path from 3 to 4 that would actually break your cluster, and leave you in a state that was difficult to recover from, than we thought there would be for customers standing up a second cluster that they could get up and running, understanding the new operational ways to interact with OpenShift 4, and then moving applications bit by bit from one to the other.
E
Actually, it's been a common request for quite a long time to make sure that there's a path so that applications can move more seamlessly between clusters. So rather than doubling down on something that had that higher risk profile, making sure we could ensure you have a successful 4.1 launch, while also spending that time invested in tools that actually help you move between clusters, has more benefit in the long run, whether between 4.x clusters or between older 3.x clusters and 4.x clusters. So, exactly that.
B
The migration tool isn't just going to be helpful for three-to-four migrations. It'll also be helpful for four-to-four migrations, for customers who can't upgrade to every single release. Say you have to go multiple releases: Kubernetes forces you to go sequentially, and that's not the norm for a lot of customers. So I think that'll actually help us even beyond the three-to-four migrations. Okay, more questions.
C
B
You know, I think that's been our focus, and with 4.x we're trying to make it easy to operate the platform across different environments. Obviously, one of the things you lose when you go to bare metal is a lot of the automation of your virtualized environment: automated compute provisioning, automated storage and networking, and so forth. Check out the keynote demo tomorrow.
B
We're going to show what we've been doing to bring a cloud-like experience to bare metal, so that operating a bare-metal cluster has all the same, or similar, characteristics to operating in a virtualized environment. But at the end of the day the choice is yours as to where you want to run it, and many customers here in the room, and in our customer base, are running it in multiple environments. Obviously that's great for us, because it means we're really living up to this mission of being a hybrid cloud solution.
E
To add to that: a lot of the benefits that we see are in the long run. Virtualization has a lot of advantages and running on bare metal has a lot of advantages, and so for us, we think Red Hat, more than anyone else, is really well positioned to run on all the world's hardware, with Linux and Red Hat Enterprise Linux and CoreOS. RHEL CoreOS actually takes all of the strengths of the RHEL hardware certification, and means we can do in-place cluster upgrades and all of that automated management.
M
You know, I would only add that it's very rare that a customer only has one infrastructure these days; they have multiple clouds or investments. The highest population growth we're seeing is in bare metal. It may be a smaller population, but its growth rate is faster than some of the other infrastructures out there. Joe mentioned the keynote tomorrow; Red Hat's in an unusual situation where we're very strong in infrastructure, right: we have our OpenStack investments.
L
B
So Derek showed this morning something cool that people may not realize, called machine sets. When you bring up that initial cluster, we want to make that super fast, so it'll bring up a highly available set of nodes: three masters and three compute nodes, kind of all configured the same. But then you can actually bring up machine sets. I don't know, do you want to talk a little bit about that?
N
I just want to make sure I understood your question fully. So are you saying you don't have a homogeneous fleet; you have some computers with different CPU counts and memory capabilities than others? Yeah, so that shouldn't be an issue. My background upstream is largely around resource management, so I spent a lot of time in Kubernetes making that possible for you. So I would say you should be successful in having heterogeneous pools of compute in 4.x, and I think we're doing more to make that easier.
N
So, Joe talked about how, on particular cloud environments, we're doing work to make it easier to provision and deprovision compute that can vary in its characteristics; that's one option. But what I was describing around scalability is that you could have different configurations for, say, accelerated instances versus normal worker instances, and you should be able to tweak how you configure those hosts differently. Another capability we have that was not highlighted too closely is the Node Tuning Operator.
N
So, depending on how you label your nodes, you can actually have an operator that applies tuning profiles to those hosts for you automatically. So I guess, from my perspective, we're doing a lot to try to make it possible to run heterogeneous node pools without issue, and we'll continue to invest in making it easier and making the system smarter about configuring that. The only thing you might want to do is change your system reservations for each node pool. Yeah.
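For context, the Node Tuning Operator mentioned here is driven by `Tuned` custom resources that match nodes (for example by label) and apply a tuned profile to them. A minimal sketch, with a hypothetical profile name and label; treat the exact fields as illustrative of the operator's CRD rather than authoritative:

```yaml
# Hypothetical tuning profile for a storage-heavy node pool.
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: storage-nodes-tuning          # example name
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
    - name: storage-tuning
      data: |
        [main]
        include=openshift-node
        [sysctl]
        vm.dirty_ratio=10             # example sysctl for this pool
  recommend:
    - profile: storage-tuning
      priority: 20
      match:
        - label: node-role.kubernetes.io/storage   # applied per node label
```

The `recommend` stanza is what ties the profile to labeled hosts, which is the "depending on how you label your nodes" behavior described above.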
B
Where I was going with machine sets is, sometimes you want to have specific hardware for specific services or specific applications. So now you can actually create different pools of compute that are optimized differently: these are my machines that run storage, these are my machines that run heavy AI processes, these are GPU-enabled, and so forth. And tying that even directly to operators and such is something that we're looking at.
N
You can define different pools of compute and then you can target workloads to those pools. So if you had, say, a GPU-accelerated instance in a cloud and you wanted to dynamically autoscale just that set, the 4.x capabilities that we have should make that super easy to do. So yeah, you should be fine.
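The pattern described above (a dedicated hardware pool plus workload targeting) can be sketched roughly as a MachineSet that labels its nodes and a pod that selects them. Names, the node label, and the provider details are hypothetical examples, not values from the talk:

```yaml
# Hypothetical GPU machine pool; provider-specific fields are elided.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: mycluster-gpu-us-east-1a      # example name
  namespace: openshift-machine-api
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: mycluster-gpu-us-east-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: mycluster-gpu-us-east-1a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/gpu: ""   # label every node in this pool
      providerSpec: {}                      # cloud instance type etc. go here
---
# A workload targeted at that pool via nodeSelector.
apiVersion: v1
kind: Pod
metadata:
  name: training-job                  # example workload
spec:
  nodeSelector:
    node-role.kubernetes.io/gpu: ""
  containers:
    - name: train
      image: example.com/train:latest # placeholder image
```

The node label on the MachineSet template and the pod's `nodeSelector` are the two halves of "target workloads to those pools."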
O
Hello, my question is more about cloud capabilities. The feature sets at least seem to overlap a lot, or be converging a lot, for things like Knative versus Lambda, or AWS Kubernetes services versus your own platform. So in the future, where do you see the platform going in order to differentiate yourselves, relative to just going to native services? Yeah.
B
So look, we've been competing with native services since we launched OpenShift 3. OpenShift 3 and GKE pretty much launched around the same time, I think. And so now, four years later, there are, I think, over 80 CNCF-conformant Kubernetes distributions. We're going to do two things, in my mind (others, chime in). One is:
B
It may be some time before you see Amazon support you on Azure, or Google support you on Amazon, right. So for us, hybrid cloud isn't just one cloud provider and an on-premise appliance. It's being able to work across all the major clouds, being able to work across different on-premise footprints, whether that's bare metal, OpenStack, VMware, what have you, and being able to operate at the edge. So building the best hybrid cloud, hybrid Kubernetes platform is one area where we think we already have a huge advantage, and we're going to continue to differentiate ourselves.
B
The other area is all the stuff we're building on top of the platform. So Knative, which you mentioned; the work we're doing on Istio; the work we're doing on CI/CD services; the work we're doing on the developer experience with our middleware team and, frankly, with our partners and so forth. All to build a really rich set of services and capabilities for end users, to drive consumption of that platform. Anybody else want to add anything?
E
Some of what we're trying to do is make sure there's a healthy open-source ecosystem that can support people wherever they are, whatever cloud provider, whatever hardware they have. There will always be trade-offs in picking well-designed services from a particular provider, and we did that for a really long time with Microsoft.
B
We know that wherever that innovation comes from, it's going to come from open source, and that's obviously aligned with what Red Hat's good at. Everything from our investments all the way down to the kernel and the infrastructure, up to middleware and application services, I think gives us good breadth of code, expertise, and capabilities to continue to differentiate. So yeah.
M
And we love it, right? We love having so many different Kubernetes investments from different vendors. Go to Indeed and do a job search: there's a lot of traction on this technology package that you're all involved in in this room. And think of the opposite; think if it was just us. So we encourage the cooperation, yeah, the competition. Actually, it's a great place to be. Thank you.
Q
B
You can do that now, and that's part of what we're doing. So check that out and give us your feedback.
R
S
Great question. In general, you want to build an operator so that it's defensive in nature and is going to fail safely as much as possible. That's kind of the foundation of this. And then we want to build in other ways of bubbling up status, from the SDK and our scaffolding up to users and admins, and then all the way into that metering component, where you're actually getting operational metrics around it.
S
For some reason this database that's being managed over here has hit some weird situation; it either needs to be escalated to the operator author, or at least the cluster admin, something like that. So it's kind of a multi-pronged approach there, but we're going to be moving a lot of innovation in the SDK forward really quickly on that type of thing.
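The "bubbling up status" idea usually surfaces as conditions on the operand's custom resource, which admins and tooling can watch and alert on. A generic sketch; the CRD, condition types, and messages here are hypothetical, not from any specific Red Hat operator:

```yaml
# Status a hypothetical database operator might report on its custom resource.
apiVersion: example.com/v1alpha1
kind: Database
metadata:
  name: orders-db                     # example instance
status:
  conditions:
    - type: Available
      status: "False"
      reason: QuorumLost
      message: "2 of 3 replicas unreachable; needs cluster-admin attention"
    - type: Degraded
      status: "True"
      reason: BackupFailing
      message: "nightly backup has failed 3 times in a row"
```

Conditions like these are the "defensive, fail-safe" signal path: the operator keeps the workload safe while surfacing exactly what a human needs to escalate.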
E
We want to make sure that there's actually a great channel between the platform and Red Hat support: to make case resolution faster, to help us collect data from the fleet when customers opt in to share that, and to give you access to early drops of the software so that we can get advance notice. You know, we've tossed around some ideas over the last year or so about making it easy to get new versions of the kernel that people can test on bare metal. That's part of the story around how we do bare metal better.
E
It's about ensuring that we're working with customers and users to make the software better, and that might mean, you know, sharing some data with Red Hat, sharing faults with Red Hat in a way that we can take and turn around, so the support team is better armed to answer questions about your environment. We'll talk more about this over the coming year, and it'll always be opt-in, of course, because earning customer trust is pretty critical to us.
N
The worst thing would be if you're just constantly fighting with your operator. So I would give that advice based on our own experience in the 4.0 development cycle: as Rob said, always think about failure modes, always ensure you can turn it off if it's not working for you, and try to keep them simple, honestly. Also:
T
As of now we are able to autoscale the cloud resources. I'm not talking about the pod autoscaling; in fact, I'm talking about node autoscaling on the cloud, where we would write our own launch configuration and then specify the autoscaling there. But do we have any plans of extending that to VMware, or maybe on-prem, in the future?
N
Yeah. So as Joe said, the API surface we presented today, if you look under the covers, is very focused on describing the characteristics of the compute you want to bring up and tear down, and not much else. So the set of platforms we can support for that will come over time; once that API is available, any of those platforms should be able to take advantage of it.
N
So we have a resource that we didn't show, called the machine autoscaler resource, and you say: okay, I want to scale this pool of compute within this bounding range. Once we have support for each platform (you should probably talk to a PM about priority and ordering), it should work the same everywhere, and work well.
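The "bounding range" resource described here looks roughly like the MachineAutoscaler below. The names are examples, and note that a cluster-wide ClusterAutoscaler resource also has to exist before any scaling happens; treat the exact fields as a sketch:

```yaml
# Scale one machine pool within a min/max bounding range.
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-pool-autoscaler        # example name
  namespace: openshift-machine-api
spec:
  minReplicas: 1                      # the bounding range
  maxReplicas: 6
  scaleTargetRef:                     # the machine set (pool) to scale
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: mycluster-worker-us-east-1a # example machine set name
```

Because the resource only references a MachineSet, the same declaration works unchanged on any platform once that platform's machine API support lands, which is the "same everywhere" point above.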
K
I think you also mentioned on-prem, and so some of the stuff you'll see in the keynote relates to that. They aren't going to show autoscaling, but it is something that has been considered: how can we, during low usage, actually power off machines? You've got real hardware; we can power it off and we can power it back on, the same way that in the cloud you'd just get new instances. We are trying to figure out how to apply the same autoscaler to bare metal, on-prem, as well.
U
E
I think in the short term you're going to find we're pretty opinionated about what the control plane looks like, so we're probably going to support three masters. Part of that is that we want to make sure (and actually Derek did not get to this in his talk; Alex, if you want to talk about the etcd operator) that we bring control planes deeper under the control of the platform, and we don't want to expose too much flexibility now, because that would prevent us from doing that later.
V
Yeah, that answered the immediate question. The reason we recommend the odd number is because etcd needs an odd number of nodes to operate optimally. You can operate successfully with an even number, but you lose performance that way. So, in the interest of conserving resources, we just scale the control plane to be the exact same size. In the future the plan is to eventually allow the control plane to autoscale as well; that work is underway. I don't think we have anything to announce about that now, but keep your eyes out.
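For background, the standard arithmetic behind the odd-number guidance: etcd needs a strict majority (quorum) to commit writes, so adding a fourth member raises the quorum without improving fault tolerance, while adding consensus overhead (the performance point made above):

```latex
% Quorum size and fault tolerance for an n-member etcd cluster
\mathrm{quorum}(n) = \left\lfloor \tfrac{n}{2} \right\rfloor + 1,
\qquad
\mathrm{tolerated\ failures}(n) = n - \mathrm{quorum}(n)
% n = 3: quorum 2, tolerates 1 failure
% n = 4: quorum 3, still tolerates only 1 failure
% n = 5: quorum 3, tolerates 2 failures
```

So three and four members survive the same single failure, which is why even sizes are wasteful.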
E
The longer-term trend is really that we don't want to force you to make operational choices that are meaningless to your success. We want you to focus on the choices that make a difference: workload scaling, hardware certification, network configuration choices, the choices that actually do matter. As much as possible, we'd like the control plane to be transparent, seamless, and automatic, and if it breaks, it's our problem, not yours.
W
I have a question for you as it relates to the conversion from converged infrastructure to hyper-converged infrastructure. I was talking to my Nutanix guys, and at one point in time the Red Hat OCP team was talking to Nutanix, and some of us are trying to decide, as we do our on-prem hardware selections going forward, where they're going to be. It seemed to stop when the IBM purchase started; or was there another reason before that? Is it related to Nutanix? No?
B
So, first of all, we have OpenShift customers running on Nutanix. We have OpenShift customers running on VMware, OpenStack, and other platforms. The thing is, there are platforms where RHEL is certified, where we actually have a relationship with the partners such that we can jointly troubleshoot all the way down to the OS and its integration with the infrastructure, and then there are platforms where we don't have that relationship, or we don't have that level of integration.
B
That's been the case since the beginning of OpenShift and so forth. So it really has nothing to do with the IBM acquisition. It just has to do with each provider's level of integration, both from a technology perspective and from a partner perspective, with Red Hat and our joint engineering efforts. So yeah.
X
My question is on observability capabilities. As I understand it, OpenShift, Kubernetes, and Istio have different levels of observability, and the operators have metering and so on; also, I know a lot of observability tooling exists. Is there a plan to consolidate and provide a kind of unified view?
Y
Within the console there is absolutely already work underway to give you kind of a snapshot view of everything that's happening in the cluster at a given time. That being said, with certain things like Istio specifically, you may end up with a lot of observability data that could actually give an individual with malicious intent more understanding of your application than you want just anybody on the cluster to have.
Y
So there is actually good reason why certain components here are segmented in such a way that it's not just a big-picture view where you see everything all at once, and why there is a lot more nuance to the views in which those are presented. I think it really just comes down to: are there security concerns about exposing that data to a more general audience of cluster users? That is really what defines when you will see big-picture things and when you get a really deep drill-down.
X
E
So, there's a number of items underway to allow the monitoring on the platform to integrate more closely with the UI. There is some role-based access, so that segregation of certain metrics is visible in the console. I don't think we're going to get quite as sophisticated around that aspect until, I would say, Istio is a normal part of every cluster. We actually want to focus on making sure that the security boundaries are respected and that the core is stable.
Z
In the morning demonstration of OpenShift 4, we could see a feature where you could remotely see your clusters and which versions they were running. I have two questions about that feature. The first question is: will that portal enable you to remotely trigger an upgrade in that cluster? And the second question is whether this feature will be available for partners that resell OpenShift to consumers.
B
Yeah, so what you're referring to is going to be at cloud.redhat.com; it's our OpenShift Cloud Manager, or Cluster Manager. And so yeah, the whole idea is to allow you to remotely register running clusters, launch the installer to provision new clusters, and then actually be able to see what versions each of your clusters is running, so that you can decide when to upgrade and trigger that upgrade.
B
It is a hosted service. It leverages telemetry in the cluster to send information back. Obviously it's optional, so if you're in a disconnected environment or offline mode, you wouldn't use that; but if you're in a connected environment it allows us to keep tabs on things. We are also looking at how we can take that capability, for those offline customers, and package it as something that we could deliver for you to run in your own data center. It's all architected for that: it actually runs on OpenShift, and it's architected as containers.
B
So that's something to look ahead to; initially it'll be available as a hosted service. And yeah, we're very open, if you're a partner that's delivering OpenShift as a service to your customers, to talking about how that could work. We don't have anything to announce today, and no concrete plans, but that's something that we want to discuss with partners.
B
You know, what's the best way? Is it just to use the same service that we use, maybe with some branding or whatever? Or is it better to deploy your own instance of it, once we have it as a deployable thing? Those decisions haven't been made yet, but those are things that we're exploring and so forth, and we would love your feedback.
P
To really put a point on that: with OpenShift Dedicated, basically the clusters that we have that Red Hat is managing, you will be able to remotely trigger upgrades for those clusters. For OCP clusters that are self-installed, that you're managing, that's a feature we're looking at right now; but at the moment, yeah, you won't be able to remotely trigger upgrades for OCP clusters yet. Yeah.
Z
The scenario is a CSP provider that delivers clusters to all their consumers, and that feature would allow you to keep a tab on what's happening where, and what the consumer is doing. In the case that the customer decides to manage the cluster themselves, you can say: you know what, you are on a version that has critical security fixes; you should upgrade as soon as possible, et cetera. So it would allow the CSP provider to better service their consumer. That's it, yeah.
B
It's a great idea. Again, the implementation is something that we need to discuss with partners like yourself, in terms of tying into the infrastructure that we have in place for cloud.redhat.com, or deploying multiple instances. I think today we have to do more digging and have more conversations to figure out what's going to work out best for us and for our partners. Thank you.
AA
All right, first I want to address the gentleman who had the question about IBM's acquisition having anything to do with your plans and Nutanix. We would very much like Nutanix to be supported as well; I represent Power Systems, and we have a Nutanix offering on Power as well, so I just want to dispel that notion. Moving on to my question: we do get feedback from our clients that they'd like to mix different architectures; they want to place workloads appropriately.
B
So today, predominantly, where OpenShift runs is x86 architecture. We do have support for OpenShift on IBM Power, and we have an entire team, called the multi-arch team, that's focused on multi-arch capabilities: not just for Power, but exploring things like Arm, even Z, and so forth. We think this is an area (again, I can't really say anything about the acquisition until it closes) where:
B
With IBM as a partner today, and down the road, we look to collaborate more on the architectures that they care about. But yeah, it is something that, as customers adopt a more diverse compute infrastructure, we get asked about a lot, and we're very open to figuring out how we can do more in that area.
AB
At the moment, with our three clusters, we run a stretch architecture, so we use AWS as well as bare metal on-prem. Control planes sit on one side of the link; we don't stretch there. I get that those would be two different machine sets in 4.1. Is that going to be a supported deployment, where we'll be able to actually deploy bare metal and EC2 instances and have them part of the same cluster? Or is that something that you're intending to be separate clusters, at the moment?
E
I think, you know, we have support for RHEL 7 worker nodes, not just RHEL CoreOS worker nodes. There may be configurations that make sense; I think it'd probably be a little early to say. There might be some assumptions that you'd want to investigate before jumping whole hog into that. Okay?
U
Sorry, Katherine, I was going to say, to add to what Clayton said: he kind of hit on it with the case of RHEL 7. You could have a cluster in AWS, for instance, and then add a RHEL 7 node and try it out. I don't know that anyone has stretched it that far, especially with worker nodes, but it might be something that you could try very quickly.
N
The one question I have (and I guess it's a complicated scenario, so that's why we're excited you ask questions) is: I assume, if you run your control plane in the bare-metal environment and you burst out to EC2, that you've turned off all the cloud integration points, right? Okay. So I think understanding some of those nuances would be good feedback to the team here. But I don't want to give you an answer of "yes, it would work great." Okay?
AB
Fair enough. And I guess the second question was thinking about the three-to-four migration. I'm a little familiar with Velero, not deeply, but obviously it's great for stateless stuff. We've got things like OpenShift Container Storage hosting PVCs, hosting the Docker registry. What's the story on the stateful pieces?
I
We're going to show a demo tomorrow, but we're looking at several options. The idea is to give our customers choice: there's a choice to copy and replicate, and there's a choice to swing the PVs, or move the PVs to point to the new control plane. The idea is that you know your architecture better than anyone, and you will see what options that gives you and choose the best path. Okay.
B
I also want to add a little context, since folks mentioned bringing your own RHEL nodes and so forth. When you're looking at the OpenShift installer (and if you've been in the beta, you're looking at, for AWS right now, a fully installer-provisioned mode), the installer is taking care of everything: from configuring the infrastructure, to bootstrapping all the CoreOS nodes for the masters and the workers, to then setting up Kubernetes and everything that comes on top. That's not always possible.
B
So we do have a mode that combines user-provisioned and installer-provisioned, so that you can do things like set up the cloud infrastructure yourself, if you're in a locked-down environment, or an environment where you're not able to delegate that control to the installer. Or bring your own RHEL nodes, as was mentioned, if you're in an environment where, for whatever reason, you want to continue managing the operating system outside of OpenShift, or you want to have a traditional RPM-based set of components.
B
You can do that. The VMware and bare-metal providers that Tracey mentioned, which came out in beta, are what are called user-provisioned, meaning you configure the underlying infrastructure yourself, and then we automate the deployment of Kubernetes and the operators on top of that. But we are working with VMware, as you heard this morning, on a fully installer-provisioned mode, where those choices are made for you.
Q
So I'll put the plug out there again; I was waiting for my moment. Try out try.openshift.com. There are two different sections; please go out and look, it's brand new as of Friday. You have an install on premise, or install with the options that Kathryn mentioned, or the usual install flow that you've seen on try.openshift.com for a few months now, with installer-provisioned infrastructure.
B
Let's do that, right? And then we go to select customers in the beta, and they say: no, our Amazon environment is very locked down, and I don't have the authority to delegate control of DNS or whatever else to you. And so that's how the user-provisioned mode for AWS was born. It's feedback like that that is just invaluable to us in continuing to make sure the product suits your needs. Thank you.
H
B
So OpenShift Container Storage in 3.x was built around the Gluster technology, and that has a lifecycle that goes out into 2020-something. So if you're on that infrastructure, it is fully supported by Red Hat: patches, updates, the whole nine yards. As we moved to 4.x, just as we had a number of architectural choices to make on OpenShift, we also had architectural choices to make on storage.
B
One of the things we saw was a lot more demand for object storage, and we also saw a lot of momentum in the community around a project called Rook. Generally in our portfolio, object storage is closer to Ceph and to what we see as the strengths of Ceph. Obviously, Ceph also does block, and even file through CephFS, and the Rook project already had Red Hat folks contributing to it, and so forth.
B
AC
Yeah, good question. There's going to be an OpenShift Container Storage roadmap session tomorrow at 10:30, if you're free to join. The technology stack for OCS 4.x is going to be based on Rook, Ceph, and actually NooBaa, which is our latest acquisition in the storage space, for multi- and hybrid-cloud capabilities. So it's not just Rook and Ceph but also NooBaa. If you want to know more about it, you can come to the session tomorrow; it's in room 161.
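As a concrete illustration of the object-storage direction mentioned here, OCS 4.x exposes buckets through Kubernetes-style claims backed by NooBaa. This is a hedged sketch, not an official example; the claim name and storage class are assumptions:

```yaml
# ObjectBucketClaim sketch: requests an S3-compatible bucket from the
# NooBaa/Rook-backed object service. Names below are illustrative.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: demo-bucket-claim
spec:
  generateBucketName: demo-bucket               # prefix for the provisioned bucket
  storageClassName: openshift-storage.noobaa.io # assumed NooBaa object class
```

Applying a claim like this would produce a bucket plus a ConfigMap and Secret carrying the endpoint and credentials for the application to consume.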
B
Yeah, alright. So again, all sessions will be recorded, and obviously there's also the opportunity to meet with folks like Shyam here, or after the conference. If you couldn't get into a session, or you liked what you saw in a session but want a demo or a discussion for your broader team, just reach out through your account teams and we can set that up. Everybody's happy to talk to you. Okay.
B
So basically, we supported both sockets and cores for the worker nodes from the beginning. We moved to focusing on core-based pricing because it gives us an option that works across all environments; you can't really count sockets on Amazon and so forth. That being said, if you're an existing customer on a socket-based program, there is a sort of grandfathering on renewal, so you can continue that at least for one renewal term, and then talk to your sales
B
reps about the rest. And then there are options like Cloud Suites, some bundles that are socket-based. But yeah, we wanted to simplify the pricing structure, and also get out of discussions around what the right ratio is from cores to sockets; you should see how much your procurement people love talking to us about that. So we just wanted to simplify that, and the focus now is on core/vCPU-based pricing. The other thing is that with the metering technology,
B
we're finally going to be able to offer, later this year, a consumption-based model for cases where you have a base set of capabilities but you want to burst capacity. So we're looking at how to tie in the metering to offer more consumption-based pricing. Thank you.
AD
B
AD
B
So, great news: the Red Hat training team has already been hard at work, starting with the first beta, on building training curricula for OpenShift, both for administrators and for developers. Although, from a developer's perspective, it's largely the same thing: other than new trainings for things like Istio and Knative and the new stuff, how developers work with OpenShift should be consistent. So look forward to that, and there may be some sessions.
B
I know the Global Learning Services team is here at Summit; you can go talk to them about that. If you have a GLS all-you-can-eat subscription, that's great, because you get access to the whole catalog. Then we also have other options: Brian mentioned learn.openshift.com, a great resource for folks who are just getting started and want to actually do something hands-on and self-paced. It has a lot of learning modules, and we keep adding new ones.
B
AD
B
M
K
To go back to your read-write-many question: I don't know where Shyam is and why he's not yelling at me, so I guess my answer was accurate. That's if you're trying to just use the vSphere-provided functionality. But if you have OCS, read-write-many would be something that could be automated and supported, and that can run on top of, right, anywhere you want to run it. So there is a story there, but for the specific question I think you were asking, it's sort of manual, right.
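For context on what read-write-many means in practice, here is a minimal sketch of a volume claim requesting shared access. The storage class name is an assumption for an OCS-backed CephFS class, not something stated in the discussion:

```yaml
# PersistentVolumeClaim sketch: ReadWriteMany (RWX) lets multiple pods,
# possibly on different nodes, mount the same volume read-write.
# The storage class name below is an illustrative assumption.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany              # contrast with ReadWriteOnce (single-node)
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-cephfs   # assumed OCS file class
```

A file-based backend such as CephFS is what makes RWX practical here; plain block volumes are typically limited to ReadWriteOnce.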