From YouTube: AMA Panel: OpenShift PM team moderated by Joe Fernandes, Red Hat at OpenShift Commons Gathering 2018
Description
AMA Panel: OpenShift PM team moderated by Joe Fernandes, Red Hat at OpenShift Commons Gathering 2018
https://commons.openshift.org/gatherings/Seattle_2018.html
B: All right, my name is Joe Fernandes. I run the product management team for the OpenShift core platforms, and Reza, who you heard from earlier, is my peer and manages platform services. We have a lot of our product managers here at KubeCon and at the OpenShift Commons Gathering. Before we start, I'd like to thank all of our customer presenters. When we started this event, I never expected to see a room this large, and the customer presentations today were phenomenal. So thank you to all our presenters.
C: Hi, my name is Anil. I'm from Visa; I'm a chief systems architect for Visa on that platform. We use OpenShift and we are in production, but I'm going to have a few tough questions for you now. Okay, let's get to that: OpenShift is great, right, a lot of features and a lot of things it does underneath, but your upgrade path, as well as the installation, sucks.
B: So look, installing OpenShift, installing Kubernetes: we've identified that that's been our biggest challenge, right. You know, the releases are happening every three months; the installation is complex, and then keeping up with it is more complex. We did a lot around automation with Ansible; that's where we started, and that's what drives the OpenShift install and upgrade path in 3.x. The thing we're challenged with there is just the environment being dynamic rather than immutable, and so forth.
B: So a couple of the things you saw today reflect the investments we've made over the last year and a half, actually beyond that if you think about CoreOS's history with it: really moving to a fully immutable environment based on the Red Hat CoreOS operating system, but essentially moving to Kubernetes as the way we're installing and upgrading Kubernetes, right. And we're taking the installation, which is really three installations when you think about it, right: you have to set up your provider infrastructure, whether that's Amazon, Azure, Google, OpenStack, VMware, it doesn't matter.
B
You
have
to
set
that
up.
First,
then,
you
have
to
install
the
operating
system
and
manage
that
separately,
which
is
rel,
and
then
you
have
to
basically
run
the
OpenShift
installer
to
install
OpenShift
on
top
with
open
shift4
we're
combining
that
into
one
process
right.
So
you
basically
describe
the
cluster
that
you
want
and
where
you
want
it
to
run,
and
then
the
the
the
the
Installer
takes
it
from
there
we're.
Also
we
didn't
talk
about
this
today.
We're
also
then
automating
the
upgrade
process
through
something
called
over-the-air
updates
and
again.
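As a rough illustration of the "describe the cluster and let the installer take it from there" model, here is a minimal sketch of a declarative cluster description, written as a Python dict; the field names are illustrative assumptions, not the installer's exact install-config schema.

```python
# Illustrative sketch only: a declarative cluster description in the spirit
# of the OpenShift 4 installer. Field names are assumptions, not the schema.
cluster_description = {
    "clusterName": "demo",
    "baseDomain": "example.com",                   # where the cluster's DNS lives
    "platform": {"aws": {"region": "us-east-1"}},  # which provider to target
    "controlPlane": {"replicas": 3},               # desired control plane nodes
    "workers": {"replicas": 5},                    # desired worker nodes
}

# The installer, not the admin, is then responsible for reconciling this
# description into provider infrastructure, an OS, and Kubernetes itself.
if __name__ == "__main__":
    import json
    print(json.dumps(cluster_description, indent=2))
```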
B: We also have an offline mode for disconnected or air-gapped environments. All these things, which you'll start to see coming out in beta now and in the new year, reflect, you know, kind of where we see the future of installation and upgrades, and really where we've put the bulk of our investment over the past year. I don't know if anybody else wants to add anything. Mike? Yeah.
C: So what's happening: we chose Atomic as a base OS when we started, like eight or nine months back. We worked with your team; they said everything's great, go with Atomic, because we get a lot of benefits, especially around the attack surface, and then fewer binaries and less drift. So it does help in that way. But now it looks like what we are hearing is you're not going to support the Atomic upgrade; you've got to convert back to RHEL.
D: So the upgrade is an interesting problem. What we're investing in right now: we're telling all of our customers they need to get to a serial upgrade path, right. In Kubernetes upstream, above us, just the way the APIs are written, you need to do the upgrades serially as you go through them, and a lot of our customers are on 3.7 and 3.9. They aren't necessarily on 3.11, which is the last 3.x release.
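To make "serial upgrade path" concrete, here is a tiny sketch of computing the hops between two 3.x releases; the release list is an assumption for illustration.

```python
# Minimal sketch: the serial upgrade hops between two OpenShift 3.x releases.
RELEASES = ["3.7", "3.9", "3.10", "3.11"]  # 3.8 never shipped standalone

def upgrade_hops(current, target):
    """Return every intermediate release that must be applied, in order."""
    i, j = RELEASES.index(current), RELEASES.index(target)
    if i >= j:
        raise ValueError("target must be newer than current")
    return RELEASES[i + 1 : j + 1]

print(upgrade_hops("3.7", "3.11"))  # ['3.9', '3.10', '3.11'], one at a time
```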
D: That path is all investigation and R&D work right now. If it doesn't pan out, or if that's just a massive task that we would be putting in front of our customers, we'll just continue down the 3.11 upgrade path, which is what you're referring to, and that entails moving to what's known as an unmanaged installation of OpenShift. So there are some customers that are coming back to us and saying, you know, look, I love what you're doing, and it's going to take me longer to get there than the first half of next year: "I need to have my own RHEL operating system. I've had it for the last eight years; it's barely even RHEL anymore, it's been tweaked so much, or the infrastructure is so specific, maybe it's on a submarine or a Boeing or whatever the case may be, that it won't necessarily be something that you'll automate for me." For those customers, we want them to bring up their operating system, and then we will lay down the operator-enabled OpenShift and still upgrade it over the air.
C: It would be great if, when some of these decisions are being made, you worked with customers to understand what we are going through. For us, right, when we are in production, it doesn't matter what it is, we've got to make it happen, right? Yes. So if you're not following that, you're seriously creating some kind of a gap in upgrading clusters.
B: I mean, as Mike said, we're doing two things, right: we're extending the lifecycle of 3.x, which, you know, is going to be around for quite some time in addition to 4.x. But then, as Mike mentioned, in addition to automated in-place upgrades, we're investing now in this research into automated application migration, because in-place implies that you go release to release, because that's how Kubernetes works. For customers that, you know, are on older versions of Kubernetes, or 3.x, that might not be what they want to do.
E: Can I just add one thing, Joe? Yep. Okay, so one thing that I want to add: you know, a big part of OpenShift 4 is leveraging the CoreOS technology to simplify upgrades, and that is based on customer feedback. Yes, there is going to be an initial, somewhat manual upgrade to get to OpenShift 4; once you're on OpenShift 4, we're going to be able to leverage technology that was proven for over two years, where we upgraded over the air.
A: Hi, whoa, that's loud. I'm Jason Kinsel from Oak Ridge National Lab. Since, I don't know, 3.4, when I started looking at OpenShift, most of the development discussions and all that stuff were open; they were on Trello. You could kind of follow what you guys were thinking and where the product was going over time. It seems like after CoreOS, a lot of that has been moved, depending on the product, behind a JIRA wall of some type.
B: So this is a great question. Internally, as part of our development and agile process, we've made a transition from using Trello, which we've loved and used for years, to JIRA, and that is in the process of being opened up right now. We use JIRA really because, I mean, if you've used both products: Trello is great, it's very easy to use, you can have public boards and private boards, so it's great for collaboration.
B: It's not great for traceability, for, you know, connecting epics to user stories to get reports, and we did all sorts of unnatural acts and extended Trello itself to try to get what we needed. It turns out, for where we are now, at the size of our team, we're over thirty-something scrum teams at this point, it was just too much to manage. So, around the 4.0 release, we've shifted all of the work into JIRA, and we're in the process now of making all those boards public.
G: [question inaudible]

B: So it's a good question. Again, one of the things you notice about the OpenShift 4 installer that's very different from OpenShift 3 is that we're actually taking care of everything from the infrastructure to the operating system and up to Kubernetes itself. We're using a technology that came out of SIG Cluster Lifecycle, which is basically called machine controllers, in support of the cluster API and so forth. Machine controllers essentially are just as they're described: basically like a Kubernetes controller, but for a machine.
B: So everything that you can think of for pods, in terms of declaring state and having the system drive toward that state, you can now apply to machines. We just chose Amazon as our first target because, you know, it's prevalent and it's easy access. Derek can talk more about that. We're adding that now for all of our targets, so there's work going on around OpenStack, you probably saw that, and around bare metal, Azure, Google, vSphere, and so forth.
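A minimal sketch of the idea, declarative desired state for machines the way a Deployment declares it for pods: a MachineSet-style object expressed as a Python dict. The shape follows the general outline of the machine API, but treat the field names as simplified assumptions rather than an exact schema.

```python
# Illustrative sketch of a MachineSet-style object in the spirit of the
# cluster API work described above. Shapes simplified for illustration.
machine_set = {
    "apiVersion": "machine.openshift.io/v1beta1",  # assumed group/version
    "kind": "MachineSet",
    "metadata": {"name": "demo-workers", "namespace": "openshift-machine-api"},
    "spec": {
        "replicas": 3,        # declare how many machines you want...
        "template": {         # ...and what each machine should look like
            "spec": {
                "providerSpec": {
                    # provider-specific details, e.g. for AWS:
                    "instanceType": "m4.large",
                    "region": "us-east-1",
                }
            }
        },
    },
}
# A machine controller watches objects like this and reconciles real
# infrastructure (e.g. EC2 instances) toward the declared replica count,
# the same way a ReplicaSet controller reconciles pods.
```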
B: But we are very excited about that work, just because we're seeing tons of demand for Kubernetes on bare metal, or what we've started to describe as Kubernetes-native infrastructure, versus where the industry has been, which is Kubernetes as an abstraction on top of somebody else's infrastructure. So yeah, you can expect the bare metal provider work to yield. I don't know if anybody else wants to add anything. Good.
G: [question inaudible]

D: Oh nice, you know, you're actually right: when you start looking at an infrastructure that isn't being provided to you by a cloud, there are a lot more components. We're leveraging a lot of the skill set in our OpenStack area. OpenStack has been cracking at this for quite some time with Director and by partnering with a lot of the ecosystem's on-premise hardware providers; Ironic has extensible APIs, so you can bring up a larger network.
B: And those are just assets that we have at Red Hat, stuff that we've worked on like Ansible networking and the different components of OpenStack. We're also doing work in the community, right, and with partners; we were just talking with the guys from Dell a little bit earlier. A lot of our hardware OEM partners are also really interested in this topic, as well as our networking and storage partners.
I: [question inaudible]

B: So, VMware as an infrastructure provider: VMware as an infrastructure provider is something that we're working on, again in terms of our top six most commonly deployed environments. In the data center that's VMware, OpenStack, and bare metal, and obviously Amazon, Azure, and Google in the public cloud. So we will have OpenShift 4 installers for all of those, as well as additional targets beyond that. Then, in terms of router versus ingress, you can deploy both.
J: I don't know about other vendors, but in terms of ingress: you know, prior to Kubernetes ingress, I don't know if you know the history, but prior to Kubernetes ingress there were routes, which we created, and at this point we are all-in on eventually flipping over to Kubernetes ingress, but it has to hit feature parity with routes first. There are a number of things that you can still do with routes that you can't do with ingress. So we are on the path: we're monitoring it heavily, we're contributing to it, and you can use ingress or route objects.
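To make the routes-versus-ingress comparison concrete, here are the two objects sketched side by side as Python dicts. The shapes are simplified from route.openshift.io/v1 and the then-beta Kubernetes Ingress; treat the details as illustrative.

```python
# Side-by-side sketch of the two APIs discussed above (simplified shapes).
openshift_route = {
    "apiVersion": "route.openshift.io/v1",
    "kind": "Route",
    "metadata": {"name": "web"},
    "spec": {
        "host": "web.apps.example.com",
        "to": {"kind": "Service", "name": "web"},
        # Route features such as edge TLS termination predate ingress parity:
        "tls": {"termination": "edge"},
    },
}

kubernetes_ingress = {
    "apiVersion": "extensions/v1beta1",  # ingress was still beta at the time
    "kind": "Ingress",
    "metadata": {"name": "web"},
    "spec": {
        "rules": [{
            "host": "web.apps.example.com",
            "http": {"paths": [{"backend": {"serviceName": "web",
                                            "servicePort": 8080}}]},
        }],
    },
}
```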
B: I'll actually put in a plug for my blog: I just wrote one today, it's up on openshift.com, and part 2 is coming out tomorrow. It's called "OpenShift and Kubernetes: Where We've Been and Where We're Going."
B: The "where we've been" post talks about some of the things that were in OpenShift going all the way back to Kubernetes 1.0 but weren't yet in the Kubernetes upstream; our enterprise customers needed them, but Kubernetes didn't have them, so we built them on top of Kubernetes. That includes routes.
B: The concept of ingress didn't come, I think, until Kube 1.3. That includes the concept of deployment configs, a lot of whose functionality was later taken into deployments. RBAC: people take RBAC in Kubernetes for granted today, but RBAC didn't even show up in beta until Kubernetes 1.6; we had it on top of Kubernetes 1.0. Pod security, you know, which came from security context constraints.
B: We were focused on a different market, right. Google and Red Hat were the first two companies to bring Kubernetes to market, but we were bringing it to enterprise customers in hybrid cloud environments, with different concerns around security and different requirements for how they manage their clusters. So we did a lot of work in upstream Kube, but then we built around it. Our goal is always to merge as far upstream as we can; in some cases, like RBAC, the implementations were at parity.
B: In fact, that's because most of the implementation was Red Hat's, like a hundred percent, and so we just basically switched to the upstream. In some cases, like deployment configurations and routes, there are still features in our original implementation that aren't available yet in the upstream implementation, and customers are relying on those. So we've committed, for those customers, to continue supporting both, and then you can choose if you want to use one or the other. I don't know if you want to add anything else.
K: We get this question a lot: like, is OpenShift going to adopt this Kube resource or not? I think sometimes people have the misperception that, because something is a resource in Kube, it has graduated to version 1 and is forever stable in the upstream. A good example right now: OpenShift 3 ships security context constraints out of the box, while upstream has pod security policy, and it's not clear that's ever going to reach version 1 status, right. So people say, well, why aren't you adopting this yet?
K: When are you going to adopt deployments to manage a Kube API server, right? It makes sense for us to use the platform to build the right thing for the right use case we need, and anybody else can go and extend the platform, given all the work we've done upstream to enable that. So I wouldn't be surprised if you see multiple ingress implementations in the upstream, and I wouldn't be surprised if you see multiple security policy implementations; there's a marketplace of ideas, and people can go and pursue them. Yeah.
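For context on the SCC-versus-pod-security-policy comparison above, here is a minimal sketch of the two objects' shapes; the field subsets shown are simplified and illustrative.

```python
# Sketch of the two security APIs being compared: OpenShift's
# SecurityContextConstraints (shipped since the Kubernetes 1.0 era, with
# top-level fields rather than a spec) and upstream PodSecurityPolicy
# (beta upstream, never graduated). Shapes simplified for illustration.
security_context_constraints = {
    "apiVersion": "security.openshift.io/v1",
    "kind": "SecurityContextConstraints",
    "metadata": {"name": "restricted-demo"},
    "allowPrivilegedContainer": False,
    "runAsUser": {"type": "MustRunAsRange"},  # force non-root UID ranges
    "seLinuxContext": {"type": "MustRunAs"},
}

pod_security_policy = {
    "apiVersion": "policy/v1beta1",  # still beta upstream at the time
    "kind": "PodSecurityPolicy",
    "metadata": {"name": "restricted-demo"},
    "spec": {
        "privileged": False,
        "runAsUser": {"rule": "MustRunAsNonRoot"},
        "seLinux": {"rule": "RunAsAny"},
    },
}
```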
B: This is a good point. I mean, I think ingress itself is still beta upstream in Kubernetes, right: ingress has not yet been declared stable, even though a lot of people are relying on it, and then there are different ingress implementations, which you get from different vendors. So again, we'll support both. A good point that Derek made that I wanted to comment on:
B: CRDs are now allowing customers, through the operator framework, to build applications as an extension of the same Kubernetes API, right. It's a really powerful concept, which was described earlier and which Rob is going to be demoing tomorrow in the keynote. I think this is going to unleash a whole new set of innovations on top of Kubernetes and so forth.
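A minimal sketch of the mechanism being described: a CustomResourceDefinition, which is what lets an operator's application type appear as a first-class Kubernetes API. The group and kind below are hypothetical examples.

```python
# Sketch of a CustomResourceDefinition (v1beta1 API of the era). The group
# and kind names are hypothetical, for illustration only.
crd = {
    "apiVersion": "apiextensions.k8s.io/v1beta1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "appservers.demo.example.com"},
    "spec": {
        "group": "demo.example.com",   # your own API group...
        "version": "v1alpha1",
        "scope": "Namespaced",
        "names": {"kind": "AppServer", "plural": "appservers"},
    },
}
# Once registered, `kubectl get appservers` works like any built-in
# resource, and an operator can reconcile AppServer objects.
```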
So, other questions?
L: [question inaudible]

B: So, first of all, OpenShift Container Engine and OpenShift Container Platform are not separate products; they're the same product, just different configurations. We've heard from a lot of customers that say, well, I like OCP, but I don't want all the stuff that comes with it, right: I use my own Jenkins service, or I use my own SDN.
B
You
know
I
use,
you
know
and
that's
kind
of
the
point
of
open
ship
right
like
the
whole
batteries
included
but
optional,
and
you
know
everything
being
pluggable.
That's
how
we
we
architect
architected
openshift,
that's
how
kubernetes
itself
is
architected.
So
all
open
ship
container
engine
is
is
a
it's
a
subscription
to
openshift
that
basically
reduces
some
of
those
components
that
people
commonly
swap
out.
Specifically,
you
know
the
advanced
networking
capabilities,
the
Sdn
that
we
provide.
B: Logging: we provide an ELK stack, or an EFK stack, with OpenShift, but a lot of people use it with Splunk or with their own EFK stack. And then some of the CI/CD capabilities, the Jenkins services, the build services, and so forth. But otherwise it's actually not two separate products, because all that content is just containerized; essentially, you're just not entitled to those additional containers and so forth. And then with operators, that's going to be an even more convenient way for you to determine which content you want in each cluster, because operators are driving our whole install mechanism, and so it's going to be an even more convenient way to sort of enable or disable different features in different clusters.
M: This is Jeetendra. In the talks this morning, most of the use cases and case studies that were presented were around making the environment available to the developer quickly, or around greenfield, performance-oriented applications, but I haven't really seen any brownfield legacy applications migrating to OpenShift. I don't know if you want to talk about a few examples of how people have moved monolith applications into OpenShift.
D: Sure. So it's interesting: the customers that we talk to always fall into these four buckets, right. There's a bucket for the CIO, the transformation of the culture of the business; there's a bucket for greenfield, for next-generation runtimes and applications; there's a bucket for brownfield, the lift-and-shift, if you will; and there's a bucket for sort of infrastructure ops: they've been told they've got to shut down machines and move to the public clouds. How do you do that, right?
D: We have feature sets in each one of them, and you're absolutely right that the easiest place to start is in the greenfield area, but very rapidly we find our customers move towards the lift-and-shift area, the brownfield, because it is larger and it is more revenue-impacting for them. I would say most of our humongous companies, our global financial companies, are doing brownfield, and they're attacking Java applications.
D: Fat Java applications, if you will, on legacy WebLogic and WebSphere, moving those runtimes over. We have a lot of them; KeyBank, I believe, wrote a case study out there on their movement. So if you go to openshift.com/customers, I would say probably 60% of them are stories about lifting and shifting legacy applications. There are even some that get into message busing, if you will, those ESBs and those frameworks, and how they containerized them and moved them over.
B: I'd say the partnership we announced with IBM earlier this year was driven by demand from customers who want to run WebSphere in Kubernetes on OpenShift, and we have customers that have spoken about that, but also customers who are doing WebLogic and JBoss, right, so traditional app server stacks. So it's not all just Spring Boot and Node.js and kind-of-microservices architectures; a lot of it is those traditional architectures. And then you get into things like database workloads, which are, you know, being further enabled by our work on the operator framework.
B: We have a lot of customers who are, you know, running extensive data services, whether it's databases, analytics, or messaging solutions, on top of Kubernetes, and we're getting a lot of questions on HPC and grid. So again, yeah, a lot of the Kubernetes talks tend to be around greenfield and, you know, cloud-native, microservice-style architectures, but that's by no means all of what we see and what we're hearing about; we're hearing about a lot of traditional workloads.
D: I'll add really quickly that, you know, a year ago we had a competitor in the market with an alternative orchestrator, and they were not good at doing brownfield applications. They weren't good because they didn't have the storage infrastructure components, the PV backbone framework, right; they didn't have real IP addresses, they didn't have real routing, they didn't allow you to do multicast, they didn't allow you to do UDP. These are all just qualities of these brownfield applications that the Kubernetes platform has always catered to. Yeah.
N: My question is not about the upgrade path, but it's close to that. As OpenShift has matured, and we've kind of gone along with the platform, a number of new features have been released that are really intended to be consumed by development teams, things like the Prometheus operator and the operator framework in general, but also platform components, right, like the SDN DaemonSet. One of the things that I've found challenging is understanding these new things that are coming into the platform and how to consume them as a platform operator.
N: Because some of these things are coming out of GitHub, I just don't have a unified view of what each upgrade is doing in terms of the new things it installs and the new products that are becoming part of the cluster. Can you talk to how you're bringing products into the platform, managing those things, and making that information available to customers?
D: What you're referring to is, "I just tried to install, and Ansible did 500 things that I had no idea it was going to do on there," right. We're not doing a great job of documenting all the changes that Ansible performs between each one of the versions, and a big part of the 4.0 architecture is the move to the operator SDK, so all those teams across the ecosystem now have a consistent way to get onto the platform, right.
D: There's not going to be that variety where you need the extensibility of Ansible to handle it, because you're going to write your content to that operator SDK, and that's how you're going to get onto the platform. So that's the ultimate fix. You know, take comfort in knowing that 3.11 was the last time you had to learn about this stuff on the fly; it all changes moving forward.
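The common thread behind the operator SDK is the reconcile pattern: observe declared spec, compare with actual state, act. A minimal sketch of that loop in pure Python follows; the SDK's real API is Go, so this shows the pattern, not the library.

```python
# Illustrative sketch of the reconcile loop at the heart of any operator.
def reconcile(desired, actual):
    """Return the actions needed to drive actual state toward desired state."""
    actions = []
    want, have = desired["replicas"], actual["replicas"]
    if want > have:
        actions.append(f"create {want - have} instance(s)")
    elif want < have:
        actions.append(f"delete {have - want} instance(s)")
    if desired["version"] != actual["version"]:
        actions.append(f"upgrade to {desired['version']}")
    return actions

# The platform runs this repeatedly against a custom resource's spec/status:
print(reconcile({"replicas": 3, "version": "4.0"},
                {"replicas": 2, "version": "3.11"}))
# -> ['create 1 instance(s)', 'upgrade to 4.0']
```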
O: Just to add to it: I mean, if you saw the slides that Reza showed in the keynote, with the unified hybrid cloud and the Kubernetes marketplace, there are some of these, like Derek was showing in their demo, so you can actually pick and choose what services you want to run, right. I mean, that will become much more visible: you can see all the operators in the cluster as the CVO shows all the platform operators and all the available operators. Yeah, in OpenShift 4; I saw some of that in the demo.
K: A lot of that has fed into enabling us to decompose OpenShift for 4.0, like you saw today: the ability to be very lean in the default distribution of OpenShift's Kubernetes, allowing us, for example, not to include the SDN in our kubelet by default, or not to include the DNS services there by default, and to make it clear to you that you can swap them out and better separate the architecture, I guess.
K: I think, as Mike said, the changes you did see in 3.10 and 3.11 were us getting set up technically to enable what you saw this morning, hopefully in 4.0, but I don't think you should see major wholesale restructuring after 4.0, because at this point we start to look like any other Kube distribution with respect to the core control plane. Yeah.
B: The effect it's had on how we develop OpenShift is that essentially every single development team has been working on install and upgrade, because every team is working on install and upgrade for their component by building an operator for that component. So whether you go to our team that works on logging, or on SDN, or on Jenkins, or on Prometheus, whatever, what they've been working on over this past year is: how do I install my component, and how should it be upgraded?
K: And then the only remaining thing is making it more consumable to you as an operator. Today you're tweaking master configs and node configs and having to work at a very low level. Some of the stuff I couldn't touch on very much this morning: in 4.0 you'll see API groups in your server that literally say config.openshift.io, and the API definitions of those objects should ideally be self-documenting. So, for example, when you look at the ingress.config.openshift.io object and you see it has a router hostname field, right, it should be very obvious to you what it is and what the interface is that you're allowed to touch. And so where you might have had points of variability about how you set up particular Ansible variables or what you configure in the master config and all that stuff, for 4.0 it just gets very standardized, and your interface is Kube objects in a well-defined API group.
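A minimal sketch of such a cluster configuration object in the config.openshift.io API group; the exact field names here are illustrative assumptions. The point is that cluster config becomes a self-documenting Kube object instead of scattered Ansible variables and master-config edits.

```python
# Sketch of a config.openshift.io object like the one described above.
# Field names are illustrative assumptions, not a guaranteed schema.
ingress_config = {
    "apiVersion": "config.openshift.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "cluster"},  # a cluster-scoped singleton config
    "spec": {
        # the "router hostname" style field mentioned in the answer:
        "domain": "apps.example.com",
    },
}
```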
P: I'm Kyle from Arctiq; we do a lot of consulting for enterprise customers. I'm asking this question so I can answer it next week. It goes along with the 4.0 demo I saw, around auto-scaling: customers have been asking about auto-scaling probably since 3.3, to be honest, and I don't think any of them have really been ready for it. Things like Knative, I think, are going to make it a lot easier for people to start to extend these applications, and I saw you spun up another 10 hosts in AWS.
B: So, a couple of things. Yeah, every time we talk about auto-scaling, you know, the common question is: can it auto-scale the nodes? And the answer used to be no: Kubernetes can only auto-scale pods, it can't manage infrastructure. Well, now that's changing, and you kind of saw that here, right: with the cluster API, the machine controllers are now Kube-based and the cluster can scale its own infrastructure. In addition to kind of eliminating subscription-manager, which kind of limited you from the RHEL perspective, we are also using the metering project, which you heard about, to basically enable a consumption-based pricing model, which we'll be introducing this year. This would essentially allow you, on top of some base subscription, to sort of burst up, you know, node capacity, or, for stuff that runs on top of OpenShift, like our middleware and storage, to sort of pay for that by consumption. That's kind of where we're going. The challenge with consumption-based pricing is that you need good metering first.
B
You
need
to
be
able
to
meet
her
consumption
before
you
can
charge
for
it.
The
the
metering
framework
that
we're
introducing
does
that
and
then
and
then
that
gives
benefits
to
to
you
as
administrators,
because
that
means
you
can
also
leverage
that
same
subsystem
for
doing
internal
chargebacks
internal
show
backs.
You
know:
charging
different
departments
for
utilizing
a
shared
cluster
and
so
forth,
so
so
so
we're
building
it
for
for
customer
usage,
but
we're
also
using
it
ourselves
and
then
I
think
that's
gonna
enable
some
of
stuff
you
discussed,
and
so
thanks.
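A small sketch of the chargeback/showback idea: aggregate metered usage per department and price it. The data and rate below are made up for illustration; the real metering stack derives usage reports from cluster monitoring data.

```python
# Minimal chargeback sketch over assumed metering records.
USAGE_RECORDS = [  # (namespace, department, cpu_core_hours)
    ("team-a-prod", "retail",   120.0),
    ("team-b-dev",  "payments",  45.5),
    ("team-a-ci",   "retail",    30.0),
]
RATE_PER_CORE_HOUR = 0.04  # assumed internal rate, in dollars

def chargeback(records):
    """Sum metered core-hours per department and convert to a charge."""
    totals = {}
    for _ns, dept, core_hours in records:
        totals[dept] = totals.get(dept, 0.0) + core_hours
    return {dept: round(hours * RATE_PER_CORE_HOUR, 2)
            for dept, hours in totals.items()}

print(chargeback(USAGE_RECORDS))  # {'retail': 6.0, 'payments': 1.82}
```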
K: With just the vanilla cluster autoscaler in Kubernetes, you have to manage that machine image on the various platforms you want to configure and tell the autoscaler to go spin up, whereas what was really uniquely interesting about the Red Hat CoreOS stuff was that you're not managing that AMI at all: you just get your config from a central control plane, and it's lifecycled like any other node in the cluster. Yeah.
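A sketch of declaratively bounding node auto-scaling in the spirit of the machine-API-driven approach discussed here; the field names are illustrative assumptions rather than an exact schema.

```python
# Sketch of a cluster-wide autoscaler cap, expressed declaratively.
# Group/field names are assumptions for illustration.
cluster_autoscaler = {
    "apiVersion": "autoscaling.openshift.io/v1",
    "kind": "ClusterAutoscaler",
    "metadata": {"name": "default"},
    "spec": {
        "resourceLimits": {
            "maxNodesTotal": 20,  # never scale the cluster past 20 nodes
        },
    },
}
# Because nodes are machines managed by the cluster itself, scaling up does
# not require hand-managing a machine image per cloud; the control plane
# supplies each new node's configuration.
```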
B: I think, you know, we've talked about that. Clayton mentioned Terraform this morning; that Terraform piece is basically there to spin up the initial bootstrap node. Once that node is up, it's a Kubernetes node that basically builds out the entire cluster. So it's really Kubernetes installing the Kubernetes cluster, using the APIs that Derek talked about. Any other questions? Hi.
Q: [question inaudible]

R: There's usually nothing special about the application, other than, if you have a complex distributed system, which I imagine, you know, credit card processing or any kind of financial application is, it helps you spin that up in a very consistent way. So you're not just handing off a distributed system between, you know, maybe two engineers that need to set up a development environment; your CI process also needs it to be handed off: hey, here's version 1.1 of our entire distributed system. If you can hand that off, and you know it's tested, well, you can hand that same operator off to a staging environment, scale testing, production, whatever it is. So it's really all about: can you model your application inside of an operator? And the answer is: if you can run it on Kubernetes, then yes.
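A sketch of the "hand off version 1.1 of the whole system" idea: only the environment-specific overrides differ in the custom resource, while the operator encodes how to run the system. Everything here (the AppSystem kind, the group, the fields) is hypothetical, for illustration only.

```python
# Illustrative only: the same hypothetical AppSystem custom resource handed
# to different environments; the operator logic for running "version 1.1"
# of the distributed system stays identical across all of them.
def app_system(environment, replicas):
    return {
        "apiVersion": "demo.example.com/v1alpha1",  # hypothetical group
        "kind": "AppSystem",
        "metadata": {"name": f"payments-{environment}"},
        "spec": {"version": "1.1", "replicas": replicas},
    }

dev = app_system("dev", replicas=1)
staging = app_system("staging", replicas=3)
production = app_system("prod", replicas=10)
```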
H: [question inaudible]

J: So we don't have plans specifically around that, but we are looking at a number of different mesh topologies that may support it, including Network Service Mesh, which is one technology we're looking at; we're looking at, you know, Istio multi-cluster; and we're looking at Envoy as one of those options. There are a number of different technologies we're looking at to be able to provide that functionality, but at the moment we don't have anything specific. Can you tell me more about your use case for that?

H: Yes. [description inaudible]
J: So we do have the ability to say, for all traffic coming from a particular project, we can filter on a destination service; we can say that it must be directed, if they're looking for a particular database, such that it can be redirected to a particular service in an external location. But there's nothing specific about the stateful part of it yet; that's something we'll definitely take into consideration, and I'd like to talk to you more about that.