Description
London OpenShift Commons Gathering 2019 OpenShift AMA Panel
Moderated by Mike Barrett
Panelists include: Dan Walsh, Lucas Ponce, Brian Gracely, Daniel Messer, Michael Hrivnak, Matt Doron
A
B: So operators seem really cool. I'm, you know, playing around with templates and pipelines and Jenkins and thinking, wow, I really need to be careful that I don't create a mess here with lots of different templates that do relatively the same thing, and it seems like operators are a solution to that. I also talked to the OpenShift storage guys and they said they have an operator. We're running 3.11.
D
So
yeah
you
can't,
but
you
won't
have
and
311
is
that
experience
with
the
hub,
so
it's
still
called
Marketplace
on
it's
a
little
bit
simpler.
But,
yes,
you
can
use
it,
and
as
of
today,
we
don't
have
any
operators
that
have
object
4
as
a
minimum
version,
so
yeah
you
can
continue
to
use
that
and
that
should
work
just
fine.
It's
just
the
experience
that
we've
shown
will
look
a
little
bit
different
on
the
free
11
cluster,
because
the
console
has
been
remodeled
a
little
bit.
D
E: What I tell most people is that the OLM, the Operator Lifecycle Manager that you can click into and see the lifecycle of an operator to control it, is only in tech preview, dev preview, on 3.11. But the fact that you are running an operator is completely supported on 3.11; it's just that that extra benefit is not there yet. Also, the SDK that these gentlemen wrote is still in tech preview in 3.11, and then that goes GA in 4.0.
F
G
E: We do think that is the trajectory that we're on. The service brokers in general were originally written to bring content that lives outside of the platform into the platform, right? That was the first conception of it. We kind of took that over and we also used them with the Template Service Broker, because we liked the binding. We liked the fact that you could take different templates and use the service broker to bind them, and that's why we brought that in for on-cluster solutions.
E
Operators
are
right
at
the
cusp
of
being
used
for
off
platform
services,
Amazon
has
written
one
for
their
service
broker,
so
there's
the
any
of
themselves
without
our
influence
are
replacing
their
service
broker
with
an
operator
implementation.
So
we
think
that's
where
most
of
the
the
community
and
in
the
sort
of
industry
is
going
to
move
to
we're
going
to
maintain
the
service
broker
interface
for
a
couple
more
releases,
at
least
so
that
takes
us
into
20
20
20
end
of
2020
into
2021.
So
we
like
the
service
brokers.
It's
there.
E
H
I
J
I: With the current, slash old, web console for OpenShift 3.x, there's documentation and the possibility to add UI elements and theming, stuff like that. In the preview of the new console I have not seen any possibility for customization of the UI. Is that something still on the roadmap? Is it intended? Is there documentation somewhere hidden? Yeah.
E
So,
and
if
you
don't
know
it,
when
we
move
to
the
four
dot,
oh
we
move,
we
rewrote
everything
and
react,
and
so
it
was
a
pretty
significant
chord
change
in
the
technology.
When
we
move
to
that
user
interface,
what
your
mentoring
is,
we
don't
have
all
the
documentation
set
up
for
that
move.
Yet
we
are
totally
going
to
do
that.
I.
You
know
it
at
the
latest.
You'll
see
it
in
my
round
May
time
frame.
K
M
K
A
K: Never mind about the operator question: a lot of the operators require creating their own namespace and all these things, which means that we end up having security issues about who can create cluster roles and all that stuff. Is there a way to do it properly without giving too much access to one operator that actually might not be open source? So I don't know what is in there or what it is actually doing.
D: So in 4.0 there's a concept coming out that we call operator groups. The idea is that all operators in OpenShift get deployed to a particular namespace, and with the RBAC data that comes from all those manifests that I've talked about that accompany the operator, which OLM reads and makes into roles, they will have access in that namespace to those APIs. What they will also be able to do is listen to events in the namespaces that are listed in these operator groups.
D
So,
while
an
operator
may
only
be
deployed
in
a
saner
namespace,
it
can
react
upon.
Group
version
kind
objects
being
created,
another
namespaces,
that's
the
whole
purpose
of
an
operator
group
or
default
right
now
is
that
we
have
an
operator
group
that
basically
says
all
namespaces
all
right.
So,
whenever
I
create
an
SCV
object
in
any
namespace,
the
operator
would
take
notice
of
that
and
take
action
and
in
the
future
we
will
have
a
little
bit
more
granular
access
to
that
and
say
only
a
particular
list
of
namespaces
will
have
access
to
that
operator.
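The scoping rule D describes can be sketched as a small selection check. This is an illustrative model only: the manifest fields mirror the OperatorGroup shape, but the `watches` helper is a hypothetical stand-in for OLM's actual logic, not its real API.

```python
# Illustrative model of the operator-group targeting rule described above.
# An empty targetNamespaces list means "all namespaces" (the default);
# otherwise the operator only reacts to objects in the listed namespaces.

def watches(operator_group: dict, namespace: str) -> bool:
    """Hypothetical stand-in for OLM's namespace-selection logic."""
    targets = operator_group["spec"].get("targetNamespaces", [])
    return not targets or namespace in targets

# Default operator group: all namespaces.
global_group = {
    "apiVersion": "operators.coreos.com/v1",
    "kind": "OperatorGroup",
    "metadata": {"name": "global-operators", "namespace": "openshift-operators"},
    "spec": {"targetNamespaces": []},
}

# The more granular future form: only an explicit list of namespaces.
scoped_group = {
    "apiVersion": "operators.coreos.com/v1",
    "kind": "OperatorGroup",
    "metadata": {"name": "team-a-operators", "namespace": "team-a"},
    "spec": {"targetNamespaces": ["team-a", "team-a-staging"]},
}

print(watches(global_group, "anything"))  # True
print(watches(scoped_group, "team-a"))    # True
print(watches(scoped_group, "team-b"))    # False
```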
E: On Knative, I think it was on one of the slides, at the bottom. It was talking about it going tech preview in 4.1, and then 4.2 or 4.3, which takes you to around the end of this year, when we think it might be production-ready for you. For those of you who don't know, Knative is very new in the upstream Kubernetes, and it takes a while to get it through alpha to beta to stable. And on the lightweight VMs, that's an interesting question.
E
L
E
L
L: I would hope so. I've always thought that, you know, Kubernetes understands that, you know, I'm running out of nodes for the workload, so I should be able to generate a new node. And it's always been separated out that, say, OpenStack is in charge of generating nodes and Kubernetes is in charge of generating containers, but I think the two should eventually merge into Kube.
E
F
E: You know, we just don't have that in that storyline. We have that storyline ready for the end of 2019: you'll get the tech preview pieces of the Knative and the serverless around the August timeframe, probably. We just don't have a commitment to Firecracker, you know, those types of VMs. We think we can solve that aspect of it with containers.
L: The problem with it, whether it's Firecracker or Kata or whoever's launching, you know, is we really need to support multiple different clouds. And when I'm talking clouds, I'm talking public clouds, I'm talking VMware, I'm talking the hypervisor on top of my Mac, I mean the xhyve. So it's really hard to say, you know, there's going to be one, I mean, which one would it be, right? I mean, Amazon wants to get Firecracker in.
O
E: It'll come in the 4.1 timeframe. Everything that we showed, you can establish your own registry, right? You don't have to use Quay to house these containers. You could download them, set up your own registry, wherever the case may be, and then point our metadata files to those local areas. And the 4.1 timeframe will give you the prescription on how to do that, and to be completely air-gapped, to pull that down. So that's around the May, June timeframe.
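The disconnected-registry step E describes, pointing the metadata at local mirrors, amounts to rewriting image references. A minimal sketch, assuming a made-up internal registry hostname and a hypothetical `mirror_image_ref` helper; this is not the actual 4.1 tooling:

```python
# Hypothetical sketch of the "point your metadata at a local registry" step
# for an air-gapped install: rewrite image references so they pull from an
# internal mirror instead of the public registry. The mirror hostname and
# helper name are invented for illustration.

def mirror_image_ref(image: str, local_registry: str) -> str:
    """Replace the registry host in an image reference with a local mirror."""
    # An image ref is host[/namespace]/name[:tag]; swap out the leading host.
    parts = image.split("/", 1)
    remainder = parts[1] if len(parts) == 2 else parts[0]
    return f"{local_registry}/{remainder}"

manifest_images = [
    "quay.io/coreos/etcd-operator:v0.9.4",
    "registry.redhat.io/openshift4/ose-cli:latest",
]

mirrored = [mirror_image_ref(i, "registry.example.internal:5000") for i in manifest_images]
for ref in mirrored:
    print(ref)
# registry.example.internal:5000/coreos/etcd-operator:v0.9.4
# registry.example.internal:5000/openshift4/ose-cli:latest
```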
E
O
P
E
D
E: The pecking order right now is to go out on AWS, you know, toward the end of April, then about 30 days later release 4.1. So we're not following the traditional release pattern anymore. Now we've switched to the Tectonic implementation, and I talked earlier about how you can switch to the latest or the beta channels. It's your choice.
E
Now
you're
gonna
see
us
consuming
kubernetes
versions
a
lot
faster
than
we
ever
have
so
I'll
go
out
on
coop
1.12
in
April
and
in
May
I'll
switch
to
coop
113
and
then
around
August
timeframe,
I'll
go
to
Cuba
114,
so
we're
gonna
be
going
through
them
quite
quickly.
On
the
infrastructure,
the
pecking
order
is
to
get
AWS
done
and
then
in
the
May
time
frame,
do
bare
metal
in
VMware
and
then
in
that
next,
one
that
August
timeframe
do
GCP.
It
is
a
sure.
E
This
is
a
very
interesting
question
when
we
started
work.
Looking
at
engineering,
the
in-place
upgrade,
we
went
out
and
talked
to
a
lot
of
customers
and
a
lot
of
customers
said
they
don't
like
the
serial
upgrade.
They
don't
like
the
fact
that,
if
they're
on
3.7
in
order
to
get
to
3.11,
they
have
to
upgrade
to
3.9,
then
upgrade
to
3.10
and
upgrade
to
3.11,
and
that's
everybody's
kubernetes
distribution.
That's
just
the
upstream
API
boundaries,
that's
that's
how
it
makes
you
do
it.
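The serial-upgrade constraint can be made concrete with a tiny sketch: you must pass through every released minor version between where you are and where you want to be (OpenShift shipped no 3.8, so the released sequence jumps from 3.7 to 3.9). The `upgrade_path` helper below is purely illustrative:

```python
# Sketch of the serial-upgrade constraint described above: each released
# minor version must be stepped through in order; there is no direct jump
# from 3.7 to 3.11.

RELEASES = ["3.7", "3.9", "3.10", "3.11"]  # 3.8 was never released

def upgrade_path(current: str, target: str) -> list:
    """Every release you must pass through, in order, excluding the start."""
    i, j = RELEASES.index(current), RELEASES.index(target)
    return RELEASES[i + 1 : j + 1]

print(upgrade_path("3.7", "3.11"))  # ['3.9', '3.10', '3.11']
```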
E
At
the
same
time,
they
told
us
that
once
they
do
get
a
cluster
deployed,
they're
afraid
to
touch
it,
like
they're,
they're,
afraid
to
breathe
on
it
because
they're
they're
there
they
don't
have
the
tools
or
the
utilities
to
snapshot
a
cluster
and
redeploy
it
like
to
gather
up
some
of
the
the
storage,
TVs
and
sort
of
the
metadata
involved
in
that
the
role
binds
who's
allowed
to
log.
In
things
of
that
nature,
we
saw
some
presentations
from
some
storage
vendors
here
that
we're
solving
that
that
aspect
of
the
problem,
which
was
great
to
see.
E
So
we're
working
on
those
utilities
for
the
most
part,
since
a
lot
of
the
the
customer
base
is
a
little
further
behind
this
migration
utility
will
come
out
in
the
July
August
timeframe,
because
a
lot
of
our
those
particular
customers
wouldn't
consume
afford
auto
release
anyways.
So
we
think
we're
on
par
with
when
they
want
to
move
around
that
July
August
timeframe.
E
Q: GitOps, I think, was coined about 12 months ago by Weaveworks, I think. So the idea is that, rather than you using the oc command line and Helm and things like that, you just have a Git repository of what the latest version of all your Helm charts is, and all your variables, and there's an operator in your cluster that goes off to your Git repository and then reconciles that with what the current state of the cluster is. So you can only make changes to your OpenShift cluster by committing to your Git repo.
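The reconcile idea behind GitOps can be sketched in a few lines: diff the desired state held in Git against the live cluster state and emit only the changes. The dict-based states and the `reconcile` helper are invented stand-ins for real manifests and a real controller:

```python
# Minimal sketch of the GitOps reconcile loop described above: the desired
# state lives in a Git repository, an in-cluster agent diffs it against the
# live cluster state, and only the differences are applied.

def reconcile(desired: dict, live: dict) -> dict:
    """Return the changes needed to make `live` match `desired`."""
    to_apply = {name: spec for name, spec in desired.items() if live.get(name) != spec}
    to_delete = [name for name in live if name not in desired]
    return {"apply": to_apply, "delete": to_delete}

desired_state = {"web": {"replicas": 3}, "db": {"replicas": 1}}  # from Git
live_state = {"web": {"replicas": 2}, "old-job": {"replicas": 1}}  # from the cluster

print(reconcile(desired_state, live_state))
# {'apply': {'web': {'replicas': 3}, 'db': {'replicas': 1}}, 'delete': ['old-job']}
```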
E
I
think
I'm
a
lot
of
customers
use
similar
techniques
not
necessarily
get
in
that
their
touch
points
are
controlled
in
some
way.
Some
version
SCM
management
solution,
so
it
isn't
necessarily
something
we
haven't
seen
before,
but
it's
also
not
necessarily
something
we
have
to
help.
Do
you
know
they?
Those
those
primitives
or
interfaces,
are
there
in
kubernetes
for
you
to
achieve
that
without
us
coding
anything
for
you.
D
So
I
wouldn't
say
we
are
superseding
that
idea
right.
So
there
was
this
type
of
customer
that
is
very
accustomed
to
the
way
helm.
Did
things
will
stay
like
that
and
there's
also
a
very
large
ecosystem
of
helm
shots
out
there,
probably
much
larger
than
we
have
up
shift
templates
today?
What
we
do
with
the
SDK
is
you
have
seen
in
the
presentation
before?
D: ...is that we make it very easy to turn a Helm chart into an operator and make that seamlessly blend in with the rest of the framework, like the Operator Lifecycle Manager and Operator Metering, right? So I think on OpenShift you will receive the best integration using that, but you can continuously build upon the innovation that happens in the Helm ecosystem by using the SDK to update your operators from the newest charts.
E
So
it's
interesting
short
term.
We
we
we
go
wherever
that
the
content
goes,
whatever
format
has
the
most
content.
We
chase
it
and
that's
exactly
why
we
got
rid
of
gears
and
went
to
docker
containers
and
that's
why
we
support
we
allow
you
to
run
helm.
It's
not
like
it's
not
supported.
You
just
want
you
to
understand
that
we're
not
going
to
deliver
it
and
we're
not
gonna
fix
the
security
issues
with
it,
but
it
totally
works
on
openshift
its
short-term.
E
These
guys
have
this
awesome
command
line
with
the
operator
SDK,
where
you
can
convert
any
helm
chart
into
an
operator
with
just
a
little
bit
of
editing
like
even
I,
can
do
it
and
then
that
deploys
in
a
tiller
list
manner.
It
uses
the
CID
function
to
replace
tiller,
and
that's
today
right
now
you
can
achieve
that
I
think.
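Conceptually, a Helm-based operator built with the SDK treats the spec of a custom resource as the chart's values and renders the chart with them, with no Tiller in the path. A hedged sketch, using Python's `string.Template` as a stand-in for Helm's real Go templating; the CR and template below are invented for illustration:

```python
# Conceptual sketch of a Helm-based operator's render step: the CR's spec
# plays the role of the chart's values file. string.Template stands in for
# Helm's actual Go templating engine.

from string import Template

chart_template = Template("replicas: $replicas\nimage: $image\n")

custom_resource = {
    "apiVersion": "example.com/v1alpha1",  # invented group/version
    "kind": "MyApp",
    "spec": {"replicas": 2, "image": "example/app:1.0"},
}

def render(cr: dict) -> str:
    """Render the chart using the CR's spec as the values, Helm-operator style."""
    return chart_template.substitute(cr["spec"])

print(render(custom_resource))
# replicas: 2
# image: example/app:1.0
```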
E
Looking
ahead,
we
will
continue
to
partake
in
that
helm
community
because
it's
there's
a
lot
of
content
out
there
in
the
world,
that's
written
in
helm,
so
we
want
to
make
sure
you're
happy
with
our
platform
and
it's
performing
just
like
anybody
else's
kubernetes,
so
we'll
still
protect
that
investment.
For
you.
E
It
the
premise
of
the
code
right
now
is
be
agnostic,
so
to
use
utilities
like
tarp
to
pull
from
the
PV
and
then
redeploy
into
the
new
PV,
but
in
particular
to
open
shift
container
storage,
which
cluster
is
a
component
of.
We
are
working
on
something
to
help
migrate.
Should
you
choose
from
Gloucester
to
Seth?
So
that's
another
project,
that's
kind
of
bundled
into
the
migration
project
that
I'm
talking
about,
so
that
that
could
be
of
interest
to
you.
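The storage-agnostic tar approach E mentions can be sketched as: pack the old volume's mount point into a tar archive and unpack it at the new volume's mount point. The paths below are temporary directories standing in for two mounted PVs; this is an illustration, not the actual migration tooling:

```python
# Storage-agnostic sketch of the tar-based PV migration mentioned above:
# archive the contents of the old volume's mount point and extract them at
# the new volume's mount point, regardless of the backing storage.

import tarfile
import tempfile
from pathlib import Path

def copy_volume(src: Path, dst: Path) -> None:
    """Tar up `src` and extract it into `dst`, preserving relative layout."""
    with tempfile.NamedTemporaryFile(suffix=".tar") as tmp:
        with tarfile.open(tmp.name, "w") as tar:
            tar.add(src, arcname=".")
        with tarfile.open(tmp.name, "r") as tar:
            tar.extractall(dst)

# Demonstration with temporary directories standing in for two PV mounts.
old_pv = Path(tempfile.mkdtemp())
new_pv = Path(tempfile.mkdtemp())
(old_pv / "data.txt").write_text("hello")

copy_volume(old_pv, new_pv)
print((new_pv / "data.txt").read_text())  # hello
```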
E
Definitely
yep,
you
know
the
Jenkins
one's
an
issue
are
not
initiates
an
interesting
problem
to
solve
when
we
want
it
to
every
every
deployment
we've
ever
experienced.
The
customer
brings
up
a
CI
CD
component
to
it
right
they
it's
these.
These
worlds
are
very
much
connected
and
we
didn't
want
to
create
something
new
in
that
industry.
E
So
you
can
use
as
a
user
the
Jenkins
file
to
call
out
to
other
CI
CD
solutions
that
you
may
own
and
then
you'll
still
see
the
pipeline
movement
in
your
project,
so
it
was
kind
of
like
use
it
as
an
interface,
but
also
use
it
as
your
solution.
If
you
don't
have
a
solution
and
what
will
continue
to
develop
on
that?
If
you
want
to
get
it,
get
in
front
of
that
and
see
where
we're
going,
it's
open
shift.
B
L
B
L: As for OKD, the current thinking is that it's just gonna be Fedora CoreOS; that would be the platform for running OKD, on top of a CoreOS-based platform. As of last week, I don't know if there's going to be a CentOS version of it. As far as the Red Hat version, the Red Hat version is totally tied to OpenShift, so you can't go and use a Red Hat CoreOS for general, you know, running of containers; it's tied.
L
E
P
P
M
L
S
E: You know, I only comment on the stuff that's already been put out into the public by IBM themselves. It's known that this product set will end up in the cloud BU over at IBM, and that's where they have their IBM Cloud, their ICP; they have their middleware products. But we will be run as a separate entity, and we will report up through our existing management chains, all the way to our CEO. They've also said in public that they really view us as a way to achieve open-source prowess.
S
E
T
D: I guess that one sticks with me. So yeah, we are going to have a model for commercial offerings there which require a license. So yes, this has been thought of, and we will have operators that require some sort of enterprise subscription or license in order to be able to use the software. So that's something that the maintainer of these operators will be able to configure in the back-end system that is available to them, to actually publicize their proprietary operators or commercial operators.
D
E: That's just to reiterate: we don't want to be the sort of commercial brokerage house. We don't want the ISVs coming to us to get a cut of the money that the users spent. So the Operator Framework itself has everything that an ISV needs to succeed, to have that relationship directly with the customer.
E
D
J
J
E: Right now, if you were to do it today, the supported integration is to take what's called the ICP. And ICP is a big name that means a hundred different things, but a component of that is at the very top of the middleware stack over there, and it gives you a user interface and it gives you a plug into a CI/CD framework.
E
They
IBM
wants
you
to
download
that
and
run
that
on
top
of
OpenShift
with
a
helment
and
that's
fully
documented.
Has
a
user
guide?
Has
the
whole
nine
yards
on
it?
So
that's
completely
supported
by
both
Red
Hat
in
IBM
today,
so
you
can.
You
can
do
that
right
now,
future
wise,
we
like
damage
and
we're
both
publicly
traded
companies
and
we're
not
allowed
to
have
meetings
yet
with
each
other.
So
we
haven't
talked
at
that
granular
level.
E
Pricing
is
not
expected
to
change.
The
only
thing
that
we're
engineering
and
working
on
is
really
that
how
you
consume
subscriptions
right
now.
You
use
this
utility
called
subscription
manager.
That's
been
in
rail
since
the
beginning
of
time
right
and
it's
it's
starting
to
show
it's
where
entire
in
this
particular
category,
where
you're
going
to
be
auto,
scaling
and
auto
removing
nodes,
and
so
we're
we're
thinking
of
clever
ways
how
we
can
do
that
subscription
management
through
the
master.
So
you
so
the
nodes
don't
have
to
have
that
intelligence.
E
U: A question about CloudForms. So CloudForms does metering at this point, or chargeback; that's now moved into the Operator Framework. Also auto-scaling of nodes: you could do that through CloudForms through some scripting as well. Where do you see CloudForms living? Do you foresee a lot of the components being built into Grafana, Prometheus, and the Operator Framework, and eventually CloudForms moving to purely OpenStack? Or do you see CloudForms living on in the future?
E: CloudForms has a fantastic roadmap. You know, in that sector of the industry, when you look at ServiceNow, and you look at, like, you know, AppDynamics and all these other awesome things out there in the market, they're SaaS services, right? There's a pressure to become a SaaS service when you're in that industry. So CloudForms is in the middle of transitioning to also offering itself as a SaaS service from Red Hat, to do your monitoring just like ServiceNow would. And in that process...
E: They've really turned to all the product teams and asked us to take more ownership of our individual management. And so you saw in 3.11 that we shipped Prometheus to replace Hawkular, an older solution, so we've completely moved to Prometheus to do the event management and the monitoring. It's dev preview right now to have metering in 3.11, so if you want to see it, just send me an email and we'll get you in the dev preview program. You can see metering; that's the replacement for the showback feature and chargeback feature in CloudForms.
E: So, as of 4.0, we think we've pretty much replaced all the user experiences that somebody would have turned to it for. You mentioned auto-scaling: that will also be in 4.0, native to Kubernetes, so you won't need something outside of Kubernetes doing that for you. And then CloudForms would be that SaaS service that OpenShift sends telemetry up to, that is sort of the aggregator of all these different systems that you might have. So that's the way it's going to pan out.
L
L
V
V
V
V
E: So to cross that boundary from 3.x into 4.x, we're gonna use a migration utility. It won't be like the in-place upgrade that you've experienced from 3.9 to 3.10 to 3.11; we're taking a different approach to cross the major boundary. But on the Docker runtime question, you know, look at all the Kubernetes distributions: they've all pretty much moved, and even Docker themselves have moved to using an alternative runtime. Docker is using containerd, there's hyper.sh, there are four other ones that are out there.
E
L: The industry's moving to the CRI, a defined runtime interface; that's what CRI-O is. The problem is the Docker daemon has become too big; it lacks the performance that you need for running containers in production. So what's happened is Docker split: after CRI-O was introduced, Docker decided that they had to compete against CRI-O, and so they split Docker into two different daemons.
L
They
created
a
back-end
demon
called
container
D
and
there
was
a
container
deep
before
them
that
was
really
for
darker
swamp,
but
they
basically
enhanced
the
cryo
are
the
container
d
to
match
what
cryo
is
doing.
So,
what's
what
you
see
happening
now
is
the
industries
moving
to
to
new
container
runtimes
for
running
containers
under
the
kubernetes
and
that's
container
d
in
cryo.
V
C
E: So in OpenShift 3.x we had what we called the internal registry, and it wasn't very feature-rich when you compare it to other standalone registries, even our own Quay.io. We'll continue to do that; we'll continue to bundle an internal one. For 4.0 it will be the exact same bundled one; later in 2019, when we have time, we're...
E
A: All right, any more questions? Any other questions you can ask over a drink. I want to thank everybody who's come up on stage here today. There's a lot of time-zone jetlag for some of them; when we get them to come over to do these community events, they also end up in tons of customer meetings. So I really appreciate you guys giving us your time today and staying here through the whole day. So thank you very much; give them a big hand.