Description
OpenShift Origin 3.10 Release Update
Guest Speakers: Derek Carr and Mike Barrett
link to slides: https://blog.openshift.com/wp-content/uploads/Whats-New-in-Origin-3.10.pdf
Host: Diane Mueller
A: Well, hello, everybody, and welcome again to another OpenShift Commons briefing. Today we have Derek Carr and Mike Barrett from the OpenShift engineering and product management teams, and they're going to give us a briefing on the latest release of OpenShift Origin. I'll let them introduce themselves and get started, because there's a lot of information in this package today. So take it away, Mike, and let's get a move on here.
B: Great. Thanks, Diane. I'm going to be covering what's in Origin 3.10, and a lot of you have already had access to this code, depending on where you're pulling your Origin from. As you know, you can pull straight from the head, where you get builds every night, or you can just use the milestone on the release pages. A couple of weeks ago we did release another milestone; I think we labeled it RC0.
B: So a lot of these features you've had in your hands for a couple of weeks now. Typically, when we release the downstream version, the one that we sell over the counter, we cut another milestone release, so you will probably see another Origin milestone released next week at some point. That is when this will actually ship: next week, probably around Thursday.
B: We try to give customers guidance on what the big purpose of this release was. It was really to make the cluster more efficient. When you look at a cluster that's been up for a long time, there's care and feeding that you have to perform on it. There are the higher levels of security options that people typically want; how you add resources to the cluster has become much more efficient; and then, for the workload diversity, how that workload touches and consumes storage, CPU, and memory, we've given you some better hooks in those areas. So it's really just a more efficient release for us, in terms of the cluster itself, a distributed system, and the resources that are available to it. The first area that has some subtle changes, more on the back end, not something that's necessarily going to be seen by your users, is our automation broker. We pretty much just renamed the Ansible broker; it's now the Automation Broker. If you haven't seen this concept, check it out.
B: When you deal with the Open Service Broker, and service brokers in general in our industry, typically you get a service broker for every single service that you run. This takes a step back and still uses the same APIs, but says: wouldn't it be nice if you didn't have to load another service broker for every single service? It allows you to load just the one service broker and represent lots of services in it. So this has a subtle change.
B: On the back end, we moved to using the CRD API instead of using the local etcd instance. We give you some more information about who is provisioning the services, and then we give you more robust error messages coming back, which makes it easier to troubleshoot when things don't happen.
B: Another thing to mention, which not a lot of people are aware of: a lot of the cloud providers do have service brokers at this point, and AWS was the first mover by far in this area for the OpenShift platform, so it has the most services available to it. If you have not seen this in your Origin install, please do check it out. It's very simple to turn on and to load, and it brings in, I think, over 25 AWS services into your service console in Origin.
B: At this point, you'd be able to very rapidly start seeing what an application might look like if you merged it, on-premise, consuming these off-premise services. The user interface, our web console: we've been constantly improving it, and hopefully you've enjoyed those improvements and the new look and feel. This time around, we're fixing and improving search of the catalog, since the catalog is getting so much more in it, both off-cluster and on-cluster services.
B: This was a new feature back, I think, in 3.7, could have been 3.9: we gave the ability for a user to have multiple routes to their application, but we didn't expose it in the user interface. Now you do have the option to turn it on in the user interface, and then you'd be able to see the multiple routes up to that application. Generic secrets: if you've ever clicked on our Resources tab, we have a lot of resources under there, right? You can work with your config maps.
B: You can work with a lot of the things that don't have a view in the Overview and the Applications tabs, so you have a lot of ability to edit things on the fly under the Resources tab. We've added the ability to have generic secrets. This gives you the opaque secret that users are able to consume and attach to the services and workloads that they're deploying on the platform.
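For reference, the "opaque" secret being described is a standard Kubernetes Secret of type Opaque; a minimal sketch (the name and key/value pairs here are illustrative, not from the release):

```yaml
# A generic ("opaque") secret: arbitrary application key/value data,
# base64-encoded under the data field.
apiVersion: v1
kind: Secret
metadata:
  name: my-app-credentials    # illustrative name
type: Opaque
data:
  username: YWRtaW4=          # base64 of "admin"
  password: czNjcjN0          # base64 of "s3cr3t"
```

A workload then consumes it as environment variables or a mounted volume, which is the attach flow described above.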
B: The service catalog does have a command line, for people who know about the command line, and we bring that into the fold in the 3.10 release, so you can be successful adding stuff through any of the service brokers that we have through this one command. The service brokers, right: we talked about the Ansible one, but there's also a template service broker.
B: When we did give you the generic secret management, we did fix the add/remove flow, so you do have the lifecycle ability in that user interface to work with the secrets. And on error messages: we talked about it for the Ansible service broker, but also, as we provision and work with deployment configs, we did fix some error messages that were coming out in that area of the product.
B: Jenkins: I don't know if anybody's using the pipeline feature in the product. It really gives you a quick onboard to deploying Jenkins on Kubernetes and consuming it in a dynamic fashion. We were leaving some old build jobs out there on the Jenkins farm; this release fixes that, and allows you to reap them and remove them.
B: So, a lot of improvements in those areas. Then we updated our build agent images, the slave images, to understand the Node.js that's coming from our RHOAR packages, which are the higher versions of Node, like 10 and 8, and to also have an understanding of Maven. The CDK and Minishift: I don't know if anybody has experienced this yet; again, I'd definitely encourage you to take a look at it. It allows you to run isolated on your laptop and have a fully functional cluster.
B: When you look at it, some of the awesome improvements are around the packaging and the front end of your operators. When you take a step back, there are a lot of options there: we're championing Helm charts for the operators and also the Ansible Playbook Bundles, and then you can also, obviously, just do straight Go code, which is what the operator is based on.
C: So, each release of OpenShift becomes a wrapped version of the underlying container orchestration engine provided by Kubernetes. In a prior Commons briefing, we described some of the major highlights that came out of the Kubernetes 1.10 release; I encourage folks to watch that if they want to refresh their memory.
C: But we want to go through a series of particular issues or enhancements that came in the release, which we're going to highlight here on the right-hand side. I just want to make clear that OpenShift engineers are largely working upstream in the Kubernetes community to drive these changes forward, and so we're always excited when we can get them out into the OpenShift ecosystem during the actual release. If you want to move to the next slide, we'll start going through particularly detailed features.
C: There's been a long ongoing effort in the Kubernetes community broadly to support a wider variety of workloads on the cluster, as well as a wider variety of platforms, and within the 3.10 release you're going to see enhancements in these two areas. There were three major features around bringing on more workload types that were added in the release: a huge pages feature, CPU management, and the device manager. So, quickly, on each: the device manager.
C: People are welcome to ask about that afterwards, but this is a general framework where device plug-in vendors can make their devices knowable by the cluster, schedulable by the cluster, and then consumed by applications. The canonical example you'll often hear in the community around this area is GPUs, but we're excited to see what the broader ecosystem does with the device plugins support over time. On to two other particular features.
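As a concrete illustration of the device-plug-in model just described: a device plug-in advertises an extended resource on the node, and a pod consumes it like any other resource. The resource name below is the conventional NVIDIA plug-in example, not something specific to this release:

```yaml
# A pod consuming a device advertised by a device plug-in.
# nvidia.com/gpu is the extended resource name the NVIDIA plug-in registers.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example
spec:
  containers:
  - name: cuda-workload
    image: nvidia/cuda        # illustrative image
    resources:
      limits:
        nvidia.com/gpu: 1     # scheduler places the pod on a node with a free GPU
```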
C: The node then tells the container engine to run the workload, and the container engine interacts with some cgroup primitives and the kernel's completely fair scheduler, which basically allocates CPU time. But that CPU time might be shared across multiple CPUs on a machine, and particular sets of applications are often sensitive to being moved across CPU cores. The CPU manager feature is meant to address those application types, so if you're very latency-sensitive around the management of CPU-related resources, this is a feature that could be up your alley.
C: It's pretty simple to enable. In your node configuration, for the nodes that are running this class of workload, you turn on the CPU manager feature gate, like you see on the right, and you set the policy to static. Basically, any pod that gets landed on that node that makes a whole-integer CPU request and is in the guaranteed quality-of-service tier is going to get an exclusive CPU core for the life of its workload.
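A sketch of what that looks like, approximating the slide being described: the node-side settings follow the OpenShift 3.x node-config.yaml kubeletArguments convention, and the pod qualifies by having equal, whole-integer CPU requests and limits (Guaranteed QoS):

```yaml
# node-config.yaml fragment: enable the CPU manager with the static policy
kubeletArguments:
  feature-gates:
  - CPUManager=true
  cpu-manager-policy:
  - static
  kube-reserved:           # some CPU must be reserved for system daemons
  - cpu=500m
---
# A Guaranteed-QoS pod with a whole-integer CPU request, which the
# static policy pins to an exclusive core
apiVersion: v1
kind: Pod
metadata:
  name: pinned-workload
spec:
  containers:
  - name: app
    image: registry.example.com/app   # illustrative image
    resources:
      requests:
        cpu: "1"
        memory: 1Gi
      limits:               # requests == limits puts the pod in Guaranteed QoS
        cpu: "1"
        memory: 1Gi
```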
C: So, as I said, this is a useful feature for many latency-sensitive workloads that are looking to onboard onto Kubernetes and OpenShift generally, and we're excited to see how this feature evolves over time. The other major area around workload expansion is huge pages. Similarly, this is a pretty easy feature to interact with; the feature gate for huge pages is actually on by default, but just to enumerate here: when the flag is on, pod specs can request huge pages in the same way that they would request CPU or memory resources.
C: Given that the huge page size varies by architecture, there's a unique naming convention around how you identify the size of huge pages you want to consume. In the example on the right, you're asking for 100 megabytes of 2-megabyte huge pages. If the node has pre-allocated huge pages configured, the node will expose that up to the scheduler, the kubelet will properly isolate them, your application can consume them, and the scheduler will know how to schedule resources to those nodes properly.
C: Huge pages can be consumed either as emptyDir volumes or directly in your application. This would be useful for applications that run large caches or very large JVMs, and it has often been an impediment to getting that type of workload on the cluster. So, next slide.
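The pod spec being described looks roughly like this (the image name is illustrative); the page size is encoded in the resource name, and the emptyDir medium exposes the hugetlbfs mount to the container:

```yaml
# Requesting 100Mi worth of 2Mi huge pages
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-example
spec:
  containers:
  - name: app
    image: registry.example.com/app   # illustrative image
    volumeMounts:
    - mountPath: /hugepages           # hugetlbfs mount visible to the app
      name: hugepage
    resources:
      limits:
        hugepages-2Mi: 100Mi          # resource name encodes the page size
        memory: 1Gi
        cpu: "1"
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```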
C: So it gives broader visibility into the cluster, and then monitoring agents or node admins can better understand what's going on with machines and how to remedy the problem. This is a component that is currently tech preview; we expect to see it mature in the near term, and we think, in the fullness of time, it will help surface more node issues to support auto-remediation concepts in the future. So, an interesting building block moving forward. Next slide.
C: Yep, so this is another feature in Kubernetes 1.10 that we're getting in OpenShift 3.10, around protection of local ephemeral storage. As many folks know, Kubernetes can schedule CPU and memory, and we discussed how, new in this release, it can also schedule huge pages. But another major item that had been missing, and is now available, is the ability to schedule ephemeral storage. When you describe ephemeral storage, it's important to distinguish it from persistent storage.
C: Ephemeral storage, basically, is the disk on which your pod writes logs, any emptyDir volumes that your pod consumes, or the actual copy-on-write layer for your containers that your application might be working against. In the past this was treated as a best-effort resource: the node has some capability to observe when ephemeral storage is running low, and it would have evicted pods to try to reclaim that space.
C: But now the node is able to report back how much ephemeral storage it has, and, as a consequence, pods can start to request particular amounts and the scheduler can schedule them out. So now the management of the actual local disk is handled just like CPU and memory resources. The feature currently is tech preview.
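In practice, that means ephemeral storage becomes a first-class resource in the pod spec, alongside CPU and memory; a sketch (image name is illustrative):

```yaml
# Requesting and capping local ephemeral storage
# (covers logs, emptyDir volumes, and the writable container layer)
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-example
spec:
  containers:
  - name: app
    image: registry.example.com/app   # illustrative image
    resources:
      requests:
        ephemeral-storage: 2Gi   # used by the scheduler for placement
      limits:
        ephemeral-storage: 4Gi   # exceeding this makes the pod eligible for eviction
```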
C: The other component that comes in tech preview is the descheduler component. This is an interesting component in that, if you imagine, when a pod is first scheduled, the scheduler looks at the current state of the cluster and tries to find a node that can best meet the needs of that workload. But given the nature of the cluster, you're in a dynamic and fluid system, and what was once an optimal scheduling decision may no longer be.
C: So, for example, if you had a replica set that required three replicas, and two out of those three were on the same node, but space becomes available later on another node so you can fully spread those out: a common use case for the descheduler there would be to remove those duplicates and rebalance your workloads across the full state of current compute. So, an interesting capability; we expect it to grow over time as more and more defragmenting needs arise. On to the next slide.
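The duplicate-removal use case described above maps to a strategy in the descheduler's policy file; a minimal sketch, assuming the v1alpha1 policy format the upstream descheduler used at the time:

```yaml
# Descheduler policy: evict duplicate replicas so the scheduler can
# re-place them on less-crowded nodes
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemoveDuplicates":
    enabled: true
```

The descheduler only evicts pods; the regular scheduler then re-schedules the replacements against the current state of the cluster.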
C: In addition to the performance-oriented features you heard earlier, around bringing more and more workloads onto the cluster, the other major capability that's in dev preview in this release is supporting additional platforms themselves. In this case, Windows containers support is a feature we continue to help drive in the upstream, and we look forward to bringing it out to everybody; it's now within OpenShift Origin.
C: There are some prerequisites in place on the particular versions of Windows Server you need in order to join the OpenShift cluster, but I encourage people who are looking at running Windows workloads, or wanting to manage them, to take a look at this: join the Developer Program preview and look at the actual video of it working to get more details. With that, next slide.
C: Going through some additional features here: it's been a common theme that we're trying to make more and more of the actual resources that run on the cluster report metrics that can then, ultimately, be discovered by Prometheus and alerted against. The registry component that runs on the cluster can now report its metrics, and also ensure that those metrics are protected by an authentication endpoint. This is just a stepping stone to ensuring that everything scraped by the cluster monitoring Prometheus solutions in the future is securely trusted behind authentication.
C: The major topic I want to spend a little bit of time on here, Mike, if you jump to the slide, is changes to how people will see the deployment topology for OpenShift itself in 3.10. Prior to the 3.10 release, the control plane ran as a set of systemd-managed units on the masters when running on RHEL, and if you were running on Atomic Host, it ran as a series of system containers.
C: What we're trying to do is consolidate the way that we manage the control plane, whether on RHEL or a containerized Linux environment, by bringing more and more of the cluster components under the management of the cluster itself. So in 3.10 you'll see a major change in how the actual control plane components are deployed: etcd, the API server, and the controller manager are all now running as static pods, which means the kubelet itself, running on your masters, is responsible for ensuring that those pods stay running.
C: This has a number of benefits, the first being primarily that you can actually introspect, debug, and view the logs of these critical control plane components just like any other application component on the cluster. So if you navigate to the namespace that's running these pods, you can use the normal commands to view logs, exec in, and do debugging.
C: There were also significant changes in the 3.10 release around how nodes are configured. In the past, people may have been familiar with the node-config.yaml file that we had, which you would edit and Ansible would push out to each individual compute node in your cluster; then, when the node agent started up, it would read that config and locally know how to run the node agent.
C: A change here in this release is that we have a feature called bootstrap node configuration, and this makes all of the actual node configuration now managed as API objects in the cluster itself, so that you can make a change to a set of nodes in an entire group and have it pushed out to all nodes that match that group.
C: Essentially, what you'll see in 3.10 is that for each node group type there is a config map, created and managed within the cluster, that actually holds your node-config.yaml definition. If you want to make changes to how your nodes are configured, you edit that file directly in the cluster, just like any other config map, and there's an agent now running as a daemon set on every node that will see that change, bring it down to your local machine, and restart the kubelet to make the new configuration take effect.
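Structurally, such a group config map looks something like the sketch below; the group name and namespace follow the defaults the 3.10 installer creates, and the kubeletArguments payload is illustrative:

```yaml
# Sketch of a node-group config map: the node-config.yaml payload is
# stored under a data key and synced to matching nodes by the per-node agent
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-config-compute       # installer-created group name (assumed default)
  namespace: openshift-node
data:
  node-config.yaml: |
    kubeletArguments:
      pods-per-core:
      - "10"
```

Editing the config map in the cluster, rather than files on each host, is what makes the change propagate to every node in the group.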
C: So, basically, all management of your nodes and how they're configured can now be handled essentially within the cluster. The upgrade process ensures that your nodes are properly grouped and the proper config maps are set up, but basically, in 3.10, no operator or admin should go directly to the node to make a change to how the node operates; instead, they should be thinking about nodes in terms of groups, and making that change within the cluster.
C: Actually, sorry, another major feature of this, and why this is important, is that the way nodes join a cluster has changed. A node is given an initial bootstrapping config that gives it enough information to phone home to the API server and say: hey, this is who I am, and I want to join the cluster. It can do that securely, and then the API server knows how to deliver your configuration down to the node based on one of these config maps in your cluster.
C: In the fullness of time, we expect to continue to iterate on this solution and align it with other features that people may have seen in the Kubernetes upstream, which we're continuing to iterate on and evolve, around dynamic kubelet configuration. But, again, our intent is to get more and more of this stuff under the management of the cluster itself. Next slide.
C: On the networking side: when pods want to be able to speak to services off cluster, you could run an egress router. There were some changes in the 3.10 release to ensure that you can have an HA configuration, so that if your egress endpoint went out, it could gracefully fail over to an alternative. So, for folks who are interested in having their pods communicate with off-cluster services, I encourage you to check this out and look at more details. Next slide.
C: On the security front as well, another tech preview feature that we want to call out is the ability for containers within a pod to share the PID namespace. This is a feature that's been under work for a while in the upstream community, largely around improving the debuggability of containers, but it's also for folks who are building particular classes of applications that run multiple containers in their pods, where they want to be able to do some type of inter-process communication and share that process namespace.
C: This is a tech preview feature. You can enable the feature gate on the nodes in question, where you need this capability, and basically your containers can see each other within the pod. There are some caveats that we want to call out to work through, but in the interest of time we can move on, and people can check that out. Next slide.
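With the feature gate on, sharing the process namespace is a single flag in the pod spec; a minimal sketch (image names are illustrative):

```yaml
# With the PodShareProcessNamespace feature gate enabled, one flag puts
# all containers in the pod into a shared PID namespace
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-example
spec:
  shareProcessNamespace: true
  containers:
  - name: app
    image: registry.example.com/app     # illustrative image
  - name: debug
    image: registry.example.com/tools   # can now see and signal app's processes
```

One of the caveats alluded to above is that processes are visible (and signalable) across containers, which changes assumptions some applications make about being PID 1.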
C: Another hardening feature on the security front is that the router service account, which controls ingress into the cluster, no longer needs access to all secrets on the cluster. This is just a best-practice hardening capability that you'll now see on the default deployments for 3.10, and it ensures that a compromise of your router doesn't necessarily compromise your entire cluster. Next slide.
C: Each release there are further enhancements in the storage space around particular storage provisioners. New in this release is a provisioner capability for Ceph, so for folks who are looking to integrate their OpenShift environments with Ceph, this would be a thing we encourage everyone to check out. Again, it's tech preview for the release.
C: Finally, OpenShift runs on RHEL, so we wanted to call out a couple of the key highlights of RHEL 7.5, as well as make everyone aware of particular issues. One thing interesting to call out here is ensuring that everyone understands that OverlayFS is now the default storage driver for container runtimes, which means it's now the default storage driver for OpenShift installs as well. Device mapper continues to be supported and available, but we encourage everyone to move over to OverlayFS, as long as they are not bound by any particular regulatory or compliance constraints. With that, I think we can move on to the next slide.
C: Innovation in the container runtime space has been a long-running effort in the upstream communities. At Red Hat, we've spent significant energy on the CRI-O effort, which is an OCI-compliant implementation of the Kubernetes Container Runtime Interface, dedicated solely to Kubernetes. CRI-O in v1.10 continues to work and is fully supported.
C: At this time, the installer allows you to install a fresh cluster using CRI-O when configured appropriately, and we encourage folks to check out CRI-O and provide more feedback on the runtime as we continue improving it. We're seeing excellent performance when running CRI-O ourselves; we are running it in production, in our OpenShift Starter environments, and, in general, finding it to be a pleasant operational experience. With that, next slide.
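Opting a fresh install into CRI-O is done through the openshift-ansible inventory; a sketch, using the commonly documented variable for the 3.x installer (verify against your installer version):

```ini
# openshift-ansible inventory fragment: run the kubelets against CRI-O
# instead of the Docker engine
[OSEv3:vars]
openshift_use_crio=True
```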
B: In terms of dev preview versus tech preview: with dev preview, normally you'll be using upstream bits. We might ask you to pull from GitHub, or even from locations in other open source communities that aren't from Red Hat. We typically form a program around it, because we don't want to tax critical support services, so there would be very specific people with knowledge about it talking to you on that.
B: With tech preview, you still won't be able to get break-fix support, but the support services will still know what you're talking about when you call. You'll still not be able to upgrade; we'll make you reinstall a tech preview feature for the GA version of it, so it's not backwards compatible in that regard. And, more importantly, we can forecast the GA date at that point. So that's the primary difference between the two.
A: And then one big thing, I think, as well: there's a question around whether Atomic will be deprecated in favor of CoreOS, whether that will be backward compatible, and whether the CoreOS option is available today. I think there's a lot of meat in this presentation, but I think that was one of the bigger things to address.
B: So, in terms of the Container Linux talks at Summit: if you haven't seen those decks, they go into a pretty deep analysis of what we're pulling from Container Linux and what we're pulling from the Atomic project, and that's going to become Red Hat CoreOS. So you won't hear us use the words Container Linux or Atomic Host; we'll use the words Red Hat CoreOS, and that means the just-enough operating system that is accelerated for container usage.
B: Backwards compatible is an interesting thing. I mean, it's a brand-new install, right, a brand-new kernel that you would install, so it isn't, in that regard, backward compatible. We did keep a lot of the smaller sub-projects in the Container Linux upstream, like Ignition, for example; we're carrying that forward. So where we carry things forward, you'll still see them, obviously, in that new release. Can you get your hands on it now?
B: So, there are no changes to PVs. The only thing happening on the storage road around usability in this release: if you're into CNS, the container-native storage, this release pumps a lot more usage data and event data into Prometheus. So that's the only usability change that I have coming in the PV area, in terms of the operator experience in the web user interface.
B: The ability to see more raw Kubernetes, that comes in 3.11. In 3.11 we're taking a lot of the Tectonic consoles and making them available as a GA user experience in OpenShift, so there will be two consoles, if you will; you'll pick the experience that you want. Then, in the release after that, they'll merge into the same codebase, and they'll both be on a React back end in a single web console.
A: There was one question earlier that you mentioned and answered in the chat, Mike, around Helm: that it is possible to run it today on any Kubernetes cluster, but that we don't encourage it for multi-tenant clusters, and really emphasizing that, starting in 3.11, when we get the operator service broker, you'll start to see the ability to use Helm for operators. I just wanted to add, if anyone's interested in learning more about the operator framework, the operator SDK, or operators in general: there is going to be a recurring operator framework meeting on the third Friday of every month. The first one is going to be this week, Friday at 9:00 a.m., and I've posted the details in the chat for how to get there. There will be a good intro talk with members of the team working on the operator framework, with Sebastien Pahl and a number of other folks joining.
B: Just looking at any other questions I see here. Does Prometheus replace Hawkular in this release? No, it doesn't; Hawkular will actually be in 3.11 as well, and then we will completely remove it from the product in the release after 3.11. We did release a z-stream for 3.9, I think today or yesterday; that's why you saw 3.9.33 come out. It's fine to upgrade from that to 3.10. 3.10 should be available next week, by Thursday, is when I'd forecast it. Many improvements around AWS and auto-scaling groups come in our 3.11 release.
B: The TLS bootstrapping of nodes that you saw Derek presenting was a huge feature to get in, to enable the auto-scaling feature to be fully capable, so we're just cleaning that up, and that'll be available in 3.11. I'm sure, Diane, there's a request for a session on CSI. CSI is getting to the point where I think partners would be interested; not really end users, but people who distribute storage solutions to the market.
A: So, I think we've done a pretty good job; that's about 45 minutes there. If you have more questions, people will be in the Slack channel afterwards. If you're not in Slack yet, ping me at dmueller at redhat.com and I'll add you in. We're also very interested, I'm also very interested, in knowing if you are running Origin deployments in production, and which of you are actually running Origin.
A: Any more questions? All right. Well, thank you, Mike and Derek, for taking the time today, as you do each release; much appreciated. Lots of content in there, so I'm going to try and get a number of briefings, deeper dives into different topics, so look for those coming out in the coming weeks and months. All right, take care, all, and looking forward to getting more feedback from you all.