Description
OpenShift Commons Briefing
OpenShift 3.11 Release Update with Scott McCarty Red Hat
link to slides: https://red.ht/2OBR7yM
A
Hello, everybody, and welcome to one of my favorite versions of the OpenShift Commons briefing: that's when we run down what's in the latest release of OpenShift. I'm really pleased to have Scott McCarty, who's rustled up a whole bunch of the product management team from Red Hat to give you a not-quite-high-level, not-totally-deep-level deep dive on what's in OpenShift; we've got a lot of the PMs here. The format today: Scott's going to kick it off, each of the PMs will take the section that they're in charge of, and we'll have Q&A at the end if there's time. So what I would say is: ask questions in the chat while people are talking. There are plenty of people who want to answer them, and we'll open up the mics at the end. So with that, take it away.
B
Thanks, Diane. Yeah, thanks for having us. I rustled up, I think, about five of us from the PM team, the product management team, and we're gonna do, I'd say, a 201-level talk, the deepest dive we can do in the amount of time we have, so I'll just dig right in. Oh, one other thing I guess before I kick off: I'll say all of us are pretty public, all of us are on Twitter, all of us are around, so we're definitely happy to do follow-up questions afterwards.
B
If we can't end up answering something here. But I'll kick it off pretty quick because we have a lot of material to cover, so I just want to dig in. We kind of talked about this ahead of time: there are three main themes that we're going to focus on during this release and also in this presentation. The first is that there's just a ton of improvement to the admin experience in OCP, obviously with the CoreOS acquisition.
B
This is kind of the overall stack. If you've attended any of these, you've probably seen it before, but in a nutshell, OpenShift is just kind of split into three different layers, and that's even how we mirror our sprint teams and everything internally. So we focus on the service layer, the platform layer, and then what I call the engine room.
C
All right, hi everybody. I'm Rob Szumski, the PM for the OpenShift console, and I came over from the CoreOS team. We've got a really great set of features in our new cluster console. This is heavily focused on that persona of infrastructure admins, any shared IT team that is working on the cluster, and it really exposes a lot of the containers-as-a-service-level features.
C
So the admin experience is really interesting: these are the things that are running a cluster, and you need to be able to troubleshoot them, manage them, and see what's going on resource-wise. We've got a really great set of screens where you can view the status of your nodes, and this is all the Kubernetes statuses, like disk pressure and PID pressure and that type of thing. You can also view, in this new admin console, default metrics, as you heard about in the themes for this release, and those are accessible.
C
Looking at these in the YAML view is really great, but it can be pretty hard to visualize, and if you want to see exactly what a role that you're granting to somebody can do, you can use these screens to do that. You can see the verbs, which types of objects it can mutate at different levels, and then you can even impersonate these roles via the console.
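As a rough sketch of what those role screens surface, the logic amounts to: for each rule in an RBAC Role, collect the verbs it grants on a resource. The role data below is a hypothetical example, shaped loosely like a Kubernetes Role:

```python
# Minimal sketch of "what verbs does this role grant on a resource?"
# The role definition here is a made-up example, not a real OpenShift role.

role = {
    "metadata": {"name": "pod-editor"},
    "rules": [
        {"resources": ["pods"], "verbs": ["get", "list", "update"]},
        {"resources": ["pods/log"], "verbs": ["get"]},
    ],
}

def verbs_for(role, resource):
    """All verbs the role grants on a given resource, deduplicated."""
    return sorted({verb
                   for rule in role["rules"]
                   if resource in rule["resources"]
                   for verb in rule["verbs"]})

print(verbs_for(role, "pods"))  # ['get', 'list', 'update']
```

The console renders exactly this kind of verb-by-resource matrix so you don't have to read the YAML.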
C
So if you want to see exactly what access somebody's gonna have, you can go and impersonate that user: click around the screens, see what they can add, edit, delete, that type of thing. And the beauty of this is it's all self-service. So if you have an owner of a project, like a lead on a team, they can manipulate their own service accounts, roles, and role bindings for that namespace.
C
Also included is a cluster-wide event stream, and this is critical for seeing what's going on across the cluster, if you're doing some maintenance or you want to see exactly what's happening, what's complaining about some connectivity or something like that. If you're a cluster admin, this is available across all namespaces, so you can see the firehose of what's going on in the cluster, as well as if you're an owner of a project.
C
So that's a lot about the UI; let's switch gears and talk about some of the operators. That was another one of the big themes that we talked about at the top of the hour. We're happy to have the Operator SDK available, and this is the way of building these smart, Kubernetes-enabled application managers: you're embedding a bunch of unique operational knowledge into a piece of software.
C
And then you're adding that logic: everything that would be in a runbook, something that you would have in a wiki, anything that needs to be done, like launch this application in this specific order, then change this config, then reload this thing. All that knowledge can be added into an operator, and then you can really automate your application deployment. We have an SDK that helps you.
C
It does a bunch of code generation so that you're not having to write 90% of the code; you're just bringing the business logic of whatever you're gonna use. This Operator SDK has been used by a lot of our partners, such as Couchbase, MongoDB, and Redis, and you can see the theme there: it's all about more complex stateful workloads that need special management on Kubernetes. You can't just kill these pods and reschedule them; you might need to do data rebalancing when you scale out.
C
So there's a number of flavors of operators that you can build. We have our Golang one, which is at the end of the most complex: you're using the full power of the Go programming language and all the Kubernetes libraries. But if you have a Helm chart, you can actually just get started with that. The beauty of this is it's a more secure, more automated way to run Helm: you're not running Tiller. The way this works is you can take your chart and you bake it into an operator.
C
We actually call Helm internally as a library, and it's only scoped to the namespace where you're running. Then you can define a Kubernetes CRD; this is a custom object. So in the example on the right we have this Tomcat object. If you want to model a new type, just like a StatefulSet or a Deployment, you now have a new kind called Tomcat, and you can expose the values that you would pass in to that chart as the spec of the object.
C
So this allows people to reuse a chart many times and change configuration values on it. If you're moving something from staging to production and you need to scale it up in production, you would override the replica count, to be 20 for example, and whenever these objects are modified, the Helm chart gets applied. That's the power of the operator: you're getting this live reconfiguration.
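As a sketch of that pattern, the custom resource's spec carries the same values you'd otherwise put in the chart's values file, and overriding a field is how staging and production diverge. All names here (the Tomcat kind, its group/version, the replicaCount field) are hypothetical illustrations, not the actual CRD:

```python
# Sketch of the Helm-operator pattern: the CR spec holds the chart values,
# and modifying the object is what triggers a re-apply of the chart.
# Every name below is a made-up example.

def make_tomcat_cr(name, replica_count, image_tag):
    """Build a dict mirroring a hypothetical Tomcat custom resource."""
    return {
        "apiVersion": "apache.example.com/v1alpha1",  # hypothetical
        "kind": "Tomcat",
        "metadata": {"name": name},
        "spec": {                      # becomes the chart's values
            "replicaCount": replica_count,
            "image": {"tag": image_tag},
        },
    }

def scale_for_production(cr, replicas=20):
    """Override one spec field; the operator reconciles the chart on change."""
    cr["spec"]["replicaCount"] = replicas
    return cr

staging = make_tomcat_cr("my-app", replica_count=2, image_tag="9.0")
production = scale_for_production(make_tomcat_cr("my-app", 2, "9.0"))
print(production["spec"]["replicaCount"])  # 20
```

The point is that users never touch Helm or Tiller directly; they only edit a namespaced Kubernetes object.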
C
As I mentioned, we have the Go operator. This is the most complex. Using the same type of object, the example Tomcat object, you basically model everything that needs to be done to launch, maintain, and upgrade Tomcat, put that into a logic tree, and the operator will act on it: when a new Tomcat object shows up, it will scale it up, scale it down, change it.
C
Take the max active sessions that you see here, for example: it will bake that into the config file, reload the container, whatever it needs to do. Who this is really great for is ISVs and development teams that have a lot of programming chops. If you are really comfortable writing Go and you know all the advanced features that you can do inside of Kubernetes and orchestrate that, you're going to be a really good fit for this style of operator.
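The "logic tree" idea reduces to a reconcile function: compare the desired state on the custom object against the actual state, and emit the actions that close the gap. A toy model (in Python rather than Go, with made-up field names) might look like:

```python
# Toy reconcile step illustrating what a Go-based operator does:
# observe desired vs actual state for one object and decide the actions.
# Field names (replicas, maxActiveSessions) are illustrative only.

def reconcile(desired, actual):
    """Return the actions an operator would take for one Tomcat object."""
    actions = []
    if actual is None:
        actions.append("create")
        actual = {"replicas": 0, "maxActiveSessions": None}
    if actual["replicas"] != desired["replicas"]:
        actions.append(f"scale {actual['replicas']} -> {desired['replicas']}")
    if actual["maxActiveSessions"] != desired["maxActiveSessions"]:
        # e.g. bake the value into the config file and reload the container
        actions.append("update-config-and-reload")
    return actions

print(reconcile({"replicas": 3, "maxActiveSessions": 100}, None))
```

A real operator runs this continuously against watch events, which is where the full Go client libraries earn their keep.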
C
So once you've built your operators, you need to hand them off and run them on a cluster, or more likely a set of clusters. For that we have, in tech preview in 3.11, the Operator Lifecycle Manager. This is the place for you to do rolling updates and deployments of a number of operators. So if you're running a thousand operators on a cluster, or you have a number of teams, they can have access control over which types of operators they can use.
C
So if you've got a team that you want to be able to use MongoDB, you can install that operator on the cluster and make it available in their project, and then they can do self-service of that MongoDB operator and all the instances running. You can install this via the openshift-ansible installer, which can get you easily up and running with this.
C
If you make your own operators with the Operator SDK, either the Go or the Helm flavors, you can add those into this catalog as well. So this is the one-stop shop where all of your engineers can come to see what operators they can use, and if they want to stand up a quick Prometheus cluster to support monitoring of their application, they can easily do that.
C
We've got some work being done with our Automation Broker to integrate more heavily with Ansible Galaxy, so you get a lot more of the content that's being pushed by the Ansible community, and this is really great. You can read here about how you configure that with your OpenShift cluster, and those items will be populated inside of the Service Catalog.
C
Here's some more information about some of the ongoing work in the Automation Broker: support for authenticated registries. We've recently moved over to a new registry for Red Hat containers, registry.redhat.io, which has some authentication in front of it, so you can define those credentials in the broker for the content you want to have access to. Also really great is access control applied to the Service Catalog, so that specific APBs can only be exposed to certain namespaces.
C
This is very similar to what we just talked about with the operators, where you may want to limit certain teams to only have access to certain resources. So, following the RBAC model of the rest of Kubernetes, you can also expose your APBs at the same level, so this could be limiting them to specific ones you want used in dev first, production next.
C
We've got a number of improvements to the build features that are built into OpenShift and Jenkins. Plugin stability improvements are always great, keeping up with the releases of Jenkins itself, plus a few other things to help you, so you can chain triggers and decide how that is actually going to trigger downstream builds, that type of thing.
C
There are a number of different tools to help you do local development. Our Container Development Kit has gotten some bumps, and it's now based on a newer version of Minishift; you can see a little bit more about Minishift 1.24 there. They're a really great way to use OCP locally and then ship that off to the same OCP version that you're running in the cloud or on your production cluster, and now we've got oc and kubectl included in that as well.
B
With v2 there's really an ability to list all of the images that are in it; it's essentially a way to enumerate all of the container images, or registry repositories if you want to call them that, that are in a registry server. This is used by a lot of our partners, and it's also used if you're programmatically hitting the API and doing things. So we've updated that to the v2.2 spec, I believe. Another interesting feature that's been added
B
is the ability to bulk mirror image tags, or image streams I should say. So image streams: you can basically mirror them between different OpenShift instances, and that makes it really nice. Everybody talks about pets and cattle in the context of cloud, but at the end of the day there are always some pets, and if you've built a curated set of images in one OpenShift, you want to essentially mirror them over to another.
B
So it makes it easier to monitor what's happening inside of the container registry in OpenShift, and we've also fine-tuned the debugging levels. Then finally, a couple of really important announcements. One: we're deprecating a couple of things around the standalone registry installer. You could work around that and do a single-node install if you still really want it, but we're really moving away from the standalone registry installer; that's now more replaced by the Quay Enterprise offering. Another thing is we're moving away from the registry console.
B
Distribution is the upstream project that we use for the registry, and we're now moving to Quay as essentially the technology that will be inside of OpenShift. And then also, I don't think I mentioned it, but I think it's on here too: we also have a target to open source the Quay Enterprise code.
B
So that's another thing that's on the roadmap as we move into Quay. Quay v3 is coming out along with Clair v3. I did mention the v2.2 spec, which is good; I think the biggest thing there is it's really able to support multi-arch, and that includes things like Windows images. So now it's really nice because you can have a single enterprise registry server.
B
Whether you have ARM images, Windows images, and x86 images, you're good to go. There's also a little bit of rebranding, and we're moving Quay to a RHEL base image, which is kind of nice: now we have a single supply chain for running it out of the container image. Then with Clair, we're essentially adding some new data models that you can import. So, in a nutshell:
B
The value of this is that you'll be able to pull in things like, historically, Red Hat's always released really good OVAL data. If you're not familiar, it's a stream of data that we publish that essentially maps RPM packages to CVEs. So if you're looking for possible exploits that people could use against your software, you would basically say: okay, let me check this OVAL feed and verify that Red Hat's patched certain things. Also, the common programming languages have some of these.
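Conceptually, consuming such a feed amounts to a lookup from a package to the CVEs fixed at given versions, checked against what's installed. The feed entry and versions below are invented for illustration, and the version comparison is deliberately naive (real RPM version comparison has its own rules):

```python
# Sketch of checking an OVAL-style feed: which CVEs affecting a package
# are fixed only in versions newer than the one installed?
# The feed contents here are made-up example data.

FEED = {
    # package: list of (version_the_fix_landed_in, cve_id)
    "openssl": [("1.0.2k-16", "CVE-2018-0732")],
}

def unpatched_cves(package, installed_version):
    """CVEs whose fix version is newer than the installed version.
    Naive lexicographic compare; real rpmvercmp is more involved."""
    return [cve for fixed, cve in FEED.get(package, [])
            if installed_version < fixed]

print(unpatched_cves("openssl", "1.0.2k-12"))  # ['CVE-2018-0732']
```

Clair's job is to run this kind of check, at scale, against every layer of every image in the registry.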
B
So, for example, Java publishes their own format of a central data stream that maps whether certain CVEs have been fixed in certain versions, and so we're basically adding the capability to look at and consume these different feeds, and start to analyze the programming languages inside of your container images.
B
Currently it was a little bit manual to deploy Clair with Quay, as a sub-tool within Quay, and so now we're basically making it more and more automated, eventually moving to an operator. And so with that I'm gonna move down into the platform layer; you'll see the corner goes to blue. I believe we're gonna hand this to you: is it Mark? I think you're gonna cover that? Yes.
D
That's correct, thanks Scott. So in this first one here, what we're doing with AWS auto scaling groups is enhancing our current capability of auto-scaling Kubernetes clusters for increased workload elasticity. It's a new feature that enables auto-scaling nodes on AWS infrastructure, on demand, based upon a predefined auto scale group. Every few seconds the autoscaler checks to see what the pending node allocation is for pods and then makes sure that enough nodes, or AWS instances,
D
in this case, are available to meet your desired capacity. When demand drops and fewer nodes are required, the autoscaler automatically removes those unused nodes, and so with this optimization you end up only using those allocated resources which are needed for the current demand. Customers can realize savings here by only keeping active those instances which are required. To enable the feature, there's a few manual steps that must be executed outside of OpenShift; those steps are highlighted here and described in more detail in our product documentation.
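A back-of-the-envelope version of that periodic check can make the behavior concrete: count pending pods, add nodes to cover them, shrink when idle, and always stay inside the group's min/max bounds. The numbers and the pods-per-node simplification are illustrative, not the real algorithm:

```python
# Toy model of the autoscaler's periodic decision: how many nodes
# should the auto scale group have, given pending pods and bounds?
# Real autoscaling considers actual pod resource requests, not a flat
# pods_per_node figure; this is a simplification for illustration.

def desired_nodes(current, pending_pods, pods_per_node, min_nodes, max_nodes):
    extra = -(-pending_pods // pods_per_node)  # ceiling division
    target = current + extra if pending_pods > 0 else current
    # the real autoscaler also drains and removes underused nodes
    return max(min_nodes, min(max_nodes, target))

print(desired_nodes(current=3, pending_pods=25, pods_per_node=10,
                    min_nodes=2, max_nodes=6))  # 6
```

The min/max clamp is what the predefined auto scale group contributes: the cluster can never scale outside the bounds the admin set.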
D
Next slide, please. Support for Ansible 2.6: based on customer demand, we have support for a newer version of Ansible that can be used both for installations going forward and for upgrades from prior versions of OpenShift to 3.11. To use this update, it's basically a one-liner to enable the correct repo, as shown in the example on this slide. Next slide,
D
please. Another enhancement: customer installs are now logged. New for OpenShift 3.11, Ansible installs are logged by default, to a location that's defined by a log path parameter in the Ansible config file. The only requirement is that when you run any playbooks, you be in the openshift-ansible directory prior to doing so. Next slide, please. Reference architectures: the OpenShift reference architectures have moved into our OpenShift product documentation. This process started back in 3.10, and the content continues to be enhanced and updated, in particular for 3.11.
D
We've done a number of enhancements to our default router, HAProxy; quite a bit of work has gone into it. The table shown here represents a high-level overview of those enhancements. Let me quickly provide a little more background information about each; our documentation provides much more detail than is shown here.
D
So I'll refer you to that for more. But basically, HTTP/2: at the moment, with our HAProxy version 1.8, it supports HTTP/2 on front ends, and what that means is that we can terminate those requests at the router, and the traffic gets translated to HTTP/1.x to talk to services within the cluster. On the performance front, you can now increase the number of threads per HAProxy so that your deployments can serve more routes, for example.
D
This ends up resulting in better overall performance, per our research, than increasing the number of processes. Dynamic changes: in our latest HAProxy 1.8 version there are more options for improved handling of configuration changes, and the goal here is to minimize the need for a full router reload
D
whenever routes or endpoints change. Client SSL/TLS certificate validation: we've enabled mutual TLS authentication, and also support for customizations, because our customers have told us that it's sometimes necessary to use edge or re-encrypt routes to support older clients or services that don't yet support SNI but still require certificate-based verification. And the last one listed there, logs captured by aggregated logging: customers want to be able to see traffic to their routes.
D
So by deploying the router with an rsyslog sidecar container, we can collect access logs so that operators can see them in the aggregated logging stack. Next slide, please. Since OpenShift 3.10 we've had the ability to assign egress source IPs, forcing egress traffic from the cluster to come from a distinct source IP per project or namespace, so that customers can filter on it at the edge. What I'm going to cover here, on this slide and the next, is a couple of enhancements to providing that namespace-wide egress IP.
D
What we've done in 3.10 and in 3.11 is add two options to make that egress source IP highly available. So how does it work in this first method? Basically, a list of two or more egress IPs is defined for the project, each of which is assigned to a different host in the cluster, and the first listed egress source IP is used while its node is healthy.
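Assuming the failover behavior the HA framing implies (traffic moves to the next listed IP when the first IP's node goes down), the selection logic can be sketched like this; the IPs and node names are invented for illustration:

```python
# Sketch of the first HA-egress method: several egress IPs, each pinned
# to a different node; use the first one whose node is still healthy.
# IPs and node names below are illustrative only.

def active_egress_ip(egress_ips, node_of, healthy_nodes):
    """First listed egress IP hosted on a healthy node, else None."""
    for ip in egress_ips:
        if node_of[ip] in healthy_nodes:
            return ip
    return None

ips = ["192.168.1.10", "192.168.1.11"]
node_of = {"192.168.1.10": "node-a", "192.168.1.11": "node-b"}

print(active_egress_ip(ips, node_of, {"node-a", "node-b"}))  # 192.168.1.10
print(active_egress_ip(ips, node_of, {"node-b"}))            # 192.168.1.11
```

This also makes the downside discussed next visible: every IP in the list has to be whitelisted at the external service, since any of them may become the active one.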
D
Next slide, please. A potential downside to that previous slide, as you may have noticed, is that it requires more than one egress IP to be added to the whitelist of the external service, and IP addresses are often a valuable commodity at the data center. So for those installations where that would be an issue, we offer a second option for making a project's
D
egress IP highly available. With this slide's option, projects are automatically allocated a single egress IP on a node in the cluster, through which all of their traffic flows, and that egress IP is automatically migrated from any failed node to a healthy node. So why wouldn't we just make this the only option? The potential downside to this option, versus the previous slide, is that it's potentially slower by a few seconds to enact this particular failover. For some customers
D
that's important; for other customers it's not, but you now have the option to choose. Next slide, please. Configurable VXLAN port: for those of you that are using OpenShift on top of VMware, with VMware NSX as part of the underlay networking, the port we use for VXLAN in OpenShift is now configurable. OpenShift uses VXLAN to provide association of pods with a project, even if they happen to be located on different nodes.
D
VMware modified the VXLAN port used for their NSX-managed infrastructure to the same port we've been using in OpenShift, 4789. They did this to adhere to RFC 7348, and the problem is that the change on their part resulted in a port conflict between our two products. Customers do have the option of modifying the VXLAN port in VMware to resolve the conflict, but often that either isn't possible or reasonable, for whatever reason, to do.
D
Security context constraints: if you aren't familiar with security context constraints, they are simply constraints that allow administrators to control permissions for pods. In 3.11 there is a new option to prevent containers from gaining new privileges, and the settings are listed here: allowPrivilegeEscalation and defaultAllowPrivilegeEscalation. There's also new control over which sysctl options can be defined in a pod spec.
D
Those are forbiddenSysctls and allowedUnsafeSysctls. These two features in combination have the net effect of being able to group the sysctl parameters into safe and unsafe categories, and it does that with proper namespacing and pod isolation to enable the ability to set safe sysctls for a pod. So what exactly is a safe sysctl? In a nutshell, it must not have any influence on any other pod on the node, must not allow harm to the node's
D
health, and must not allow the ability to gain CPU or memory resources outside of the resource limits of a pod. The sysctls that qualify are listed down below in the speaker notes section of the slide, but that's what defines a safe sysctl. All safe sysctls are enabled by default; all unsafe sysctls are disabled by default and must be allowed manually by the cluster admin on a per-node basis. If you don't change that, pods with disallowed unsafe
D
sysctls will still get scheduled, but they will fail to launch. Next slide,
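The safe/unsafe split described above can be sketched as a simple admission check: a pod's requested sysctls pass if each one is either in the safe set (the three namespaced sysctls Kubernetes treats as safe, which is what the speaker notes list) or explicitly allowed on that node. This is a simplified model, not the kubelet's actual code:

```python
# Sketch of the safe/unsafe sysctl gate: safe sysctls are namespaced and
# allowed by default; anything else needs an explicit per-node allowance.

SAFE_SYSCTLS = {               # the namespaced, pod-isolated ones
    "kernel.shm_rmid_forced",
    "net.ipv4.ip_local_port_range",
    "net.ipv4.tcp_syncookies",
}

def admits_pod(requested, allowed_unsafe=()):
    """Would a node with this unsafe-allowance accept the pod's sysctls?"""
    return all(s in SAFE_SYSCTLS or s in allowed_unsafe for s in requested)

print(admits_pod(["net.ipv4.tcp_syncookies"]))                     # True
print(admits_pod(["net.core.somaxconn"]))                          # False
print(admits_pod(["net.core.somaxconn"], {"net.core.somaxconn"}))  # True
```

The failure mode mentioned above follows from the per-node nature of the allowance: scheduling doesn't consult it, so a pod can land on a node that then refuses to launch it.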
please. Identity and access management: GitHub Enterprise is now an authentication and identity provider for OpenShift, using OAuth as the controlling mechanism that facilitates the token exchange between the two. Also, OpenShift has, in tech preview, support for the Security Support Provider Interface to facilitate single sign-on with Windows.
E
Thanks, Mark. I'm going to be covering monitoring, logging, and storage for the next few slides; I'm a product manager on the OpenShift team, based out of the Massachusetts area, the Boston area. We are very excited to actually introduce Prometheus as a GA feature. We introduced it as tech preview a few releases ago, but that was paused a little bit because of the CoreOS acquisition, because they also had a Prometheus stack which was much more robust, and so this represents the culmination of that integration.
E
So what do we get? Basically, we are calling this cluster monitoring, and as the name indicates, it is really about capturing metrics and monitoring the cluster itself. This includes things such as, as you can see down here, the state of the node itself, using the node exporter; metrics regarding Kubernetes itself, using kube-state-metrics; and then the Prometheus server collects the metrics and stores them, and the Alertmanager is what you use to manage the alerts.
E
The other thing that I wanted to cover here is that, through the monitoring operator, you can customize the configuration of the stack itself by editing the Ansible inventory file. That's a pattern that is fairly familiar to most people who install OpenShift, so that shouldn't be any different. Next slide, please.
E
As I said earlier, there are basically three distinct UIs. The first one is the Prometheus UI for querying any metrics; that's the first one here. Then you have the Alertmanager UI, where you can see the alerts as well as manage the lifecycle of the alerts; I think that's the diagram over here. And then finally, for more detailed queries of the metrics and dashboards, you have Grafana, as you can see in this picture here.
E
A couple of important things to remember here: right now you cannot add your own alerting rules, and you cannot bring your own dashboards. That is something we are working on for a future release, but right now we ship some default alerts and default dashboards which we think are relevant, and those are the only ones that you get. The way alerting itself works is that Prometheus manages the alerting rules.
E
Similarly with the logging stack: we have been shipping the EFK stack, Elasticsearch and Kibana, for a number of releases now, but with this release we have updated the versions, with Elasticsearch moving to 5 and Kibana moving to 5, from Elasticsearch 2 and Kibana 4 respectively. If I had to highlight one or two things, the major improvements in Elasticsearch 5 have been about performance improvement and also better resource utilization.
E
So that means that, with the same amount of node resources, CPU and memory, where Elasticsearch runs, you might be able to get better performance, query performance for example, or you could actually get some optimization in terms of the amount of resources needed. So that's really good.
E
The second bullet item that you see there is that now, with the 3.11 release, you can actually save a Kibana dashboard and share it between users. So if a user feels that they have created a dashboard that can be shared with others and used as a template, or as is, that's what you can accomplish starting in 3.11. The third bullet item: we have added a number of robustness improvements to how fluentd talks to Elasticsearch.
E
So you will find that that pipeline is much more robust and much more reliable than it has been previously. These are kind of the continuous technical and performance improvements that you will find in 3.11. And then finally, to accommodate some of that, we've also increased the amount of default memory that fluentd uses on the nodes, to 756; it should say MB, that's missing, sorry about that.
E
Finally, this is just a visual on the right to show you one of the exciting features of Elasticsearch 5 itself, which Kibana 5 surfaces: there's something called Timelion, and that allows you to compare two time-series metrics in two graphs, as you can see here on this slide. This allows you to do better troubleshooting by comparing what happened.
E
This is an example that shows pod logs and compares them with Kubernetes event logs: why did this pod, for example, stop logging? Something in the Kubernetes event logs might be indicative or suggestive as to why that happened, so it helps your troubleshooting. Next slide.
E
Switching to storage and OpenShift Container Storage: one of the exciting new things is that we've been adding a number of features to the storage capabilities of the OpenShift web console. This feature is basically PV expansion, persistent volume expansion, which can now be accomplished using the web console.
E
Today, if you need to increase a PVC size, you need to go and edit the YAML file itself and request the new size, and that will result in an increase of the PV size. With this new capability in 3.11, you can do that through a graphical user workflow in our web console, as you can see here. Just a note: this works with OpenShift Container Storage, which is our cluster storage available with OpenShift, but it's also available with select other storage backends.
E
An important feature that we actually GA'd in 3.11 is pod priority and preemption. Right now, pods do not have the concept of priority, so what this allows you to do is give pods a priority, and that indicates to Kubernetes and to the OpenShift scheduler what the relative priority of a pod is with respect to others. So what does that accomplish?
E
When a pod with a higher priority gets to the scheduler for scheduling and it is not able to be scheduled because there's no capacity, then a lower-priority pod is evicted. So it does two things: one, it affects the scheduling order of the pods themselves, and two, it affects the eviction order of the pods; the pod with a lower priority gets evicted.
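The two effects just described can be modeled in a few lines: higher-priority pods go first in the scheduling queue, and when the cluster is full, the lowest-priority running pod is the preemption candidate. The pod names and priority values are illustrative only:

```python
# Toy model of pod priority's two effects: scheduling order and
# eviction (preemption) order. Names and numbers are made up.

def schedule_order(pods):
    """Pods as (name, priority); higher priority is scheduled first."""
    return [name for name, prio in sorted(pods, key=lambda p: -p[1])]

def eviction_candidate(running):
    """The lowest-priority running pod is preempted first."""
    return min(running, key=lambda p: p[1])[0]

pods = [("batch-job", 0), ("web-frontend", 1000), ("logging", 100)]
print(schedule_order(pods))      # ['web-frontend', 'logging', 'batch-job']
print(eviction_candidate(pods))  # batch-job
```

This is also why the quota guidance that follows matters: priority values are only meaningful relative to each other, so an unbounded high value effectively starves everything else.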
E
One of the best practices to remember here, from a security point of view, is that this obviously becomes a little bit of a security hole if somebody decides to use very high priorities for their pods, causing denial of service. So the recommendation is to use the resource quota feature to set up a quota per user, per priority level, so that denial of service does not happen, or at least is mitigated. On the right we show how to use this feature.
E
As an admin, you create a priority class. A priority class is basically an arbitrary value; as you can see, the higher the value, the higher the priority. Then, on the right, you can see that pods can pick that priority class; in this case they picked the high-priority class, and the scheduler is going to respect that, and also when it comes to eviction: that's the priority that's going to be used for the eviction order.
E
Some of the best practices here: apart from creating priority classes by name, the admin should create a global default priority class. The default priority class is what a pod will get if a priority class is not picked in the pod's definition. Finally, there's something to remember if you are upgrading to 3.11: obviously this was not there previously, so pre-existing pods don't inherit a priority class.
E
Their priority will be effectively zero, so that's something to keep in mind and plan for. And then finally, remember that the priority and preemption feature is enabled by default but can be disabled: either priority and preemption together, or preemption by itself, can be disabled. Next slide, please. I talked about Prometheus earlier; here's a quick slide on what you can expect in terms of how to size for Prometheus. This just shows something that we have done in terms of a sizing guide; it shows the number of nodes.
B
Alright, thanks Tushar. So we're gonna move down into the engine room, as I call it, and talk a little bit about some of the foundational technologies. Just a quick update: the release of OpenShift 3.11 is actually on RHEL 7.5, and the RHEL 7.6 beta is out right now, so you'll see that coming down the pipe. And then also just kind of another notice.
B
You know, Atomic Host is really in kind of a deprecated state, and obviously there's a future here where we essentially merge what we had acquired, Container Linux from CoreOS, and Atomic Host from Red Hat, into something called Red Hat CoreOS. The internals inside of 3.11 are essentially docker 1.13.
B
The available container engines are essentially docker 1.13 and CRI-O 1.11, and we got rid of having two different docker packages; we're back to just one, because docker is not moving as fast. So, major changes: as I mentioned, we're definitely investing a lot in CRI-O and moving towards making it the default, you know, down the road.
B
You know, not really down the road, but definitely, you're seeing it's GA in 3.11 and works very well; in fact, it's live in OpenShift Online. It actually was live in 3.10, and it's expanding in 3.11. A couple of other things, just some of the internals: eBPF kernel support was enabled in 7.6, so you'll see that during the 3.11 life cycle, which a lot of people get excited about, because a lot of tools in the upstream community leverage eBPF, and so a lot of people have been kind of asking
B
when we're gonna have that in RHEL. We were able to backport that into the 3.10 kernel that's in RHEL 7.x, so you'll be able to use a lot of those eBPF tools, which I think is kind of cool. Just a quick update on CRI-O: obviously CRI-O is pegged to the Kubernetes release. The foundational Kubernetes release in OpenShift 3.11 is Kubernetes 1.11, which I don't think we explicitly said, and so the CRI-O version is 1.11.
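For reference, opting a 3.11 cluster into CRI-O is done through the installer inventory; a minimal sketch using the variable names documented for the 3.11 openshift-ansible installer (verify against your installer version):

```ini
# openshift-ansible inventory fragment: enable CRI-O as the container runtime
openshift_use_crio=True
# keep docker available alongside CRI-O for builds; set True to go CRI-O-only
openshift_use_crio_only=False
```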
B
You know, basically, if you think about it, CRI-O is just a bridge between the OCI image format and the CRI, the Container Runtime Interface, in Kubernetes. So it's very lean, stable, purely focused on the release of Kubernetes, and so it will never become out of sync, and it essentially should never be able to break Kubernetes. That's really the value of sticking with a 1.11 CRI-O,
B
you know, and always synchronizing with the Kubernetes release in the OpenShift release. In that 7.5 time frame, which is live in the OpenShift 3.11 time frame, we moved to Buildah 1.2; we moved to multi-stage builds, which I would say is probably the biggest excitement. Also coming down the road is probably Buildah 1.3 in the 7.6 time frame, which will also be during 3.11, because again, as the RHEL version moves,
B
well, the OpenShift timeline also moves, and so you'll end up kind of getting some features during the 3.11 time frame. But one of the exciting ones is rootless: we will be able to do rootless builds, probably in the 3.11 time frame, and so that's a really exciting feature coming down the pipe. Another one is Podman, which is GA during this life cycle, in 3.11, probably during the 7.6 release, which is beta right now. So another exciting tool that provides a basically command-line-compatible replacement for the docker client.
B
So Podman is extremely useful in that, and so it's kind of exciting. This GA is daemonless, so it really wires into, like, if you have existing build systems that are outside of OpenShift, this is a really nice tool that kind of comes in the OpenShift toolbox. And then, finally, one last note on container-native virtualization. There are actually, I guess, three major things I want to say about it: it's developer preview in 3.11, and we are definitely looking for people.
B
If you have an interest, reach out to the CNV product team at redhat.com. With that, though, I will say there are two major values to it, why you might want to check this out. One: you can bring existing virtual machines. So if you have a VM, and you want to bake off a virtual machine image and then pull it in and make it look as if it's a resource object within the Kubernetes world, you can do that.
B
That's extremely useful, because now you can have a single application definition that includes VMs and containers. So, for example, if you have a database that just doesn't containerize well, so you don't want to have Kubernetes kill and restart it 30 times, or you don't have an operator that works well with it, or even if it's proprietary software or old software that nobody who's still around really fully understands,
B
you can still bring it as a VM, manage it, fire up instances of it along with your application. So you might have an app that's made up of two containers and one VM: you can define that in a Kubernetes YAML file, you can define that in a Helm chart. You know, you can essentially use all of the deployment methodologies that you use in OpenShift, which is really exciting, because now the deployment methodology spans both containers and VMs. So that's really the last slide. Oops.
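A VM defined as a Kubernetes resource object can be sketched roughly like this, based on the upstream KubeVirt API that CNV builds on (the API version and field names were still evolving in the v1alpha era, and the demo disk image is illustrative):

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: example-vm               # illustrative name
spec:
  running: true                  # start the VM when the object is created
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 64Mi
      volumes:
      - name: rootdisk
        containerDisk:           # VM disk image shipped as a container image
          image: kubevirt/cirros-container-disk-demo
```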
B
Sorry about that. But with that, I think we have just a couple... oh, one last little notice I forgot: I did want to give a quick overview of some deprecation notices. We put this at the end; we just really thought it would be useful. This will be in the recording. I won't go through all of these, but I at least want to leave it on the screen for a while so you can see. A couple of exciting ones that I would call out: moving to CoreDNS; that's one I get asked about decently often.
B
Obviously the Prometheus one is exciting. You know, a lot of the docker calls we're replacing with Buildah and Podman, so again, some exciting stuff in there. But also, if you're concerned about anything, feel free to reach out to any of us; we can answer questions, and I'm happy to help with that.
B
Tushar, a lot of the time, in fact, I historically always misunderstood the word "deprecated" myself. The word deprecated means that it's frozen in place, right? Like, at some future time we will actually remove it, but essentially think of it as still supported; we're just not going to add new features, and then it will eventually be removed. So deprecation is kind of a frozen state toward the end of something's life cycle, as opposed to removing it right away. And then here's the final kind of roadmap that you can look to going forward, again.