Description
Another release of Kubernetes is about to land, and it's time for another overview/update from Red Hat's Clayton Coleman and Derek Carr on the many and varied new features and functions included in Kubernetes 1.8! Clayton, Derek, and other Kubernetes contributors share details about the new release and what comes next.
Clayton Coleman is the lead architect, engineer, and strategic visionary for application platforms in the cloud at Red Hat. Clayton is a core contributor to both OpenShift and Kubernetes, the open source platform as a service and the containerized cluster manager. He has helped set the direction for the evolution of cloud-native applications and the platforms that enable them.
A: Hello, everybody, and welcome again to what's becoming a very regular happening: another Kubernetes update on the OpenShift Commons briefings. This time we're going to be talking about the Kubernetes 1.8 release, which should be out sometime later today, but I've got both Derek Carr and Clayton Coleman with me from Red Hat today to walk us through everything that's in Kubernetes 1.8, and there's a lot. So I'm going to let Clayton kick us off and get started right away. Thanks, Clayton and Derek, for joining us.
B: So, what's new this release? Kubernetes 1.8 was the biggest release ever, which is no surprise for a growing project. We had over 2,000 pull requests and 2,500 commits merged between June 30th, which was the Kubernetes 1.7 release date, and the Kubernetes 1.8 release date, which is hopefully today (although that may change if circumstances demand). We had over 300 committers, and there were 39 features added to Kubernetes, which is actually a fairly low number if you compare it to the total number of pull requests.

So don't underestimate the amount of change in Kubernetes: that work spanned 29 SIGs and five working groups. A SIG in Kubernetes is the organizational unit for either a functional area or a user-focused area, and working groups are a fairly new concept that tries to bring together people with disparate interests across many different parts of the Kubernetes codebase to effect real change and to drive important initiatives. In this release we moved four features to stable, and "stable" is a little bit of a term of art.
I can expand on that: "stable" in Kubernetes usually means that we consider the feature ready for general use, that we have strong API guarantees around not breaking the API going forward, and that it's ready for regular use. We moved 16 features to beta, and there was a large number of new alpha features in various areas, which I'll go into in a little bit.

I think the important takeaway is that Kubernetes is growing very rapidly, and as part of that growth this release also saw a lot of focus inside the Kubernetes community on stabilizing not just the things we deliver (the code, the documentation, the examples, the tutorials, and what they enable for Kubernetes users). We're also focusing on the meta level: making sure that Kubernetes is a successful community, that it is efficient, and that people can orient themselves and work within the community.
B
What
is
kubernetes
to
api
conventions
and
general
patterns
that
we
try
to
instill
across
the
platform
and
to
help
ensure
that
everybody
in
all
the
different
SIG's,
has
a
place
to
go
to
get
when
coordination
between
SIG's
is
not
as
efficient
as
it
could
be,
or
when
something
impacts.
Multiple
SIG's
Sagarika
texture
was
the
place
to
to
bring
issues
and
to
get
answers,
as
well
as
to
help
identify
when
others
need
to
be
brought
in
to
help
the
process
along
part
of
part
of
sig.
B
B
C: At Red Hat, one of our biggest priorities is to demonstrate that you can run Kubernetes and OpenShift clusters at large scale, and one of the items that made it into Kubernetes 1.8, which we think will broadly benefit the community, was around what happens when things go wrong. As most users of OpenShift and Kubernetes know, events today are a great debugging tool for finding out what's happening in your cluster over the life of your resources.

Events tend not to hurt you when everything goes well, but in environments where things may not be going well (say your pod can never start, or the image can never be pulled, or you're in some crash-looping scenario), at small scales it's great to know these things, right? It's great to know that my application is not working, and I can look at the event.
But in the long tail, it's really problematic to keep being told that your application is not working, or that your pod can't pull its image, or that a particular event can't happen. One of the things that we've observed when offering our OpenShift Online offerings is that oftentimes applications get defined and then they might not ever be able to converge on the desired state. And it's great that the system tells you about it when it first happens; it's bad when it keeps telling you about it constantly over the next week, two weeks, three weeks. And so at scale, what we've observed was: let's say you had some percentage of your applications that could never run on the platform because they were poorly configured or something like that.
You ran into a spam problem on your master, which ultimately could really deteriorate cluster performance. So one of the things that we did to address this was to define an event budget: a given resource has an initial budget of 25 events, and then it has a refill rate of something like one event every five minutes. This had a really dramatic impact on the reliability of our clusters.
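The budget Derek describes behaves like a per-resource token bucket. Here is a minimal sketch of that idea in Go, using the numbers mentioned above purely for illustration; it is not the actual upstream event rate limiter:

```go
package main

import (
	"fmt"
	"time"
)

// eventBudget is a per-resource token bucket: a resource starts with an
// initial allowance of events and earns one more token per refill interval.
type eventBudget struct {
	tokens     float64
	maxTokens  float64
	refill     time.Duration // time needed to earn one token back
	lastRefill time.Time
}

func newEventBudget(now time.Time) *eventBudget {
	// Numbers from the discussion above: 25 events up front,
	// then roughly one more every five minutes.
	return &eventBudget{tokens: 25, maxTokens: 25, refill: 5 * time.Minute, lastRefill: now}
}

// allow reports whether another event may be recorded for this resource,
// consuming one token if so.
func (b *eventBudget) allow(now time.Time) bool {
	earned := float64(now.Sub(b.lastRefill)) / float64(b.refill)
	if earned > 0 {
		b.tokens += earned
		if b.tokens > b.maxTokens {
			b.tokens = b.maxTokens
		}
		b.lastRefill = now
	}
	if b.tokens < 1 {
		return false // over budget: drop or aggregate the event instead
	}
	b.tokens--
	return true
}

func main() {
	start := time.Now()
	b := newEventBudget(start)
	// A crash-looping pod emitting an event every 10 seconds: only the first
	// 25 get through, then roughly one every five minutes afterwards.
	allowed := 0
	for i := 0; i < 360; i++ { // one hour of attempted events
		if b.allow(start.Add(time.Duration(i*10) * time.Second)) {
			allowed++
		}
	}
	fmt.Println("events recorded in the first hour:", allowed)
}
```

Once the bucket is empty, further events for that resource are dropped or aggregated until tokens trickle back in, which is what keeps a permanently crash-looping pod from flooding the masters.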
So if you imagine, across hundreds of nodes, that each node has one or two pods that may never run successfully, with this design we were able to dramatically reduce the long-tail events being sent to our masters: from hundreds and hundreds of events per second down to approximately three events per second. So at scale, with hundreds of nodes where some percentage of those nodes might have pods that don't actually run successfully, we were able to use our experience to find something and address it, so that, as cluster operators who are running these application platforms on behalf of other users, users that make mistakes don't wake the cluster operator up at night. So I think that this was a really good data input to the community. The PR unfortunately landed late in the 1.7 cycle and didn't make that release, but then in the 1.8 cycle we did things to improve events further.
B: It's very important to be able to take actual customer scenarios and user scenarios and translate those back into meaningful fixes. Part of this is closing the loop at the very large scales: making a concerted effort to identify the top blockers, both from users and customers and from community members at large and people who have had similar problems, and trying to synthesize an overarching effort out of it. That's something that we think is a unique value in how we contribute to Kubernetes.

It's something that OpenShift users very commonly see: clusters with tens of thousands of applications, or tens of thousands of users who are quickly spinning up or tearing down applications that may be running as development or test environments, offering playgrounds to developers who have a shared pool of resources where they can experiment, at a fairly low cost to the overall organization, in a fashion that's going to really resemble their production environment.
Obviously Kubernetes tries to be as simple as possible and no simpler, and one of the improvements that went into the alpha state in 1.8 was the ability to take some of these very, very large, dense clusters, which are making API calls to get very large numbers of pods or very large numbers of namespaces or user information, and to add capabilities to both compress those responses and to break them into smaller chunks.

Compression actually went in in 1.7 but wasn't enabled, and we began really stressing it in the 1.8 timeframe. And then the chunking of very large API calls into individual result pages has a benefit to the cluster itself, because a lot of integrations into Kubernetes involve listing everything and then watching for changes. It's a very powerful pattern, because you can continually check to see that you're at the right state, but when you make those very large requests it was having an impact on other operations on the cluster, and so we chunk the results.
Instead of asking for all pods on the cluster, you can ask for the first 500 pods, and then, once you've processed those, you can get the next chunk. We actually plan to enable this by default in Kubernetes 1.9, but it should also be available in OpenShift in a fairly aggressive pattern. We think this is going to be really, really valuable for bringing down the tail latencies on API requests, and honestly the number one impact is that, for an end user, large administrative queries become much more responsive, because you start getting data almost immediately, without giving up any of the consistency that you want.
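From a client's point of view, the chunking shows up as a limit plus a continue token on list calls. The sketch below pages through pods 500 at a time; it assumes a client-go vintage from around this period, where List takes a ListOptions value directly, so treat the exact signatures as illustrative rather than authoritative:

```go
package podlist

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listAllPodsInChunks pages through every pod on the cluster 500 at a time
// instead of issuing a single giant LIST request.
func listAllPodsInChunks(clientset kubernetes.Interface) error {
	opts := metav1.ListOptions{Limit: 500}
	for {
		podList, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(opts)
		if err != nil {
			return err
		}
		for _, pod := range podList.Items {
			fmt.Printf("%s/%s\n", pod.Namespace, pod.Name)
		}
		// The API server returns a continue token while more results remain;
		// an empty token means we have seen the last chunk.
		if podList.Continue == "" {
			return nil
		}
		opts.Continue = podList.Continue
	}
}
```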
And the same is true in other ways as well. For the previous item, our goal with events was to keep events very useful, and we actually made events more useful by focusing on the things that were actually showing up. This sort of refinement is, you know, Kubernetes maturing into an environment where you can really trust it with all of your applications. And as a corollary to both of the first two items, observability of Kubernetes as a platform is very important to us.
Prometheus, which is a metrics-gathering open source project that became part of the CNCF last year, has a really great user experience for application authors and operators to be able to easily pull ad hoc metrics from many different components of a scale-out system. The Kubernetes community has worked really closely with Prometheus, both to expose metrics to be gathered and, on the Prometheus side, to build in support for the kind of dynamic, rapidly changing environments that Kubernetes represents.

And so we're really excited, because we've spent a lot of time on how accurate those APIs are: we've added new metrics in a number of places, and we think that, as this becomes more and more formalized, you're seeing a large move in the Kubernetes ecosystem to expose these metrics everywhere. That sort of simplification of the ecosystem, to where you can very easily get these metrics, is going to really improve the operation and running of large Kubernetes clusters. And Derek, I think this is something you're deeply familiar with as well.
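For application authors, the Prometheus side of this usually amounts to exposing an HTTP metrics endpoint for Prometheus to scrape. Below is a minimal, hypothetical Go example using the prometheus/client_golang library; the metric name, port, and handler behavior are made up for illustration:

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestsTotal is a hypothetical counter; Prometheus scrapes its current
// value from the /metrics endpoint on whatever interval it is configured with.
var requestsTotal = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "myapp_requests_total",
	Help: "Total requests handled by this example application.",
})

func main() {
	prometheus.MustRegister(requestsTotal)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.Inc()
		w.Write([]byte("hello\n"))
	})

	// Expose every registered metric in the Prometheus text format.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}
```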
C: So, generally, I think Prometheus has been an invaluable tool for us to get a handle on figuring out where there's smoke, where there's fire, and where we should focus our energy in improving general reliability across the platform. As Clayton talked about with the etcd3 migration, we were very clearly able to see in that move that we had dramatic improvements in network and memory use, and just sitting here thinking about it, there are other areas I can think of, ones we haven't even called out on this deck, where improvements were made.

Generally speaking, I think the OpenShift use case of Kubernetes is slightly more directed than maybe what you see in the general upstream community; we're slightly more opinionated on how people should follow particular best practices. And so for things like quota, Prometheus was invaluable in identifying areas where quota was making more calls than necessary, and then we were able to drive those fixes out into the upstream. But generally speaking, I would say our experience with monitoring in the online environments has been really beneficial for understanding what happens when real users use the platform, and it helps focus our decision-making and informs where we go to make an impact in the community around stability. So generally, I think it's just been great all around.
B: So, without further ado, we will move to the thing that everyone is really excited about, which is new features. Even though we said that this was a stabilization release, it's very difficult to convince three hundred and eighty people not to go do specific, targeted features that make people's lives better. So I'll try to bring Derek in, and I'll kind of go through the different areas of Kubernetes, and we'll highlight some of the top-level features; as I said before, the detail on these is pretty deep.
C: So I think a lot of interesting stuff happened in Kubernetes 1.8 in the autoscaling space. Some of these represent areas where we, as a community, and especially from Red Hat, have tried to represent what our users were asking for and to drive core features into the platform. So the first item here: folks might have seen that there's a new incubator project around a metrics server and the metrics API. This is really laying the groundwork for a future replacement for Heapster in the community, and so I think it's setting a good foundation for us to grow on moving forward.

The second item here, one of real interest to me, is something we've been trying to push through the community for up to two years now; it has slowly evolved from alpha to beta, and now it's in this release. Oftentimes we get requests that users want to scale their applications horizontally on a custom metrics source.
So today in the platform, you initially had just CPU as a scaling target, then memory got added, and now in the Kubernetes 1.8 release you have the ability for your horizontal pod autoscaler to target custom metrics as a scaling signal. As input to that, there's a custom metrics API that third parties can implement to support integrating with the horizontal pod autoscaler natively, and that generally lets you feed in additional signals for choosing when and how you might want to scale your applications, in cases where the particular resources that Kube supports natively might not fit.
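Whatever the metric source (CPU, memory, or a custom metric served by a third-party adapter), the autoscaler's core arithmetic is essentially a ratio of the observed value to the target. A simplified sketch of that calculation, not the exact upstream controller code, looks like this:

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the basic horizontal-autoscaling rule: scale the
// current replica count by the ratio of the observed metric to its target.
// The metric can be CPU utilization, memory, or any custom metric an
// adapter exposes (queue depth, requests per second, and so on).
func desiredReplicas(currentReplicas int32, observed, target float64) int32 {
	if currentReplicas == 0 || observed <= 0 || target <= 0 {
		return currentReplicas
	}
	return int32(math.Ceil(float64(currentReplicas) * observed / target))
}

func main() {
	// Example: 4 replicas observing 150 requests/sec per pod against a
	// target of 100 requests/sec per pod scales out to 6 replicas.
	fmt.Println(desiredReplicas(4, 150, 100))
}
```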
So this is really exciting for us, to see that this has now graduated into beta. In addition, we've had a lot of feedback: a lot of people were curious about what's going on with their horizontal pod autoscaler, and why something is scaling or not scaling. So we did a lot of work to try to improve the visibility into the status of a particular HPA, so that when things are going right, you know why, and when things are going wrong, you can better pinpoint exactly why they're going wrong.
B: It wouldn't be a presentation without some technical difficulties; can y'all see my screen still? Yep, sorry. So, with the 1.8 release, we're also continuing a lot of the extension work that we've been focused on from the very beginning of Kubernetes: making it easier for people to plug their own pieces into Kubernetes. This is something we see as fundamental to the success of Kubernetes as an ecosystem. It shouldn't just be someone who takes the code and forks it and adds in some tweaks who can change how a Kubernetes cluster is monitored, or controlled, or limited; those things should be easily done by people who plug in on top.

And so in the 1.8 release there were several areas of extensibility that we continued to mature. One of the more interesting ones is flex volumes, which are a concept that was added quite early in Kubernetes to relieve the pressure around "I want to integrate my new storage provider; how can that be injected into the pods running on the cluster?"
The kind of normal process is: you build some code into Kubernetes, you change the Kubernetes APIs, and you wait a couple of years; once you get to that point, you're in the Kubernetes API. But obviously that won't scale to the kinds of new and interesting technologies that people will add in the future. Flex volumes were the first approach to allow people to dynamically add new volume types to Kubernetes pods on the fly, after the cluster has been installed.

With new flex volumes out there, and as people try them out, there's a lot of really exciting stuff being discussed that would allow us to do more sophisticated secrets and security integration by leveraging flex volumes. You can imagine a flex volume that injects into your pods a secret that is provided by the platform but is never stored on the platform, such as a Kerberos principal or another form of credential, like a private key, injected by a vendor integration for doing security across the cluster.
We've also worked to make flex volumes something that can be controlled by a security policy in Kubernetes. So that's just one concrete example; it's an area where there are many different ways you may want to provide content into a pod, and we're working to make sure those APIs are stable and easy to use. Custom resource definitions, which are the replacement for third-party resources, got a couple of feature improvements this release, including the ability to do validation, and that work will continue.
C: Yep, so along that line, one of the new features coming out of SIG Storage that I want to call out here, which was of interest to folks, is that the initial request for a persistent volume claim might not be the right long-term request. So work was done in the Kube 1.8 release to allow you to dynamically resize your PVC.
So you could grow a 10-gig PVC to a hundred-gig PVC, and as you move along, the increase tends to be transparent. Work is still being flushed out to make sure it plays well with quota and all the other pieces, but basically a lot of good progress was made in the Kube 1.8 release around this feature.
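From a client's perspective, the resize itself is just a matter of raising the storage request on the claim and letting the controllers and the volume plugin reconcile it. The sketch below assumes the alpha resize feature is enabled and that the backing storage class allows expansion; the client-go call signatures follow the same vintage as the earlier pagination sketch and are illustrative:

```go
package pvcresize

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// growClaim raises the storage request on an existing PVC, for example from
// "10Gi" to "100Gi". With volume resize enabled, the resize controller and
// the volume plugin then expand the backing volume to match the new request.
func growClaim(clientset kubernetes.Interface, namespace, name, newSize string) error {
	pvc, err := clientset.CoreV1().PersistentVolumeClaims(namespace).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pvc.Spec.Resources.Requests[corev1.ResourceStorage] = resource.MustParse(newSize)
	_, err = clientset.CoreV1().PersistentVolumeClaims(namespace).Update(pvc)
	return err
}
```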
In addition, it's a common request that people want to be able to snapshot their volumes and then potentially create a new PVC from that snapshot. Work was done, in an alpha phase, to support this in Kube 1.8, and it's probably representative of what you'll see in future releases as it progresses toward beta and stable. A lot of the basic primitives that people would look for around PVCs got nice attention in the Kube 1.8 cycle.
B: Networking is also an area that has continued to evolve; we're not trying to be too aggressive there. IPVS, or IP Virtual Server, is a Linux kernel feature that's been around for quite a long time. You can think of it as a bit of an upgrade over the iptables-based version of kube-proxy, and there's an alpha implementation in Kube 1.8, contributed by the folks at Huawei, that will go through some hardening and testing. We're hopeful that in the next few releases this will be something that allows us to improve service-level connections between pods.

Network policy also continues to evolve: some of the concepts from OpenShift, like egress policy, are making their way into Kubernetes network policy, as well as CIDR rules for pod matching. There are a lot of small, incremental improvements and stabilization that make it easier to ensure that users and operators can find the right balance between running containerized applications and preserving security.
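As one hypothetical illustration of the egress and CIDR matching mentioned above, the sketch below constructs a networking.k8s.io/v1 NetworkPolicy in Go that only lets pods labeled app=web reach a single CIDR; the label selector, namespace, and CIDR are made up, and the field names should be treated as illustrative of the v1 API of that period rather than authoritative:

```go
package netpol

import (
	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// webEgressPolicy sketches an egress rule: pods labeled app=web in the demo
// namespace may only open connections to addresses in 10.10.0.0/16, and all
// other egress from those pods is denied.
func webEgressPolicy() *netv1.NetworkPolicy {
	return &netv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "web-egress", Namespace: "demo"},
		Spec: netv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"app": "web"}},
			PolicyTypes: []netv1.PolicyType{netv1.PolicyTypeEgress},
			Egress: []netv1.NetworkPolicyEgressRule{{
				To: []netv1.NetworkPolicyPeer{{
					IPBlock: &netv1.IPBlock{CIDR: "10.10.0.0/16"},
				}},
			}},
		},
	}
}
```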
C: In the resource management working group, we had spent a lot of time identifying which key focus areas we could tackle to drive improvements into the platform, and for the Kube 1.8 release we focused on three areas. One was around how we better improve CPU management on the node.

The second was around how we support device plugins. For folks who've been tracking the community: we had alpha support for GPUs, but in order to really graduate that onto a beta or stable foundation, we needed to get to a model where you weren't having to integrate support for particular hardware devices natively into the core platform; device plugins are a model that tries to address that. And then finally, certain workloads have particular memory requirements that we were hearing about over and over.
It would be nice if you could consume things like huge pages, to broaden the set of workloads that could run well on the platform, and so work has been done in that area as well.

If we dive down a little bit deeper on, say, CPU pinning: this is an exciting feature for me. Folks might be familiar with the quality-of-service model we have today in Kubernetes: we have this concept of best-effort, burstable, and guaranteed service tiers. In the best-effort tier, a pod can basically use as much resource as it can scavenge; in the burstable tier, a pod can have a minimum request for a particular amount of resource but can then burst above that request as resources become available. But one of the things we had observed as a community was that there wasn't a huge performance benefit in going up to that last tier, the guaranteed class.
So one of the major things that we've worked on at Red Hat, with our friends at Intel, was trying to make it so that you not only avoid a performance penalty by running in the guaranteed class tier, but actually get improved latency benefits. The way we chose to tackle this was a new feature in the kubelet, the node agent, where you can configure a node to have a particular CPU management policy.

Generally speaking, I think this will be a broad, really impactful performance improvement for a lot of workload types. It's a bit intelligent in how it chooses to assign particular cores: it will actually inspect your physical processor topology to try to find the best fit for your workload on that node.
And then for the last item here, let me talk about huge pages briefly. This is one of those things where, when you want it, you really want it, and sometimes not having it can be viewed as an impediment to bringing certain workloads onto the cluster. If you are the operator of an application that's managing a very large memory set, whether it's a particular database management system or a large Java middleware application, oftentimes people have tuned those applications to take advantage of huge pages.
Not having huge pages support on the platform was an impediment to bringing on those workloads. So in the Kube 1.8 release, you now have the ability to let your pod make a huge pages request, and then your application is properly isolated and accounted for so that it can use those huge pages, and you can consume them through all the metaphors you'd expect: both via shared memory and as an emptyDir volume. So I suspect that in Kube 1.9 and beyond this will evolve toward a stable state.
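To make that concrete, a pod asks for huge pages through its resource limits, using the hugepages-&lt;size&gt; resource name convention from the alpha feature, and can mount them as an emptyDir backed by hugetlbfs. The sketch below builds such a pod spec in Go; the field and constant names follow the 1.8-era alpha convention and should be treated as illustrative, and the image name is hypothetical:

```go
package hugepages

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hugePagesPod sketches a pod that requests 512Mi of 2Mi huge pages alongside
// its normal CPU and memory limits, and mounts them through a hugetlbfs-backed
// emptyDir so the application can map them as shared memory.
func hugePagesPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hugepages-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "example/large-memory-app:latest", // hypothetical image
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:                   resource.MustParse("2"),
						corev1.ResourceMemory:                resource.MustParse("2Gi"),
						corev1.ResourceName("hugepages-2Mi"): resource.MustParse("512Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "hugepages", MountPath: "/hugepages"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "hugepages",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumHugePages},
				},
			}},
		},
	}
}
```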
B: I think, in general, this sort of targeting of application workloads and being practical about it matters, because there's kind of a gap between the idealized Kubernetes and how many people use it. As a microservices platform, you might not care about some aspects of performance; you might be more focused on the gross levels or the development efficiencies you can gain. But it's equally important to us to ensure that a broad range of applications can both run on Kubernetes and also get advantages from running on Kubernetes. And, as we mentioned during the metrics section, one of the long-term goals is to be able to tie actual, concrete resources to how people use Kubernetes.
Because, ultimately, at the end of the day, containers are just Linux, and the work that we do in the kernel and device drivers and overlay filesystems and user namespaces and SELinux and security is really all about ensuring that an application workload that runs on one Linux cluster runs on it consistently. When we talk about the reasons why we focus so strongly on this, on the kernel and the low-level layers of Linux, tying up through Kubernetes into OpenShift, it is so that we can ensure that applications work correctly across the board.

And so CRI-O is a big investment area for us: a container runtime running under Kubernetes, designed to work with Kubernetes, that works on top of OCI standard containers and is able to run all the Docker images that exist today.
The focus for us really is cutting out the parts of the container runtime that hurt our ability to ensure applications are portable and reliable. We've been working pretty hard as a team in the community, with others in the ecosystem like Intel and SUSE, to get CRI-O to its first release candidate. It is certified against Kube 1.8, sorry, against Kube 1.7, and has passed all the tests there; the team is working on getting that 1.8 support shortly after the release, and on being able to move CRI-O to production status.

Not everyone may choose to use CRI-O. Our hope is that we can really demonstrate the value of a simpler, Kubernetes-focused container runtime, and how it ties into Kubernetes as a platform that's excellent for running applications on top of Linux, which Red Hat arguably knows as well as anyone. You'll see a lot more about CRI-O in the coming weeks and days.
Our goal is to make sure that there's a diverse ecosystem of container runtimes that can trade off different advantages for end users, but that also focuses on the thing that just works well for Kubernetes. So those are the high-level features in Kubernetes 1.8. There's a ton more detail that we could get into, so I urge everyone, when the Kubernetes 1.8 release is announced, to go look at the release notes; there will be an infinite number of blog posts from everybody in the community talking about the things that they personally care about the most. To me, that's a sign of the success of Kubernetes. It's hard to point to someone today who hasn't realized the same thing that Red Hat realized almost three years ago: that Kubernetes was going to be the future. And so we're really excited to have everybody join us on that.

As for Kubernetes 1.9, it's still fairly early in the SIG groups.
There are a lot of things that individual SIGs are still working through, and over the next few weeks you'll start to see those coalesce into SIG-specific goals. At the top level, across the project, these are some things that we've already talked about in SIG Architecture and at community meetings. Stability and bug fixes continue to be key, and so does continuing extensibility; we really have to keep chugging through extensibility. Our goal on the OpenShift side is that extensibility in Kube isn't done until OpenShift runs on it, and to us that means following that path where something as powerful and complex as OpenShift can run as an extension on top of Kubernetes. We think it's possible; it's going to take some time to get there, but it's a key goal for us, both so that all the right mechanisms are in place and so that all of the use cases we see on the OpenShift side from administrators and users, and the specific controls they want, are also available, with OpenShift as a proof point, without people having to use OpenShift to get them.

And finally, scaling improvements across the board: just continuing to refine our approach to how we handle more and more components in the ecosystem. That's scaling not just from a performance perspective, but from a community and ecosystem and integration and security perspective. It should be easy to extend the platform while preserving security, and so each of these kind of builds off the others.
C: I'll call out a couple. Generally speaking, there's always a lot of interest in how we can provide more accessibility around patterns like initializers and custom webhook plugins, because admission controllers being in-tree are a challenge. I think that's an area we're continuing to invest in and validate against, so hopefully we'll be able to proceed along that line.

The descheduler is an interesting topic area for me. As folks might have seen, there's now a new incubator project around the descheduler, which is basically asking the question: now that things have been scheduled, is there a better place for that pod to be scheduled, today versus last week? This is the next step on some of the stuff that's been going on in the scheduling community around asking "do I have capacity to run this workload?"
Now it's asking: were my previous decisions optimal decisions? And so the descheduler is an interesting focus area there. Priority and preemption is another interest area within SIG Scheduling, and I think it's really critical for us to be able to run more cluster services as DaemonSets; hopefully we'll be able to get that over the hump in 1.9.

Generally speaking, on the node-level improvements, I touched on a lot of the things that got added as alpha features, around CPU pinning and device plugin support and that sort of thing. I think that in the 1.9 release you'll see a lot of stabilization of those features, in preparation for getting to beta in 1.10 and beyond.
As Clayton mentioned on CRI-O, we are passing all of the 1.7 cluster tests, and we encourage folks to check that out. Very shortly after the Kube 1.8 release, you'll see a branch of CRI-O that will meet the same bar on Kube 1.8, and what's nice about that is that all the features you expect will just work. So all the metrics gathering that you get from cAdvisor today will now work with CRI-O and so on. But generally speaking, 1.9 is a very short release window.
B: You know, I think, in closing: across the board, Kubernetes is a long-haul project for us. We want Kubernetes to be the best place to run containerized applications. We want it to be transformational to how large organizations build and develop software. We want it to be a stable ecosystem that allows people to orient themselves, to provide value, to build solutions that work for other people, and to make all of that easy to run and secure and manage.

We think that, just like the operating system and Linux were transformational in making it possible to see the world we have today, Kubernetes can help build the world that we'll see tomorrow. So expect us to keep walking this path of making it a predictable and excellent place to run applications. And we'll take questions, if anyone would like to ask them.
A: There are a couple, Clayton and Derek, and thank you for this. I think your point about the best feature being the community is right: a lot of great work has been done by lots of people, organizations, and individuals on this release, so it's a pretty notable release. One of the questions was: has there been any notable progress on the Service Catalog in 1.8?
B: I knew that there was something really important I was forgetting, and so there's a slide that's missing; thank you for reminding me. Service Catalog went through a ton of work in 1.8. A number of people from quite a few companies worked extremely hard to bring it to beta status, and there are kind of a few loose ends left. The goal is to make it beta very shortly after Kube 1.8, and the Service Catalog, in a lot of respects, is the first extensible part of Kubernetes: it's something that runs on top of Kubernetes and plugs in, but it's not actually tied to the Kubernetes release. It has been a testbed and has helped pave the way, and the folks involved have certainly jumped through some hoops, but it's going to make everything else in the Kubernetes ecosystem better.
C: Over the Kube 1.8 release, there was a federation face-to-face where a lot of the core contributing companies got together to sit back and ask whether federation was going in the right direction and what we could do to accelerate it. Folks might have seen some announcements that went out where SIG Federation will be renamed SIG Multicluster, and one of the challenges that we're looking to address, looking at things like Service Catalog as a proof point, was: how can we decompose federation into a smaller set of items geared to particular use cases, rather than one large monolith called federation?

So out of that there's an effort going on right now, and hopefully we'll see more of it in Kube 1.9, to move federation out of the Kube tree proper, and in that move out of the main Kube tree it's being decomposed into pieces. There will be a cluster registry component; for folks who were tracking federation, the cluster was the unique API resource offered by the project.
Generally speaking, everybody thought the cluster registry is a useful concept, a foundational tool for building a lot of other tools. So out of that SIG there's an effort to go and decompose the cluster registry into a standalone deployable artifact, using the federation code base as a base to kick that effort off. And then one of the other things that we at Red Hat have been pushing hard on with federation is to ensure that it has a stable lifecycle and release cadence. I think there had been some confusion across the community about what was meant when federation said a particular API resource had reached, you know, beta status; it wasn't always clear. And so what we're trying to do, by moving the federation code base out of tree, is to get into a cadence where a particular release of Kube goes out the door and then federation verifies that it functions against that stable release.
In the same way that Service Catalog will say it works well on a Kube 1.8 platform, federation will probably start to trail mainline Kube releases: say plus two weeks or plus three weeks (we'll figure out the number) after a Kube release, there would be a federation release. That gives you the hardening you need, so that even when things get into a Kube release at the very last minute, federation has had the chance to respond and validate against them. So I think, generally speaking, federation is still incubating and growing, and it's starting to decompose so that we can accelerate getting its use cases out to the community.
A: All right, there are a couple more questions, if you guys have time. This one might be a little detailed, but on the autoscaling feature, someone is asking if we can scale an app based on a combined condition with multiple metrics. His example is that he'd like to scale his app when the CPU is above 80% and the memory is above 75%.
B: You know, integrating with Prometheus really well, our goal would be that we offer, out of the box, a set of collectors that gather all the metrics of the platform, and we do want it to be possible to collect additional metrics. Now, obviously, with metrics (I think this is a little bit like backups or security) everybody has a slightly different approach, but everybody is trying to accomplish the same goals. We're going to try not to be too prescriptive about exactly how the metrics are operationally calculated.

We want to take advantage of the flexibility of Prometheus to slice and dice metrics in a couple of different ways. Our goal will be to take most of the elements of the Prometheus ecosystem that work well, begin to support them, and out of the box gather those metrics for the cluster and for the components running on the cluster. The next step would be making it easy to use Prometheus within a namespace or a set of namespaces.
So, just like we have a Jenkins image in OpenShift that you can use and that integrates well with the platform, we'd like to have a fairly simple integration there for what you might call a tenant Prometheus. And then the third step down the road would be a Prometheus that can do multi-tenant metrics at scale; as part of that custom metrics work we talked about, we actually anticipate that being one of the first paths where that Prometheus would be used. But multi-tenant Prometheus is a somewhat complicated project, and we don't want to jump into it too early, so we're going to take baby steps: through the custom metrics work, gather metrics from all the applications on the platform at a very high level for the purposes of autoscaling, and then slowly make it easier for operational teams to run their own Prometheus. I would say that for most people using Prometheus today, you'll see our new config file, you'll see our images, and we have a set of tools around securing that Prometheus.
A: We'll be talking again about the next release, which will hopefully be 1.9-ish by then, on December 5th at the OpenShift Commons Gathering; I'll send out the links, and please register for that too. So thanks again, Clayton and Derek, for all your work, and to everybody in the community for all the work that's gone into Kubernetes. Keep in touch. Thank you.