From YouTube: OpenShift Commons Briefing #112: What's New in Kubernetes 1.9 - Features/Functions/Futures
Description
It's that time again: another release of Kubernetes is just out, and it's time for another overview/update from Red Hat on all the many and varied new features and functions that are included in Kubernetes 1.9! We'll also get a chance to hear from Derek Carr and other Kubernetes contributors about the next release and beyond, so be sure to join us with your questions and feedback.
Derek Carr, Principal Software Engineer for application platforms in the cloud at Red Hat, will be our guest speaker. Derek is a core contributor to both OpenShift and Kubernetes, the open source platform as a service and the containerized cluster manager.
Diane: Well, hello, everybody, and welcome again to another OpenShift Commons briefing. We're kicking off 2018 with a talk on Kubernetes 1.9: the features, functions, and futures. Derek Carr is our guest speaker today, and we have a lot of content, so I won't talk very long. I am going to mention that we are hosting another OpenShift Commons gathering in London on January 31st, so go to commons.openshift.org.
Derek: Right, thank you, Diane. Hopefully everyone had a refreshing holiday break. What I want to go through today is all the great work that was done just in time for Christmas for Kubernetes 1.9. I'm going to try to give a summary across the entire ecosystem. My history on the project is long and storied, but these days I focus a lot on particular areas around the node and resource management.
So I'll do my best to answer any follow-up questions in areas that might be out of my particular domain, but feel free to ask anything that you want further clarification on afterwards. With that in mind, what's new this time around in 1.9? I pulled together some of the stats and, to be honest, I'm a bit amazed. This was a shorter release; 1.9 essentially spanned the fourth quarter.
B
6,000
plus
pull
requests
are
still
a
lot
of
pull
requests,
and
even
in
that
time
there
were
some
approximate
like
75,000
comments
across
board,
quests
and
issues.
So
just
looking
at
the
measurement
of
like
the
community
of
vitality
and
health,
I
mean
a
lot
of
work
is
going
in
a
very
short
period
of
time
and
a
lot
of
discussion
about
future
work
is
happening.
B
Graduation
of
particular
features
and
I'll
give
a
little
bit
more
detail
on
that
as
we
go
through
for
folks
who
may
not
be
aware,
the
kubernetes
project
is
subdivided
into
a
set
of
special
interest
groups,
which
we
call
SIG's
and
as
well.
There
are
some
working
groups
that
kind
of
span
SIG's
that
a
lot
of
the
discussion
and
development
activities
for
the
overall
project
come
out
of.
B
Generally
speaking,
I
think
everyone
across
the
canaries
community
recognizes
that
kubernetes
is
becoming
a
central
part
of
folks
IT
operations
and,
as
a
result,
there's
a
strong
need
to
make
sure
that
stability
moving
forward
is
is
paramount.
So
there
was
a
lot
of
focus
in
the
1/9
released
here
to
continue
to
focus
on
fixing
bugs
and
ensuring
a
stability,
a
stable
platform
moving
forward.
In
addition,
I
would
say
there
was
a
bit
of
a
slowdown
on
the
community.
B
B
B
Typically, things start in alpha as they're being iterated and learned upon; they move to beta when we think they're starting to mature and we want users to actually take advantage of them; and when they get to v1, we really feel they've reached a rock-solid state: the API is unlikely to change, and will not change in a backwards-incompatible fashion moving forward. The key APIs that moved forward to v1 were DaemonSet, Deployment, ReplicaSet, and StatefulSet.
B
This
was
about
a
let's
say
a
it
took
over
a
year
of
effort
to
to
reach
this
state
and
a
lot.
A
lot
of
work
was
done
to
ensure
that
lessons
learned
from
one
workload
controller
were
carried
over
to
every
other
one,
and
a
lot
of
work
was
done
to
ensure
that
there
was
consistency
across
those
controllers
for
how
they
manage
resources.
B
The
Batchelor
workload
api,
in
particular
users,
who
might
be
using
jobs
and
cron
jobs.
They're
gonna
have
a
separate
path
to
v1,
but
I
think
it
was
a
big
win
for
the
community
generally
to
see
the
the
four
major
workload
types
for
stateless
and
stateful
workloads
have
all
now
graduated
to
v1
some
things
to
remember.
If you're looking to migrate your existing content to the new resource types, there were some changes made to the workload API types as part of the graduation to v1. In particular, for any of the workload controllers, consider the selectors that are used to target the pods they manage: in the past, the selector was defaulted from the labels inside of your pod template, and that defaulting was removed after a lot of lessons learned. So in general, you now have to give an explicit selector on your workload API type.
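As a minimal sketch of that change (names here are illustrative), an apps/v1 Deployment must carry a selector that explicitly matches its pod template labels:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:            # required in apps/v1; no longer defaulted
        matchLabels:       # from the pod template labels
          app: web
      template:
        metadata:
          labels:
            app: web       # must match spec.selector
        spec:
          containers:
          - name: web
            image: nginx:1.13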
If you think about what SIG API Machinery produces: they do a lot of the — I don't want to say plumbing, but they do a lot of the great work that supports extensibility in the platform, and one of the key areas that really evolved this release was admission control. For folks who may not be aware: historically, since Kubernetes 1.0, there was the ability to do something called static admission control.
You could write little code snippets that intercepted requests to the API server, and they were used to do defaulting, resource constraints, and things like quota. In the time frame between Kube 1.0 and now, Kube 1.9, the number of admission controllers blossomed in the project, and a lot of patterns were identified that were common across the ecosystem in the types of things we saw admission controllers do.
In Kube 1.9, there was great work to clean up and improve the extensibility story, so that new users who want to intercept a request to do some custom action don't need to get their code merged into the Kubernetes repository; they can manage it externally. The admission chain flow was also cleaned up to address recurring patterns we were seeing where mutating and validating admission control handlers conflicted.
So the good news here — what I want to talk about — is that mutating and validating webhooks have graduated to beta. Essentially this means that if you are interested in extending the platform to do things like intercepting when a namespace is created so you can do something, or validating that names conform to particular naming conventions — or, for example, intercepting when a pod is created to inject a common sidecar container — you can. In the past, these things were really hard to do because you had to get code into the core.
Now you can do these things using what's called a mutating webhook. You run a small server — there are examples published by the community on how to do this — that can run as a pod on the cluster, and any time a request comes into the API server, you get a callout at a particular point in the admission chain. You have the opportunity to have your external code mutate the incoming object prior to it being persisted, as well as validate that object to enforce any constraints you need.
Once the API server path starts calling out to external resources prior to persistence, it's important to make sure that these callouts are low-latency and performant. To support the community's needs around monitoring this, there are Prometheus metrics now collected around the latency of calling out to particular webhooks. And, as I said, these webhooks can be hosted outside of the cluster as well as inside the cluster via a pod fronted by a service.
Obviously, if you fail open, that means that if the server can't find your external webhook it just ignores it, which can give you some non-deterministic behavior. But generally you have the flexibility to say what you want to happen when things can't be reached. I gave a link to a sample admission webhook server that we had in OpenShift — I've worked on one that lets you control reservation of namespace names — and I encourage folks who are interested in exploring that to do their own enablement after the call.
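As a minimal sketch (the webhook name, namespace, and service are hypothetical), a 1.9 beta configuration registering a mutating webhook served by an in-cluster service could look like:

    apiVersion: admissionregistration.k8s.io/v1beta1
    kind: MutatingWebhookConfiguration
    metadata:
      name: sidecar-injector
    webhooks:
    - name: sidecar-injector.example.com
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
      failurePolicy: Fail    # fail closed: reject requests if the webhook is unreachable
      clientConfig:
        service:
          namespace: injector
          name: sidecar-injector
          path: /mutate
        caBundle: <base64-encoded CA certificate>

Setting failurePolicy to Ignore instead gives the fail-open behavior described above.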
B
This
is
also
particularly
powerful
if,
if
you're
using
things
like
custom
resources,
so
a
lot
of
people
using
custom
resources
to
drive
operator
patterns
and
many
folks
wanted
to
intercept
creation
of
custom
resources
to
be
able
to
perform
an
action.
This
now
kind
of
completes
that
vision
and
we
look
forward
to
getting
a
lot
of
feedback
from
the
community
about
it.
Another great thing that came out of API Machinery, which I think we touched on in our last community call about the 1.8 release, is something called chunking, which has now graduated to beta. This is of particular importance to me as an operator of very large clusters in our online environments.
Previously, many of our controllers or clients — whether you were doing migrations or not — would commonly need to list all the resources. To give an example from some of our online deployments, which commonly have 10,000 namespaces where each namespace has nine secrets: it turns out listing 90,000 secrets is a really painful, slow operation.
One of the great things that is now possible: when you do a kubectl get of a resource, it now has a standard chunk size by default, so it will fetch the resources in groups of 500. End users see immediate responses and get a perceived latency improvement, and the server is much more efficient, actually being able to return all these resources in a timely fashion without reaching a timeout.
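For example (a sketch of the flag; 500 is already the default), you can tune the page size kubectl uses when listing:

    # Fetch secrets across all namespaces in pages of 500.
    kubectl get secrets --all-namespaces --chunk-size=500

Under the covers this uses the list API's limit and continue parameters, so each request returns quickly instead of producing one giant response.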
This is one of those internal density and scale improvements that might not get a lot of attention in blog posts but is really critical to actually running reliable, dense clusters or doing things like migration. So this is something I do want to highlight, and I think it's a big win for the community around reliability.
The last thing I want to talk about from API Machinery is some of the work that was done around custom resources. For folks who are aware, you can register your own resource types that you want to manage in Kubernetes. So if you have your own third-party operator resource type that you want to be able to do CRUD on, you can obviously do that now in Kubernetes; but what you were not able to do previously was validate your resources prior to persistence.
A lot of people had to do client-side validation, which has its own pros and cons. New in Kubernetes 1.9, and on by default, is the ability, when you declare your custom resource type, to give an optional OpenAPI v3 schema; when your custom resources are then created by end users, they get validated against that schema on create and update calls. A quick example of this is on the right-hand side.
You can see a custom resource definition that has in its spec a new validation clause which says: the spec.version property on this resource must be one of these two values, and spec.replicas must be within this value range. If we look at the example on the left, it's an instance of that custom resource definition.
In this case, it's a kind called App, and it declares a version field and a replicas field that don't validate. You get a really nice user experience now in Kube 1.9 where, if a user posts what you see on the left, it gets validated on the API server according to those validation rules — in this case it's going to fail — and you get rather nice validation error messages in response that let users know why something was or was not valid. I think this is one of those things that just makes the platform nicer to use.
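Reconstructing the slide's example as a sketch (the field names follow the transcript; the exact allowed values and range are illustrative):

    # Right-hand side: CRD carrying an OpenAPI v3 validation schema (Kubernetes 1.9).
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: apps.example.com
    spec:
      group: example.com
      version: v1
      scope: Namespaced
      names:
        plural: apps
        kind: App
      validation:
        openAPIV3Schema:
          properties:
            spec:
              properties:
                version:
                  type: string
                  enum: ["v1.0", "v2.0"]
                replicas:
                  type: integer
                  minimum: 1
                  maximum: 10
    ---
    # Left-hand side: an App instance that fails validation on both fields.
    apiVersion: example.com/v1
    kind: App
    metadata:
      name: my-app
    spec:
      version: v3.0    # not in the enum
      replicas: 99     # above the maximum

The API server rejects the create with messages along the lines of `spec.replicas in body should be less than or equal to 10`.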
There was also some confusion in the past where, if an audit entry was recorded, you didn't actually have two timestamps to know when the request was received versus where it was in the auditing stage of processing that request. Now there are separate timestamps that let you track those two things clearly, which gives better granularity in your audit logs. The other item I want to call out, since it's useful for things like custom resources, is a new feature in RBAC.
It lets you define cluster roles that union together the rules of other cluster roles. For example, what you'll see here is: create a cluster role called monitoring, and its aggregation rule says to match any cluster role carrying the aggregate-to-monitoring=true label. A controller in Kubernetes then finds all cluster roles that match that label and dynamically populates the rules of the monitoring role based on them. So this is nice.
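A minimal sketch of that aggregated role (the label key and grantee rules are illustrative):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: monitoring
    aggregationRule:
      clusterRoleSelectors:
      - matchLabels:
          rbac.example.com/aggregate-to-monitoring: "true"
    rules: []    # populated automatically by the controller
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: monitoring-endpoints
      labels:
        rbac.example.com/aggregate-to-monitoring: "true"
    rules:
    - apiGroups: [""]
      resources: ["services", "endpoints", "pods"]
      verbs: ["get", "list", "watch"]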
This is nice if you want your out-of-the-box role types, like the default edit and view roles, to include any custom resources you create, so I thought that was useful to call out. Next up, out of SIG CLI: this is one of those features that I really like. I have a long history with the project — I believe I originally tried to get this feature in back in 2015 — and it's taken some time, but I was really happy to see it land: you can now use field selectors in kubectl.
For example, if you've ever struggled with "how do I find all pods scheduled to a particular node?" in a one-line command, that's now possible: you just filter pods on spec.nodeName. Or you can find all pods that are running, or all pods that are not running; or filter events based on their source. Basically, you have a lot more flexibility.
Now, in kubectl, you can do things based on actual field values, not just things like labels. Inherently, with field selectors you have to know which field selection clauses are available for a given resource type, but generally speaking this is a really useful win, and it will save people from writing a lot of the jq-style filtering we had seen in the past.
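A couple of illustrative one-liners (the node name is hypothetical):

    # All pods scheduled to a particular node, across all namespaces.
    kubectl get pods --all-namespaces --field-selector spec.nodeName=node-1

    # All pods that are not currently running.
    kubectl get pods --all-namespaces --field-selector 'status.phase!=Running'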
I want to very quickly go over what was going on across some of the SIGs that deal with our cloud providers.
In the AWS SIG, some work was done to support C5 instance types that use NVMe device volumes. In addition, nodes that present themselves as having EBS volumes stuck attaching are now automatically tainted, and the expectation is that operators will monitor for that taint and remediate as they see appropriate. That might mean that if you see a node has been tainted because a volume is stuck attaching, you might just choose to restart that node.
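As a sketch of how an operator might watch for this (the exact taint key comes from the AWS cloud provider and should be verified against your Kubernetes version; upstream uses a key along the lines of NodeWithImpairedVolumes):

    # Surface nodes carrying an impaired-volumes taint so an operator can remediate.
    kubectl get nodes -o json \
      | jq -r '.items[] | select(.spec.taints[]?.key | test("ImpairedVolumes")) | .metadata.name'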
In addition, on Azure there was some work to improve the load balancer implementation and general stability, and on OpenStack a number of iterations were done to improve how it integrates with block storage and load balancers. On the networking side, a couple of items to discuss: alpha support was added for IPv6, and — I believe we talked about this in the last update call for Kube 1.8 — there was alpha support for kube-proxy using IPVS instead of just iptables.
That has now graduated to beta in the Kube 1.9 release, and we're excited to see the outcomes as people start to evaluate it. There were a lot of reported potential benefits that we need to measure in our own dense clusters to see the pros and cons of the change, but generally speaking, IPVS has a lot of potential long-term benefits for improving performance on dense clusters: where you have a large number of services, writing iptables rules was very slow, and even evaluating those chains was slow.
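Trying the beta mode is a kube-proxy configuration change (a sketch; the mode can also be set through the kube-proxy configuration file, and the IPVS kernel modules must be available on the node):

    # Run kube-proxy in IPVS mode instead of the default iptables mode.
    kube-proxy --proxy-mode=ipvs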
Moving on to SIG Node: generally speaking, a lot of performance and reliability improvements were done in Kube 1.9 to make sure the kubelet is more stable at running your workloads. Across the container runtime ecosystem, I think we're starting to see all the work that was done around the Container Runtime Interface in SIG Node come to fruition.
I wanted to highlight the great work done out of Red Hat, Intel, and others on CRI-O moving to stable: it passes all of the e2e tests for Kube 1.9, it has integration with Minikube, and we encourage everyone to try it out. In addition, the other runtimes in the ecosystem have evolved, so containerd moved to beta, as did others not listed here.
Generally speaking, this is really important to me because — and this is probably the first release where it was really true — the idea of being able to plug and play particular container runtimes has come to fruition, and now you get to evaluate the runtime you want to run based on performance metrics, stability, and those types of things. In particular, here at Red Hat,
we will be looking to deploy CRI-O out to our OpenShift Online clusters very shortly. On the debugging side, a lot of work was done to make it easier to debug environments when you're using a variety of container runtime choices: there is a new effort called CRI tools that lets you introspect what's happening on the machine independent of the container runtime.
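For example (a sketch using crictl from the cri-tools project; verify subcommands against your installed version):

    # List pod sandboxes, containers, and images directly against the CRI socket.
    crictl pods
    crictl ps
    crictl images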
In the resource management space, a lot of work was done to continue to iterate on and prepare for graduating features
we've been working on for a while. For device plugins, a lot of work was done to improve the reliability of how the kubelet interacts with device plugins. At this point we still only have a limited set of plugins available in the community, largely focused on the GPU accelerator use case, but if folks are interested in participating and integrating with other plugin types, I think we'd love the contribution.
We also decoupled it from the model it had previously been tied to; basically, that's another thing we're preparing to graduate to beta in the future. On the node side, it's important that you know how your workloads are running, so numerous metrics improvements were done. As folks may be aware, the kubelet embeds a component called cAdvisor, and cAdvisor got extended to add support for accelerator stats.
In addition, ephemeral pod storage — an ongoing activity in the community that lets you control how much local disk pods can consume — now has monitoring, with metrics data reported to state how much local storage is being used. And for folks who integrate with the kubelet Summary API for metrics collection: in the past we only reported per-container stats, but now we give pod-level usage stats, which, when a pod has multiple containers, lets you know very easily how much the pod as a whole is using.
This was another one of those things that came out of the observability needs of running really large clusters. On the unique challenges you run into when you want to preserve the amount of etcd space that's used: the major improvement that came to quota in this release is that you can now do object count quota on all standard namespaced resource types.
There's a syntax for this now where you just say count/ followed by the resource name and the group it's in. In addition, you can also now quota huge pages; that was another preparatory work item done to support graduating that feature to beta in a future release. A quick example: if you want to control the number of pods a user can consume and, in addition, the number of jobs they can spawn, this is the new syntax that lets you quota both.
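A minimal sketch of that syntax (the limits chosen are arbitrary):

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: object-counts
      namespace: demo
    spec:
      hard:
        count/pods: "20"          # cap on pods in the namespace
        count/jobs.batch: "10"    # cap on Jobs, using the resource.group form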
Okay, so SIG Scheduling: there were more iterative improvements in pod priority and preemption. New in Kube 1.9, the pod priority feature, which is still alpha, now respects pod disruption budgets; in addition, it integrates properly with the kubelet eviction logic. For folks who may not have been aware, pod priority is basically a mechanism that lets you associate a priority with a pod.
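A sketch of the alpha feature as it looks in 1.9 (names are illustrative):

    apiVersion: scheduling.k8s.io/v1alpha1
    kind: PriorityClass
    metadata:
      name: high-priority
    value: 1000000        # higher value = preferred during scheduling and preemption
    globalDefault: false
    description: "For latency-critical services."

A pod opts in by setting `priorityClassName: high-priority` in its spec.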
This introduces some unique challenges when integrating with how the kubelet itself chooses to evict pods, which in the past has always been when pods are using more than they asked for and resources are scarce. The new logic is basically: you continue to be in danger if you use more than you requested; but if there are no pods using more than they requested, it breaks ties by priority and then works against whoever is the largest consumer of resources relative to their request.
In addition, some interesting work that some of our team members here at Red Hat have been doing: we added a new priority function — I believe it's alpha. As folks are aware, a pod can have a CPU request and a CPU limit; until now, the scheduler only satisfied resource requests and didn't really care what your limit was. We had gotten a lot of feedback that users wanted to prefer nodes where pods' limits could also be satisfied, so that you could reach your maximum burst, and so this new priority function was added.
One other thing I wanted to call out here: there's some work going on in SIG Scheduling around various incubator projects. One of those items is the descheduler, which basically looks at an existing set of pods that have been scheduled across your cluster and performs — for better or worse — a defrag: it checks whether there's a better home for a pod now and, if so, looks to move it. This is incubator work
that's continuing in the community. In SIG Storage, there are a few items I'll call out, and a lot of them are alpha. The first: as folks are aware, when Kubernetes wanted to support new volume plugins, you always had to get code into core Kubernetes. That was a similar problem to the admission control one I talked about previously, and it's a bit of a hindrance toward
broadening the ecosystem, for a few reasons: one, it forces your integration to be open source, and some folks had trouble with that; two, it's just hard to get your code into Kube sometimes. So a great effort was done on something called the Container Storage Interface, which defines a common API pattern across multiple container orchestrators. This was an effort across the Kubernetes community, the Mesos community, Cloud Foundry, and Docker Swarm, and basically a new volume plugin was written for Kubernetes core that is currently alpha.
It knows how to interface with the Container Storage Interface definition, and in the long term this will allow volume plugins to be deployed containerized on the cluster without needing to live in core Kubernetes itself. In addition, alpha support for raw block devices was added; there's one implementation today in the community, and I expect that to grow in the future. And then finally, I think we talked about this in Kube 1.8:
there was initial support to allow you to resize your provisioned volumes. That resize support got extended to additional volume types, so new in 1.9 you can resize your GCE persistent disks, your AWS EBS volumes, and your Cinder-backed persistent volume claims. Based on the experience of that growing across multiple storage volume types, I expect it will be set up to go to beta in a future release.
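A sketch of what triggering a resize looks like (this assumes the alpha ExpandPersistentVolumes feature gate is enabled and that the StorageClass opts in; names are illustrative):

    # The StorageClass must allow expansion.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: resizable
    provisioner: kubernetes.io/aws-ebs
    allowVolumeExpansion: true

Editing the claim's requested size then kicks off the resize:

    kubectl patch pvc data-claim -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'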
Some work was done in SIG Windows to try to further evolve and improve support for running pods on Windows nodes. I've listed a number of the items here, but at this point, basically, I think SIG Windows and the broader community want everyone to evaluate its usage and provide feedback on how to further iterate. This is a heartening sign: the set of workload types supported on Kubernetes continues to grow, not just on Linux itself but across the broader operating system ecosystem.
So that's Kube 1.9 in a nutshell. Let's look forward a little bit to Kube 1.10, and then we can take Q&A. Kube 1.10 is very early: after Kube 1.9 went out the door, as you can imagine, everyone took a very well-deserved long vacation, and Kube 1.10 planning is just starting across a variety of our SIGs. What I wanted to highlight here are a couple of items that I personally hope will continue to get attention.
We talked about everything being extensible across the Kubernetes platform — whether that's container runtimes, API machinery extension hooks, custom resources, storage volume types, or device plugins — along the theme of not needing to get something into core Kube to take advantage of it. I expect all of those extensibility vectors to continue to evolve, and as more and more clusters run at greater and greater densities, we should continue to see scaling improvements.
Some of the items listed here are of interest to me: as I noted earlier, the descheduler component, an incubator project being worked on in the community today, is probably going to be looking to get more actual real-world feedback post Kube 1.10. The priority and preemption features — I'd love to see those graduate to beta, as well as a number of the other things discussed previously. And for folks on the call, if there are particular features you would love to see the maintainers start to explore as well, the community is very open.
Diane: If you have questions, ask them in the chat. I'm not seeing any questions, which probably means you've stunned them into silence with all of those features. You can reach us on the Slack channel or in the Kubernetes community channels and get answers there, and, as Derek pointed out, he's also @derekwaynecarr on Twitter — and, as you probably know, on GitHub as well.
So please feel free to reach out to him, or to ask questions on the OpenShift Commons mailing list; if you're not on that list yet, send an email to me or tweet at the @OpenShiftCommons Twitter handle and I will get you set up. It's going to be an interesting year, 2018: lots of good stuff coming down the pike for Kubernetes and all of the ancillary upstream projects related to it.
If there are topics you're interested in hearing about — Kubernetes-related, upstream, or other workload stuff — please let me know, and I'll be happy to organize and recruit speakers and generally get folks the information they need as quickly as possible. So again, not seeing any questions, Derek, which means you've probably done an awesome job here, or stunned everybody. I really thank you all for taking the time today to listen in; it's a rather large audience, so thanks for that. Thank you all.