Description
From the March 28th 2017 OpenShift Commons Gathering in Berlin @KubeCon https://Commons.openshift.org/Gathering
A
I'm going to introduce, from Google, Aparna Sinha, who is the project lead for Kubernetes. We're really lucky to have her here today. She's going to give a version of what's going to be the keynote at KubeCon, a short synopsis of that for you, and a couple of demos. So we're going to go right into it. Thank you, and take it away.
B
Great, thank you, Diane. I feel very fortunate. I've got slides, I think. Ah, there we go. All right, good morning, everyone. I'm Aparna. I'm a product manager at Google for the Kubernetes project, and today I want to first of all welcome all of you. I am really delighted to talk here a little bit about Google's and Red Hat's contributions, as well as hopefully demo two of the new features that are in this release.
B
There is a community that we have worked hard to develop and foster around this project, a community that shares this goal with us, which is to create this common platform. This chart shows the evolution of that community. You can see that in the beginning, Google and Red Hat were in fact quite prominent in terms of the contributions and the commits that were made to the project over time.
B
The number of independent contributors that aren't necessarily associated with any company has grown, and the number of companies that are part of the community has grown as well. This diversity of companies and individuals is extremely important to that platform goal. If you're trying to have a project that is the platform for the rest of the world, then you need the diversity of the different environments that other users work in.
B
First of all, you can see there's a lot of user and customer interest in Kubernetes, and there's also a lot of interest from contributors. You can see here, on Google Trends over the last two years, a huge increase in interest in Kubernetes, but also 30-plus commercial distributions of Kubernetes to date.
B
I'd just like to give you a flavor of the contributions and the level of depth that Red Hat and OpenShift have had in shaping the Kubernetes project. This slide shows a current snapshot of all the special interest groups in the project, and this is always changing; we are adding and collapsing groups. But the next slide actually shows how many of these special interest groups are led by an engineer from Red Hat. Next slide, please.
B
Each group has multiple leaders, but as you can see, a lot of them, I think about forty percent, the ones shown in red, have a Red Hat engineer leading that group. Leading the group means that you are helping to shape that specific area: you're helping to shape storage, scheduling, networking, and how those things work in Kubernetes. Hopefully, for those of you in the audience who are OpenShift users, this gives you a lot of comfort.
B
Your
needs
and
your
requirements
have
a
tremendous
hand
in
shaping
the
direction
of
the
Cooper
Nettie's
project,
and
really
this
is
no
different
in
the
1.6
release,
which
is
launching
this
week
next
slide,
please.
So
the
Cooper
dad
is
one
dot
6
release.
The
major
theme
of
this
release
is
multi
workload,
multi-team
large
clusters.
B
There are many features in this release. Large clusters is, of course, one of the features, but I want to emphasize two particular standout features, and those are role-based access control (RBAC) and storage classes, also known as dynamic storage provisioning. These two features are important because they add critical functionality that I think changes the game of what you can do with containers in production.
B
Next slide. Now I'm going to go into the features. The first one is role-based access control, which really addresses this question: now that you have a large cluster, or multiple large clusters, how do you schedule multiple teams into those clusters such that they don't interfere with each other and they have the right set of permissions? Next slide, yes. One of our founders and leads, Tim Hockin, characterizes the introduction of the RBAC beta like this:
B
We
went
from
doss,
which
was
you
know,
single
user
and
where
everyone
I
can
see
everything
to
unix.
Where-
and
you
know,
you
see
only
your
things
and
there's
the
principle
of
least
privilege,
so
it's
that
type
of
big
change
for
us
next
line.
This
is
what
we
look
like
before.
Fine
grain
are
back.
You
know
you
have
here
a
set
of
three
node
or
five
thousand
node
cluster.
B
You
have
multiple
pods
and
multiple
workloads,
in
fact,
and
that
belong
to
different
teams,
but
there
isn't
a
good
way
through
the
Cooper
Nettie's
API
to
set
up
authorization,
and
so-
and
you
know,
authorization
is
by
default
at
the
cluster
level
and
all
pods
have
the
same.
Authorization
is
kind
of
vanilla,
it
looks
the
same
and
we
did
have
a
mechanism
called
a
back,
but
that
is
more
based
on
a
static
local
file,
whereas
are
back,
is
truly
dynamic
and
it's
through
the
Cooper
Nettie's
API.
B
So, next slide. With role-based access control, the picture looks something like this: you can isolate into namespaces. Here we are showing the workloads of the blue team in a blue namespace and the workloads of the green team in a green namespace. What's more important is that on a per-namespace, per-resource basis, you can set which roles can perform what actions on what resources in what namespaces. It's actually very powerful. There are many, many use cases for this; here are just a couple of examples. We have Alice.
B
She is a user; her role is "user", and she can list, which is like view permissions, services; a service is a type of resource. The eng and HR namespaces are here, and she can view services in the eng namespace, but not in the HR namespace. You see the level of granularity: it's at the per-resource, per-namespace level, for each role and each user. That's nice, and of course, with this level of granularity, there is a huge world of permutations that are enabled. So we see some other examples.
B
Bob has more of an admin-type role, so he is not just viewing, he's creating: he can create pods in one namespace, but not the other. The third example, the scheduler, is actually a system role; it's not a person. This role can read pods, but not another resource, which is secrets. So now let's get into the demo, hopefully, and you can play the video. I'm not sure if it's going to be as large as I would have liked, but you can see something.
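The Alice and Bob examples map directly onto Role and RoleBinding objects. A minimal sketch of Alice's permissions, using the 1.6-era `rbac.authorization.k8s.io/v1beta1` API (names such as `eng` and `service-viewer` are illustrative, not taken from the talk's actual demo files):

```yaml
# Role: can list/get services, only in the eng namespace
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: eng
  name: service-viewer
rules:
- apiGroups: [""]          # "" is the core API group (services, pods, ...)
  resources: ["services"]
  verbs: ["get", "list"]
---
# RoleBinding: grant that role to the user alice
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: eng
  name: alice-service-viewer
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: service-viewer
  apiGroup: rbac.authorization.k8s.io
```

Because both the Role and the binding live in `eng`, Alice gets nothing in the HR namespace, which is exactly the granularity described here.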
B
So, in order to do this demo, I have created a three-node cluster in Google Container Engine in Google Cloud. And, yes, can you see the screen? Okay, good. For people in the back it may be hard, but okay. So this is proceeding, and in order to show this demo, you actually need three users, or at least more than one user.
B
So I'm going to pretend here to be multiple users. In this first tab, I am kind of the super cluster admin, and in the other two tabs I'm going to be a green team and a blue team. The first thing that I'm doing here, as the super cluster admin, is to create a service account for the blue team and fetch those credentials into a local file, and then the same thing for the green team.
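The setup she describes might look roughly like this with kubectl (a sketch, assuming service-account tokens are used as the per-team credentials; the account names follow the demo):

```shell
# As the cluster admin: one service account per team
kubectl create serviceaccount blue-team-developer
kubectl create serviceaccount green-team-developer

# Fetch the blue team's token (1.6-era: a token secret is auto-created)
SECRET=$(kubectl get serviceaccount blue-team-developer \
  -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET" \
  -o jsonpath='{.data.token}' | base64 --decode > blue-token.txt
```

Each team's tab can then run `kubectl config set-credentials` with its own token file.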
B
Now I'm going to the blue team tab, and I'm going to configure kubectl to use the credentials that I just created for the blue team. This is all setup; basically I'm setting up the blue team in the blue tab, and then I'm going to do the same thing for the green team. So the green team is also going to go ahead and get credentials. And actually, could you pause the demo for a second? I think we've moved ahead.
B
So, yes, here, let me create a namespace for the blue team. I've gone ahead, as the cluster admin, and created a namespace for the blue team. But if I go to the blue team tab at this point, I haven't given them access, which is why you saw the error where the blue team wasn't actually able to access the namespace. So now I'm going to give the blue team access to the namespace. This is actually showing you RBAC.
B
One of the defining things in RBAC is this concept of cluster roles, and these are some of the default user roles: admin, edit, view. There are also system roles, which I'm not showing; I've hidden the system roles. Let's look at what the cluster role "admin" can do, and there are many things here; this is just looking at a subset of them. The admin cluster role has granular permissions over many resources and subresources, with these verbs: create, delete, list, watch.
B
And yes, this is what a role binding looks like. What this role binding says is: for the blue namespace, I would like the user blue-team-developer, which is the service account for this example, to have the admin role for that namespace. So that means everything you saw above, the blue developer can do: create, delete, watch, and so on, for the resources in the blue namespace only.
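The binding she shows would look something like this (a sketch; the subject and binding names follow the demo, and `admin` is the default cluster role discussed above):

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: blue                 # the namespace the grant applies to
  name: blue-team-admin
subjects:
- kind: ServiceAccount
  name: blue-team-developer
  namespace: blue
roleRef:
  kind: ClusterRole               # reuse the built-in admin role
  name: admin
  apiGroup: rbac.authorization.k8s.io
```

Binding a ClusterRole through a namespaced RoleBinding scopes its permissions to that one namespace, which is why the blue developer gets admin rights in blue only.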
B
Now we're going to create this role binding object. It's been created. Let's go to the blue service account, the blue team, and see: previously he wasn't able to access the namespace, and now he is. We run kubectl get pods for the blue namespace, and we don't get the error. Of course, there are no resources yet, so let's go ahead and create some resources in the blue namespace. We're going to create an nginx deployment. Now, let's see if that's been created: get pods.
B
Yes,
it's
running,
so
the
blue
developer
has
access
and
has
execution
permissions
in
in
this
namespace.
Okay,
there
are
no
services,
but
this
is
all
working
as
intended.
Let's
now
do
the
same
thing
for
the
green
user,
we're
going
to
create
a
green
namespace
and
we
are
going
to
create
a
roll
binding
for
the
green
user,
exactly
as
we
did
now.
This
is
for
the
green
namespace.
The
Green
Team
developer
is
going
to
have
admin
permissions,
just
like
the
blue.
One
did,
except
only
in
the
green
namespace,
okay,
so
we'll
go
ahead
and
create.
B
We create the green binding, and let's see... yes, the green user is able to get pods. There are no pods, so let's create an nginx deployment here. Actually, first I think we're going to check that he can't access the blue namespace. That's right: you see that the green user has access to the green namespace, but not to the blue namespace. This is what we want, right?
B
So
this
is
great.
Let's
run
this
forward
and,
of
course,
create
an
NGO
next
deployment
see
services,
there's
no
services,
so
this
is
working
as
intended.
The
last
thing
that
I
want
to
show
you
is
a
cross
namespace
permission,
so
we're
going
to
now.
Let's
say
the
green
user
wants
to
be
able
to
monitor
and
the
blue
namespace
all
resources
in
the
blue
namespace,
but
not
to
change
them.
B
Actually
going
back
to
the
blue
namespace,
I'm
just
showing
that
the
blue
user
does
not
have
any
permissions
in
the
green
namespace
so
cannot
get
pods
cannot
get
services.
The
blue
user
should
not.
We
did
not
set
that
up
right.
This
is
working,
but
we
want
to
give
the
green
user
read
access
to
the
blue,
namespace
read
not
right
and
so
I'm
going
to
show
how
to
do
that.
Of
course,
the
blue
user
cannot
do
that.
The
green
user
cannot
do
that.
B
Only the admin, the superuser here, can do that, because that person can see all the namespaces. So you can see, as the admin (hopefully you can see this), the admin can see the blue nginx and the green nginx deployments, as well as a bunch of system deployments that are running. Now I'm going to create a role binding for the green user to have view permissions in the blue namespace. So here you see this role.
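The cross-namespace grant is just another RoleBinding, this time created in the blue namespace for a green subject (a sketch; names follow the demo):

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: blue                 # grants access *to* blue...
  name: green-view-blue
subjects:
- kind: ServiceAccount
  name: green-team-developer
  namespace: green                # ...for a subject *from* green
roleRef:
  kind: ClusterRole
  name: view                      # built-in read-only role: get/list/watch
  apiGroup: rbac.authorization.k8s.io
```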
B
Thank you. I'm not sure if I'm running overtime; please let me know, Diane. All right, great. So that is role-based access control. I think for enterprise deployments, where you want to have multiple teams and multiple workloads, this should prove very valuable. It is not yet on by default; it is available, though, as beta, and in future releases it will be on by default. Next slide. So that was the demo. Next slide.
B
Okay, the other feature I said I would talk about is dynamic storage provisioning, which enables, and is the backing for, stateful workloads. Let's see: I think I will present a couple of slides and then move to the demo for this as well. Just quickly, I want to explain what is happening with dynamic storage provisioning; the core idea actually applies even in non-dynamic, static storage provisioning.
B
The idea is that there is a cluster admin who creates Kubernetes' view of storage. Kubernetes' view of storage is the persistent volume object, which says: okay, I have storage of X type, in Y cloud, of so many gigs, and this is what Kubernetes should do with it after my claim to the storage is gone: either recycle it or keep it around. And then there is the user side.
B
We want to isolate the pod from the actual details of the storage, so that the pod is not storage-specific and is actually portable across deployments. And so we have created this concept of a persistent volume claim. The claim is a request for resources, and it says: I want X amount of storage, of a particular type in case storage class types are defined. So when the PVC, the persistent volume claim, gets created, it binds to any available persistent volume that meets its request.
B
It's a claim out there saying: I need five gigs. If there's any volume out there that has five gigs and is available, the claim binds to it, and once that binding takes place, it is persistent; it stays there. A pod can associate with the claim, but the pod is ephemeral: it goes away, or it moves between nodes.
B
The persistent volume claim stays bound to the volume and keeps the data for that pod in that volume, so that when the pod comes back, it again associates with the claim and has access to the same volume. This is extremely important for stateful applications. So that's the main mechanism. You can go through the next couple of slides; I just show that here's a pod, the pod is associated with the claim, and you can delete the pod and bring the pod back, and everything is...
B
...there as before. What dynamic storage provisioning changes is the previous slide's picture, where the storage exists, it's out there, and the claim comes along and binds to whatever is available. Okay, this is wasteful, because someone has to provision that storage in advance, and the storage has to sit there, right? That's not what we want if we want efficiency. Dynamic storage provisioning enables the concept of abstract storage classes, so the cluster admin can still say: yes, I have a storage class.
B
First, I'm going to show you the manual method. So I'm going to create a disk here: I'm asking Google Cloud to create a disk of size 10 gigabytes, a standard disk, and I'm going to call it manual-disk-1. Okay, and now I see that this manual disk has been created in us-central1-a with 10 gigabytes, as requested. The old, old way, which is a bad practice, is to inline this storage in the pod manifest.
B
So, hopefully you can see the screen, and here's the pod manifest. I've specified the disk that I want to attach and mount: here it is, a GCE persistent disk, its name is manual-disk-1, and its filesystem is ext4. This is very, very specific, right? This pod manifest cannot go anywhere, and it can only use that disk. So this is bad.
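An inlined manifest of the kind she's criticizing would look roughly like this (a sketch; the disk name matches the demo, the pod details are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    gcePersistentDisk:            # hard-wired to one cloud and one disk
      pdName: manual-disk-1
      fsType: ext4
```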
What we want the pod manifest to look like is actually very independent from the details of the storage. So here it's going to reference a persistent volume claim.
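The portable version replaces the cloud-specific volume with a claim reference, and nothing else changes (a sketch; `my-claim` is an illustrative name):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:        # only the claim's name; no storage details
      claimName: my-claim
```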
B
It just has the name of the claim. It doesn't even say what the claim is; it doesn't say how much storage, doesn't say what type of disk, nothing. And let's look at the claim. So this is now a very portable pod manifest: it can come and go from cloud to cloud, and it can come and go from time to time. And hopefully I will show you the manifest for the PVC... yes. So here is a PVC manifest, and this manifest is also fairly generic. It just says that I want five gigs of storage.
B
It can declare a storage class; I'm going to come back and explain storage classes, which is a concept in dynamic storage provisioning. But here we've given the empty string, which means: I don't want to use any storage class, just give me any five gigs that are available. That's what the claim is saying.
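Such a claim could be written like this (a sketch; the 5Gi figure comes from the demo, the name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  storageClassName: ""            # empty string: skip dynamic provisioning,
  accessModes:                    # bind to any matching pre-provisioned PV
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```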
Okay, I think we're going to create this claim.
B
Oh, okay, yes, and I want to show you the persistent volume itself. So I have to create the claim, right, but now I also need to create the volume, because this is manual provisioning, and the persistent volume manifest is where all of the details are. There, I say that it's five gigs of storage, that it is actually a GCE persistent disk, here's the name, and the reclaim policy. So this is Kubernetes' view of that 10-gig disk. I'm telling Kubernetes:
B
you can use five gigs of that 10-gig disk, and please delete it after you're done. You could also set this reclaim policy to recycle or retain the disk, but I'm going to delete it for easy cleanup. So this is manual provisioning: I have previously provisioned the disk, then I've told Kubernetes about the persistent volume, then I created the claim. It's a portable pod, which is nice, but it's still very manual.
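The PersistentVolume manifest for the manual path might look like this (a sketch; sizes, disk name, and policy follow the demo):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-pv-1
spec:
  capacity:
    storage: 5Gi                  # Kubernetes may use 5 of the disk's 10 GB
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete   # or Retain / Recycle
  gcePersistentDisk:
    pdName: manual-disk-1
    fsType: ext4
```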
B
Okay, now I've gone ahead and created the persistent volume, so the volume is created. You can see that it's created and its status is Available. Now I'm going to create the PVC, the claim, and Kubernetes is automatically going to bind it, right? So now the PVC came along and said: I want five gigs. Oh, there happens to be a volume already, a five-gig volume; let me bind. So now...
What
I
want
to
show
you
is
how
easy
it
is
to
do
dynamic,
storage,
provisioning,
so
I
think
the
next
step
is
I'm
going
to
clean
this
up
and
yep
delete
the
manual
PVC,
and
that
should
delete
that
should
delete
everything
and
I'm
going
to
show
you
that
it's
deleted
yep.
It
should
delete
the
PVC
and
the
pv
and
also
delete
the
disk.
So
now
there
is
no
disk
with
dynamic
storage
provisioning.
Like
I
said
I
don't
have
to
pre
provision
the
storage.
All
I
need
to
do
and
well
it's
creative
is
create
the
PVC.
B
That's the claim. But let me first tell you about the concept of storage classes. The storage admin can still come in and declare that there are multiple types of storage available, without provisioning them. Here the admin has created a "fast" storage class, which is an SSD, and when we do get storageclasses, we see that the fast class is available, as well as the default storage class, which in 1.6 for Google Cloud is a standard disk. So those two storage classes are available, but no storage has been created.
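A "fast" class on Google Cloud could be declared like this (a sketch; `kubernetes.io/gce-pd` with `pd-ssd` is the standard way to get SSD-backed disks there):

```yaml
apiVersion: storage.k8s.io/v1     # StorageClass went GA in 1.6
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd                    # SSD persistent disks
```

Declaring the class allocates nothing; disks are only created when a claim asks for the class.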
B
Admin
has
said
what
type
of
storage
is
available,
but
nothing
has
been
created
now
the
persistent
volume
claim
comes
along
and
it
says
yeah
I
want
to
use
that
fast
class
whatever
it
is
that
my
admin
said:
I
want
10
gigs
of
it
and
when
I
create
this
claim,
you
will
see
that
everything
happens
automatically.
I
don't
need
to
create
a
persistent
volume.
I
don't
need
to
provision
the
storage
and
by
itself
you
know
the
PV
has
been
created.
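The claim that triggers all of this simply names the class (a sketch; the 10Gi figure follows the demo):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-claim
spec:
  storageClassName: fast          # dynamic provisioning: a new SSD disk and
  accessModes:                    # a matching PV are created automatically
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```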
B
You
see
this
PP
with
this
long
numerical
number
and
it
has
the
right
delete
policy
and
then
it's
been
bound
to
the
PVC,
and
if
we
look
at
the
disks
and
we
get
disks,
you
will
see
that
the
storage
has
also
been
provisioned.
So
this
is
automation,
nice,
no
wastage
and
then
I
think
I
go
on
and
I
show
you,
you
know
the
default
and
how
to
do
default,
but
I
think
for
the
sake
of
time
we
can
skip
that.
B
This has been a big release for moving storage forward, and so there's support for user-written and user-run dynamic PV provisioners, which is very nice, as well as a number of third-party plugins that have made it into the release. That's it for storage. I just have one more slide on the future of the project and where we are going.
B
Again, we want to be the platform; we're trying to build a platform for the rest of the world to run distributed-systems applications, and that requires multi-workload, multi-team, efficient scheduling in large clusters or across multiple clusters. Some of the roadmap: around security, we're going to make RBAC the default. There's also network policy, which allows pods to say: okay, I accept traffic from this network, or this part of the network...
B
But
not
you
know
not
not
be
available
to
take
requests
from
any
part
of
the
network
that
network
that's
network
policy.
We
will
continue
adding
more
features
to
stateful
application,
support,
upgrading
stateful
applications
and
without
downtime.
That's
on
the
on
the
road
map.
Also
GPU
support
is
extremely
important
for
those
of
you
running
machine
learning
and
there
are
quite
a
few
and
running
machine
learning,
including
tensorflow
and
other
types
of
framework.
So
that's
coming
soon.
B
In
fact,
there
is
an
alpha
implementation
of
multiple
GPUs
in
in
the
1.6
release
and
then
I
mentioned
multi
workload,
scheduling
so
there's
several
features
in
this
release.
For
you
know,
custom
scheduling
and
advance
scheduling,
but
we
will
continue
to
forward
move
forward
on
that
works,
to
make
it
efficient
to
schedule
multiple
different
types
of
workloads
in
a
cluster
in
terms
of
extensibility
and
there's
work,
and
that's
alpha
work
in
this
release
on
a
different
cloud
provider
separating
those
out
and
making
each
of
those
more
powerful
and
also
the
container
runtime
interface.
B
The
CRI
for
docker
is
beta
in
this
release
and
going
forward.
We
will
be
adding
support
for
many
other
runtimes
just
provide
flexibility
to
our
users
and
then,
lastly,
a
Service
Catalog,
Service,
Catalog
and
sig
Service
Catalog,
and
understand
the
work
that
they're
doing.
There
enables
Cooper
Nettie's
to
consume
services
outside
of
Cooper
Nettie's
through
it
through
a
Service
Catalog,
and
it
uses
the
open
services
broker,
API,
which
it
has
a
heritage
in
the
cloud
Family
Foundation,
and
so
that
that
was
actually
I.