Description
A look at what lies ahead in Kubernetes in terms of enabling new application workloads, running multiple clusters at scale, and managing services across multiple clouds with OpenShift with Mike Barrett, Marc Curry, and Clayton Coleman from the OpenShift team at the OpenShift Commons Gathering Boston.
Learn more: https://www.redhat.com/en/summit/2017/agenda/sessions
A: All right, let's get started. So, how many people traveled to Boston for this event? Raise your hand. Wow. There is like nobody from Boston in this room, so you're all going to go to a Fenway game on Wednesday, and I kid you not, you're going to sing Neil Diamond's "Sweet Caroline," and I want to prepare you for this. It's going to go a little bit like this: you'll hear "Sweet Caroline..."
We get to talk about Kubernetes, and I would say this year is the first time I never have to explain what Kubernetes is; we've gotten past that in the industry. This is a great project to be a part of, and we're happy to be a part of it too. This is from Stackalytics; I love going to that tool and playing with the time ranges. So this is commits since project inception, and you can tell the beauty of an open source project when the independent contributions become so large.
These are men and women almost pulling double duty. There are only 24 hours in a day, and we practically ask them to work all 24. You have to be part of the SIGs, right? They have to call in, be part of the community, and lead that section of the project, while also dealing with us product management teams asking them to do their other jobs during the day.
Now, if you're not familiar with it, you can go to ci.openshift.redhat.com/release_roadmap, or something like that, and you can see very nicely what we're working on where. I did that last night, and this pattern emerged: if I look at where most of our hours and sprint cycles are being spent in the Kubernetes projects, it's around persistence, it's around security, it's around workload diversity, and it's around cluster reliability and resilience. So we're going to go into each one of those topics, and this is an ask-me-anything session.
B: We're excited about adding new features to the product, but at the end of the day it's really about: if you start something running, does it keep running? Do you not have to worry about it, so you can go worry about other stuff? And so there's a lot of very technical work here; I almost want to say these are too technical at some level, right? This is a bunch of scheduling and workload nerds who sit around in rooms and talk about, you know, queuing theory and resource management and hierarchies of control. But all of the things that Red Hat is involved in are just about taking actual, concrete problems people are facing and focusing on making those as reliable as possible in Kubernetes. So I think an interesting one that I like to call out here: the first two are really about making Kubernetes easy to extend, and this is critical. Right, like, Kubernetes is not supposed to be this static project that you just use and it solves all your problems.
It's supposed to be something that the community can build solutions around, so that people can solve problems in new, novel ways: ways that make applications easier to run, that make it easier to secure your cluster, that make it easier to integrate third-party cloud provider services. Service Catalog, which you saw in the earlier AMA, is taking advantage of the aggregated API service work just to make it easier to plug in, and that plug-and-play aspect is going to be a really fundamental part of Kubernetes going forward.
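For reference, the aggregation mechanism works by registering an APIService object that tells the core API server to delegate a whole API group to a service running inside the cluster. Here is a minimal sketch in Python, printing the manifest with PyYAML; the widgets.example.com group and its backing service are hypothetical placeholders, not Service Catalog's real names:

import yaml

# Registers a hypothetical "widgets.example.com" API group with the
# aggregator; requests under /apis/widgets.example.com/v1alpha1 are
# proxied to the named Service instead of being served by kube-apiserver.
api_service = {
    "apiVersion": "apiregistration.k8s.io/v1",
    "kind": "APIService",
    "metadata": {"name": "v1alpha1.widgets.example.com"},
    "spec": {
        "group": "widgets.example.com",
        "version": "v1alpha1",
        "service": {"name": "widget-server", "namespace": "widget-system"},
        "groupPriorityMinimum": 1000,
        "versionPriority": 15,
        # A real deployment would supply spec.caBundle instead.
        "insecureSkipTLSVerify": True,
    },
}
print(yaml.safe_dump(api_service, sort_keys=False))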
Auto-scaling, auto-idling, auto-sizing: this one comes up a lot. You know, you have these big clusters. Everybody starts small: you run a couple of things, you add a few more, and suddenly you have all these containers running. Our focus with auto-scaling, auto-idling, and auto-sizing is really about helping you understand what's actually happening, and then, at the platform level, automatically going in and scaling down things that aren't doing anything.
So if you've got a test app that is being used an hour a week, there's no reason it needs to be running. We're trying to build usage awareness into the platform to make it easier for your applications to automatically react, not just to your needs as a developer or as an operator, but also to the needs of the business, scaling things down to make room for other people.
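The piece of this that lives in Kubernetes itself is the Horizontal Pod Autoscaler. As a concrete sketch, here is an HPA that scales on a custom per-pod metric rather than raw CPU, written against the autoscaling/v2 API this work eventually matured into; the frontend Deployment and the http_requests_per_second metric are hypothetical:

import yaml

# Scales a hypothetical "frontend" Deployment between 1 and 10 replicas,
# targeting 100 requests/second per pod from a custom metrics source.
hpa = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "frontend"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1",
                           "kind": "Deployment", "name": "frontend"},
        "minReplicas": 1,
        "maxReplicas": 10,
        "metrics": [{
            "type": "Pods",
            "pods": {
                "metric": {"name": "http_requests_per_second"},
                "target": {"type": "AverageValue", "averageValue": "100"},
            },
        }],
    },
}
print(yaml.safe_dump(hpa, sort_keys=False))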
A: If you go to KubeCon, you see a lot of people that are doing some really amazing things. If you can think of a reason to create a controller, they've thought of it. There are controllers that will provision Kubernetes itself when it needs another cluster. So there's quite a bit of really awesome work going on here, and Red Hat's driving this aggregated API services work, because you want to hit one central API and then fan out into all these other sorts of APIs.
B: I think, like, we've started to see, you know, everybody has different ways of solving certain problems, but a lot of times it comes down to a pattern. You see a pattern as an operator or developer and you want to harness that pattern. So this actually comes up when people talk about it: CoreOS had mentioned operators as a concept, which is just a little bit of software that drives, you know, spinning up etcd or spinning up Elasticsearch, or anything that you can pre-can and have be managed for you; the idea of an intelligent agent, almost. And we see that pattern so much that we want to make it easy for people to be able to say: I know how to run Postgres, and because I know how to run Postgres, I may start with just setting up a template and creating it, but then I'll start to want to operate that and manage it. Some folks at Crunchy Data, one of the OpenShift partners, have actually built on some of this work and some other work in the community to go build one of these automated systems. They know how to run Postgres; we know how to put together these operators to make that happen automatically. We want to make it really easy for everybody else to be able to do the same.
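At its core the operator pattern is just a control loop: watch a custom resource, compare declared state to actual state, and act. A toy sketch with the Kubernetes Python client, assuming a hypothetical PgCluster custom resource has already been registered under the made-up group example.com; this shows the shape of the pattern, not any particular vendor's implementation:

from kubernetes import client, config, watch

config.load_kube_config()
api = client.CustomObjectsApi()

def reconcile(obj):
    # A real operator would create StatefulSets, Services, backups, etc.
    name = obj["metadata"]["name"]
    replicas = obj.get("spec", {}).get("replicas", 1)
    print(f"ensuring postgres cluster {name} has {replicas} replicas")

# Watch PgCluster objects forever and reconcile on every change event.
for event in watch.Watch().stream(
        api.list_namespaced_custom_object,
        group="example.com", version="v1alpha1",
        namespace="default", plural="pgclusters"):
    reconcile(event["object"])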
A: A lot of the auto-scaling, auto-idling, and auto-sizing work had a lot to do with figuring out how to clean up Heapster and have a data repository for these metrics. There was a lot of debate in the community on whether or not the kubelet should be asked to do all these extra monitoring tasks, or whether another child process should be spawned. That has all resolved itself, and it really opens up the floodgates in the 1.7 timeframe for us to really attack that solution.
A
I
know
a
lot
of
you
in
the
room
have
been
waiting
for
custom
metrics
for
auto
scaling
aside,
adjust
CPU
you've
been
waiting
for
network
connection
connectivity
for
auto,
auto
idling
and
not
a
sizing.
Now
that
we
have
all
that
data,
we
can
make
predictions
for
you
right
if
we
see
that
you're
launching
another
workload
and
you're
having
a
hard
time
figuring
out
what
quotas
you
want
to
put
on
that
bod,
we
can
make
some
suggestions
based
on
other
people
that
have
launched
that
workload.
So
that's
all
work.
A
That's
now
pouring
out
of
the
the
fact
that
we
solved
that
problem.
The
next
one
is
disruption:
budgets,
I
love.
This
word
disruption
budgets.
You
know
when
you're
running
the
cluster,
wouldn't
it
be
great
if
you
could
bake
in
some
intelligence
into
the
workloads
that
the
workloads
wouldn't
let
the
admin
hurt
them
in
a
bad
way,
but
the
admin
can
still
work
on
a
very
high
level
and
just
say
blindly.
Do
this.
Mr.
A
cluster
and
disruption
budgets
really
allow
us
to
bake
that
into
the
workload
so
I
can
I
can
say
hey
when
you
deploy
this.
This
EAP
cluster
make
sure
there's
all
these
two
instances
up
and
I
can
bake
that
into
that
knowledge.
So
now,
when
the
admin
tries
to
drain
out
the
cluster,
it
says:
hey
wait.
A
minute.
I
need
to
I
need
to
make
sure
I
of
two.
So
that's
a
disruption
budget
and
you
can
really
see
how
that's
going
to
help
us
balance
that
cluster
out
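Concretely, that knowledge is baked in as a PodDisruptionBudget object. A minimal sketch, using the policy/v1 API that this eventually graduated to; the EAP label and budget are hypothetical:

import yaml

# A disruption budget for a hypothetical two-instance EAP cluster:
# voluntary evictions (for example, a node drain) are refused whenever
# they would take the number of ready pods below 2.
pdb = {
    "apiVersion": "policy/v1",
    "kind": "PodDisruptionBudget",
    "metadata": {"name": "eap-pdb"},
    "spec": {
        "minAvailable": 2,
        "selector": {"matchLabels": {"app": "eap"}},
    },
}
print(yaml.safe_dump(pdb, sort_keys=False))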
Federation gets a lot of playtime.
B: So, when we think about it, Federation is still somewhat of a growing project, and the goal for Federation in Kubernetes is to allow you to run a load across a couple of clusters without having to think too much about it. You set the policy, and if a cluster goes down, the cluster federation can pick those workloads up. And Federation, in a sense, is just like one of those patterns that I talked about earlier, right? It's a way of saying:
instead of just dealing with one cluster, I'm going to deal with multiple clusters and do the same operations. I'm going to allow it to be balanced, so I might want to put more weight in the European region than in the US region. Above and beyond Federation, and this is where, you know, the Red Hat interest comes in, it's really not just about federating workloads, because, I mean, I would guess that everybody would say: well, you know, all workloads are the same.
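As a sketch of what that weighting looked like in the Federation v1 design: you submitted an ordinary ReplicaSet to the federation API server, annotated with per-cluster preferences. The annotation key below is from the v1 design as best I can reconstruct it, and the cluster names are hypothetical:

import json, yaml

# Weighted placement across federated clusters: the EU cluster gets
# twice the weight of the US cluster when replicas are distributed.
prefs = {
    "rebalance": True,
    "clusters": {
        "eu-cluster": {"weight": 2},
        "us-cluster": {"weight": 1},
    },
}
rs = {
    "apiVersion": "extensions/v1beta1",   # era-appropriate API group
    "kind": "ReplicaSet",
    "metadata": {
        "name": "frontend",
        "annotations": {
            "federation.kubernetes.io/replica-set-preferences":
                json.dumps(prefs),
        },
    },
    "spec": {
        "replicas": 9,
        "selector": {"matchLabels": {"app": "frontend"}},
        "template": {
            "metadata": {"labels": {"app": "frontend"}},
            "spec": {"containers": [{"name": "web", "image": "nginx"}]},
        },
    },
}
print(yaml.safe_dump(rs, sort_keys=False))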
Well, if all workloads are the same, why do we have dev, QA, and production clusters? Why do we have rules about who can access production? So some of what we'd like to do with Federation is not just about workloads, but about helping people who want to run multiple clusters, who have to run multiple clusters for security reasons, for process reasons, for isolation reasons. You know, the goal of running multiple clusters is not to make things harder to run; it's to be able to say: at worst, if you lost this entire cluster, does my business keep operating?
I think this is actually a really great time for this, because a lot of people out of the box, and this happens because of the way a lot of people use Kubernetes, tend to have one or two applications that are their whole cluster. They think: okay, well, Federation is going to be this pane of glass across all the clusters. And it can absolutely do that. But when you start talking about sovereignty rules, or internal policy, or different classes of employees being able to access different data,
one of the approaches that we'd like to take is to make Federation more of an on-demand thing that's about helping a specific set of workloads run really well. The problem with a single pane of glass is, if you don't design it quite right, then if you lose the Federation controller, every workload is now broken. So, I think this is going to evolve, but our goal would be:
you can run many federations, and you can keep them scoped, if you want, to specific teams, organizations, applications, workloads, different clusters, different regions, and make that easy to do. We'd like it to be as easy to spin up a new federation as it is to spin up a new app, and to give it only the permissions it needs. Because the other problem with a single federation layer is: if you compromise that federation, does that mean every single cluster has now been compromised? We don't want to create a new single point of security failure.
We want to make it easy for you to say: only the production team can access these applications. That means the production team should be able to access these clusters; if they want to do deep debugging, sometimes they can. And the Federation controller, a particular federation, might only be able to do what it's allowed to do in a very specific set of clusters. So the very easy answer is: we think Federation can be a single pane of glass,
if you want it to be. As you get bigger, we want it to be easy to transition into that next level of separation. So you might have 100 clusters; you might have 10,000 applications. What are the odds that all 10,000 of those applications should all be behind one point of failure? We'd like to make it easy to reason about that, separate your clusters, and handle it that way.
C: That's right. So that's something that's coming, but it's not here yet, and it's something that really requires Federation: the ability to stand up, say, when Singapore comes online, and bring up 10,000 VPN firewall connections for those employees at those locations, and then, as the clock moves, as the sunlight moves across the world, to stand up and bring down these different locations and auto-idle them. Federation is absolutely required to track all of those.
B: I'd like to make one other point as well, which is: if you're an organization that's running lots and lots of software, you have processes and tools in place to help manage that today. What we'd like to do, when we think about Federation, is recognize that a company has already solved many of these challenges. You know, we use LDAP centrally to manage users and identity. Some people use LDAP to manage policy centrally, so they'll create groups and bind groups, and put attributes on groups that say what you can do in certain spaces.
Some people use third-party tools to do that. We'd like to make it easier, you know, as a whole infrastructure, to reason about: if I have this set of policies that comes from a certain source, like LDAP or a central configuration management DB, how do I apply those safely to large numbers of clusters? So, instead of setting up each individual cluster as a snowflake, make it easier and easier, both in OpenShift and in the integration, to be able to easily say: I'm running a couple hundred clusters.
C: We have it on the roadmap; we actually have a card for that as a work in progress, the goal being one egress IP per project for us. As for how that's actually going to map out, we've got other things in flight, like the ability to have one egress router provide egress for more than one pod, more than one project. You know, we're still trying to figure out how exactly that's going to work, but that's absolutely on the near-term roadmap to provide that functionality; there are a number of large customers that require it.
B: How are you doing that today? How are you doing the DNS assignment to those IPs today? Do you want to do this on an IP basis to firewall things off? I think that's getting into some of the stuff that we'll talk about on the security side, but this might actually be something that we just want to have a deeper discussion on, because it always comes down to: the cluster makes it easy for you to control the network, and we want to add more capabilities to control it.
A: The other big area of investment is workload diversity, and we have a pretty good flavor of features for cloud-native. In the last release you saw Kubernetes deliver StatefulSets. This is really for ordinal services, for applications that have very, very particular things that have to take place in a very specific order, and you want to be able to orchestrate that, following that order. We now have StatefulSets, and you just heard earlier on the stage about brokers, right, for off-platform services.
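To make "ordinal services" concrete, here is a minimal StatefulSet sketch: replicas come up one at a time, in order (etcd-0, then etcd-1, ...), and keep stable network identities across restarts. It uses the apps/v1 API that StatefulSets later graduated to, and the image and service names are placeholders:

import yaml

# A three-member etcd-style StatefulSet; pods are created strictly in
# ordinal order and addressed via a headless Service for stable DNS.
sts = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "etcd"},
    "spec": {
        "serviceName": "etcd",   # headless Service providing stable DNS
        "replicas": 3,
        "selector": {"matchLabels": {"app": "etcd"}},
        "template": {
            "metadata": {"labels": {"app": "etcd"}},
            "spec": {"containers": [{"name": "etcd",
                                     "image": "quay.io/coreos/etcd:v3.1.0"}]},
        },
    },
}
print(yaml.safe_dump(sts, sort_keys=False))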
The only thing there that hasn't really been addressed is low-latency services, and now that we have full workload diversity, we can start investing in these. This week, actually, today, it kicked off at Google headquarters: we have IBM Watson, we have Nvidia, and 24 other members of the Resource SIG in Kubernetes at Google headquarters, really pounding out this new design.
There's a concept here where we think we're going to create a child isolator process. This isolator process will go interrogate the hardware, find out what that hardware is capable of, and pass that back to the API services and into a scheduler. Now, whether that is an extension of the existing scheduler, or another scheduler that will tackle these very specific problems, that's up for debate, and that's what the great minds are working on. The next one: do you want to talk about this one?
B: So this is actually an interesting one, which is: we tend to see that a lot of people are focused on the special cases, because they have those special cases and they're doing lift-and-shift, or they're trying to adapt to get the other benefits of containers. But one of the things that's kind of been a stumbling block on the design side is, we said: well, if we exposed every single knob that exists in the kernel for every use case possible, would the end system actually be more usable?
Some of the process has actually been trying to work together on the app side. Maybe I've got a web application: if possible, I want it to get an exclusive core automatically. And then we also want that to work well with the very custom tuning, so that all the custom tuning is still possible. Balancing those has actually been some of the delay, as we'd love for everyone to upgrade.
Once these designs start to play in, we want everybody's clusters to instantly get to the point where, if you asked for a single core, you actually get an exclusive core, and to have the kubelet do that automatically, but then still preserve all the flexibility for when you want to take control and fine-tune. Because, in reality, you know, fine-tuning is always going to help you get that last mile of performance, but the broad stuff will benefit every application.
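This idea did eventually surface upstream as the kubelet's CPU Manager. A sketch of what it looks like from the application side, assuming a node running the static CPU manager policy: a pod in the Guaranteed QoS class that requests a whole number of CPUs gets pinned to dedicated cores, with no change beyond ordinary resource requests:

import yaml

# requests == limits, with whole CPUs, puts the pod in the Guaranteed
# QoS class; under the static CPU manager policy it gets exclusive cores.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "latency-sensitive-web"},
    "spec": {
        "containers": [{
            "name": "web",
            "image": "nginx",   # placeholder image
            "resources": {
                "requests": {"cpu": "1", "memory": "512Mi"},
                "limits": {"cpu": "1", "memory": "512Mi"},
            },
        }],
    },
}
print(yaml.safe_dump(pod, sort_keys=False))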
C: How many people are not using the default network that comes out of the box? I saw a couple of hands. If you could, maybe yell out what you're using in its place. What was that? What was that in the back? The multi-tenant plugin? And a hand in front: NSX. Anybody using Calico? Anybody using Contiv? Anybody using Contrail? Anybody using OVN? Okay, that's interesting.
So we have a number of substitute SDNs, and if you've found advantage in one of these other SDNs, you can swap out ours for one of a number of others that we, along with the partner, will support as a solution. Calico is the latest one that just got added to our list; it's in sort of a quote-unquote tech preview, but we support that as well. Now, there are a number of reasons why you might choose one SDN over another.
But the bulk of the time it's because it has a feature you couldn't, or didn't, otherwise find available in our default SDN. So, specific to that last bullet there, OVN: we have a number of features on our roadmap that are going to take a significant amount of time to develop, but we happen to have a lot of people that are tied in and working closely with the OVN project.
OVN is another OVS-based SDN. So what we are investigating right now, and a decision will be made soon, is: what do we gain if we rip out a significant portion of our SDN and replace it with OVN? There are a number of advantages right off the bat. Here are a number of features that would otherwise have taken us a really long time to develop: for example, end-to-end IPv6.
And there's another: we'd have full-blown multicast capabilities. There are a lot of things that are built into it by default. It has greater potential for removing double-overlay performance issues when it's laid on top of OpenStack. So there are a number of advantages that we would glean from something like OVN. If you look at the number of releases it's going to take us to adopt OVN, versus what it would have taken us, over a longer period of time, to build those features into our existing product, there's probably a net advantage.
And there are other advantages beyond the feature matrix as well: our RHV product and our OpenStack products are also moving towards an OVN-based SDN, so there's going to be some cross-platform sharing of development and resources. And there are, as we like to say, a lot of other advantages that are afforded by doing that. So we'll go in that direction. Cool.
A: All right, so that's a lot of involvement in workload diversity. The next one is persistence. We've been landing a lot of storage features into the product, right, and a lot of people want us to really push it. We have finally gotten a proposal out in the Kubernetes upstream for snapshotting. How many people want their tenants to be able to snapshot their own PVs? Yeah.
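For reference, here's the shape that snapshot proposal eventually took upstream: a VolumeSnapshot object that a tenant can create against their own claim. This is the later snapshot.storage.k8s.io/v1 API, not the proposal as it stood at the time, and the names are placeholders:

import yaml

# A tenant-initiated snapshot of their own PVC ("db-data"), taken
# through a CSI snapshot class configured by the admin.
snap = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "db-snap-1"},
    "spec": {
        "volumeSnapshotClassName": "csi-snapclass",
        "source": {"persistentVolumeClaimName": "db-data"},
    },
}
print(yaml.safe_dump(snap, sort_keys=False))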
B: Today, when you create a StatefulSet, you can specify volumes. So, you know, it's about tying data to applications, so you have a consistent set of data paired with a bit of compute process that thinks it's the same entity that had that data before. We're working to make it easier to use StatefulSets spread across multiple zones or regions. Within a data center, you have a cluster and you want to have multiple failure domains; we want to make that much, much easier. And, to be totally honest, as StatefulSets have evolved, we knew we would have to do this. It was not one of the initial features, and of course that set things up so that you can occasionally get into scenarios where you create a PV and you don't have the perfect balancing. The work that will go on in the near term is to just make this work the way that you would expect: so if you create a very large StatefulSet, you get volumes distributed very accurately across all the regions and zones that you've internally configured.
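The mechanism that ties data to identity is the StatefulSet's volumeClaimTemplates: each replica gets its own claim (data-db-0, data-db-1, ...) that follows it across restarts, and spreading those volumes across zones is exactly the balancing work described above. A minimal sketch; the Postgres image and sizes are placeholders:

import yaml

# Each replica of "db" gets a dedicated 10Gi PVC stamped out from the
# template below; pod db-1 always reattaches to claim data-db-1.
sts = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "db"},
    "spec": {
        "serviceName": "db",
        "replicas": 3,
        "selector": {"matchLabels": {"app": "db"}},
        "template": {
            "metadata": {"labels": {"app": "db"}},
            "spec": {"containers": [{
                "name": "postgres",
                "image": "postgres:9.6",
                "volumeMounts": [{"name": "data",
                                  "mountPath": "/var/lib/postgresql/data"}],
            }]},
        },
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "10Gi"}},
            },
        }],
    },
}
print(yaml.safe_dump(sts, sort_keys=False))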
H: Sorry, just to go back to workload diversity. Amazon has GPU capability now, where you can drop in GPUs as easily as you add and take away Ethernet interfaces. Is there any discussion, in terms of being able to recognize that a workload is maybe going to increase, of dynamically adding the GPUs to the node and then allowing your pods or any microservices to take advantage of that on the fly, without us having to go in and make them aware?
B: I think the hope would be that we do this through cluster auto-scaling automatically, but I think that's a good question. We haven't really talked about changing nodes on demand; we've talked much more about getting to the kind of broader cluster auto-scaling, so that you define a shape of nodes, and when a new workload comes on that needs eight GPUs, we just spin up a new instance and make that easy. But I think we should; I can try to catch up with you.
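From the pod's point of view, GPUs are just another schedulable resource. A sketch using the nvidia.com/gpu resource name that the later device-plugin mechanism settled on (in this era it was still the alpha alpha.kubernetes.io/nvidia-gpu resource); the image is a placeholder:

import yaml

# Requests one GPU; the scheduler will only place this pod on a node
# that advertises an unallocated nvidia.com/gpu resource.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "cuda-job"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "cuda",
            "image": "nvidia/cuda",   # placeholder image
            "command": ["nvidia-smi"],
            "resources": {"limits": {"nvidia.com/gpu": 1}},
        }],
    },
}
print(yaml.safe_dump(pod, sort_keys=False))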
B: That's not on this deck, but it should be; I forgot to put it in there, I just thought about that as well. So, yes, there's work going on right now to get the proposal in Kubernetes fleshed out, so that machines can state how much storage they want to make available for local disk, and then the scheduler and the quota system will take that into account and allow you to use PVs that are local disk. So apps don't have to change whether they're using something like EBS or local disk.
You can define low-latency storage as a storage class and say: I want to have all of my machines that have SSDs offer up 200 gigabytes. If someone comes along with a StatefulSet workload that wants 50 gigabytes per machine, the scheduler will take that into account and say: that node still has 50 gigs free, go ahead, create a PV, bind that PV to that node. Which then means you're tied to that node.
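Here's the shape local persistent storage eventually took upstream: a StorageClass with no dynamic provisioner and "wait for first consumer" binding, so the scheduler gets to pick the node, plus PersistentVolumes pinned to a node's disk. The node name, path, and sizes are placeholders:

import yaml

# StorageClass: binding is deferred until a pod is scheduled, so volume
# placement and pod placement are decided together.
sc = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "local-ssd"},
    "provisioner": "kubernetes.io/no-provisioner",
    "volumeBindingMode": "WaitForFirstConsumer",
}
# A PV representing one SSD on one node; nodeAffinity pins it there.
pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "node1-ssd-0"},
    "spec": {
        "capacity": {"storage": "200Gi"},
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "local-ssd",
        "local": {"path": "/mnt/ssd0"},
        "nodeAffinity": {"required": {"nodeSelectorTerms": [{
            "matchExpressions": [{
                "key": "kubernetes.io/hostname",
                "operator": "In",
                "values": ["node1"],
            }],
        }]}},
    },
}
print(yaml.safe_dump_all([sc, pv], sort_keys=False))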
That builds on the other work that's been done on dynamic provisioning and scheduling and region awareness and all that. So I'm hopeful that this is going to be the thing that really lets people go after these low-latency use cases, and it makes things easier for developers, honestly, because this is the most frustrating thing as a local developer: I don't have access to EBS if I'm running a cluster on my local machine.
A: The last one is security, right? It's inherent in all the features that we do, but there are particular features we have to deliver for security, and this is where a lot of work is taking place in 3.6, 3.7, and 3.8. The first one: how many people have heard of user namespaces and want them? Yes. So this is for the awful "I require you to run as root" problem, where therefore 90% of your Docker images cannot work, right? So it'd be great if they could start as a root user and map back to a different GID or UID in the kernel itself. We're finally starting to see this in practice; it's taken quite some time. We think in 3.6 we'll have a tech preview of a more controlled environment: maybe you label some nodes and you send these user-namespace workloads to those particular nodes, to segment them off from the cluster. Did you want to say anything about user namespaces?
B: I think we'd still recommend, even in the next year or two, if you're very security-conscious, that there are still a lot of benefits to actually going through and just not running as root, and there will always be some trade-offs here. But the goal over the next couple of releases is to get all the support into the container runtimes to do mapping of arbitrary user ranges back down to that underlying user, so that'll take a while.
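Until then, the "just don't run as root" posture is something you can state directly in a pod spec; the kubelet refuses to start a container whose image would run as UID 0. A minimal sketch; the image and UID are placeholders (the high UID is in the style of OpenShift's assigned ranges):

import yaml

# runAsNonRoot makes UID-0 containers fail validation at start time;
# runAsUser forces a specific non-root UID regardless of the image.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "unprivileged-app"},
    "spec": {
        "securityContext": {"runAsNonRoot": True, "runAsUser": 1000100000},
        "containers": [{"name": "app",
                        "image": "registry.example.com/myapp:latest"}],
    },
}
print(yaml.safe_dump(pod, sort_keys=False))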
A: Secrets. A lot of questions on secrets. I know of three main projects for secrets, and they've kind of landed in this order. The first one is to encrypt at-rest secrets in etcd. We will still encourage people to use filesystem encryption, but etcd itself will be fully capable of encrypting those at-rest secrets. etcd is already isolated off; it's not associated with the network traffic that the rest of the workloads are, so it is a safe area.
area
after
we
land
that,
and
that
would
probably
be
three
six
one
time
frame,
we'll
just
miss
three
six
we
also
have
stabilizing
api's
for
vault
integrations,
so
there's
a
number
of
volts
on
the
market
in
this
area.
What
we
need
on
the
kubernetes
side,
our
default
API,
is
that
allow
us
to
innovate
with
any
of
the
vaulting
solutions
and
that's
a
project
that
we're
champion
in
the
upstream
how's
that
going
so.
B: On the vault integration, there are probably three or four different approaches. There's the kind you can build today, where there might be some trade-offs you'd make with security: you can run something alongside your containers on a node, and pods can start up and be enabled to go get that. We'd like to make that easier out of the box. Then there's maybe a second level past that, which is container identity: making it easier to give containers, an individual container, an identity that can be asserted to the rest of the cluster.
A: The last part of secrets that's really evolving is, you know, after now two years' worth of secrets out there, we've come to realize that all secrets are not the same, right? They're very specific: some deal with infrastructure, some deal with applications. Some are tenant-based, some are admin-based. They really need a typing interface, right? They need categories, and then I can start establishing different things.
B: I think some of this is a terminology problem on our side, which is: when we created this, we set up the cluster-admin role, and we didn't make it clear that cluster-admin is literally root on your entire cluster. So we have a couple of other mechanisms. We'd like to actually get to the point where cluster-admin is like logging into the machines on the master as root, and so you should never use cluster-admin unless you also have a security model
that wants you to log in and act as root on the cluster masters. But we would like to do two parts of that: make it so that anyone who's an administrator on the cluster would have to have an elevated set of privileges to see all secrets straight away. That's what that secret categorization, that API subdivision, is really about.
Yet the person who can log in to the master, or to etcd, and has write access to it, can still blow up the cluster. I think the vault integration is where we would actually take the next step. There are really two attacks on this: the first is to prevent people from accidentally treating cluster-admin as if it's not the most powerful force in the entire known universe. Right, cluster-admin is the Death Star.
The vault doesn't necessarily trust Kubernetes; it doesn't necessarily trust OpenShift or the machine. Instead, you can work out a process where, as an admin, you grant, for a limited time, this particular workload access to these secrets. And that's really, I think, the ultimate defense to that challenge: the secrets just shouldn't be under the control of the platform, and we think the vault integration will do that well. But to the first question: yes, we would like to subdivide down on the cluster, so that cluster-admin is not how people administrate.
A: Any questions on secrets? Right. So, scanning came to the product probably in the 3.3 timeframe, with CloudForms scanning. The next step is to scan the internal registry by default; you'll see us doing that around the 3.7 timeframe. Signing also comes to the product in 3.6, which is probably a July-August timeframe; it leverages the GPG signing that you find in RHEL. The next step, now that we have these core features in the product, is to start introducing policies around them.
C: ...is it just on port TCP 80 or TCP 443? So again, it's a tech preview feature. I strongly recommend you check it out if you're already using the multi-tenant plugin, because there's a good chance that, once we iron out some of the kinks, it might end up being the default instead of multi-tenant, just because it can do multi-tenant, only better. Yes.
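The feature under discussion here is Kubernetes NetworkPolicy, which OpenShift shipped as a tech preview SDN plugin at the time. A minimal sketch in the networking.k8s.io/v1 form it later stabilized on, allowing ingress to the selected pods only on TCP 80 and 443; the web label is a placeholder:

import yaml

# Pods labeled app=web accept traffic only on TCP 80/443, and only from
# other pods in the same namespace (the empty podSelector in "from").
policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-web-only"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "web"}},
        "ingress": [{
            "from": [{"podSelector": {}}],
            "ports": [
                {"protocol": "TCP", "port": 80},
                {"protocol": "TCP", "port": 443},
            ],
        }],
    },
}
print(yaml.safe_dump(policy, sort_keys=False))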
B: Yeah, we've been working on a number of ways of making it easy for people to use existing geo-replication solutions, like object storage. So if you have a global object store, it should be easy for you to geo-replicate images and have multiple clusters use that. There are a number of exploration areas for this. Excuse me. We also want to make it easier to get images from region to region, where there are different clusters. I think, just come find us; we can talk more about that afterwards.