Description
Stephen Augustus is the Product Management Chair for the Kubernetes project. Additionally, he leads the Special Interest Group for Azure and is currently the Features Lead for the Kubernetes 1.12 Release Team.
Details on the Kubernetes Release Schedule can be found here: https://github.com/kubernetes/sig-release/blob/master/releases/release-1.12/README.md
A
Hello, everybody. There's a lot of you joining, so we're taking a couple of minutes here. This is an OpenShift Commons briefing; the topic this time is the Kubernetes 1.12 (or "one-twelve") update, and we're very happy to have Stephen Augustus, a Red Hatter who is also the Kubernetes product management chair, so he's well-versed in everything.
B
I am a Specialist Solutions Architect on the OpenShift Tiger team, which means I sell OpenShift: deep dives and architectural discussions around the OpenShift platform, as well as Kubernetes internals. From the Kubernetes side, I'm the product management chair, and I have also participated on the release team a few times, for Kubernetes 1.11 and 1.12.
B
Lastly, I am a SIG Azure chair, so I'm heavily invested in the Azure space: the integrations between Kubernetes and the different Azure API endpoints that we're working on. So, features. First and foremost, TLS, or Transport Layer Security. Security we consider to be one of the cornerstones of Kubernetes, so it's very important that the story around being able to provide security for nodes and the associated control plane components is well done, so that we have this differentiation between day-zero/day-one operations and day-two operations.
B
So one consideration for day zero and day one is: okay, well, I've brought up a bunch of Kubernetes nodes; I need to handle certificate management in some way. I need to be able to provide the control plane (the API server, the kubelet, the controller manager, the scheduler, and all the different integration points between those things) a means of providing an identity to each of those components. The means for that was TLS bootstrapping.
B
So essentially, in a Kubernetes cluster you can instantiate a cluster CA, or certificate authority, and that certificate authority can delegate certificates out to the associated components of the cluster. Kubelet TLS bootstrapping was introduced in Kubernetes 1.4, so we've had a nice long journey to get to this point, but I'm happy to say that TLS bootstrapping is now GA as of 1.12.
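As a rough illustration of that bootstrapping flow (this sketch is not from the talk): the kubelet is pointed at a bootstrap kubeconfig via its --bootstrap-kubeconfig flag, uses the short-lived token in it to submit a CSR, and receives its real client certificate from the cluster CA. The server address and token below are placeholders.

```yaml
# Minimal sketch of a kubelet bootstrap kubeconfig (passed via --bootstrap-kubeconfig).
# The token authenticates the kubelet just long enough to submit a CSR;
# the API server endpoint and the token are placeholder values.
apiVersion: v1
kind: Config
clusters:
- name: bootstrap
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://api.example.com:6443
users:
- name: kubelet-bootstrap
  user:
    token: 07401b.f395accd246ae52d
contexts:
- name: bootstrap
  context:
    cluster: bootstrap
    user: kubelet-bootstrap
current-context: bootstrap
```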
B
For the day-two operations: often I have users and customers who run Kubernetes clusters, and, as you may know, certificates have a specific time frame for validity. So what happens when certificates are no longer valid? You need some means of being able to rotate those certificates. In addition, you have to be able to react to security events; the way you would react to a security event, a compromise of a certificate, would be to rotate it. So that functionality is the day-two operation from the TLS side.
B
Server certificate rotation for kubelets has moved to beta in 1.12. What that means is we have the means to generate CSRs, or certificate signing requests, issue them to a cluster CA, and have those requests approved to issue new certificates for the associated components. That kind of delegates away the responsibility of having a cluster administrator handle this.
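To make the flow concrete (an illustrative sketch, not shown in the talk): with rotation enabled, the kubelet posts a CertificateSigningRequest object like the one below, and an approver (a human or a controller) signs off on it. The name and the base64 request body are placeholders.

```yaml
# Sketch of the CSR objects the kubelet creates when serving-certificate
# rotation is on (e.g. via the RotateKubeletServerCertificate feature gate).
# Approve manually with:
#   kubectl get csr
#   kubectl certificate approve <csr-name>
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: node-csr-example              # placeholder name
spec:
  request: <base64-encoded PKCS#10 CSR>   # placeholder body
  usages:
  - digital signature
  - key encipherment
  - server auth
```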
B
So that's really exciting, to see that kind of evolution in the process: one, being able to spin up a cluster CA and delegate certificates, and then also have that cluster CA have the ability to listen for and approve certificate signing requests. So that's TLS for 1.12. Moving on to Azure. I put Azure up second because it's near and dear to my heart: Azure VMSS, or Virtual Machine Scale Sets, moved to GA.
B
So if anyone is familiar with a common cloud provider, say AWS, and has spun up a cluster or a set of nodes before and used something like Auto Scaling groups: Azure VMSS is analogous to AWS Auto Scaling groups. So we see that functionality move into GA for 1.12. With that move: part of one of the holdbacks for introducing cluster autoscaler functionality to Kubernetes for Azure was around the support for VMSS. So now VMSS is officially supported and it's in the GA phase.
B
We also see the integration between VMSS and the cluster autoscaler move into beta. For people who are not familiar with the cluster autoscaler: essentially, it's a component that's part of the cluster and that can react to requests for new nodes, based on the cloud provider that it's integrated with, and dynamically spin up those nodes.
B
So in the case of AWS, with the cloud provider integrations added, it can react to requests in-cluster and spin up new nodes within that Auto Scaling group, for workers or for masters. So now that same functionality is available on the Azure side.
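As a hedged sketch of what wiring the autoscaler to a scale set looks like (the image tag, scale-set name, and bounds below are assumptions, not from the talk), the upstream cluster autoscaler is typically deployed in-cluster with one --nodes=min:max:name flag per node group:

```yaml
# Illustrative fragment of a cluster-autoscaler Deployment for Azure VMSS.
# Image version, scale-set name, and min/max bounds are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels: {app: cluster-autoscaler}
  template:
    metadata:
      labels: {app: cluster-autoscaler}
    spec:
      containers:
      - name: cluster-autoscaler
        image: k8s.gcr.io/cluster-autoscaler:v1.12.0   # placeholder tag
        command:
        - ./cluster-autoscaler
        - --cloud-provider=azure
        - --nodes=1:10:my-vmss-nodepool   # min:max:scale-set name (placeholder)
```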
B
Finally, and not on this slide, we're seeing improvements to Azure availability sets... availability zones, excuse me. So again, availability zones are analogous to availability zones within AWS. This is basically the ability to break out your compute nodes and your storage nodes and different things like that into separate fault and failure domains.
B
What that means is, you know, if an Azure region goes offline for whatever reason: a region is now broken into a set of availability zones, and availability zones are classically a set of data centers. So now a region, like us-east-1 or us-west-2 and things like that in AWS (the similar concept is there), has multiple data centers built within the region.
B
1.12 scheduling. So, just as a heads up, each of these headings that I'm going through is aligned to a SIG. SIGs are Kubernetes Special Interest Groups; they're essentially a governance model that allows for delegation of activities that are specific to some subset of Kubernetes work. So, you know, there's Scheduling.
B
There are ones for cloud providers like AWS; there are ones for security, which is Auth; there are ones for handling Node, so things that happen at the kubelet level. Each of these Special Interest Groups may have subprojects that get even more granular into their specific focus. Additionally, there are Working Groups. Working Groups are cross-SIG efforts, and they're meant to be ephemeral. So things like multi-tenancy: there's a wonderful multi-tenancy Working Group, and that's work across Node and Scheduling and different things like that.
B
So, secondarily, there's the idea of preemption. It's not necessarily always enough to define priority for a pod or a set of pods; it's also important that those highly critical jobs that you're running (say a payroll-processing, imaginary service) are able to run regardless of the priorities that are happening on the cluster or on that node.
B
So the idea of preemption allows you to essentially preempt non-essential workloads, kick those workloads, so that your crucial jobs can run. So the quota-by-priority functionality essentially allows you to define a set of quotas; Kubernetes quotas are the ability to set a scope of resources, across say CPU or memory, at the namespace level. So now you can integrate quotas with the idea of priority.
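A minimal sketch of what quota-by-priority looks like (illustrative; the names, namespace, and numbers are placeholders): a PriorityClass for the critical jobs, plus a ResourceQuota whose scopeSelector only counts pods of that priority:

```yaml
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: payroll-critical          # placeholder name
value: 1000000                    # higher value = higher scheduling priority
description: Critical batch jobs such as payroll processing.
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: critical-quota
  namespace: finance              # placeholder namespace
spec:
  hard:
    pods: "10"
    cpu: "20"
    memory: 40Gi
  scopeSelector:                  # only pods with this PriorityClass count here
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["payroll-critical"]
```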
B
That feature moves to beta in 1.12. We also have the idea of tainting nodes by priority. So, if you're aware of Kubernetes taints and tolerations: taints are something you can set; you can essentially configure a set of labels that define a resource in a certain way. So in the instance of a node, say like the master nodes...
B
So in certain configurations and distributions, we have the idea of configuring that node with a specific role, and using that role to ensure that certain workloads don't get scheduled on those nodes. So, your masters: you don't want to actually schedule application workloads on your masters, because you want to ensure that the control plane can survive and be able to schedule other work.
B
So we do ensure that we set certain labels to ensure that workloads won't be scheduled on things like masters, or, in the case of OpenShift, things like infra nodes, great. So then there's this idea of toleration: if you configure a deployment in a certain way, you can also configure tolerations, which means if a node matches this taint, I'm going to allow this workload to run. So it tolerates the taint. The taint basically says "don't run this stuff", but the toleration, if configured, will say it's okay to run this stuff here, because you've specifically configured it that way. So now we're bringing in this idea of basically layering on top of that API and allowing you to configure priorities for certain things, so I can configure the prior... oh yes, we can, all right. So that functionality moves into beta.
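For readers who want the shape of this in YAML (an illustrative sketch; the node name and image are placeholders), a master taint and a pod that tolerates it look like:

```yaml
# Taint a master so ordinary workloads are repelled:
#   kubectl taint nodes master-0 node-role.kubernetes.io/master=:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: infra-agent               # placeholder
spec:
  tolerations:                    # allows this pod onto the tainted masters
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  containers:
  - name: agent
    image: registry.example.com/agent:latest   # placeholder image
```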
B
From the API Machinery SIG: API Machinery is basically involved in lots of the little underlying bits that users and cluster operators may not concern themselves with day to day, but these are some of the most important integration points for the Kubernetes ecosystem. So, if you're aware, a few people have talked about Kubernetes being kind of like this kernel...
B
...certain quotas for, say, high-cost resources here. But the idea is: when you initially instantiate some resource on Kubernetes, you often have to come back and think about, okay, well, what are the resources that this pod or deployment will actually need? How much memory?
B
How much CPU should it be scoped to? Namespaces for certain teams, and different things like that. So by default, currently (in previous versions, at least), when you run a pod the scope is unbounded, so you can kind of do anything you want if you haven't set those limits. So now we're starting to consider things like that.
B
So you can start to ensure that pods are scoped to a certain amount of resources, as opposed to every pod running unbounded, and you get a system that is going to be more considerate of the constraints of the actual underlying compute nodes.
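One long-standing way to avoid that unbounded default, as an illustrative aside (this mechanism predates 1.12; namespace and numbers are placeholders), is a namespace LimitRange that injects default requests and limits into containers that don't set their own:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: team-a               # placeholder namespace
spec:
  limits:
  - type: Container
    defaultRequest:               # applied when a container sets no request
      cpu: 100m
      memory: 128Mi
    default:                      # applied when a container sets no limit
      cpu: 500m
      memory: 512Mi
```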
The API server dry-run feature, though, is really interesting; it's really exciting. If you've ever used some command-line utility, you may be familiar with the idea of a dry run: dry-run=true, or apply=false, or something like that.
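Mechanically (a sketch of the API behavior, not a command from the talk), server-side dry run is an ordinary write request with the dryRun query parameter set, so validation and admission run but nothing is persisted:

```yaml
# Server-side dry run: submit any manifest with the dryRun=All query parameter,
# e.g.  POST /apis/apps/v1/namespaces/default/deployments?dryRun=All
# (newer kubectl versions expose this as a flag). The Deployment below is a
# placeholder workload used only to illustrate the request body.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dry-run-demo
spec:
  replicas: 1
  selector:
    matchLabels: {app: dry-run-demo}
  template:
    metadata:
      labels: {app: dry-run-demo}
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
```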
B
Okay, the CLI. So SIG CLI is responsible for basically everything in the ecosystem around the tool you may be very familiar with, kubectl. So server-side printing is the idea that, when I request something from kubectl, and generally from the API server, I want to make sure that the information is being returned from the server itself and not something that's assembled on the client.
B
So the ability to make sure that everything is essentially, quote-unquote, pretty-printed and coming from the API server, or coming from whatever integration point in front of the API server, ensures that when we're building applications or additional plugins and things like that for kubectl, we can ensure that there is a consistent API that it's leveraging; you can trust the output that's coming from that command. So that functionality moves to GA in 1.12.
B
So we want to try to make that experience more consistent, and between these two features, kubectl plugins and separating the repos for CLI utilities, both moving to beta, we'll start to see more of that stability across kubectl. So essentially, think about what's been happening across the last several release cycles around removing code from in-tree Kubernetes, from the kubernetes/kubernetes repo, and moving it around so that, one, we can iterate over the code a lot faster.
B
The development cycle is faster; the feedback loop is faster, as opposed to working with this monolithic code base. So the same thing is happening on the kubectl side, where we're starting to pull out the different utilities used to generate CLI tools, for kubectl and at an ecosystem level.
B
You know, from the Red Hat side, we leverage the oc tool for OpenShift; the oc tool essentially mocks up a lot of similar API interactions that kubectl does as well. So the idea here is to allow anyone to write a tool, if they need to, that can leverage the same APIs and do it in a consistent manner.
B
Network policy is essentially parts of an SDN firewall for the components of a Kubernetes cluster, the same way you would think of a firewall and routers. If you were building a traditional network, you would be setting routes; you would be specifying your five-tuple rules.
B
These means of isolation around multiple teams (whether it's teams, whether it's customers, however you decide to build your cluster and whatever use case that provides): we're starting to see this ability to create a firewall, essentially a firewall within Kubernetes, and prevent the ingress and egress of traffic to specific resources. So the egress functionality moves to GA, as well as IP block, in this release.
B
What that means is: egress is the ability to have traffic leave the pod, and leave the pod for a different destination. Before, when network policy was introduced, that was not possible, or at least not configurable via network policy. And additionally, the IP block functionality: instead of having to hone in on a specific IP for these rules, you can now specify a CIDR block, as you would.
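As an illustrative manifest (names and CIDRs are placeholders), an egress rule with an ipBlock looks like this: traffic from the selected pods is only allowed out to the given CIDR, minus the exception range:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-db-cidr   # placeholder
  namespace: team-a               # placeholder
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.20.0.0/16        # placeholder CIDR
        except:
        - 10.20.5.0/24
    ports:
    - protocol: TCP
      port: 5432
```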
B
All right, so here is the Node group; it's kind of very central to Kubernetes overall. SIG Node is responsible for a lot of the things that happen around the kubelet, and how the kubelet interacts with the API server and the associated components. So RuntimeClass is introduced as alpha in 1.12. RuntimeClass begins to introduce this idea of having cluster scoping for container runtime properties. I know that's maybe not super clear.
B
What that means is: I can now bubble up information about the container runtime that I'm leveraging, so whether it be Docker or CRI-O or some CRI-compliant spec, you're now able to leverage the properties of that within the Kubernetes layer. So what that means is I can start to... you know, for a simple use case, if I was to build an application that just simply gave me information about the node that it was running on...
B
Process... the pod process namespace (another tongue twister) is moving to beta in 1.12. Pod process-namespace sharing essentially, as the name implies, allows the containers within a pod to share the process (PID) namespace, so they can see information about each other's processes.
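Concretely (illustrative images): setting shareProcessNamespace on the pod spec puts all of the pod's containers in one PID namespace, so a sidecar can see and signal the main container's processes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo
spec:
  shareProcessNamespace: true     # beta in 1.12
  containers:
  - name: app
    image: nginx                  # placeholder workload
  - name: debugger
    image: busybox                # can list the nginx processes with ps
    command: ["sleep", "3600"]
```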
B
On the kubelet device-plugin registration side: this is interesting because, as I've been mentioning, the goal is to have less Kubernetes in-tree code and more of this model where Kubernetes is a kernel and has surrounding components that work well and have known APIs. So we have things like CRI, which is the Container Runtime Interface.
B
Then CNI, which is the Container Network Interface, and CSI, which is the Container Storage Interface. When using these tools, you often don't have knowledge of exactly what spec you're using for that tool without doing some homework for it. So with the Kubernetes device-plugin registration feature, we're starting to build this API where a device plugin, say a CSI driver...
B
...using something like OCS, OpenShift Container Storage, which is CSI-compliant: you're basically registering OCS, or some CSI-compliant framework, against the kubelet, so the kubelet has knowledge of what framework it's going to run. So this is also important from the network side.
B
Storage: so, as I was mentioning, it's kind of a tie-in from the last slide mentioning CSI. One of the things to really care about, and I think one of the most important things that we're starting to take into heavy consideration, is the idea of topology-aware dynamic provisioning.
B
So now that I've built a cluster across multiple availability zones: if I decide to configure storage for my cluster, I need to ensure that the storage I configure is close to the compute that I'm going to assign it to. So if I have, you know, storage that I've stood up in us-east-1a, and my compute node that I want to assign it to is in us-east-1e...
B
...that's storage that's attempting to make a link with compute that is across a data center. So we're talking about high latency; we're talking about guarantee concerns, reliability concerns for your application. All right, so we're starting to have this idea of topology-aware provisioning, where my cluster knows where my compute lives.
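A sketch of what that looks like (the provisioner and zones are placeholders): WaitForFirstConsumer delays volume binding until a pod is scheduled, so the volume is created in the same zone as the compute:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware
provisioner: kubernetes.io/aws-ebs        # placeholder provisioner
volumeBindingMode: WaitForFirstConsumer   # bind after the pod is scheduled
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone   # zone label of that era
    values: ["us-east-1a", "us-east-1b"]          # placeholder zones
```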
B
So CRDs: if you've been around the Kubernetes ecosystem for a while, it used to be TPRs, or third-party resources. CRDs were introduced in, I think, 1.7 or 1.8, and they're basically the ability to define a set of instructions for your Kubernetes cluster to act on. This is essentially about having a framework around being able to version these things and to get status on them. So this is where Operators come in.
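Sticking with the Elasticsearch example he uses next (an illustrative CRD; the group and names are placeholders), a definition like this is what makes "kubectl get elasticsearch" possible:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # CRD API group of that era
kind: CustomResourceDefinition
metadata:
  name: elasticsearches.example.com
spec:
  group: example.com              # placeholder group
  version: v1alpha1
  scope: Namespaced
  names:
    plural: elasticsearches
    singular: elasticsearch
    kind: Elasticsearch
  subresources:
    status: {}                    # lets controllers report status on each object
```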
B
You can encode that human operator knowledge into Kubernetes constructs, and that idea leverages CRDs to make that happen. So the idea is that I can then do a "kubectl get" for Elasticsearch nodes, or something like that, and be able to view those Elasticsearch nodes, depending on your implementation. The reason that this is important is that we've now moved past this idea of "Kubernetes is really good at stateless workloads". Okay, Kubernetes...
B
For the enterprise, we have to make sure that Kubernetes also works for stateful applications, and one of the parts of that story is being able to back up and restore the volumes that are attached to stateful applications. I don't want to destroy a pod that is, you know, leveraging some database and then have the database go away; or, in the event that that happens, I need to know that I have some assurance that I can restore that database back to a known state.
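The alpha API for that story in 1.12 looked roughly like this (a sketch; the class and PVC names are placeholders, and the alpha schema later changed):

```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1   # alpha snapshot CRD in the 1.12 era
kind: VolumeSnapshot
metadata:
  name: db-snapshot
spec:
  snapshotClassName: csi-snapclass   # placeholder snapshot class
  source:
    kind: PersistentVolumeClaim
    name: db-pvc                     # placeholder PVC to snapshot
```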
B
We'll see some improvements there across the next two or three release cycles. All right, so autoscaling. Autoscaling again: if you're in a cloud provider situation, we also, you know, have the ability to do things like leverage the cluster autoscaler, and the cluster autoscaler hooks into the cloud provider APIs and can spin up nodes dynamically based on information from the cluster.
B
In-cluster, there are features that allow you to do that too. The primary one is HPA, or the Horizontal Pod Autoscaler. The Horizontal Pod Autoscaler essentially can spin up new pods within some spec, like a Deployment, a ReplicaSet, what have you, and use metrics around your applications, or predefined metrics that you set, on when to scale across multiple pods. So now that the metrics server has been introduced and the metrics server is more mature...
B
...we're introducing the functionality (graduating to beta in this release) to be able to define custom metrics within the metrics server and have the Horizontal Pod Autoscaler leverage those metrics you define in how it scales. Secondarily, in this kind of rewriting and improvement of the HPA, you see HPA v2: essentially, there is an improved scaling algorithm to make sure that there is not a lot of thrashing when your pods are scaling up or down.
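An illustrative HPA against both a resource metric and a custom per-pod metric (the metric name, target numbers, and Deployment are placeholders; this uses the autoscaling/v2beta2 API added in 1.12):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend                  # placeholder target
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Pods                      # custom per-pod metric from the metrics pipeline
    pods:
      metric:
        name: requests_per_second   # placeholder custom metric
      target:
        type: AverageValue
        averageValue: "100"
```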
B
So if I react to some event, an event that happens for 15 minutes and causes a scaling operation, I want to make sure that that event is actually done before I start to scale down those pods. Otherwise we'll run into this kind of loop of: okay, I've started to scale down the pods, but this event is still happening.
B
Horizontal is spreading across multiple nodes, and vertical is reconfiguring pods so that they have expanded limits. So HPA is within the core code; VPA is an add-on to Kubernetes, so you would go to kubernetes/autoscaler (excuse me), that's the repo for it, and from there you can install it; there's a script to add Vertical Pod Autoscaler functionality to your nodes. So we see that functionality move into beta in 1.12.
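A sketch of a VPA object from that add-on (the API group and schema varied across add-on versions, so treat this as indicative only; the target Deployment is a placeholder):

```yaml
apiVersion: autoscaling.k8s.io/v1beta1   # add-on CRD; version depends on the install
kind: VerticalPodAutoscaler
metadata:
  name: frontend-vpa
spec:
  targetRef:                 # schema varied; some versions used a label selector instead
    apiVersion: apps/v1
    kind: Deployment
    name: frontend           # placeholder target
  updatePolicy:
    updateMode: Auto         # let VPA apply updated requests/limits
```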
B
So, what's next? We've talked about a lot of different features, and I know from the OpenShift and CoreOS blog posts that, you know, we're really seeking to be boring. Kubernetes should start to be more boring, now that we've laid down and built the APIs for a lot of the baseline components of Kubernetes. So the next step, and what we'll continue to see across 1.13, 1.14 and so on and so forth, is that we're really stability-focused.
B
So 1.13 is going to be a very, very tight timeline. Development for 1.13 essentially started last week, but because of, you know, US holidays and different events happening (we've got Thanksgiving and Christmas coming up, and KubeCon in Shanghai and KubeCon in Seattle), and, in betwixt...
B
...all of this, the people who are actively working on these features are also going to be the people who are attending these events and giving talks and spending time with their families on Thanksgiving and Christmas and the other December holidays. So the timeline is significantly shorter: usually we spend about a quarter doing development and shipping, and now we're looking at, you know, maybe a little less than two and a half months.
B
So, for this release: we've gone back and forth about this idea of having stability releases versus feature-focused releases, and this is going to be another attempt at going for a stability release. In that, we want to essentially graduate the existing functionality of Kubernetes: ensure that the code that is already there works, making incremental improvements.
B
So one of the things that we're doing is officially deprecating support for etcd version 2, so everyone is, or should be, using etcd version 3, some variant of that, hopefully the latest at this point. Now that we've had this kind of deprecation notice up for a while, you know, we're going to move forward with deprecating etcd 2 for this release.
B
So these tests will run for long periods of time, and because of that longer run time it's harder to get signal fast. So, as some of those stability concerns were alleviated, we discovered some additional concerns around CoreDNS, so we'll hopefully see the start of a move to CoreDNS as the default for all...
B
So, overall, we want to see CoreDNS... now that we've kind of vetted out some of the scalability concerns around the project overall, the focus is turning to making sure that this is the default for the next few cycles. So again, we're really working on graduating existing functionality. Finally, I would be remiss if I didn't do a plug for my SIG: we're going to be heavily product-management-focused for the next release cycle. We're trying to gather, kind of...
B
...this idea of KEPs, which are Kubernetes Enhancement Proposals. So this is basically how a SIG can present to the community the idea of new functionality, and then the features that we've talked about would be scoped within that KEP. So we're working out automation for that, a way to present that to the community, so that you, as a passerby, can go to some site and view all of the active KEPs for Kubernetes and see where the features are. We want to provide...
B
So the KEPs are currently available in the kubernetes/community repo. Allow me a few seconds to pull that up, but it would be github.com/kubernetes/community, and then there's a keps folder, and each of those KEPs is broken down: there are the top-level KEPs, which are basically wide-reaching Kubernetes efforts (so things like product management, things like TLS, things that touch basically every component of the ecosystem), and then there are SIG-specific KEPs.
B
So within each of those KEPs, you'll also be able to view a set of metadata that lets you know, one, who owns the KEP, and two, who the participating SIGs within that KEP are, I'd say to get an idea of the integration points and who's going to be actually attacking that work.
C
Yeah, sure. So, right now, of course, you know, CoreOS was acquired by Red Hat, so we've been working for quite a while with engineering to converge both what was the CoreOS Tectonic platform and the Red Hat OpenShift platform into one single converged platform. So there will be a release of OpenShift that includes all the capabilities that were in Tectonic, and a lot of the other CoreOS technologies will live on as well; like, for instance, Clair still lives on.
C
The actual operating system will be an embedded OS in the converged platform. The exact timeline for the release right now is TBD, but it will be the next release after 3.11. So when exactly that comes out, we're not a hundred percent sure yet, but that will be a converged platform that includes all the capabilities from Tectonic merged with OpenShift. There will be a beta program for people that want to adopt that, and I put my email in the chat in case someone's interested in joining the beta program.
A
We're almost at the top of the hour, so I want to respect people's time. There was one actual request as well, for demos of some of these new features: if you want to drop me an email at dmueller at redhat.com on specific topics that you're looking for people to talk about, I can track down speakers to do deeper dives on future briefings as well. I misspoke earlier: the 3.11 OpenShift release is going to be on the 18th, so in two weeks' time rather than next week.
B
Part of the effort around doing the enhancement process, the features process, and the marketing communication from the release team perspective is also to make sure that some of this information is available. From the CNCF side, we'll also be giving a webinar on November 6th, which I'll be moderating as well, and that webinar will hopefully go through some of the...
A
We'll have to repeat it again; that'll be great. So thank you very much, Stephen, for taking the time, and thanks to everybody who joined us here. I apologize for a little bit of muting issues, but I think we managed to muddle through it. We'll be back again next week with another update, and that one is on CoreOS itself; hopefully you all join us again for that. So thanks again, Stephen, and thank you to everybody else. Take care.