Description
Ask Me Anything with OKD-WG Co-Chairs & Technical Leads
OKD-WG co-chairs: Diane Mueller, Christian Glombek, and Vadim Rutkowsky
OKD-WG members: Charro Gruver, Michael McCune, Neal Gompa
Topics covered: OKD4 Beta, Autoscaling OKD, Single Cluster Install and OKD Road to GA
A: All right, everybody, welcome to another OpenShift Commons briefing. This time we're changing the format a little bit and we're going to do a live AMA — ask me anything — with the OKD working group members: myself, Christian Glombek, Vadim Rutkowsky, Michael McCune, and Charro Gruver. I think I've said that wrong, but that's okay — I do that.
B: All right, hi everybody. This is a condensed presentation about what OKD4 is. On the agenda: what is OKD4, first of all; then what is Fedora CoreOS; then we'll quickly introduce the OKD working group; and after that we'll start the ask-me-anything session. So let's get this going. What is OKD4? OKD4 is our community distribution of OpenShift.
B: That's the Origin community distribution of Kubernetes — the OpenShift codebase running atop Fedora CoreOS. Usually in the OpenShift product you'll have the same codebase on the cluster, but with Red Hat Enterprise Linux CoreOS as the base operating system; in the community version we're replacing that with Fedora CoreOS. You can find all the info on okd.io.
B: This is a little bit of a visualization of OKD. We have the application layer on the top; below that is Kubernetes running in our OKD/OpenShift flavor — that is, operator-driven Kubernetes on autopilot, essentially. You'll get automatic updates in the future, and you get the great thing that you actually manage the base operating system lifecycle together with the cluster lifecycle.
B: So really, you can choose your favorite provider and run OKD on top. What is Fedora CoreOS? Fedora CoreOS is an automatically updating Linux operating system aimed at containerized workloads. It's based on the technologies rpm-ostree and Ignition; OSTree you may know from other projects like Flatpak. It's image-based — image-based composes that make up the operating system.
B: It's immutable — there are only a few parts you can actually change. That's enhanced security, and it allows us to replace one OSTree commit with another to update the operating system. We use CoreOS Assembler to build the images; you can actually run that yourself and build your own operating system images with that tool. For Fedora CoreOS we use the Fedora RPM packages from the official Fedora repositories.
B: Next up is the OKD working group. This is us — Vadim, me, and Charro, and a lot of other people that chime in. We are present on Slack, in the #openshift-dev channel on the Kubernetes Slack, and we also have the OpenShift Commons Slack, where we're really in all the channels — the general channel is probably the best entry point there. And then we also have a Google Group, which we also use as our email list.
B: The okd repository is the technical one, and the community repository of the organization contains all the organizational things: the working group meetings and other community-related things. And with that — that was just a really quick walkthrough — I'd like to open it up for the AMA. I'm not sure if anybody else wants to chime in before we open it up. Yeah.
A: So I'm gonna — yeah, that's the slide we wanted to really highlight here. Part of the reason we're doing the AMA is that OKD4 is in preview now. I think — Vadim, correct me if I'm wrong — I think we're at beta 5 at the moment. And so, just to emphasize: OKD is the open source side of OpenShift, and I think we've described it as the sibling, as opposed to an upstream release process. Maybe, Christian, you want to talk about that?
B: I'll quickly go back to this slide — it's really this one. We use the same codebase as the OpenShift product; we just run it on top of a different operating system, which is why it's not, well... The operating system, Fedora CoreOS, is upstream to RHEL CoreOS, but the actual cluster code — the Kubernetes and operator code we use — is the same. That's why it's difficult to describe, but I think "sibling distribution" probably fits best here.
A: Some of them are here with us today, and we're also trying to figure out ways to enable people to test all of the different platforms. One of those folks, Charro Gruver, is here, and he has been doing most of the work making single-cluster installs workable for OKD4. So, Charro, do you want to tell us a little bit about what you're doing, how you're doing it, and how you're making that happen?
C: Absolutely. The goal that I had for the work I've been doing is to enable someone to achieve a local operations or development environment. In the roles that I play, I sit on the fence between both operations and development, and I'm a huge fan of bringing those communities together. I think developers write better code if they understand the infrastructure that is executing their code, and I think operations...
C: Some recent changes that were made allow you to run an install with just a single master and no workers designated, and get a cluster up and running that way. If you go to my GitHub page — we can probably drop a link to it in the chat or something later — I've got a prepped tutorial that will take you through the build of a single-node cluster on a simple bare-metal device.
C: The only thing you need is CentOS 7.6 — that's what I'm currently using; I'm going to test 8 to see that it works as well — and at least 20 gigabytes of RAM (32 gigabytes is best). Most inexpensive little tabletop server machines are fairly approachable these days. It uses DNS load balancing — very poor man's load balancing — so you don't have to stand up an HAProxy instance or anything like that.
C: So while, ultimately, we're going to have CodeReady Containers — something that you can basically download, double-click, and run — I'm still a huge advocate for opening the hood and seeing a little bit of what's running underneath. If you take a look at this tutorial, it will get you all the way through to a fully functional, fully running OpenShift cluster that is a one-master/worker combination, all in one virtual machine. Cool.
D: Not sure — we have a prototype, but the CRC folks are deciding which way they want to go: do they want to stick to OKD, or... This is an experiment; it's their call, honestly. If they would prefer to stick with OCP as the official base for CRC, we could come up with our own OKD4-based CodeReady-Containers-ish thingie. So that's not really a problem. Yeah.
C: So you do have to have a bootstrap node and at least one master node that are up and running; the bootstrap process has to complete itself, and then you can destroy that extra machine. The containerized route that Vadim is thinking about would make that a little bit less heavy lifting and would enable us to create a CRC download that people could pull down and effectively double-click install onto their local workstation.
C: That stops us from being able to do that even with the VM-based payload — I mean, it's not like there aren't APIs to orchestrate that. No, that's absolutely right. Currently, if you look at the code, CodeReady Containers is built from two projects; the sub-project is SNC, the single-node cluster.
C: The thing is, though, that once you're running it, you can only access it from your local machine, unless you're willing to do some network gymnastics and stand up an HAProxy instance or something — whereas the single-node cluster that we've been playing around with is actually sitting on your network as a machine, and is therefore accessible off that machine. So there's definitely more work for us to do, to take one of these two routes and create something that's easily distributable, but that's kind of the state of where we're at today.
D: That means we are using all the container builds exactly as they are in OCP for the majority of the operators — they are the payloads and things — and then we just ensure that it runs on Fedora CoreOS. So as soon as the OCP nightlies are able to support any random platform — the vSphere IPI in the case of 4.5, and all of that stuff — OKD would get those for free.
D: But we cannot put this as an item for the next beta, for instance, or for the release, because we have to follow OCP in this matter. So you would probably file an enhancement for OCP to support additional vSphere platforms, have it approved, and folks from the various operators would be able to work off that.
A: Go ahead.

B: Yeah, maybe to add to that: we're following the OCP development here. So unless that's on the OpenShift roadmap — the product roadmap — we won't have it anytime soon. So I think the way would be to create an enhancement proposal, get that merged, and then it'll be put on the OpenShift roadmap — the roadmap for both the product and the community project. I'm not aware that, right now, any additional vSphere/VMware platforms are on the roadmap. Yeah.
E: I could add a little more upstream context, if we want. I'm working on the cloud team, which interfaces with the upstream Cluster API project — that's where we get a lot of our actuator code for these various controllers. And although there's quite a well-developed vSphere implementation upstream, which is what we use in the machine API as well, I have not seen upstream even an alpha version of the VMware Cloud Foundation support.
E: I actually just heard about that recently myself, so I haven't seen whether there's been an experimental version or what's coming there yet. But presumably, once it arrives in the upstream, then — as described by my team and Christian and Diane — if this comes into the product, we would bring it from upstream back into OCP, and then on to OKD from there.
C: There was a period of time where Fedora CoreOS wasn't respecting fixed IP addresses that might have been configured through the kernel-args directives when you boot, but that has been fixed — I'm pretty sure it's all the way up into the stable channel now. With kernel args, booting a Fedora CoreOS node will now respect the IP directives even for multiple NICs, and it will persist the hostname that you give the machine. So yeah, I've been running...
C: ...my lab with fixed IP addresses for a while now, even using two NICs, because I use the second NIC for a storage area network that I've got with some iSCSI SAN devices. If I remember correctly, this was fixed in late February, because I vaguely remember these changes actually landing in Fedora CoreOS — with a machine-config update to go with it — to make it so that they weren't just accidentally being scrubbed when they were being passed on to the kernel.
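As a sketch of what those boot-time directives look like (the addresses, hostname, and interface here are hypothetical, not values from the demo), a Fedora CoreOS node can be pinned to a static IP using the dracut-style `ip=` kernel argument at first boot:

```
# Hypothetical example: static IPv4 + gateway + netmask + hostname on eth0, DHCP disabled
ip=192.0.2.10::192.0.2.1:255.255.255.0:worker-0.example.com:eth0:none nameserver=192.0.2.53
```

The general shape is `ip=<client-ip>::<gateway>:<netmask>:<hostname>:<interface>:{none|dhcp}`; the fix discussed above is what keeps these values (including the hostname) persisted after the initial boot instead of being scrubbed.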
B: Yeah, I think that should work. There is one catch at the moment, though: in Fedora CoreOS there is an issue where the interface naming scheme is not the same as in RHEL CoreOS, so right now we default to the legacy naming scheme. The interface name would be eth0 on Fedora CoreOS and, for example, ens192 on RHEL CoreOS. So that might be the issue you caught here, if you tried to name your interface ens192. That should be fixed sometime soon, though. Okay, so.
C: That's inbound routing. Ingress is just when traffic is hitting the cluster — either to go to an application, or to the console, or to some other endpoint; ingress is handling that request coming in and passing it through to the correct destination. That can be a route through HAProxy or nginx or whatever, but ingress is just literally referring to the Kubernetes concept of routing traffic inward. Egress is routing traffic outward in OpenShift.
A: Perfect — there's no too-basic a question here, and thank you, that was great. From YouTube there's another question: is there any possibility to roll out Kubernetes for IoT devices at all — I'm assuming with OKD? I think he means upgrading and updating the end devices' operating systems over the air, since we've been talking about bootstrapping to small hardware.
C: Yes — most of the blocker here is that we don't have a way to orchestrate beyond a single network system. When you're talking about IoT rollouts, you're talking about wide-area network orchestration, and I can promise you, right now, nobody has been thinking about that. Wide-area network orchestration is extremely difficult to get right, and that's the real problem with it. Getting the cluster nodes to fit small enough to work on what we would class as IoT hardware — that's actually the easy part.
A: That'd be totally cool — and yeah, join the OKD working group. I think there are a few people exploring Kubernetes on the edge and in IoT, and it's something that hasn't come up as a topic yet in the OKD working group. The other thing — I wanted to pause for a second and give a shout-out to the Fedora CoreOS folks and the people in the Fedora community, because they've been really good collaborating with us...
A: ...not quite in sync — we don't expect them to be totally in sync with our release cycle, because, you know, it is what it is — but they have been incredibly responsive, and participants in a lot of cross-community collaboration here. So yeah, a big shout-out to Fedora and the whole Fedora community for their help with this.
A: We can learn a lot from them, and we have been, so that's been a great collaboration. The other thing people have asked a lot about — and I've asked Michael McCune to come here today to answer, because we keep getting questions about auto-scaling quite a bit — I'm wondering, Michael, if you could take a minute and talk about some of the conversations we've been having around auto-scaling OKD.
E: ...like machine sets and machine deployments. In much the same way that we look at applications through pods, deployments, and replication controllers, this allows us to start treating Kubernetes clusters in the same way, with a declarative language about these things. Recently we've integrated the Cluster API into the Kubernetes autoscaler, and this is really the backbone of how auto-scaling is done in OpenShift: any provider that is backed by the machine API in OpenShift can interact directly with the cluster autoscaler.
E: So whether you have vSphere, or AWS, or whatever your implementation is behind the machine API, the cluster autoscaler just talks directly to that machine API, creating new machines by increasing machine set replicas; in this way the autoscaler does work for you within your cluster. That was kind of a high-level discussion — there were some requests to see how this works, because there are a few resources associated with it. So, yeah.
E: Let's hope this works — first step is: can I share my screen? Okay, so hopefully everyone is seeing my terminal now. I'm dealing with a server that's at the 4.4 OKD — I think I downloaded this a couple of days ago and set it up — and at this point I'm in a project called openshift-machine-api. This is the project you want to be in if you want to assess how these things interact on your cluster.
E: So if I do `oc get machinesets` here, we'll see the various machine sets that have been deployed for us. You may notice that there are three machine sets here, each one of them with one machine in it. But if I do `oc get machines`, we can see that there are actually six machines that make up this cluster, and only three of them are represented in the machine sets. Some of this is because we do not put masters into a machine set.
E: That'll take a little time, so I'll show a quick preview of some of these manifests while I'm applying them, and then, as we're waiting for the cluster to scale out, I'll go back and look in depth at these different things. To begin with, if I just get the pods in this project, you can see we've got a cluster autoscaler operator, the machine API controllers — this is what's actually talking to the cloud — and the machine API operator.
E: Now, the machine API operator is what's looking for the machine sets and machines, and the cluster autoscaler operator is looking for resources like a ClusterAutoscaler and a MachineAutoscaler. You can find links to examples of all of these in the OKD documentation, under the managing-machines section. The first thing I'm going to do, though, is create an autoscaler, because we don't actually have the autoscaler running at this point; it doesn't start by default.
E: So we create a ClusterAutoscaler, look at the pods, and you can see that it's already starting to create the actual autoscaler for us — that won't take too long. Now, the next thing I need to do to actually make this work is create a MachineAutoscaler: this is a resource that tells the cluster autoscaler that it should look at a machine set, and that it should scale that machine set.
E: But now, when I go to get the machine sets, nothing really has changed at this point; everything looks pretty much the same. Part of this is because the cluster autoscaler will not scale up until it sees pods that are in a pending state. It will attempt to scale up whenever it can, but only if it sees pods that are waiting to get assigned somewhere; and likewise, it will scale down when it sees nodes that are underutilized.
E: So the next thing we need to do is create a workload to actually force this to scale up. I've got a small manifest here that just creates a deployment, and it doesn't do too much: it creates a container that sits and sleeps there, and puts out an echo for us. Now what I'll do is create this and put it into another project, and at this point, if I just look at the pods in that project, you can see...
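The original manifest wasn't shown in full on screen, so this is a minimal sketch of such a scale-up workload, with names and sizes of my own choosing: a deployment whose total memory requests exceed what the current workers can supply, leaving pods Pending so the autoscaler reacts.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scale-up-demo        # hypothetical name
  namespace: autoscale-demo  # hypothetical project
spec:
  replicas: 40               # deliberately more than the cluster can schedule
  selector:
    matchLabels:
      app: scale-up-demo
  template:
    metadata:
      labels:
        app: scale-up-demo
    spec:
      containers:
      - name: sleeper
        image: registry.fedoraproject.org/fedora:31
        # Echo once, then sleep, as described above
        command: ["/bin/sh", "-c", "echo running; sleep infinity"]
        resources:
          requests:
            memory: 500Mi    # the request is what forces new nodes to be added
```

The memory request per replica is the lever here: pods that cannot be scheduled stay Pending, which is exactly the signal the cluster autoscaler watches for.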
E: ...if I can type it correctly — the machine set that we have specified is growing out now: it has a desired count of four, it currently should have four, but none of them are ready yet. And this is where you might start looking at the machines: we can see there are three machines, two of them still in the provisioning state, and one of them has been provisioned. Now, the machine is an abstraction that sits above the actual low layer here.
E: So if I look one step below, at the nodes, what we see here is six nodes. Now, this might look a little weird to begin with, but the new nodes won't appear here until they're actually ready to be running — until they're ready to be addressed by the cluster, I should say. So — I promised I'd go a little deeper into some of these manifests.
E: Now, you can see I've chosen this kind: ClusterAutoscaler. This is the resource that you create to actually instantiate a cluster autoscaler, and OpenShift only runs one cluster autoscaler at a time — you can only have one of these running, and it must be named "default"; this is the way the system looks for these things. In the spec, though, you can see I've got a bunch of options here: balance similar node groups, ignore daemonset utilization...
E: That's very reasonable in terms of some of these things — ignoring the daemon sets, the delay after add. Some of these I've changed so that I could give a demonstration here, so it's looking at much smaller times for when it should do things. Another thing to keep in mind here is these resource limits: this is going to limit what the autoscaler will do across the entire cluster, not just what it's watching. So this maxNodesTotal of 24 —
E: — those aren't just the nodes that the autoscaler is watching; that's the nodes for your entire cluster. So if you have a hard upper limit, you can say: I never want to go over 24 nodes across everything, whether the autoscaler is watching it or not — this is how you would prune those things. And likewise, I can set limitations on the number of cores and the amount of memory that can be seen across the cluster.
E: This is, again, more ways for you to tailor the autoscaler to your specific needs. Then, going a little bit into these scale-down options: scale-down may be enabled by default — I'm not sure — but you have to turn it on, and then you can change these delays for how long the autoscaler should watch before it decides to mark a node for deletion, or what have you.
E: That's kind of the first thing that I did: I created the ClusterAutoscaler, and that instantiated it. And you can see that, at this point, I haven't mentioned anything about the cloud provider — all that information is contained in the machine API — so I haven't had to load any sort of credentials or tell it where to do this; that's all being handled by a different component.
E: But basically, I want to create a MachineAutoscaler, which means I want to tell the autoscaler to look for some nodes and to manage those nodes. As normal, I can give it a name — and I'm sure some of you have noticed that some of our new machines have started to come online — and then really the main information for the MachineAutoscaler is the minimum and the maximum number of replicas, and then the other key piece of information:
E: ...which machine set it should scale. Now, in 4.5 there will be support for scaling down to zero, but in 4.4 the minimum I can go is one, so I've left that in place here. And what happens when you create one of these MachineAutoscalers — I'm going to take a pause here for a second, because what I'd like to do, now that we've gone to max size here...
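The MachineAutoscaler itself is short. A sketch with a hypothetical machine set name (and, as noted above, a 4.4-era floor of one replica):

```yaml
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a            # hypothetical; conventionally matches the machine set
  namespace: openshift-machine-api   # lives alongside the machine sets
spec:
  minReplicas: 1                     # scale-to-zero arrives in 4.5
  maxReplicas: 4
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: worker-us-east-1a          # the machine set this autoscaler manages
```

One of these is created per machine set you want autoscaled; machine sets without a MachineAutoscaler are left alone.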
E: I'll take another quick look — we'll get the nodes, just so we can see we've actually added all these extra nodes as they've become ready; and if we get our machines, we can see everything is now running, and it all looks pretty good. So what I'm going to do at this point: we'll get the pods and just watch the pods over time as we scale. Some of them might still be pending, because I increased the size of this way beyond the maximum of our cluster.
E: So it's starting to shrink the cluster back down now. Just to go back into the deep dive a little bit: I'm looking at the MachineAutoscaler, and this MachineAutoscaler object is actually also being watched by the cluster autoscaler operator. What's happening here, under the covers, if I look at the machine sets...
A: I'll ask the first one from YouTube — you may have covered it — but can you please explain if anything changed in evenly distributing pods across the cluster? "The autoscaler feature is perfect for user-based workloads, especially in dev, test, and prod environments, but has anything changed in evenly distributing pods across the cluster?"
E: As I understand the question: the autoscaler is not doing anything about scheduling pod workloads — that is still what the Kubernetes scheduler does. What the autoscaler can do — and I don't think we set this up, but let me look quickly here — yeah, balanceSimilarNodeGroups: if we had multiple machine sets that both could grow, I could have the autoscaler equally add nodes to those groups, to those machine sets.
E: It looks at nodes to see how much load they have on them, and if their load drops below a certain threshold, the autoscaler will attempt to scale down those nodes. And it looks at pods in the cluster to see if any are pending; if there are any pods that are pending, and the autoscaler has the opportunity to scale up nodes, then it will do that. So — I hope that answers the question. Okay.
E: Right — I mean, I think that should all be handled at the machine layer. It will mint whatever templates you've put in place for the machines and nodes; it's going to do whatever you tell it to do. So if you've created machine sets whose machines have some sort of special storage, or GPU, or hardware, it will just continue to do what you've told it to do; it will not try to do anything beyond that. "I don't need those, we have something else." Yeah.
C: So, with that: does the autoscaler gracefully handle it when there is no more resource to give — when it needs to have more and it can't get it? Like, if your machine set templates include requesting a resource that there's just no more of: everything else is there, but you don't have another GPU to attach, or you don't have another mdev, or another iSCSI LUN?
E: That's a great question, and especially in this case: it'd be one thing if you were just dealing with one provider, but because the autoscaler is actually talking to the machine API, you're not talking directly to the provider. So what would happen is — let's say the autoscaler still has room in its machine set, so it can scale up —
E: — but when you go to request from the cloud, the cloud fails to create the machine or the node, because the cloud is denying you the request based on resource limits, or availability, or whatever. So what you'll see is an error come back through to the autoscaler: basically, that the machine API cannot allocate those resources.
E: Now, the alerts that you'll get from that are, I think, at this point probably not as verbose as I would like to see them — we could always do more to make this easier to interpret, because I've run into this issue before, where it just didn't have the resources. But the autoscaler will handle it pretty gracefully: in the logs for the autoscaler you will see that it has attempted to scale up and that there's been a failure from the provider side, and it will just — it won't...
C: I know that there are certain circumstances where — if you have a quota imposed on you by your tenant supervisor which also includes a rate limit, and that rate limit is so many requests within a time period — you could easily get yourself locked out of the API by its being whacked too hard by the autoscaler. So if there is no backoff facility, there probably should be one. Yeah, on—
A: On that note, there is one more question from YouTube, and then — just so people know, we can run over the hour if people have time, but we're going to try and wrap up in the next ten minutes or so. Someone on YouTube is asking: in the case of auto-scaling databases, for example MongoDB, how could the election of primary and secondary replicas be handled?
E: I'm going to punt a little bit on this one, because I think that's actually getting into the vertical pod autoscaler, which is different from the cluster autoscaler — that would be about pod scaling, and what I'm talking about is specifically cluster scaling. So that would be handled by whatever pod-scaling methodology you're using. Yes.
E: Part of the difficulty with building auto-scaling into something like a database, or whatever: once you scale it out, you might need to have this leader election and these other mechanics that need to happen, and that's just so far beyond what Kubernetes is actually doing. Totally — and, you know, this is why people run ZooKeeper and things like that. So.
A: It sounds like this is a topic for another day — a deep dive on that — and I can totally see that happening sometime in the not-too-distant future. I want to circle back to OKD and the beta 5 release, because we get a lot of questions around release dependencies and the fact that it's beta 5, and I keep telling everybody: beta 5 is very, very stable — go ahead, use it, test it.
D: Installer and machine-config-operator issues for OKD are handled by the OKD team, but once we get the remaining console patches landed upstream, we will call this GA, because then it would match the features of OCP and it would be handled by the OCP team — there won't be a difference in this case.
D: So, on the roadmap: we're planning to base beta 6 on the OCP 4.5 code; we would bring in a Fedora 32–based Fedora CoreOS, which would fix a few patches related to OpenStack, and a few other improvements. Once we're done with that, we will fix a few things in the infra, get our patches merged upstream, and we can call this GA if we don't have any blocking issues. We've created two milestones where we track the issues.
B: The work we need for OKD has begun, and we expect it to merge soon; and then, when OpenShift OCP 4.5 comes out, we will rebase for the last time on top of that, and we'll probably have a release candidate first, and then we'll go GA — but it shouldn't be too long now. Okay.
A: So there's one question from YouTube, but I have one more thing that I'd like you to tease out a little bit too: are Fedora CoreOS images available on the different cloud hosting providers, like Google or Amazon? The last I heard, they were available for doing installs of OKD everywhere except Azure — or are they on Azure now?
A: All right — and there was one last question coming in, and the workaround — I think we have a link to it that I'll post as well; or, if someone has the URL to the workaround, put it in the chat and I'll post it around. One more question coming in — if you can hang with me for a little bit longer, that's fine: "compute nodes are good fit to stored" — I think this is a question; grammar is a wonderful thing.
A: Are compute nodes a good fit, compared to storage nodes, for auto-scaling — or can I apply the autoscaler for storage nodes (Ceph, HDFS, Ignite, etc.) as well? There are a couple of questions coming in that way, and then there's another one — mixing them up a little bit: the kube-service-catalog pods are in "CrashLoopBackOff" state when OKD is installed in an environment where the internet is disabled; how can I bring these pods into running state? I think that second question is kind of important.
C
E
It's an easy one, autoscaling. So the easy answer about compute versus storage and autoscaling: there are a number of wonderful features associated with the machine sets in terms of being able to use selectors through annotations, labels, and taints, and you can actually tailor those so that when they deploy machines and nodes, those machines and nodes will take on those same labels. So if you have a situation where you just have a compute set of machines, and you want that to autoscale, then you would create a machine set.
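[Editor's note: a minimal sketch of what that can look like. All names, label keys, and taint values below are hypothetical, and the provider-specific fields are elided; the point is that labels and taints placed under the machine template propagate onto the nodes the set creates.]

```yaml
# Hypothetical MachineSet sketch for a dedicated compute pool.
# Labels/taints under spec.template.spec.metadata land on the nodes.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: compute-pool                  # hypothetical name
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: compute-pool
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: compute-pool
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/compute: ""   # copied onto each node
      taints:
        - key: workload                          # hypothetical taint
          value: compute
          effect: NoSchedule
      providerSpec: {}                           # cloud-provider fields elided
```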
E
So when I create a deployment or replication controller, I can put selector labels on the pods that are created such that they only go to that machine set, and then that machine set maybe has, you know, high-performance CPUs, or it has GPUs, or some sort of special storage, or whatever it needs to be. In that way you could segregate your workloads and control the autoscaling differently with each one of them, but it would mainly be through labels and taints. But that's the easy question.
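[Editor's note: a sketch of the workload side of that pairing. The names, label key, and taint below are hypothetical and assume a machine set that stamps a matching node label and taint onto its nodes; the nodeSelector steers the pods there and the toleration lets them land despite the taint.]

```yaml
# Hypothetical Deployment sketch: pods are pinned to the dedicated
# machine set's nodes via nodeSelector plus a matching toleration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: compute-workload              # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: compute-workload
  template:
    metadata:
      labels:
        app: compute-workload
    spec:
      nodeSelector:
        node-role.kubernetes.io/compute: ""   # label set by the machine set
      tolerations:
        - key: workload                        # matches the node taint
          value: compute
          effect: NoSchedule
      containers:
        - name: worker
          image: example.com/worker:latest     # hypothetical image
```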
D
We need tons of information to see what's actually happening inside there. We have a tool called must-gather (run as "oc adm must-gather") which fetches it for you, so that we won't have to ask every time, "give me that log, and that log." Once we have a bug, if it's a supported scenario (on OCP it's definitely supported, and on OKD too), we can track it, and if it's important, we might look further into it. So that sounds pretty, pretty bad.
A
I'm gonna throw up the links to where you can find us online too, where you can join some conversations and log issues. Really, if you can, spend some time with us: join the Google group; that forum is a great place to reach out and post questions as well, and also the Kubernetes Slack; I think that's really where you'll find most of us during the day. The Proxmox stuff, I would love to see that documented a little bit better; nudge, nudge, wink, wink, somebody could write that up.
A
That would be great. I'm looking to see if there's another question here... oh, there's a bit of a question; I think we might have missed this, and you may have answered this, Michael, so please help me correct it: let's say we have 30 nodes, with 20 nodes started and a minimum of one on the autoscaler; can the remaining ten be assigned for the autoscaler to scale up and down?
E
So let me see if I understand this. Let's say you have a cluster that contains 20 nodes that are kind of set aside for workers. You want to be able to scale down to a minimum of one, and you want to have ten more nodes that could be usable only for autoscaling. There are a number of different ways you could handle this.
E
You know, you could create a machine set that's just linked to those ten other machines, or a potential ten other machines. Like, you know, if you already have the machines, then there's no need to autoscale them; they're just part of your cluster. But if there are ten machines available that could be requested from the provider, then you could create a machine set specifically for those machines, or those machines could be included with the other 20.
E
Right, so then, yeah, you would just want to make sure that you set... you know, if they're all part of one machine set, you just set the minimum to twenty and never let it go below twenty; or, if you had some set of twenty that was never going to shrink, you could create another set of ten that, as of 4.5, could go from zero to ten, or something like that.
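[Editor's note: a minimal sketch of that second arrangement, with all names hypothetical. A MachineAutoscaler is attached only to the elastic machine set, letting it grow from zero to ten, while the fixed 20-node set simply has no autoscaler and so never shrinks.]

```yaml
# Hypothetical MachineAutoscaler sketch: scales only the elastic set.
# The fixed 20-node machine set is left without an autoscaler.
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: elastic-workers               # hypothetical name
  namespace: openshift-machine-api
spec:
  minReplicas: 0                      # scale-to-zero, per the 4.5 note above
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: elastic-workers             # the machine set to scale
```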
A
We think we got it there. And that last one... this is the fun part about restreaming to multiple platforms: you get folks from YouTube, Facebook, and Twitter, as well as BlueJeans questions, so apologies if we kind of messed up a couple of times. We have one final question, which I think is the philosophical question of the day.
D
I would like to go all possible ways at the same time, of course, but there are lots of ways we could extend it. For instance, we're saying OKD4 is based on Fedora CoreOS, but that might not be the case for your setup; we have an example of how to build your own CentOS-based CoreOS.
D
But I don't see any definitive, specific place where we want to end up. We would be following OCP, as an official part of OpenShift, probably forever, but you still have a lot of ways to experiment with it. As we said, it's a choose-your-own-adventure type of thing, where you would use the trusted bits from OpenShift, play with it, and extend it in whichever way you like, because all of the source is open, and we help with how to experiment.
B
Yeah, so for me, the immediate goal, of course, is getting the GA release out, and after that, as Vadim said, I would love to see more external contributions. Really, I want to enable the community to drive their own contributions and drive them into the OpenShift codebase. So, because OKD will be built off the exact same...
B
The exact same codebase as OCP, it'll be possible for community members to create PRs against any of the upstream components, have them reviewed by the respective teams, and get them in. Also, I'd like to see OKD feature more operators that aren't yet in OCP; for example, there's the openshift-acme project, which is a controller that could be operatorized and added to OKD in the future. And yeah.
A
You know, I think you both hit the nail on the head. For me, the choose-your-own-adventure; I think that's the t-shirt. That's the thing with the open source distribution: it is the place to experiment. It is the place to do things like try delivering OKD to the edge, onto IoT devices, you know, bare metal, all kinds of things that keep coming up in the working group. So I really hope all of you who have been listening in will join the Google group, and that's, you know, where I see this going.
A
The operators: more operators is always a great thing. And please, as you come up with new ideas and new problems, wherever you are, come back to us and share your feedback. The openshift-dev channel on the Kubernetes Slack is always on; basically, there's always somebody there, whether it's an OpenShifter or an OKD'er. We're all in the same boat and we're driving it down the river all together. We may be siblings, but we do talk to each other, and we keep talking to you guys, and hopefully you all talk back at us.
A
So that's what we have for today. If you liked this AMA-style session, we'll do it again. On Mondays we're gonna try and host different AMA sessions for different pieces and parts of the open-source ecosystem that's around Kubernetes and OpenShift, bringing in different Kubernetes SIGs and working groups and other projects. So if there's something you want to hear about and talk about, let me know; you can find me on Twitter or on Slack, and I will try and stage these things for you on Mondays.