From YouTube: SIG - Performance and scale 2021-09-23
Description
Meeting Notes: https://docs.google.com/document/d/1d_b2o05FfBG37VwlC2Z1ZArnT9-_AEJoQTe7iKaQZ6I/edit#heading=h.qs7aweajr18k
A
Okay, welcome to SIG Scale, September 23rd. I'll link the document in the chat. Add yourselves, please. Okay, so we have two things on the agenda. Feel free to add things folks want to bring up.
Okay, first one: the KubeVirt VMI phase count metric is missing the phase label.
A
Okay, yeah. So this is something that we saw internally. This actually came out of the issue where we had a panic in the virt-controller. One of the things that we noticed, when we went through and actually changed out the controller for our new one, is that we lose one of the labels off of the VMI phase count metric, where it just shows up as a value.
A
Okay, all right. So, no further comments on that one. We can go to the next, which is the VM pool design discussion.
B
Hey, yeah, I can talk about this a little bit. We've had several discussions in the past, even in this forum, and we had a previous design document as a Google doc. What I've done here is try to distill all the previous discussions.
B
We've tried to gather all the features that we were interested in having, condense that into a community design doc, following our community design process, and revise our terminology a little bit to align closer with concepts we already have in KubeVirt and Kubernetes. That's my main goal here: to make it feel cohesive, like a cohesive API.
B
So we've had a lot of great discussion so far. I feel pretty good about this; I feel like it's something that's becoming actionable. One of the items that you pointed out, Ryan, was this selection policy, right?
B
Yeah, you're on it right now. So, proactive: I called it selection policy. It's the same exact struct under the update strategy, and...
B
Because I think it makes sense for both. And maybe before I get into that: one of the things that I've kind of landed on here is this question of, for our pool, whenever we're creating VMs, after we create them, how do we manage them?
B
And I came up with these (I think I might have stolen this terminology from Google GCP): this idea of an unmanaged VM pool, where it's just going to create VMs and never touch them after that; an opportunistic kind of update and scale-in policy, where we're only going to touch things that are inactive, so not actually running; and then a proactive scale and update strategy, which would actually act on running virtual machines.
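To make the three management buckets described above concrete, here is a minimal sketch. The names (`Unmanaged`, `Opportunistic`, `Proactive`) come from the discussion, but the structure is an illustrative assumption, not the actual KubeVirt controller code:

```python
from enum import Enum

class ManagementPolicy(Enum):
    UNMANAGED = "Unmanaged"          # create VMs, never touch them afterwards
    OPPORTUNISTIC = "Opportunistic"  # only act on VMs that are not running
    PROACTIVE = "Proactive"          # may also act on running VMs

def may_touch(policy, vm_running):
    """Return True if the pool controller is allowed to act on this VM."""
    if policy is ManagementPolicy.UNMANAGED:
        return False                 # hands off, always
    if policy is ManagementPolicy.OPPORTUNISTIC:
        return not vm_running        # only inactive VMs may be touched
    return True                      # PROACTIVE: running VMs included
```

For example, under the opportunistic policy a running VM is left alone, while the proactive policy is allowed to act on it.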
B
So we kind of give the gamut of possibilities here, and for the proactive approach, which is the one we're specifically talking about here: how do we select the virtual machines we're going to act on? Ryan came up with this kind of neat idea about creating a list of priorities, an ordering, that allows people to select virtual machines based on things like labels or what node they're on, and actually have that be ordered.
B
So we kind of filter through: we try to match VMs based on what the user has said they want to select first.
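A minimal sketch of that ordered selection idea: a list of label selectors, tried in priority order, where VMs matching earlier selectors are acted on first. The field shapes here are assumptions for illustration, not the proposed API itself:

```python
def order_vms(vms, priority_selectors):
    """vms: list of dicts with a 'labels' dict.
    priority_selectors: list of label dicts; lower index = higher priority."""
    def rank(vm):
        for i, selector in enumerate(priority_selectors):
            if all(vm["labels"].get(k) == v for k, v in selector.items()):
                return i
        return len(priority_selectors)   # unmatched VMs go last
    return sorted(vms, key=rank)         # stable sort keeps creation order within a rank

pool = [
    {"name": "vm-a", "labels": {"tier": "web"}},
    {"name": "vm-b", "labels": {"tier": "db"}},
    {"name": "vm-c", "labels": {"tier": "cache"}},
]
# Act on cache VMs first, then web, then everything else.
ordered = order_vms(pool, [{"tier": "cache"}, {"tier": "web"}])
# → vm-c, vm-a, vm-b
```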
A
Yeah. So for this discussion, maybe we can catch everyone up. The idea behind the virtual machine pool: I think you talk about it in one of these sections, about the difference between it and replica sets. Do you want to talk about that? That can give us a little intro into the use case and the goal, and then we can maybe dive into the why.
B
This is something that we wrote a long time ago; it's one of the initial controllers written for KubeVirt. At the time, we were looking at Kubernetes and trying to envision what virtual machine management would look like in this ecosystem where people are managing pods, and the thing we came up with was a virtual machine replica set, which would act like a pod replica set. That really closely resembles how people would want to manage ephemeral, stateless workloads in practice.
B
That's not super helpful for virtual machines, because it works on the VMI, which is just the running instance of a virtual machine, and not the stateful virtual machine object which then instantiates VMIs. So the end result here is we have a controller that works like a replica set on ephemeral VMIs, which really only (sorry, there's a big loud car going by) handles the cattle use case, and actually just a niche portion of the cattle use case. So it's not terribly useful for a lot of people.
B
The idea with the VM pool, which separates it from this, is that we're looking at the kinds of operational patterns people use in traditional infrastructure-as-a-service platforms, so AWS, Azure, GCP, and I'm trying to think of ways we can align those operational expectations, for people coming from those platforms, with KubeVirt. In those platforms you have things like, in AWS, an auto-scaling group, which has VMs that scale out and in, but those are stateful virtual machines. You can remove virtual machines from the auto-scaling group.
B
You can debug them, you can snapshot them, you can do all these kinds of stateful actions on them, which you can't do with the VM replica set, or VMI replica set, I mean. So, all that said, the VMI replica set matches the container management workflows, and the VM pool matches the infrastructure-as-a-service management patterns. That's the distinction that I'm making, and my expectation is that the VM pool will be much more usable for people who are used to using virtual machines than the VMI replica set.
A
So here's one for me, on the distinction. Virtual machine pools: this is a group of virtual machines, and this is the virtual machine API, so not necessarily the virtual machine instance API. Would this imply that there's sort of a runtime associated with a virtual machine pool? Like, can you have a virtual machine pool with no virtual machine instances?
B
Sure, you could technically do that. So you create lots and lots of virtual machines in the pool, but the run strategy would be set to halted, for example, and what that would do is provision storage for each one of these virtual machines. It's not like...
A
So we have this idea of the virtual machine pool, okay. And then, let's say, what other states can a virtual machine be set to? Would those all apply in the case of a virtual machine pool? Other than running, could we set, I don't know, a pause or a stop state? Would that apply here as well?
B
Running and halted are the primary ones that we would have, but they would all apply. It's just stamping out a VM, and anything that you can do with a VM, you can do with a VM in the VM pool. The thing that we'll have to think through a little bit is what we consider availability within our VM pool; I have to think about that.
B
But auto-healing was one of the criteria, one of the features, that I was interested in, where we auto-recover virtual machines and things like that. I guess it's a little...
B
It's like, okay, here's a simple example. Let's say we have a VM pool with a replica count of three, and all three of those virtual machines have a run strategy of halted, meaning we're declaring that we actually don't want them to run.
B
I guess we have three virtual machines in that case, but none of them are running. So are we considering that that condition is satisfied? I think we are, but I want to make sure that we're all on the same page on what the replica count of three actually means.
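The semantics being proposed above can be sketched in a few lines: the replica count is about how many VM objects exist, not how many are running, so a pool of three halted VMs still satisfies replicas of three. The field names are assumptions drawn from the conversation:

```python
def replicas_satisfied(desired_replicas, vms):
    """A VM counts toward the replica count regardless of its run strategy."""
    return len(vms) >= desired_replicas

# Three VM objects exist, all declared halted (not running).
pool = [{"name": f"vm-{i}", "runStrategy": "Halted"} for i in range(3)]
```

Under this reading, `replicas_satisfied(3, pool)` is true even though nothing is running.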
A
I see, okay. Yeah, I mean, I think that makes sense to me. I'm trying to think of the use case, because, how would I say this, like, if you want to...
A
If you have the pool, we have virtual machine objects, and there are VMIs for those. So we have state on the virtual machine objects, but we also have a state on the virtual machine pool. Would those be sort of uniform?
A
Is there some uniformity across those objects? I'm thinking kind of like a job: I'm trying to get to a state, like, I want this many virtual machines, and I want them in this state; I want them running, for example. So the virtual machine pool will try to always have that many virtual machines that are running. So, you know, if we have virtual machines that are all halted...
A
...that would imply that I have halted set on my virtual machine pool. Is that kind of the level at which state would be propagated from the pool down to the virtual machines? Or is this a case where, if we have it unmanaged, we just kind of see what happens and let them be? Is that what you're getting at with unmanaged, that we can get into these states?
B
With unmanaged, certainly you can get into these kinds of strange states, because you can do whatever you want. You can have a number of replicas on your VM pool, for example, that doesn't match the replica count, because with unmanaged, you, the user that created the pool, would be in charge of actually scaling in those instances. So you could say you want a replica count of three, but actually have five in your pool with unmanaged.
B
The thing that gets kind of strange, which we're pointing out, is that you could have three halted virtual machines, and what does that mean with unmanaged versus these proactive and opportunistic management styles? I'm curious if maybe, when we create virtual machines, if we aren't using unmanaged... well, I don't think that makes sense. I think people get what they ask for.
B
So if somebody has asked for VMs to be created and they don't have the run strategy set to always, meaning they always want this VM up, then I think that's just what they get. They get virtual machines that aren't running, and they've declared the state that they wanted, and...
B
You set that in the VM config. I see what you're getting at. So there's a VM config object that maps to pools. You can have a one-to-many relationship: you can have VM configs mapping to lots of different pools, or just one pool, or whatever you want. And the VM config object itself is really just a VM object without a status. The reason I think it makes sense to have a separate object for this is that I don't want people trying to start VMs that are mapped to pools.
B
I want to be distinct on the intent of what has been created. So, calling it a VM config: you can't start it; it's just a VM spec without a status, and in that VM spec, that's where you would declare your run strategy as always, or halted, or whatever. So the expectation is, if you want the virtual machines in your pool to be running, you set the run strategy to always in that VM config, and it's always going to try to keep these VMs online.
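Expressed as data, the separation just described might look like the sketch below: a config that is essentially a VM spec without a status, and a pool that references it by name. All of these field names (`virtualMachineConfigName` in particular) are assumptions based on the discussion, not the final API:

```python
vm_config = {
    "kind": "VirtualMachineConfig",   # cannot be started directly
    "metadata": {"name": "web-config"},
    "spec": {
        "runStrategy": "Always",      # keep the pool's VMs online
        "template": {"domain": {"cpu": {"cores": 2}}},
    },
    # note: no "status" field, by design
}

vm_pool = {
    "kind": "VirtualMachinePool",
    "metadata": {"name": "web-pool"},
    "spec": {
        "replicas": 3,
        # one config can be referenced by many pools (one-to-many)
        "virtualMachineConfigName": "web-config",
    },
}
```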
A
Okay, so one of the things I was thinking of is sort of the difference between the virtual machine instance and the virtual machine. The virtual machine has this run state; the virtual machine instance doesn't, it's just the runtime. The virtual machine pool...
A
...has this virtual machine config object where we declare our intent about what we expect to happen. I'm wondering, if we have some sort of config like this, does it have to be just virtual machines that it creates? Could it be virtual machine instances?
A
You know, essentially at this point we've abstracted the idea away. I could basically say in this config, "I want something running." Do we need the virtual machine object at all in the middle of this?
B
Oh, that's an interesting idea. Well, you need the virtual machine object because we're treating these as stateful virtual machines rather than stateless. So the VMI is stateless; the VM is stateful.
A
Okay, I see. So the virtual machine object has the PVCs, and the virtual machine config would reference the PVCs in addition. So it...
B
Once we remove the VM object, that logic of how the state is provisioned and everything would have to be moved somewhere else, and that is where things get tricky. It also makes things tricky when we talk about things like how we detach virtual machines from this pool, because the VM object is the thing that maps all the state; with it, we can detach that one object that encapsulates everything that's involved with that VM.
A
Okay, yeah. So one of the things I was thinking was that, if we were to make a bunch of assumptions, we could say there's a dynamic provisioner for storage and we essentially offload that. We say: okay, we're going to assume that whenever we attempt to create this object, it's just going to have a PVC allocated for it, or it's allocated ahead of time or something; that could be possible. But you mentioned that there are some tricky cases where... okay, but maybe we can...
A
Maybe this is configurable. Maybe we could say: okay, we expect it to be managed entirely elsewhere; we're just going to do the VMIs, and whether they're running or not, and we're just going to look for names of PVCs to appear or not appear or whatever, and then, you know, if it doesn't happen, it doesn't happen. We just literally let the user handle it.
B
I guess one of the realizations I've had over the past couple of years is that I think I would like people to begin interfacing with the virtual machine object more often than the VMI. I think the VMI is a special case for people who maybe really understand some of the fundamentals of how KubeVirt works, but not for people who are expecting VM-like behavior, where they can stop and start their virtual machine, perhaps suspend the state and resume the state, and things like that.
B
Maybe one question I'd have is: why are you asking? Because it sounds like you're more comfortable using VMIs over VMs, perhaps. What's the thought here?
A
I'm just trying to understand more about where you're going with the virtual machine config and the general idea behind it. My thought is: what would be the point of a virtual machine object here? Because it almost seems to me that the pool is sort of just another form, another abstraction, around VMIs, just like VMs are.
B
That's pretty good, yeah. I think that's about as accurate as we can get, and I think that being able to treat VMs as a standalone unit that can be separated from the pool without losing anything, that's valuable. So you can take your VM, you can remove it from the pool, and you still have a working stateful virtual machine that functions without having any of the state involved with the VM pool.
A
I see. Yeah, I mean, the argument that you're making makes sense to me. I think maybe this is something that's sort of out of scope, at least initially, in the design here. Maybe it's something we could call out, because I do think it is possible.
A
Unless we want to push people away from VMIs. I think it is possible to do that: to have the virtual machine pool be the abstraction above VMIs, and let some other things handle PVCs and other infrastructure that has been built. So I think there is a possibility for it. I mean, well, how should we... I guess... well, isn't...
A
Well, then you don't get the scale-down, the scale-in, that comes with this; replica sets wouldn't have it. Well, I don't know if it's necessarily an argument of stateful versus stateless. You could still have...
A
You could still have many VMIs, and they could be stateful, in the replica set model, but what you lose is the scale-in, and that's where it becomes a problem.
B
We can add scale-in to the VMI replica set with these policies and things like that, but there is no state. It's not like, for example, in the Kubernetes world, a stateful set, where we're provisioning storage for every pod. There's no such concept for the VMI replica set. So it's stateless; there's no storage.
A
Yeah, I mean, I guess maybe I have a different definition, or a different assumption, for what state is here. So maybe state is not the right term. You could have a pet in a virtual machine replica set. You could have something that you want to treat as more important than, say... I mean, you'd have VMs that aren't necessarily cattle, and it doesn't... like you're...
B
You can't, because, for example, something as simple as restarting your VMI: you lose everything. So it's treated as ephemeral. One of the definitions, I would guess, of a pet is that you really want to maintain this thing, maintain whatever you've configured on it, and things like that, and with a VMI, any time any sort of disruption occurs, you lose all of that. So it doesn't really exist.
A
Okay, but if you had a set of VMs... I guess the example I use is: if you had 10 VMs, and you had three of them that had important workloads on them, and you didn't want to lose those just yet, you could get rid of the seven. You could scale down those seven because they don't matter; they don't have workloads that are important to you right now. They're all the same VMs, but three of them are being used.
B
Yeah, it just sounds like an advanced scale manager, whatever that is, whether it's the VM pool or the VM replica set. Basically, the VM pool is going to be something that can give you that exact same behavior, but also with enhancements.
B
So you can have your stateful workloads managed in the same way as your stateless ones. There's nothing preventing you, for example, from creating your VM config to look exactly like something that would be managed by a VMI replica set. You don't have to provision storage: you could use a container disk in your VM config, and then you have a bunch of essentially stateless VMs being managed by a VM pool. So every time you restart those VMs, there's nothing from the previous run that exists.
A
Okay, yeah. No, I think that's really where I wanted to go: the difference between those two things. So you talked about the virtual machine config as well. Is this going to contain the VM object, as well as sort of the configuration for the pool, or not?
B
The VM config is totally isolated from the VM pool. I mean, all it is, is a VM spec, and at its most basic, all the VM spec is going to contain is how you want this VM to run. So, do you want it to always run, which means that if it gets killed, we're going to restart it; we're always going to try to keep this thing online.
B
In addition to that, what does the VMI spec look like that this VM is going to run? So when we're talking about translating from, for example, the VMI replica set to the VM pool, what you do is take your VMI spec, put it into a VM config, and then set the run strategy to whatever you want, presumably always: you always want this thing running while it exists. And that's it; then you've pretty much got an enhanced VMI replica set.
B
It's one-to-many from the VM config standpoint, and one-to-one from the VM pool side. And that's something else to discuss: if you make a change to your virtual machine config...
B
...the assumption here is that, depending on how the update strategy was set, those changes roll out. Or let's say you map your VM pool to point to a different VM config. Then, depending on the update strategy, we would start either rolling out, or launching new instances opportunistically, or not doing anything if it's unmanaged. But that's another kind of operational pattern that you get with VM pools that doesn't exist with VMI replica sets today. Okay.
A
Yeah, that would be a good one to talk about. Let me see. Let me pause: are there any more questions? Do people have any other questions or any comments about the general idea?
A
Okay, we can just keep going then. Let's talk about the update strategy. Let me see if I can find it... update strategy. Okay, so we have unmanaged, opportunistic, proactive.
B
Yeah, exactly. So if you change your VM config, or point the VM pool to a different config, then that's where this comes into play, and we have three kinds of buckets here. We have unmanaged, which means that if you change your VM config, or something mutates there, existing virtual machines will never get touched. They're just always going to be what they were at the time of creation, and the user has the option to do whatever they want there.
B
They can manually update them, whatever. Opportunistic is where we only update the offline config of a virtual machine. The virtual machine object is considered the offline config; the VMI is the running instance of that virtual machine, the online config. So we would update the offline config, meaning the VM, and then, once that VM restarts someday, it would pick up the newly updated config. But we're not touching the live instance, so we're not doing anything proactively...
B
...that would actually destroy the running virtual machine instance in order to perform the update. So that's opportunistic: if somebody shuts down the virtual machine, we pick up the update. And the last bucket here is proactive, and that, I'm thinking, should be the default. The proactive solution is: when a VM config gets updated...
B
...we are going to begin rolling those updates out proactively to all virtual machines, meaning applying the changes to the VM and then restarting the VM in order to pick up those changes. And this is where we talked about the selection policy.
B
First... and the base policy would be something like random, or doing the oldest first. In addition to this proactive strategy, we also have, at the top here, this max unavailable, and that's what's going to throttle how we do the rolling updates and things like that. So we're not going to be able to perform a proactive update unless we can... or, I guess, the guarantee we're making is that we're not going to perform any sort of proactive action that involves more than max unavailable VMs going offline. And unavailable I'm defining as a running VMI that doesn't have the ready condition set to true.
A
Yeah, my question was going to be around the general idea of eviction: what would be our policy? I think max unavailable kind of answers it. Basically, this is pod disruption budgets: we're setting some sort of expectation here. I mean, you could even set this such that, for example, if I really care, anything that I consider running I could consider important...
A
...or I could actually get away with just doing opportunistic. That would give me the case: okay, after the VMI has been removed, we'll just wait till the new one comes in, but don't restart anything. I think that would be equivalent to 100% here, because we're not going to let anything restart, yeah.
A
Okay, yeah, that's pretty cool. The other thing that I thought of, because when I first read this, I thought where you were going with this was: you could have VMIs in different states. So you have it here, like the ready condition being false. One of the things maybe we could have here is we could select... well, actually, maybe this is what it is: whenever you had something that's unavailable, those are chosen...
A
...first, by some sort of policy; I think the ready condition might be it. So it would be: if I selected whatever, 25, then we're going to pick the ones that are not ready, and I can control that ready condition. That would give me control over, you know, when we have VMIs that are just not in a good state, they're shut down or whatever state they're in, those are chosen first, regardless of what the ordered policy is. That was... yeah.
B
If they're shut down, I think they'd be acted on immediately, regardless of this proactive thing. So somebody says proactive, and they have a bunch of VMs that are... what...
B
It's kind of the same. So halted means that you've declared the intent that you don't want your VM running. Shut down means that the VM is not running at the exact moment that you're looking at it, and that can be a transient state, meaning that it could be shut down and immediately getting restarted. So I have to be careful how I use the terminology.
B
Let's say halted: we're saying that a virtual machine has declared that we don't want it to run, and in that case, with the proactive strategy, I would expect all those VMs to immediately get the new change. There's just no reason that I can think of that this ordering or anything like that would matter for those VMs. So the selection ordering, or selection policy, the way I'm viewing it, is only something that applies to VMs that are in an active running state, and to how we perform the update for those, because that's destructive.
A
So then, about this active running state: we can have VMIs with a false ready condition. This could literally be a VMI that's in scheduled, right? Yeah, so this would be...
B
Let me think about that, because we have an active running virtual machine, but... no, I don't think it would be targeted any differently. It would...
B
The specific instance that you're talking about would be one of these unavailable virtual machines, so it would be throttling how quickly we can proactively update virtual machines. It wouldn't necessarily be selected first, or last; it would still fall within that selection policy, because it's actually a VMI that exists for a VM. So there's an active instance running there.
B
It counts toward the unavailable throttling; it's not... what do you mean by selected? You're saying when this would come into...
A
When we're doing the update. Yeah, we're doing the update, where, let's say, we set max unavailable to 25, and so we have to select some VMs that are going to be removed and updated. And where I'm going with this is, you know, the ready condition being false: what is this referring to?
B
So max unavailable is saying that we are not going to do any sort of proactive action if we find that number of VMs in this unavailable state. So if you have 100 virtual machines and 25 of them are unavailable, then, when it comes to performing the updates, we're saying that nothing can be selected to update, because we do not have enough available capacity within our pool to perform the update.
B
So we can't do a destructive action of updating running virtual machines, because 25 of them are offline right now, and we don't want to take any more offline. It's a protective measure to make sure that we don't take everything offline at once.
A
Right, so I guess where I was going with my question is the difference between... we have a VM, we have the VMI. We set the VM to running, our VMI starts coming up, it reaches scheduled, it's not ready yet, so it's still unavailable.
A
It's still unavailable, okay. So, because it's unavailable, it's adding to this number. Okay.
B
Okay, and if none of those are set, I think it just becomes ready as it hits running. So that's another...
A
Can we target this VMI before it gets to running, and have it be something we want to update? Because if it's not a running workload, let's just kill it first. I mean, I guess that would be newest, but...
B
Yeah, so you're talking about this one; it's kind of an edge case. You're saying that a VMI has been created, and the phase is somewhere in between scheduling and running. So it's not quite... the pod hasn't even... we haven't started the qemu process in the...
B
I don't know. I mean, I guess we could optimize for that. That could be one of the selection criteria here. I don't know what we would call it, but we could call it something like "provisioning first": the VM is stuck in this state of almost running, but not quite yet, and we know that it's not online yet, so we could select it first. I don't know what we would call it, but sure, that seems feasible.
B
I'm not terribly worried about that specific one, though, because it will all get sorted out in the end. It's just that that VM will be allowed to completely start and become ready before it would be acted on.
B
There are cases, for example, let's say we've created a VM pool and some VMs are stuck in this pending or scheduling state for a really long time. That can happen if there are resource constraints: we'll create the pod, and the pod will always be in scheduling until resources free up. In order to actually do something with those, yeah, we can target them.
A
Okay, so we have our rollout, we control it with our policies, and we can set how many we want to be available. Okay, yeah, that makes sense to me. And then maybe the other interesting one is this scale-in. Do you want to talk about this one?
B
Yeah. Using the same terminology for scaling as I did for the update strategy, we have the unmanaged, opportunistic, and proactive buckets, and with scale-in, unmanaged means that we're never going to delete your virtual machine, or the state, or anything like that.
B
Opportunistic means that we're only going to scale in VMs that are in a halted state; actually, I think you want to declare that the run strategy is halted, so we'll tear down the ones that you said you don't want running. And proactive again just means we're going to tear them down in whatever order the selection policy has. But in both opportunistic and proactive...
B
...we have something new, and this is the state preservation field. This allows us to preserve the state of the virtual machine during scale-in. What this means, essentially, is that we're going to orphan any PVCs that are associated with a virtual machine during scale-in if somebody sets the state preservation to offline, and I'll explain what that means in a second. What this allows is that, when we do a scale-out again, all that storage is already going to exist; it's already going to be provisioned for that exact instance. It's going to reduce provisioning time, because the storage already exists, and it'll boot up quicker.
B
So we're saying that we're going to optimize scale-out by preserving the state during scale-in. I have three options under state preservation. Disabled, I think, is going to be the default, where we don't do any of this preservation: once you scale in, you lose those VMs forever, and when you scale out you get fresh new ones. Offline (I'm calling it offline because this aligns with our snapshot terminology) means that we're going to preserve the offline state of the virtual machine
only. So the PVCs for the virtual machine will remain present within the cluster after the VM is deleted, and then during scale-out the same VM will get recreated and adopt those exact same PVCs. And then online. This is kind of a future-looking feature that we can't actually do right now, but I think it would be kind of neat: on scale-in, what we would do is actually suspend the virtual machine, saving both its PVCs and its memory state.
B
And then when we do scale-out, it would be like an instant boot, because you'd be taking the memory state from the previous virtual machine that held that instance count in the VM pool. When you scale in you save it off, and when you scale back out you use that exact same memory state, so you'd get a super fast boot time.
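A sketch of how the three state preservation options could sit under the scale-in strategy (illustrative names only; `Online` is the future-looking option that is not implementable yet):

```yaml
spec:
  scaleInStrategy:
    type: Proactive
    # Disabled - default; scale-in deletes the VM and its state, and
    #            scale-out creates fresh new VMs.
    # Offline  - PVCs are orphaned on scale-in and re-adopted by the
    #            recreated VM on scale-out, reducing provisioning time.
    # Online   - (future) suspend the VM on scale-in, preserving both
    #            PVCs and memory state for a near-instant boot later.
    statePreservation: Offline
```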
B
Yeah, so here's my thought behind this: you could do a pre-warming of your virtual machine pools. You could say you want a thousand virtual machines in this VM pool, and after they've all started you could scale it back down to, like, three. Then, as capacity is needed, you get near-instantaneous VM starts as you scale out, up to a certain point, because you've pre-warmed them all. So it could be kind of a neat optimization if we ever get there.
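The pre-warming flow described here could then be driven entirely through the replica count, for example (a sketch assuming the hypothetical `statePreservation: Online` option above):

```yaml
# 1. Pre-warm: provision and boot the full fleet once.
spec:
  replicas: 1000
  scaleInStrategy:
    statePreservation: Online   # future option; preserves memory + PVCs
# 2. Scale back down (e.g. patch spec.replicas to 3); state is saved off.
# 3. Later scale-outs resume the preserved instances almost instantly,
#    up to the pre-warmed count.
```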
A
So say I've got, you know, this proactive strategy, with an ordered policy, and I'll do a base policy of newest. So I go down, I scale in by five: now I'm just going to kill five VMs, the ones that are the newest based on their creation timestamp, right? Yeah.
A
Okay, and then I can even control it a little bit more if I have some label selector there. Like, okay, I've got one very important VM, and maybe it was created the newest, but I want to make sure it's kept. Or if you have an unimportant VM, whatever, and it's the oldest VM, so it contradicts with this one. We would select on the label selector first, or the selector, yeah.
B
Those
take
priority
first
and
then
there's
the
base
policy
and
no
base
policy
is
set.
Then
we
assume
that
you
just
want
your
ordered
policies
and
if,
for
example,
you
just
have
order
policies
set,
if
we
can't
match,
then
it
would
be
kind
of
like
a
mix
between
proactive
and
not
predictionistic,
because
we
can't
we
essentially
the
update
or
scaling,
would
block
if
we
couldn't
match
and
maybe
I'd
send
some
sort
of
event
or
something
like
that.
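Putting those two pieces together, the selection policy being discussed might be expressed along these lines (ordered label selectors take priority, with a base policy as the fallback; all field names are illustrative):

```yaml
spec:
  scaleInStrategy:
    type: Proactive
    selectionPolicy:
      # Evaluated first, in order: VMs matching these selectors are
      # scaled in before anything else is considered.
      orderedPolicies:
      - labelSelector:
          matchLabels:
            importance: low
      # Fallback when no ordered policy is set or matches, e.g. Newest
      # or Oldest by creation timestamp. If only ordered policies are
      # set and nothing matches, the operation blocks and an event
      # would be emitted.
      basePolicy: Newest
```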
B
Yeah,
I
could
do
that
real
briefly.
That
might
not
be
something
I
immediately
implement.
That
might
be
like
a
follow-on.
The
auto
healing
strategy
is
something
that
I
got
from
how
aws
manages
their
auto
scaling
groups
of
ec2
instances
and
what
happens
is
if
a
vmi
or
I'm
sorry
doesn't
make
any
terminology
that
makes
any
sense
in
aws
an
ec2
instance
is
failing
its
liveness
and
readiness
probes
over
and
over
and
over
again
and
the
oscar,
and
group's
just
going
to
delete
that
ec2
instance
and
reprovision
a
completely
fresh
new
one.
B
So this would be for somebody who wants to manage cattle specifically, and they want auto-recovery of, like, a corrupted VM: if the VM ever gets into a state where it's just not going to boot ever again, for whatever reason, then we have the ability to totally kill off that VM and all of its state, and automate the reprovisioning and recovery of that VM.
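As a sketch, the AWS-style auto-healing behavior might be declared like this (a hypothetical field; the refinements mentioned next, such as a failure count threshold and time between failures, are deliberately not modeled):

```yaml
spec:
  autohealing:
    # Delete and reprovision (including state) any VM that repeatedly
    # fails its liveness/readiness probes, mirroring how an AWS auto
    # scaling group replaces an unhealthy EC2 instance.
    type: ReplaceUnhealthy
```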
A
B
Yeah
exactly
it's
incomplete,
I
would
say
incomplete,
because
if
we
do
need
the
number
of
failures,
the
time
between
failures,
things
like
that,
I
need
to
I'm
going
to
make
a
note
of
that.
I
don't
think
that's
going
to
be
in
the
initial
implementation.
B
If
I'm
writing
this,
at
least
because
it's
kind
of
an
advanced.
A
B
A
Okay, we covered... they didn't scale, then... oh, detach. Oh, that's a good one. Detach is a neat use case: so we take something out of the pool, we essentially take it under our wing, you know, the admin's wing, like we just want to manage it ourselves now, and then it gets replaced with a new one. Yep, okay, yeah, that's cool, because then a lot of the use cases you can see are like, okay,
we need to do some sort of advanced debugging on this VM, and we don't plan on bringing it back. Let's just, we don't care, bring up a new one, and we'll analyze this one.
A
Okay, I think we covered all of the... I think that was literally all the sections. Oh, naming. Oh, that's another good one, naming. So the names... well, do you want to talk a little bit about naming and how it would work? Yeah.
B
Jump
back
up
to
the
top
with
the
config
options
that
I
have
no
a
little
bit
down.
No,
no,
it's
in
the
actual
api
example
or
not
is
it
there?
I
have
name
generation.
A
B
This
is
how
vm
objects
and
other
objects
within
the
pool
get
named,
and
here
you
can
set
a
custom
vm
prefix
if
you
want.
So
this
is
going
to
be
a
string.
That's
going
to
just
be
the
prefix
of
all
vms.
By
default.
It's
just
going
to
be
the
pool
name,
there's
the
prefix
and
maybe
that's
fine.
Maybe
I
don't
even
need
to
add
vm
prefix
to
this.
I
just
thought.
maybe it makes sense, since people might not want the VM pool name to be the prefix. But the thing that actually matters here is the postfix, and the postfix is always going to be an integer, and it's going to be consistent. So it's just going to be like -1, -2, -3, -4.
B
What this allows us to do is, for people who want to pre-generate either secrets or config map references: let's say somebody wants a unique cloud-init secret for every VM in their pool. They can pre-generate all of these secrets with the postfix on the end, like -1, -2, -3, and then the VMs that get created will automatically pick up those secrets, because we will append the VM's postfix to the secret references.
B
So
somebody
says
they
want
their
secret
to
be.
I
don't
know
my
cloud
and
that
secret
on
the
vm
config
object
if
they
set
this
append
postfix
to
secret
references
option
and
the
generate
the
name
generation
field
here
when
we
actually
create
a
vm
from
that
vm
config,
we
are
going
to
see
hey,
there's
a
vm
secret
ref.
Here
they
have
this
boolean
set
and
we're
going
to
post
we're
going
to
add,
append
the
postfix
of
the
vm
to
the
secret
reference
to
pick
up
the
unique
secret
for
this
vm
instance.
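A sketch of the name generation mechanics being described, using field names from this discussion (illustrative, not a settled API): the pool appends its integer postfix to secret and config map references, so each VM picks up its own pre-generated objects.

```yaml
spec:
  nameGeneration:
    appendPostfixToSecretReferences: true
    appendPostfixToConfigMapReferences: true
  virtualMachineTemplate:
    spec:
      template:
        spec:
          volumes:
          - name: cloudinit
            cloudInitNoCloud:
              secretRef:
                name: my-cloud   # VM my-pool-2 would resolve this to my-cloud-2
```

So for a pool named my-pool with three replicas, an operator would pre-create secrets my-cloud-1, my-cloud-2, and my-cloud-3, and each VM adopts the one matching its postfix.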
A
Yeah, like, so the idea was that we have a lot of secrets out there, you know, whatever, like some keys, they're all unique, and we just want to create a bunch of VMs, and we want all those VMs to be able to get their own secrets, you know, where we can choose that.
A
We
control
the
naming
of
the
secrets,
so
we
could
and
we
control
the
naming
of
the
pool
vms,
so
we
could
line
them
up
so
that
the
vm
pool
always
selects
the
the
a
unique
secret
every
time
when
it
when
it
when
it
goes
to
find
secrets
and
same
with
config
maps,
yep
yeah,
yeah,
cool,
okay,
yeah
and
then
yeah,
so
the
name
generation
is
predictable.
So
we
do
like
you
mentioned.
I
think
you
have
the
like
the
pool
one,
whatever
two
three
and
then
and
the
idea.
B
Exactly
the
only
kind
of
caveat
with
all
this
is,
if
you
detach
my
pool
to
here
then,
because
that
that
vm
technically
still
exists,
it's
just
not
part
of
the
vm
pool
anymore.
We
skip
it,
so
we
skip
it
and
we
would
create.
Like
my
pull
four,
because
we'd
want
three
replicas,
we
see
that
one
of
them
can't
let
the
gap
there
can't
be
created
because
it
already
exists
and
we
would
have
to
skip
it.
I
think
that's
the
only
way
it
would
be
handled
so
we
could
get
a
little.
B
There
there's
not
again,
I
guess
what
I'm
trying
to
say
is:
there's
not
a
guarantee
that
the
order
is
always
going
to
exist
and
same
thing
with
scale
in
like
scale
in
we
might
select
randomly
items
within
the
pool
beams
within
the
pool.
So
you'd
have
like
lots
of
gaps,
for
example,
and
the
integer,
and
then
once
you
scale
out
it's
going
to
begin
filling
those
in
as
it
can.
B
So
I
make
sure
that
there's
no
expectation
that
that
the
sequence
exists
in
order
it's
more
of
a
this
is
a
predictable
naming
scheme.
Not
a
has
anything
to
do
with
scaling
or
scale
out,
or
even
the
replica
count
won't
be
represented
by
this
post
fix.
A
Yep,
that
makes
sense
yep
I
mean
you
could
always
control
it
if
you
needed
to,
if
you
really
really
wanted
to
you,
couldn't
make
it
essentially
that
way
that
you
know
you
can
create,
you
know
with
the
labels
or
whatever.
If
you
really
want
to
go
out
of
your
way,
but
the
idea
is
like
you
said
we
just
want
to
know.
We
won't
have
a
way
to
name
these
things
so
that
we
can
associate
objects
with
them.
A
Yeah,
okay,
cool
all
right!
Well,
we're
over
time
I
mean
and
final
questions
or
thoughts
from
anyone.
Otherwise
we'll
we'll
call
it
here.
B
A
Yeah, no, this is good. I think this is one of those things where, in terms of this group, there's a lot here: we talked about some of the performance aspects, like you mentioned with the pre-warming, and I think there's even, somewhere in here, I don't know if it's a "run these in parallel", I forgot; I think there is, I can check for it. But the idea is that we can have something that
if we have a bunch of VMs that kind of fit this use case, we can use to create them, and we can also use it as an API to create things in a performant way. Yeah, cool, okay, all right. Thank you, everybody. All right.