From YouTube: 20200610 Cluster API Office Hours
A
Hello everyone, and welcome to the June 10th Cluster API office hours meeting. First, as always: Cluster API is a sub-project of SIG Cluster Lifecycle. We have a meeting etiquette: if you want to say something, you can add items to the agenda, or you can raise your hand; you can find the raise-hand feature under the participant list.
A
All right, let's go to the PSAs. I only have one PSA today: a few folks asked about 0.3.7. First of all, I apologize if this wasn't clear; we usually don't have a release cadence set, and for the past month or so we have had a lot of incoming PRs to review. Some of them have been partial, and they have kind of blocked a release, because once we release they become API, and then we can't change those public APIs, which covers both the code and, in some cases, the CRDs.
A
Cecile brought up a good point, I think yesterday, which is that going forward we should make sure that master is always in a releasable state, and I do agree with that. We are also lacking reviewers, and we need to make sure that, if you're interested, you step up. If you're only interested in reviewing certain things, we can also add approvers in specific folders; we have different OWNERS files in different folders. So that's where we are coming from for 0.3.7.
D
Ah, my desktop has just died on me, right, just give me a moment. Is this... so, right. First of all, at the vSphere meeting this week, one of the issues that came up is that a lot of people don't want their vCenter credentials to be inside the workload cluster. Now, given that we have a management cluster which does have credentials to vCenter, we're thinking about the possibility of running the cloud provider integration on the management cluster. So, two questions, because I'm not sure how feasible this is: first, I'd be interested in knowing how other providers are actually handling the deployment of the external CPI, because it's needed to get the node references; and secondly, are there any other providers who would have a similar requirement around running the cloud controller manager on the management cluster instead of the workload cluster?
E
So, for Azure, what we're doing right now is we're just adding it as an add-on after the cluster is up, so it's not very long-term; we're just applying the YAMLs after the cluster has been provisioned. I know vSphere was doing something different, provisioning the cloud controller manager as part of the cluster with its own CRDs. We were eyeing the ClusterResourceSet proposal and seeing if that could perhaps be used to deploy the cloud node manager and cloud controller manager automatically.
F
Well, it's actually related to CAPZ, so I'm curious to know how you're handling the fact that Cluster API is relying on the provider ID. Because at some point, if you're applying the CPI after the cluster is up and running, then you're in a state where you have the kube-controller-manager running the service controller, the route controller and all of the cloud provider controllers, and also the CPI running those same controllers. So are you, like, reconfiguring the control plane to disable the in-tree cloud provider Azure and then move to external?
E
Yeah, so my memory is a bit fuzzy right now, so I might have to come back to you, but I think we changed the kubeadm configuration, or the cluster configuration, sorry, to set the cloud provider to external, and then the cluster doesn't actually become fully ready until it has the cloud provider deployed. But as soon as the first control plane is up, we're able to apply the YAML for the external cloud provider, and then that completes the cluster.
E
Yeah, I just want to say plus one to doing it generically; I think it's something everyone's going to run into regardless of infrastructure. And in terms of timeline, from my side it's not urgent-urgent. We should start thinking about it, but maybe it would be a good candidate for 0.4, just because the external cloud provider isn't going to be GA, and I think isn't going to be the default, at least in Azure, until around Kubernetes 1.20, so we have a bit of time.
A
We also have the kubeadm v1beta1 to v1beta2 migration at the same time, yeah, so we can do it all together and see how that goes. So I guess the action item here would be to get just a few words on the roadmap document that we have for 0.4, and then when we get into planning for 0.3 or 0.4 we can go back and assign something. Cool. Any other questions on CPI on the management cluster? Once, twice.
C
Obviously, and this was kind of a known issue with the autoscaler implementation at the time, you needed the CAPI components to be running inside the same cluster that the autoscaler was in. Jason brought this up, I think a week or two ago, and we started talking about it, and there was actually a really simple solution that allowed this to work, and I'll try to just quickly demonstrate it here.
C
Normally we would start the autoscaler and we would pass it a single kubeconfig, but it turns out the autoscaler has another flag, the cloud config, which can be used to pass in a reference to a file that contains the configuration that the cloud provider would use. So in this case, what Jason has added is a patch that allows us to pass in another kubeconfig, which is the cluster the autoscaler would talk to for the CAPI components, and then the normal kubeconfig is used to monitor the pod workload inside the cluster.
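A minimal client-go sketch of that two-kubeconfig arrangement, just to illustrate the shape of it: one client built from the management cluster's kubeconfig reads the Cluster API objects, and a second one built from the workload cluster's kubeconfig watches the pods. The file names, namespace and v1alpha3 API version here are illustrative assumptions, not the autoscaler's actual code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Management cluster: where the Cluster API objects (MachineDeployments,
	// MachineSets, Machines) live.
	mgmtCfg, err := clientcmd.BuildConfigFromFlags("", "mgmt.kubeconfig")
	if err != nil {
		panic(err)
	}
	capiClient, err := dynamic.NewForConfig(mgmtCfg)
	if err != nil {
		panic(err)
	}

	// Workload cluster: where the pods and nodes being autoscaled actually run.
	workloadCfg, err := clientcmd.BuildConfigFromFlags("", "workload.kubeconfig")
	if err != nil {
		panic(err)
	}
	podClient, err := kubernetes.NewForConfig(workloadCfg)
	if err != nil {
		panic(err)
	}

	// One client lists the CAPI scalable resources...
	mdGVR := schema.GroupVersionResource{Group: "cluster.x-k8s.io", Version: "v1alpha3", Resource: "machinedeployments"}
	mds, err := capiClient.Resource(mdGVR).Namespace("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// ...the other watches the workload that drives the scaling decisions.
	pods, err := podClient.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("machinedeployments=%d pods=%d\n", len(mds.Items), len(pods.Items))
}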
C
So what I'm going to do here is have it look at the Cluster API cluster and then operate on the workload cluster. So right now, if I get the machines, you can see that I'm looking at the Cluster API cluster, and it sees these two machines that are in there.
C
So let me just copy this. I've got two different kubeconfigs: this workload cluster kubeconfig is the kubeconfig for the target that I would like to monitor. I'm going to grab this, and then I'll add the cloud config here, and that's going to be the normal config, the config I've been using for the Cluster API cluster. So hopefully this will work. Okay. So at this point the autoscaler is running; it doesn't see anything, because none of those machine deployments are set up...
C
...set up for scaling, that is. So it's watching pods in the workload cluster and watching artifacts in the management cluster. So at this point, what I need to do is adjust the MachineDeployment to have the proper annotations. I just need to grab those again quickly here.
C
Okay, so what I'll do here is just quickly add these annotations, and I'll give it a size, you know, a min size of one and a max size of three or something, and we'll just watch it scale out quickly. All the activity pretty much works the same; I'm just copying these in so I don't mistype them.
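For reference, the annotation step being demonstrated could be done programmatically with something like the following dynamic-client sketch; the MachineDeployment name, namespace and kubeconfig path are made up, and the min/max annotation keys should be checked against the autoscaler's clusterapi provider documentation rather than taken as authoritative.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Talk to the management cluster, where the MachineDeployment lives.
	cfg, err := clientcmd.BuildConfigFromFlags("", "mgmt.kubeconfig")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	gvr := schema.GroupVersionResource{Group: "cluster.x-k8s.io", Version: "v1alpha3", Resource: "machinedeployments"}
	md, err := dyn.Resource(gvr).Namespace("default").Get(context.TODO(), "workload-md-0", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Mark the MachineDeployment as a node group the autoscaler may scale
	// between 1 and 3 replicas (annotation keys assumed, verify per version).
	ann := md.GetAnnotations()
	if ann == nil {
		ann = map[string]string{}
	}
	ann["cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size"] = "1"
	ann["cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size"] = "3"
	md.SetAnnotations(ann)

	if _, err := dyn.Resource(gvr).Namespace("default").Update(context.TODO(), md, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("annotated", md.GetName())
}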
C
All right, now we should start to see the autoscaler picking these things up, so now we know that it's watching that cluster for those artifacts. So what I want to do at this point is deploy a workload, and I'm going to use the same workload that I used before in the previous demo; this is just a dummy task that runs busybox and sleeps.
C
Okay, we should see there's a bunch of pods pending now, and then what we should see very shortly here, and I'll try to stop this once it picks it up... Okay, so we can see on this line here that the autoscaler has decided to give this thing three replicas at this point, so it's attempting to scale out to the max that it can, and if I go back here...
C
Anyway, if I go back to the management cluster, you can see that it actually is creating those machines, and if I look at the MachineDeployments, it's asking for three replicas. It will take a minute or two for this to scale out, and likewise, scaling down works the same way once I go to delete that workload.
C
Sometimes the scaling down can take a minute to come back, especially since it's still bringing machines up at this point. What we should see on this side is a scale-down at some point, and I'll see if I can stop the logs to catch it, but it might take a little longer. Okay, so there we go: you can see it's already caught the scale-down, and it's trying to bring us back to one replica at this point.
C
So we've done basically the same thing we've done before; you can see it scaled back down. There's also a documentation update: that PR is in motion right now, so I imagine it will be merged sometime this week, and hopefully this will be out in the 1.19 release of the autoscaler, but I'm not quite sure how the release works on that side. So, yeah, that's just about it.
C
That's correct: you would deploy the autoscaler into your workload cluster and then give it the kubeconfig of your management cluster, so yeah, you would have an autoscaler for each workload cluster. We talked about reversing it, you know, having many autoscalers running in the management cluster, each talking to their own workload cluster; it's actually possible using this arrangement now, but I'm not sure that's the best approach.
C
As it stands now, what we'd like to do is just enable users to use it as they see fit. So, even if the default assumption from the CAPI side is that you have one management cluster and several workload clusters, we want to enable people to run the autoscaler the way they want to. I guess it might seem a little odd to be injecting those configuration credentials from the, you know...
C
...from the management cluster into the workload cluster. I haven't tested this, but I'm fairly certain you could run the autoscaler in your management cluster if you wanted to; you'd just give it the kubeconfig of the workload cluster to look at for the pod scaling. That would also work if you didn't want to mix those credentials. But there's still the question of which credentials to use, or whether to use something like a service account.
D
Yeah, in some ways it's a bit similar to running the cloud controller manager outside, on the management cluster. I'd say it's almost like we need a generic mechanism to be able to run workloads on the management cluster but with the kubeconfig of the workload cluster, and then we can consume it in multiple ways.
B
One of the benefits is that, well, right now we have one kubeconfig that we expose, but because we have that, we can leverage it as part of any deployment. If we're deploying the cluster autoscaler or a CCM on the management cluster, we can basically mount that secret into an expected path, pick it up and run with it.
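A sketch of that mounting/reading pattern: Cluster API stores a workload cluster's kubeconfig in a Secret in the management cluster (conventionally named "<cluster-name>-kubeconfig" with the contents under the "value" key; treat that naming as an assumption to verify for your version), so a component running in the management cluster can read it and build a client for the workload cluster. Names below are illustrative.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Client against the management cluster.
	mgmtCfg, err := clientcmd.BuildConfigFromFlags("", "mgmt.kubeconfig")
	if err != nil {
		panic(err)
	}
	mgmt, err := kubernetes.NewForConfig(mgmtCfg)
	if err != nil {
		panic(err)
	}

	// Read the kubeconfig Secret generated for the workload cluster.
	sec, err := mgmt.CoreV1().Secrets("default").Get(context.TODO(), "my-cluster-kubeconfig", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Build a client for the workload cluster from that kubeconfig.
	workloadCfg, err := clientcmd.RESTConfigFromKubeConfig(sec.Data["value"])
	if err != nil {
		panic(err)
	}
	workload, err := kubernetes.NewForConfig(workloadCfg)
	if err != nil {
		panic(err)
	}

	nodes, err := workload.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("workload cluster nodes:", len(nodes.Items))
}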
B
We didn't want to break users who are using the single kubeconfig flag for a self-hosted cluster with the next release of the cluster autoscaler, but at some point I think it would be nice to break that backward compatibility, and at that point either flag would be able to fall back to the in-cluster config. As it stands right now, only the kubeconfig flag will fall back, and the cloud config flag will fall back to the value of the kubeconfig flag.
A
All right. Michael, is your second topic the same doc we were just discussing?
C
This was kind of a crazy idea I put forward just because I thought it was kind of amusing, but it also kind of opens up this whole notion of, you know, federated CAPI. The idea here would be: if we had a CAPI provider that was able to talk to another CAPI, then a lot of these issues could just go away, right? Because, in the autoscaler world, from a workload cluster...
C
...we could theoretically use this CAPI-provider provider to synchronize back to the management cluster. Now, there's a whole load of baggage that would come with this, but I just wanted to open the discussion up to see if there was any history in the project around this notion of workload clusters and how they could talk to a management cluster, and, you know, maybe just get the discussion going.
C
Sorry, my client cut out... anyway, what I was saying was: I think if we can reuse some of the abstractions and building blocks that we've already put forward, it'll make it easier to consume for other projects that might want to keep this kind of thing, or might want to use Cluster API, you know, like the way the autoscaler does.
J
Can I mention one thing, which is that originally, before we had CRDs, we had API server aggregation, which did allow for the same objects to appear in both the management cluster and the user cluster. Certainly, when we're talking about baggage, it came with a lot, a lot of baggage, so I agree, but just in the interest of the full picture: we did sort of originally have that, and we did sort of move away from it.
C
Yeah, this history is good to know; this definitely helps me out, because when I was thinking about this I had no idea whether it had come up before. So it makes sense, what you're saying about the aggregation, and yeah, I know it's an extremely difficult problem to solve.
D
Yes, so it's partially a PSA that there is an inadvertent breaking change in 0.3.3 from 0.3.2. We use hashes to determine whether a change has been made to KCP and whether the machines that make up the control plane need to be upgraded. Now, it turns out that in 0.3.3, when someone (me) added a field to change the kubeadm retry behavior, this inadvertently changed the hash of your KCP, essentially, so that if you do an upgrade from 0.3.2 to...
D
...0.3.3, you will see all your control plane instances replaced. This is because the hashing algorithm uses spew underneath, so even a nil pointer field gets printed out. The knock-on effect is that even additive changes to that API are essentially breaking, in the sense that they would cause your control plane machines to be replaced. I just wanted to draw people's attention to that issue, because it seems like something we want to fix, but given that the hashing function is used in quite a few places, it has quite a broad impact.
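A minimal sketch of why this happens, using illustrative struct names rather than the real KubeadmControlPlane types: the hash is computed over spew's dump of the whole object, so adding a field to the type changes the dump, and therefore the hash, even when the new field is nil.

package main

import (
	"fmt"
	"hash/fnv"

	"github.com/davecgh/go-spew/spew"
)

// Old API shape.
type specV1 struct {
	Replicas int32
}

// Same user intent, but the type gained a new optional field (left nil).
type specV2 struct {
	Replicas     int32
	RetryTimeout *int32
}

// hashOf mimics the common Kubernetes pattern of hashing spew's verbose dump.
func hashOf(obj interface{}) uint32 {
	h := fnv.New32a()
	printer := spew.ConfigState{Indent: " ", SortKeys: true, DisableMethods: true, SpewKeys: true}
	printer.Fprintf(h, "%#v", obj)
	return h.Sum32()
}

func main() {
	fmt.Println(hashOf(specV1{Replicas: 3}))
	fmt.Println(hashOf(specV2{Replicas: 3})) // different hash, same intent
}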
K
One way to fix this is by doing a logic-aware comparison of the templates. It's a little trickier in our case, because the user intent is not clearly separated in a kubeadm config: the way the controller works is that it adds things to the spec when it needs to populate stuff, so two specs will be different even if they came from the same place. But I think that's the direction I would push us towards: leaning more towards those semantic comparisons and less on unreliable hashing.
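A sketch of what such a semantic comparison could look like, with made-up types and field names: clear out the fields the controller populates, then compare only the remaining user intent with apimachinery's semantic DeepEqual instead of hashing.

package main

import (
	"fmt"

	apiequality "k8s.io/apimachinery/pkg/api/equality"
)

// Illustrative stand-in for a bootstrap/config spec.
type configSpec struct {
	ClusterConfiguration map[string]string
	// Fields the controller fills in later; they should never trigger a rollout.
	ControllerManaged map[string]string
}

// userIntent strips controller-populated fields before comparison.
func userIntent(s configSpec) configSpec {
	s.ControllerManaged = nil
	return s
}

// needsRollout reports whether the user-visible parts of the spec differ.
func needsRollout(current, desired configSpec) bool {
	return !apiequality.Semantic.DeepEqual(userIntent(current), userIntent(desired))
}

func main() {
	current := configSpec{
		ClusterConfiguration: map[string]string{"clusterName": "test"},
		ControllerManaged:    map[string]string{"certificatesDir": "/etc/kubernetes/pki"},
	}
	desired := configSpec{
		ClusterConfiguration: map[string]string{"clusterName": "test"},
	}
	fmt.Println(needsRollout(current, desired)) // false: only controller-managed fields differ
}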
M
Yeah, I was just going to add that basically what we saw when we were investigating this was that, with any version change starting at 0.3.2, you get new hashes, so if you upgrade or downgrade you're getting new machines every time. I just wanted to call that out, if it hasn't been already: it's all new, otherwise identical machines with a different hash, and I think that's pretty not good, I guess, is the way I would describe it.
A
Yeah, so I think we should explore that suggestion of doing semantic equality here. I've seen this work quite well, for example in the case of adoption, and we should definitely get away from this hashing, which uses spew under the hood. And yeah, maybe we should revisit the same for MachineDeployments as well, although those might be less affected. Justin?
J
Yeah, I just want to mention how kops does this, which is: we track, I guess, the inputs to the machine, so in this case it would be the kubeadm config file. To Michael's point, if you can identify the inputs that are going in, you are effectively done, and you don't have to say "oh, this affects the control plane" or "this node of the control plane but not that node", because you don't have to. We do it at the lowest level, the one that actually interfaces with the machine.
A
I think the main problem with that is: how do we detect changes there? Like, for example, a defaulting webhook, or something that maybe the bootstrap provider or the machine controller has added. We need to think about these things; I guess I'd have to go back and look at the code to have a clear answer, but yeah, I would definitely explore something different than spew for sure.
K
I think the more important reason, to me, would be that not every generation change, and certainly not every resource version change, should replace machines. There are lots of things that trigger generation changes that are not changes to the template and that would not necessitate a replacement.
A
All right, why don't we see if we can get a group together, like, later, and just chat or Zoom, maybe later today or tomorrow, to make sure that we are on top of this.
N
You know, as I said in this group before, we started to write and track a bunch of issues in the repository itself about what we think has to be done before we move: all the release automation, the OWNERS files, and so on. Last time you gave me a bunch of material to read about the actual move, so I created issues from that, and yeah, that's basically it. Finally, we had a discussion internally about whether to propose it on the mailing list or to make a formal proposal, so I'm hoping for any guidance.
N
Yeah, I mean, as I said, we are not in a rush in any case. We are trying to make our community aware of what we are doing and what is already running, so whatever it takes to trigger the discussion, we are happy to be part of it.
J
I'm just going to say, I don't think this would count as a new sub-project, so I imagine this is just a sort of rubber-stamp type thing, if any rubber stamp is needed at all. But, like I said, I assume it's a repo, not a sub-project, although I don't actually know what we've classified the others as.
J
Okay, and in that case, yes, I imagine we would have to do a process, but it would be a straightforward process. The thing which would probably help most of all is if there was any evidence of resources to support testing. I don't know whether Packet has any tests they fire off against, what's that thing called, Testgrid; that would probably help, but I don't think even that is actually going to be required.
C
Yeah, so, you know, the autoscaler supports scaling to zero, I think, for most of the providers that are there, and recently we've seen some increased activity around this kind of question: can we also add scaling to and from zero for the CAPI provider? I'm sorry, I forget this user's name, the person I was talking with on this issue, but they're saying that if they can get this, then they'll be able to adopt CAPI.
C
So I wanted to bring it up here, because on OpenShift we've implemented this behavior, so I have a proof of concept, but it requires touching each of the CAPI providers. The real part of the issue here is that, in order for this to work in the autoscaler, there are a few pieces of information we have to expose from each provider, and the way we've done...
C
...that is by using annotations to expose information about the memory, CPU and GPU capacity of machines. We put those on the MachineSets and MachineDeployments, and we do that so that when the autoscaler is at zero, it knows what type of machine it would create afterwards, or it knows the quantities it's looking for. Another thing we've had to do as well...
C
...is we've had to add a field to MachineSets and MachineDeployments to carry taints forward, so if there are taints that you want to associate with those machines, we have those taints carried through. I think in CAPI right now they may be on the KubeadmConfig object or something, but we've done something where we've carried those through on the MachineSets and MachineDeployments, so that when you go to zero and come back up...
C
...you can have machines that start with the specific taints that you want them to come up with. So, in order to bring this change upstream, it's going to require a bunch of modifications to the individual providers; aside from that, it's not too big a change on the autoscaler side. But I wanted to start the conversation here, because now we have users asking about this, and I think we, as a group, are probably going to have to decide how we want...
C
...how we want to approach this. The ways that we've decided to do it are kind of downstream; if I start bringing those up and proposing them, this would obviously be an additional requirement that providers would have to conform to if they want to be able to have this scaling from or to zero.
C
So right now, the way our machine sets work, they're pretty generic. The entries that we have for GPU, CPU and memory are generic across all providers, so we have one set of annotation labels that all the providers can use, and then, when a MachineSet gets created, those annotations have to be added back by the specific provider, so that the autoscaler knows what to do. Does that answer the question?
C
So I think I have to remind myself a little bit about how this works. It's not creating the specific machines; it's scaling the replicas up. But I think there's a point at which it needs to inject information about the memory size and the CPU size, and I'd have to double-check, this is a good question, where exactly it does that scaling to create the new machines.
L
When the autoscaler needs to figure out how much CPU or whatever a machine from this particular MachineSet can provide, we add some annotations to the MachineSet. So you can either add those annotations yourself when you create it, or, on the platforms that we support today, we basically have a little helper controller that runs per cloud provider; so when you deploy the AWS provider, you get this little helper controller.
L
It basically looks at MachineSets and is cloud-aware, so it says: oh, this MachineSet has a template, and this template says it's using, on AWS, say, an m5.xlarge, and I have a little lookup table that tells me the attributes of that instance type, and then I put that onto the MachineSet. So when the autoscaler needs to figure out how big a particular machine is going to be, it looks at those annotations; if those annotations aren't present, then it needs to have at least one machine or node.
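A hypothetical sketch of that helper-controller idea: a static per-cloud lookup table maps an instance type to its capacity, and the controller stamps that onto the MachineSet as annotations so the autoscaler can plan a scale-up from zero. The table contents and annotation keys below are illustrative assumptions, not the upstream names.

package main

import "fmt"

// capacity describes what one machine of a given instance type provides.
type capacity struct {
	CPU    string
	Memory string
}

// Hypothetical subset of the per-cloud lookup table the helper controller carries.
var awsInstanceTypes = map[string]capacity{
	"m5.xlarge": {CPU: "4", Memory: "16384Mi"},
}

// annotationsFor returns the annotations to stamp onto a MachineSet whose
// template uses the given instance type (keys are assumed, not authoritative).
func annotationsFor(instanceType string) map[string]string {
	c, ok := awsInstanceTypes[instanceType]
	if !ok {
		return nil
	}
	return map[string]string{
		"capacity.cluster-autoscaler.kubernetes.io/cpu":    c.CPU,
		"capacity.cluster-autoscaler.kubernetes.io/memory": c.Memory,
	}
}

func main() {
	fmt.Println(annotationsFor("m5.xlarge"))
}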
C
Yeah, thank you, Mike, that really filled in the failing part of my memory there. The important part about this is that the autoscaler also needs to know this information about what the node can provide, so that it can do its scheduling to predict what it needs. So when it's down at zero, normally, like Michael was saying, that information would come back through the machine.
L
And the upstream cloud providers for the autoscaler, like AWS or GCP, whatever the autoscaler supports natively, before the Cluster API stuff was added to it, they basically pre-compute this list and build it into the cluster-autoscaler binary. So when they need to scale up on AWS, it says: well, this template is using this instance size, and I have stored in my code already that this instance size provides this much. So we're doing something a little bit more dynamic.
L
They are, but we're actually using the same source of truth as they are; we're basically scraping those bits, the instance-size data, from the autoscaler and baking them into our little helper controllers, and that's basically how it works today. Because if you go off into the weeds of trying to dynamically query billing APIs, it's a disaster.
G
Richard, just out of curiosity: if I got it right, the autoscaler is capable of extracting that information from running machines, so is it possible for the autoscaler to annotate the MachineSet before scaling down to zero, so you don't have to change all the providers?
C
Yeah, I think the only part I'd be confused about is: if you created a MachineSet that started with zero replicas to begin with and then wanted to scale it up, CAPI can't know... like, could you mine that information after you had scaled the first one up?
L
Once you have a node, that's what the autoscaler does today: it looks at the node, and the node reports how much CPU and RAM it has, and I'm assuming also GPU, but I don't know that much about that area personally. So before we added these annotations, that's what we were doing in the Cluster API part of the autoscaler.