From YouTube: Kubernetes SIG Federation 20170515
C: So, okay, to fix it, the probably simple fix would be to do it in the controllers, but we discussed it and found something similar in how a service object is created. In the API server itself, a few things are taken care of, like the cluster IP allocator or the NodePort allocator; we don't have anything similar in Federation right now. So probably we should be fixing those as well, apart from the actual issue.
C: Okay, so right now our CI works like this: we do a daily deploy and do parallel testing for each PR, so there is a slight lag in detecting if anything incompatible is merged into the Kubernetes clusters. We detect it a little late, and that should be okay, maybe, but if you make the pre-submit test blocking, it could be a problem.
E: If we solve that — if we move ahead with actually detecting when fields are not supported in the clusters versus the Federation control plane — and we choose something, we can overcome it; I mean, we'll have to fix it either way. It'd be good if we had an indication there was breakage, but at least the tests wouldn't treat it as breakage: if there was skew, the tests wouldn't fail immediately. You'd have a warning rather than an error. Yeah.
A: I'm not sure if you'd be able to get it in this cycle, but I guess that's more a question of how we prioritize it. So I don't know if anyone is willing to take it up. Yes, we did discuss some potential solutions and we have some idea of how to fix it, but I don't know if anyone has signed up for it.
F: Would we be interested in a potential band-aid for now as well? It might be in the reconcile process: if it just handled the new object in a way that would still validate the reconciliation, and not keep trying to reconcile it over and over again, as a temporary fix until we get the bigger thing done.

I guess I'm curious what you mean by the temporary fix.
C: The controllers actually already continuously reconcile, right. The problem is that the tests also try to do the same thing: the tests have an expected version and a desired version — the got and want versions — and they don't match, because there is a skew now, because the expected version doesn't have this new field and the version that's actually obtained has the field. We could maybe fix the tests by saying: okay, this is the expected behavior for now. I don't know.
E: For some reason I just blanked on the fact that the clusters would be deployed at a version that didn't include the PR that just merged — but in this case you're saying the change was reflected in the deployed clusters, which puzzles me. Is there something that is redeployed on each CI run?
A: Okay, and regarding the rotation — yeah, I guess we discussed it last time and I think we have volunteers; it's just on me to start it. I was trying to find a tool, but in the meantime I can maybe send an email saying the first week I'm taking it, next week Madhu takes it, something like that, until we have a tool. And I wanted a GitHub handle to be routed automatically to whoever is the Federation on-call, so that it automatically pages.
C: Nikhil and Madhu, I have a few other issues; one of them, which has been blocked for a while, is updating etcd. I don't see any tasks related to that, but what do you think — should we be able to upgrade etcd in this cycle? We may not go to v3, yes. Technically we should have done that last cycle, so we are already overdue. I don't know who has time to take that up, though — do you have any volunteers? Yeah, I can do that, yeah.
C: It was more like an automation test framework, but it doesn't outline what the exact procedure is. What it's doing now is automating the same work a tester would do, actually, to find issues. Right now we are finding a few issues, like we cannot upgrade the federation control plane before the federated clusters — so things like that: the permutations and combinations of all sets of upgrades.
E: We have tests merged, but I don't know that they're running, and I mean it's fairly simplistic — like, create an object, check that it's propagated, upgrade, make sure it's still propagated. I'm not entirely sure that that's a good validation of upgrade; it's a good starting point, but we haven't really spent a lot of time on that, and I guess maybe it's just not a huge concern for 1.7. It's certainly something I'd want before I would use this in production.
A: I think the plan was to get started with those simple operations, like you said, and add more complex operations over time. So yeah, I would agree that we really need better upgrade testing. And on CI — can you quickly explain what the status is? I didn't understand; you said they're not running in our CI, so yeah.
C: So actually the test framework exists, and I think the test is already there in the e2e test cases. Okay, so all we need to do is add the CI job for the upgrade — that is not yet done, but the test framework itself is done. Maybe say the framework part is done; and there is one PR submitted by Maru related to the sync test cases as well.
D: As of now, actually, no. But maybe some additional use cases, which might be important or necessary and which people think need to be included here, can be mentioned, because this obviously is not an exhaustive list of cases — just some priority cases for the time being.
D: So currently how I have thought of solving this is using preferences: while creating the HPA, the user can specify preferences that say cluster X should have all the workload and cluster Y should not have any. In that case, in the cluster which is not preferred, I wouldn't create the HPA object at all. But, as you mentioned, this is a fine point, because the underlying Kubernetes cluster does not accept an HPA object with a min lesser than 1.
D: So that is how I am trying, or thinking, of solving it. If, in a particular cluster, we do not need the autoscaling to happen, then we just remove the autoscaler — the HPA on that cluster; the object itself should not exist. That's the only solution I have right now; there could be some better solution for this that we can think of, or I can take suggestions, yeah.
B: That's the sentence that was bugging me; the other one is — and this is in general — I think in the last few sets of discussions we've had about modifications to the deployment controller and replica set controller, we're moving toward doing watches on the underlying objects themselves, as opposed to watching all the pods, and that makes sense for many reasons. No, no, no.
B: I mean, one of the main tenets of Federation is that it allows you to scale more than what you can scale within a single cluster, so in general I'd move away from the principle where we do watches on all the underlying pods if we can instead move the watch to the object that creates them.
D: No, actually, what the suggestion and the design say is that those controllers — like the deployment controller or replica set controller — will keep a watch on the HPA object, not the pods. So if there is an HPA object which is targeted towards a particular replica set object or deployment object, then the reconcile of the replicas of those objects in those clusters will be skipped, because the HPA is supposed to do that. Okay.
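The skip-on-HPA behavior described here can be sketched roughly as follows. This is an illustrative toy with invented names, not the actual federation controller code, assuming the controller can list the HPA objects present in a cluster:

```python
# Toy sketch of the reconcile short-circuit discussed above: the federated
# deployment/replica-set controller leaves the replicas field alone for any
# target that an HPA in that cluster already manages. All names are invented.

def hpa_targets(hpas, kind, name):
    """Return True if any HPA in this cluster targets the given object."""
    return any(h["targetKind"] == kind and h["targetName"] == name for h in hpas)

def reconcile_replicas(cluster_hpas, kind, name, desired, current):
    """Return the replica count the federated controller should enforce."""
    if hpa_targets(cluster_hpas, kind, name):
        return current  # HPA owns the field in this cluster; skip reconciling it
    return desired      # no HPA: enforce the federated desired count

hpas = [{"targetKind": "ReplicaSet", "targetName": "web"}]
print(reconcile_replicas(hpas, "ReplicaSet", "web", desired=3, current=7))  # 7
print(reconcile_replicas(hpas, "Deployment", "api", desired=3, current=7))  # 3
```

Note the check is per target object and per cluster, matching the point made later in the discussion that this cannot be a single global flag.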
A: Actually, this is another one which I also thought wasn't very clear in the design. You did mention the problem — that there would be two sorts of masters, the deployment controller and the HPA controller — but I didn't really understand which solution you are going with. You did say that there are going to be two points of control, two controllers, for the number of replicas of the target object, and I wasn't sure which actual solution you were going with; maybe you can point to a particular line, you know.
D: Maybe a method or something on those controllers, in the reconcile, which can check — just like we check if the clusters are ready or if something is pending — so that if there is an HPA targeting a particular object in a particular cluster, we skip that cluster. Should we do that?
C: Basically, the idea was to add something equivalent to rebalance=false — I think the same thing is being proposed for shadow replica sets as well. Say you have a replica set, but the controller is not supposed to scale the replicas up or down, and the HPA controller actually controls the field. As in, when you add an HPA with that replica set as its target, the HPA controller then goes and sets that flag to true, so that the federated replica set controller won't actually reconcile the replicas.
E: I guess I was under the impression that if you had an HPA-managed resource, it would still be propagated, if it was intended to be propagated by the controller; it's just that you wouldn't set the replicas or any of those fields, and that would be the responsibility of the HPA. Does that make sense? Right.
E: I mean, I guess the upside to things kind of being migrated to the same controller is that we're not really talking about updating a whole bunch of code. It should just be the same controller; it gets a check for whether it needs to do the scheduling stuff or whether it just skips that. Thank you, that answers it.
D: It does, but this calculation — this check of whether the HPA exists or not — has to be done either way: either the HPA controller sets it when the HPA is created, or when it's creating an HPA in a cluster it has to set that, and it has to be per cluster. It's not like the global rebalance flag; this check has to be per cluster.
D: So if, say, five clusters are federated, then for three the HPA controller might say "I exist over there" and for two it doesn't — for whatever reasons: the HPA reconcile has happened and the user has given preferences only for three clusters, and for the others the user doesn't care or doesn't want to care, that kind of thing. Now.
A: In that case it is a bit more complex than the shadow ones, in which, I think, if it has that annotation, we totally ignore it: we won't even create it in the underlying clusters and will do nothing. In this case we want just the replicas field to be left alone.
D: Correct, so what I mean to say is that it is exposed to the user. The user can either do kubectl scale, or the user can call the API, and the user would expect that if they call that API, then the replicas should be scaled to that number — and if not, then there should at least be some reason or failure sent back to the user. Yes, but.
A: In this case I'd hope the other two clusters don't even have the replica set, because if they have at least one replica, then the HPA would be there. It's because we have set such preferences — to prefer those three. So even then, say, if there is space in those clusters, it should distribute only to those three clusters; if not, then it spills over, is it? I guess.
E: I thought there was a use — I don't know, there was a suggestion to me that you would have some clusters that are HPA-managed and some that weren't, and if that were the case, I kind of agree with Madhu's question. At least I think I'm getting at a question that maybe there needs to be sort of a separation between your min/max replica counts for HPA-managed clusters versus non-HPA-managed clusters. Now.
A: Actually, I don't see a use case for that, because you look at it at the federation level: either it is managed by HPA or not, and if it is managed by HPA, then all the underlying clusters are managed by HPA; if not, then none of them are. Okay.
E: If you look at the Kubernetes documentation for HPA, it says rolling updates are supported for deployments, but not for replica sets. So if you were federating replica sets and you're using HPA and you're expecting rolling updates to work, it will not work, and that's just a limitation of the underlying implementation of HPA. Yeah, yeah.
C: I know it's funny to want to answer this, but this is one of the reasons why I want you to integrate the comments that you have on your doc into the proposal itself, particularly about how you are going to link the target controller — which is the deployment or replica set controller — into the HPA controller, because writing out that design is going to answer this question being asked about deployments.
C: Actually, write out the design and explain how it's all going to work, because even after reading those comments it's not clear to me what exactly you're going to implement. Yes, there are ideas in the comments, but there are many ideas there, so I don't know which is the one that you are considering right now, or which one will be implemented right now.
A: Yeah, and I was wondering: with deployments there are these special kubectl commands as well, rather than just the basic create operation — like rollout. Do you see any of those for HPA — any special kubectl handling which we need to support, apart from the usual commands? No.
B: Right, okay. Since we have these cluster annotations — you kept referring earlier to the cluster annotations where some objects would exist in one cluster and would not exist in another one — even though this is approximate, maybe you have to speak to how that affects that annotation and what the default would be.
A: Our expectation is that, for the annotations, a wrapper is a good idea rather than writing the YAML by hand. It would be a layer over it to make it simpler for you to create — like we have with namespaces, where you can just say kubectl create namespace and it will auto-generate the YAML for you.
E: I have a question about spreading things across clusters, because this proposal seems to be sort of geared towards what I would call the latency use case, where I want to spread things to all the clusters — maybe not all, but most of the clusters — and I want workloads running spread equally, or relatively equally. But what about the load use case? Maybe I want to put things where there is available capacity. I guess when I'm thinking about min/max for all the clusters, that solves the use case of, you know...
D: But just to specify, right now the current idea that I have to implement is that I will take — or the implementation would take — only the min/max per cluster as annotations; weights will not be used. If it's needed to specify a separate annotation structure or something, that's a different topic of debate, but currently the HPA object in federation, if it is specified, can take min/max per cluster via that kind of annotation, I mean.
D: For example, if you have three clusters, you can specify min/max for two clusters; the third cluster could be omitted — I mean, there might be no annotation for the third cluster. In that case a default would be taken for that cluster, or, if there is no default, then only two clusters get the HPA, I mean.
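As a purely illustrative sketch of the per-cluster min/max idea being discussed — the annotation key and its JSON structure here are invented for illustration, not part of any merged design:

```yaml
# Hypothetical federated HPA carrying per-cluster min/max as an annotation.
# "cluster-c" is omitted, so a default (or nothing) would apply there.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
  annotations:
    # invented key and schema, for illustration only
    federation.kubernetes.io/hpa-preferences: |
      {
        "clusters": {
          "cluster-a": {"minReplicas": 1, "maxReplicas": 6},
          "cluster-b": {"minReplicas": 1, "maxReplicas": 4}
        }
      }
spec:
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
  scaleTargetRef:
    kind: Deployment
    name: web
```

Note each per-cluster min would have to stay at 1 or above, since, as mentioned earlier, the underlying cluster rejects an HPA with a min lesser than 1.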
E: It's not really specific to HPA, but on the current mechanism for annotating and providing an indication of where you want things put — I think we discussed the fact that if you have a hard number, like "I want three here, I want three here, I want X total", then if you add clusters... to me, weighting — I don't know, it seems like weighting is something we need to support. I don't think it's specific to this proposal, but that is one more use case for it. Yeah, yeah, probably.
D: Yes, actually that proposal of Torin's for policy-based selection — that is something which can be the advanced use case of this thing. I have already put one question to him and I still need more discussion with him; we can use those policy-based mechanisms further, but first I need to put in the basic implementation, at least to prove that this works this far, and then we can extend it.
D: I can explain a little bit. The three main properties on the horizontal pod autoscaler are the min replicas, the max replicas, and some metrics — for example, CPU utilization. Min/max is something which cannot be omitted currently, but if we think of, say, doing away with min/max also, then you have to specify a particular metric. Taking the example of CPU utilization: you can specify that 70% CPU utilization needs to be targeted in this cluster, and the way it does that is by averaging the utilization across pods.
D: Now, in this scenario, I didn't understand what to base the weights on. What you guys are talking about is having some mechanism where I can specify the target capacity, or specify that this cluster has more capacity, so okay, more replicas can go there, and that cluster has less capacity, so pressure can be reduced from it — that doesn't directly map here?
C: I have a question about this — maybe I'm not understanding it correctly. Using weight is not exactly the answer to the spillover problem, right? Because if you have weights, you are just saying how you want to scale the pods — how fast or how slow you want to scale them — but not exactly spillover. Am I understanding this right?
H: You say: if my weight here is two and my weight there is one, then create twice as many replicas here as you give there, right? So weights are not a mechanism for saying "fill this bucket first and then fill that bucket" — they're a mechanism for putting some replicas in this bucket and a different share in the other bucket. Spillover between buckets is something you can only approximate, by doing something like making this weight very large.
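The distinction can be sketched as weight-proportional apportionment with per-cluster caps, where an extreme weight approximates fill-then-spill-over. This is an illustrative toy, not the actual federation planner; the apportionment rule (give the next replica to the uncapped cluster with the highest weight/(allocated+1)) is my own choice for the sketch:

```python
def distribute(total, clusters):
    """Apportion `total` replicas by weight, honoring per-cluster caps.

    clusters: {name: {"weight": w, "max": cap}} -> {name: replicas}
    """
    alloc = {name: 0 for name in clusters}
    for _ in range(total):
        # clusters that still have room under their cap
        candidates = [n for n in clusters if alloc[n] < clusters[n]["max"]]
        if not candidates:
            break
        # highest-priority cluster gets the next replica (ties broken by name)
        best = max(candidates,
                   key=lambda n: (clusters[n]["weight"] / (alloc[n] + 1), n))
        alloc[best] += 1
    return alloc

# 2:1 weights split replicas roughly 2:1
print(distribute(6, {"a": {"weight": 2, "max": 100},
                     "b": {"weight": 1, "max": 100}}))  # {'a': 4, 'b': 2}

# a huge weight fills "primary" up to its cap first, then spills to "backup"
print(distribute(9, {"primary": {"weight": 1_000_000, "max": 6},
                     "backup": {"weight": 1, "max": 10}}))  # primary 6, backup 3
```

As the discussion notes, the spillover behavior here comes from the cap plus the extreme weight, not from the weights themselves.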
B: Well, if you use HPA or something and you never get to a steady state, then yes, because then you'd start reasoning about rates. But if you eventually get to the steady state, where you get two times more on one side than the other, at steady state we are not thinking about rates; I'm just thinking about there being two times more pods in one cluster than the other.
A: That's a serious question — whether we want to support this — but like you said, we can do it by putting one million on one and one on the other. We do want to support this use case: "this is my primary cluster, put everything you can in it as long as you can; if not, then spill over to the other." And maybe with the current mechanism we have, these weights, that just isn't specified; maybe it's not ideal.
B: That was the experience, and you always have that one lingering replica that's wrong, so you don't want that behavior.
A: You'd expect that experience even — forget HPA for a moment — when you just want this distribution of replicas using a federated replica set, because the way you do it right now is one million and one, and you'd see that behavior without any movement of the workload. So maybe we need a better way to express this.
B: I guess since we're kind of using the annotation with replicas on replica sets and deployments, and thinking of bringing it to HPA, I think it's important to have a consistent story there. Yeah — I think that annotation is called something like a replicas cluster-preferences annotation; maybe the HPA one should be named consistently, okay, yeah.
B: It helps the overall documentation of each of these controllers, but also there's stuff going on in core Kubernetes with taints and tolerations. It seems to have a lot of crossover with this, even though it's not applied at the cluster level. So how are we different from taints and tolerations? Because right now we're kind of operating in a vacuum with regards to what we're trying to do, which is selection; we just haven't given it much thought.
A: So, two questions. I think we should use annotations, and we should support the same use cases as the replica sets and deployments do, plus the things we discussed right now. My general concern, though, is the structure of that annotation, which applies to replica sets and deployments as well, irrespective of HPA; like you said, if we have a separate design discussion on that, we should fix them together.
A: And there is a proposal, and I like the way we reconcile both of them: we'll do a selection based on the cluster selector — do a filtering — and then, amongst those selected clusters, we apply the weights based on the spec preferences. So that's how we will resolve it if a resource has both of those annotations. Yes.
D: Yes, yes. So, for example, in a local cluster, if the target is 80 percent and there are four pods and they are showing 90 percent, make it 5; then the utilization will probably come down to 80 percent. But the min and max are still honored: if the user has set 5 as the max, then it cannot go beyond 5.
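The example above follows the upstream HPA scaling rule, desiredReplicas = ceil(currentReplicas × observedUtilization / targetUtilization), clamped to the user's min/max. A minimal sketch:

```python
import math

def hpa_desired_replicas(current, observed_util, target_util, min_r, max_r):
    """Scale proportionally to observed/target utilization, clamped to min/max."""
    desired = math.ceil(current * observed_util / target_util)
    return max(min_r, min(max_r, desired))

# 4 pods at 90% with an 80% target -> ceil(4 * 90/80) = 5
print(hpa_desired_replicas(4, 90, 80, min_r=1, max_r=5))   # 5
# the max is honored: even at much higher utilization we stay at 5
print(hpa_desired_replicas(4, 200, 80, min_r=1, max_r=5))  # 5
```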
D: What we are saying is that, with the min and max constraints, in the local cluster it's averaged out across the pods of that cluster; across clusters it's averaged out across the clusters on the observed, currently reported metrics — the HPA objects carry the current reported metrics as well.
A: I do like this proposal in that we are delegating the metrics processing to the underlying clusters; we are not adding other kinds of configuration. I can't come up with concrete use cases right now, but I think there might be use cases where, if we were doing the aggregation at the federation level, it would have a view across all the clusters, and it would know that maybe it should spill over, or things like that.
A: You could gain some replicas in other clusters, since here each HPA is just operating locally. That's one of the advantages I was imagining for the federated HPA: the federation HPA has a wider view — an overview of all the different clusters — and maybe it can make a smart decision based on that. But I can't come up with a concrete example of that.
H: That seems very much like something you could add later — you can build the HPA first. And I think one important design point is that the HPA always has to work without the Federation. So even if you have federation-level HPA cleverness, the individual clusters' HPAs still have to be able to operate without it, so build that first.
H: So the individual clusters should each be able to operate fine without the Federation control plane. Later on, if we see, "oh, there's a very good use case," we can figure out if there's a way to make the Federation control plane — the federated HPA controller — cleverer in some way, but without introducing the constraint that the per-cluster HPA controllers stop working when the Federation goes away. The per-cluster HPA controllers should still work if the Federation goes away.
H: I think you have to — one of the Federation's design requirements is that it can go away. So if you build this assuming that the Federation can't go away, that's unfortunate. If it is doing the metrics collection and rebalancing, it definitely makes it more the case that the Federation has to be there.
D: Yes, I was saying that the only drawback that I see in this is a lag in decisions. Otherwise all the decisions can be made based on the data which is available on the HPA objects, the localized objects. So on the metrics — Nikhil, if you are saying that you have to collect metrics from each cluster and you would have a more fine-grained view of that — you actually already have the result of the metrics: they are averaged out and reported as the current observed metric on the object, say 50 percent or 30 percent.
B: So far this is well aligned with the same pattern that we observe in the existing designs: we have this rollout/rollback with federated deployments, and it's not another metrics pipeline — we observe the results via the underlying objects — and it keeps the federated controllers dumb, like we like them to be.
B: This is one of my cycles of going through all these controllers — maybe because of the timing at which I entered the project — every time I think about a new controller, I ask: what's the delta between the existing ones and what this proposal is? And every time that delta is small, at least, it makes me feel a bit more comfortable. Yeah, and looking at it, the delta between HPA and deployments would be even smaller.
D: There are two additional methods — that's what I have in the PR: there's a schedule method and there is another one; in fact, they can be made into one complete whole, call it one big function — a function which takes in all the objects, the cluster objects and the federated objects, and tells you what the outcome is.
B: You need to put it in the docs. We've had a lot of discussion, and there's a lot of discussion in that document, and I think you've done a lot of work, but I don't think what's in the current design document reflects all the work you've done. So, you know, I know putting documents together is maybe not that fun, but it's for the better — for the SIG and for yourself. Mm-hmm.