From YouTube: Kubernetes Resource Management WG 20171011
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: See, I don't... I don't intend to take a lot of time with this topic. I mean, I was hoping that we would have more of an offline discussion over the issue itself, but my goal is just to raise awareness. For one, like, we sort of had a plan, and we tried to stick to the plan when it came to device plugins, but we still have a whole bunch of work that's left in order to get device plugins reasonably production-ready, at which point we can ask more people to consume it and, like, sort of give us feedback and see if it's actually meeting a wider spectrum of needs.

A: So, in that regard, I tried to summarize, based on feedback from a whole lot of you, the things that we need to get done around device plugins, and tried to categorize them across releases. Some of them may extend beyond 1.10, but I didn't really want to go beyond 1.10 at this point. So some of these items actually have owners, mostly because people have started working on them already, but the rest of them do not really have owners.
B: Just a quick clarification: in v1.10, when you say "complete kubelet refactoring", that means the kubelet refactoring is done, right? Okay, yeah, okay. And the other one: should we include specifically adding more tests? Because it's more like implied in the list, but there isn't a specific item for it. I mean, as in unit tests rather than...
D: To comment: I think the work items listed here are the work we have discussed before; it's not necessarily the final plan. I think we may need to discuss the priorities for some of them, and some of the work items listed here still need more discussion, like, for example, the PreStart/Allocate RPC: I think we haven't fully decided whether we need this or not. So I expect this may change over time, as we finalize it. Yeah.
C: So, from my perspective, like, I know that this was the goal for it, right: to be able to demonstrate that the plugin could work with more sets of devices, and so I know that the discussion on that one has been going on. I think, personally, I don't want to rathole on that particular one too much, because in practice all of it is useless if we haven't actually demonstrated that we work well with a dummy e2e plugin. So I'm happy to also make sure that we move our energies towards just verifying that the hooks themselves work.
A: Actually, if you look at it, there is a work item before that which says: finalize an internal software architecture for device plugin support in the kubelet, to improve reliability, testability, and maintainability. Those are the three main categories that we want to improve upon, and I think, like we discussed in a previous meeting, that covers what the specific goals of the refactoring are and why we actually need it.
D: Yeah, I feel like, yeah, as I mentioned, I think this is a really nice list of work items we have discussed, but they may change over time as we finalize some of the designs. And yeah, maybe, seeing how long the discussion has run, we can move on to the next one, unless you want to spend more time on this.
A: I think, from a naming perspective, if it is actually meant to be used for any sort of resource that sort of fits in the lifecycle model that this extension is providing, then maybe device plugins is not the right... like, "device" is probably not the right keyword.
D: To allow for software licenses, it may also require some cluster-level controllers or plugins, and right now we don't really have that good a use case. But to support that... since the device plugin was just introduced as an alpha feature, and we are still trying to get more experience with it, I would rather limit it to a very narrow scope.
E: I agree. Yeah, I think that, right now, making this more generic for other resources is too early; it's not mature enough, and we also don't have the real use cases in production that demand we support those kinds of things. On the other hand, yes, I understand the motivation, which is better communication, but I think, from the user perspective, it reads as a provided feature first and foremost, and I worry about that. At the same point, I worry about that.
E: We would over-promise once we rename it. If we rename this to a resource plugin, people will all start to use it that way, to think that any arbitrary resource can be asked of this component and is supported through this plugin handle. Yeah, that's not what we want. We want them to come and ask us: here's a new type of resource, can you plan and design to support this kind of thing? And then we come back and provide the better support.
E: The problem is, I want them to come to our channel and ask us properly, instead of going off on their own ahead of Kubernetes and forking or hacking something on their own. I want to have that channel where people come to ask for those interfaces, so we know there's a demand.
G: So the idea was to either checkpoint the CPU manager state, where we have a state abstraction that's separate from the policy, so we can checkpoint it independent of the policy, which is nice; or, I guess, the alternative proposal, which is to try to reconstitute state somehow based on cgroupfs. The problem that I saw with that is that cgroupfs for containers is managed beyond the CRI boundary. Also, we can't really make assumptions about how the UpdateContainerResources call through the CRI is actually implemented.
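For illustration, here is a minimal Go sketch of the "state abstraction separate from the policy" idea described above: a policy-agnostic record of CPU assignments that can be checkpointed to, and restored from, a local file regardless of which policy produced it. The type names, fields, and file path are assumptions made for this sketch; they are not the kubelet's actual CPU manager code.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// cpuState is a hypothetical, policy-agnostic record of CPU assignments:
// container ID mapped to a CPU set string such as "2-3,6". The real CPU
// manager state is richer; this only illustrates checkpointing that is
// independent of the policy that produced the assignments.
type cpuState struct {
	PolicyName string            `json:"policyName"`
	Default    string            `json:"defaultCPUSet"`
	Entries    map[string]string `json:"entries"`
}

// checkpoint writes the state to a local file so the kubelet could restore
// assignments after a restart without interrogating the runtime.
func (s *cpuState) checkpoint(path string) error {
	data, err := json.Marshal(s)
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o600)
}

// restore reads a previously checkpointed state back from disk.
func restore(path string) (*cpuState, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var s cpuState
	if err := json.Unmarshal(data, &s); err != nil {
		return nil, err
	}
	return &s, nil
}

func main() {
	s := &cpuState{
		PolicyName: "static",
		Default:    "0-1",
		Entries:    map[string]string{"container-abc": "2-3"},
	}
	path := "/tmp/cpu_manager_state_example" // illustrative path only
	if err := s.checkpoint(path); err != nil {
		panic(err)
	}
	restored, err := restore(path)
	if err != nil {
		panic(err)
	}
	fmt.Printf("restored assignments: %v\n", restored.Entries)
}
```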
C: My first bias on this was that, ideally, we would have had a way to reconcile this from cgroupfs; today the kubelet reconciles from cgroupfs for managing pod cgroups, to figure out orphaned pods. And I guess I concede your point that, since the runtime is the writer to that thing, it can be the knowledgeable reader, and so having the kubelet reconstitute state from the container itself... I could see that kind of blurring the abstraction line, so maybe that's not ideal.
C: Like, in the following issue, I feel like we had the same problem with where the kubelet is going to track anything that would have been kept for devices across kubelet restart boundaries. And so, you know, generally I'd like to figure out a way that, if we do checkpoint anything for this, we don't do it a ton of times in different ways. Yeah.
E: And let me summarize the checkpointing question, why we try to avoid internal state checkpoints; there is some history here. The first thing: initially, we didn't have a good roadmap, I mean, we didn't know what kinds of things to checkpoint, because everything was moving; back then we didn't even have the CRI interface defined, so how checkpointing would work... anyway, at that time it was not well designed. Now we have the CRI in alpha, and back then we also didn't have the CSI.
E: And we also don't have that today; we don't have the storage interface between the node and the storage, so until you can handle arbitrary ones of those, I would not do the checkpoint. It just seems... unless we have well-defined, versioned APIs between the modules on the node, I prefer not to have a standard way of checkpointing, especially for internal state. Once we have those APIs, we can start thinking about what kinds of things we want to checkpoint there.
E: So, for this CPU affinity checkpoint, I really think about it as a way to recover the running state when the kubelet restarts. This is not the only thing people want to recover from the running state; the CPU assignment is just one of those kinds of things, and I think, with inspection, this is easy to recover from the runtime's data. I understand that may impose some requirements for CRI changes; we could start talking about those kinds of changes instead of going through an internal checkpoint.
E: But if we must do internal checkpointing, for example if some of the CRI changes aren't feasible, then I'd prefer that each component, each checkpointing implementation, including the device plugin, implements its own, at the lowest layer. Then we don't have the backward-compatibility issue, or the forward-compatibility one, because they implement that feature for each of their own versions. At this moment, yeah.
C: Yeah, I think that's going too far, to be honest. And, you know, I'm open to other opinions, but I prefer that versus us writing our own state file, for example. And, in general, the versioning constraints on state files... like, I don't know how most people will handle this, but at least across kube releases we'll probably still drain nodes and treat them as installs of entirely new nodes. I think GKE would work similarly, so I would expect all state to be wiped away across upgrade boundaries.
C: No, that's how I would want it to work; that's how I would want us to work. And I think at GKE you guys do rolling node upgrades, so I think that actually may work. But in practice, like, I would prefer that I can just go and ask the container runtime itself to report back via some inspect-container calls.
A: In terms of just the control-plane logic, we don't have a specific requirement there. But if you look at it from a slightly meta level, which is not looking at just cgroups but, like you described earlier, the fundamental idea of provisioning resources for a pod before the pod is going to run: I think that's a common requirement, right, so we can fail early rather than trying to provision when a container starts. So that's the meta requirement, and then, once we've provisioned it, why do we actually checkpoint it?
A: It's like, you know, from a lifecycle standpoint, the kubelet gets a pod, and then, before the pod is even known to the CRI, you go through the container manager, you do a bunch of allocations, you allocate probably CPU sets, probably memory, and then you go over to, like, talk to device plugins in the future.
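A rough sketch of that ordering, using hypothetical interface and type names rather than the kubelet's real container manager API: every resource allocator runs before the pod is handed to the CRI, so a shortage surfaces as an early admission failure instead of at container start.

```go
package main

import (
	"errors"
	"fmt"
)

// pod is a stand-in for the kubelet's internal pod object.
type pod struct{ name string }

// allocator is a hypothetical interface that CPU, memory, and device-plugin
// managers could implement: reserve resources for a pod before it is ever
// handed to the container runtime.
type allocator interface {
	Allocate(p pod) error
}

type cpuAllocator struct{}

func (cpuAllocator) Allocate(p pod) error {
	fmt.Println("cpuset reserved for", p.name)
	return nil
}

type deviceAllocator struct{ free int }

func (d *deviceAllocator) Allocate(p pod) error {
	if d.free == 0 {
		return errors.New("no devices left")
	}
	d.free--
	fmt.Println("device reserved for", p.name)
	return nil
}

// admitPod runs all allocators; only if every one succeeds would the pod be
// passed on to the CRI. Failing here lets the kubelet reject the pod early
// instead of discovering the shortage when a container starts.
func admitPod(p pod, allocs []allocator) error {
	for _, a := range allocs {
		if err := a.Allocate(p); err != nil {
			return fmt.Errorf("admission failed for %s: %w", p.name, err)
		}
	}
	fmt.Println("handing", p.name, "to the CRI")
	return nil
}

func main() {
	allocs := []allocator{cpuAllocator{}, &deviceAllocator{free: 1}}
	if err := admitPod(pod{name: "training-job"}, allocs); err != nil {
		fmt.Println(err)
	}
	// The second pod fails admission early: no devices are left to allocate.
	if err := admitPod(pod{name: "second-job"}, allocs); err != nil {
		fmt.Println(err)
	}
}
```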
C: Yeah, I agree. I mean, I was saying that we wanted to have a pre-planning phase where, ideally, we could do CPU and device assignment topology-aware, before ever actually starting this thing, and have it be done in some code areas that knew of each other. So I guess what I'm wondering is: if, in the interim, like right now, device plugins are alpha and you guys are checkpointing to a file, is there an objection then if we just do the same for the CPU manager?
C: I'd like to avoid that, to be honest, and also, like, I'd like to keep it so that, when these things are dynamic, they can be dynamic at a granularity that's different from having to report back to the API server. So, yeah, I guess for now, if there's no issue with checkpointing to a local file, I guess I'm fine with that, if that's actually where we are now with device plugins, which I had not kept up to date with, and then, I guess, we'll proceed with that.
A: If the resource appearing is dynamic, then once we have extensibility through quota, limit range, and so on, we can probably handle that seamlessly. But whenever we get rid of resources, it's not something that we take into account across the stack, and getting rid of resources can happen for two reasons. One is a resource goes bad: say a memory bank has gone bad, or a bunch of CPU sockets have gone bad, or, in this case, like, a GPU.
A: I mean, in any of those cases the capacity reduces, but the resource is still there. There is also the second scenario, where the overall resource itself disappears. That's not the case for most first-class resources like CPU, memory, and storage, but it is quite likely a scenario for extended resources; maybe not GPUs, because I don't realistically see why that would be the case, but maybe there's some unknown use case in the future where, like, people dynamically install and uninstall resources and so on.
A: In addition to this, there was also the other point mentioned, where on a given machine you might have different classes of resources, one that is important and another that is not important, so the ones that are not important should not affect the functionality of the rest of the important resources. Again, this is hypothetical.
A: We don't have a concrete scenario here; I'm trying to summarize whatever was mentioned in the issue. So the proposal that I had, and there are also two proposals to this, one of the proposals is that we don't automate getting rid of resources, or deleting resources that have previously been advertised. So the workflow is somewhat like this:
A: As a user, you install a device plugin. The assumption is that the device plugin is important, and failures in the device plugin are something that you would be interested in looking at and don't want to silently ignore. There are also other assumptions, like: we cannot require users to have their own parallel monitoring stack in order to guarantee that the rest of the Kubernetes stack functions properly, because we don't make such requirements today. So that's one more assumption there.
A: But there is an implicit assumption here that the kubelet would know, moving forward, which plugins have registered with it, and when it restarts it will wait for those plugins to register, with a configurable timeout. If the plugins don't register with the kubelet within that duration, the kubelet taints the node, saying that there is a problem with this node, and that prevents any future pods from running there.
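A minimal sketch of that proposed behavior; the registry type, the resource name vendor.example/gpu, and the timeout value below are hypothetical, and the taint is represented by a stub function rather than a real API call. After a restart, the kubelet would wait a configurable amount of time for previously known plugins to re-register, and taint the node NoSchedule if any are still missing.

```go
package main

import (
	"fmt"
	"time"
)

// registry is a hypothetical view of device-plugin registrations the kubelet
// remembers across restarts; registered would be fed by the plugin
// registration service.
type registry struct{ registered map[string]bool }

func (r *registry) allBack(expected []string) bool {
	for _, name := range expected {
		if !r.registered[name] {
			return false
		}
	}
	return true
}

// taintNode stands in for the kubelet adding a NoSchedule taint via the API
// server: new pods stop landing on the node, running pods are left alone.
func taintNode(reason string) {
	fmt.Println("tainting node NoSchedule:", reason)
}

// waitForPlugins waits up to timeout for previously known plugins to
// re-register after a kubelet restart, then taints the node if any are missing.
func waitForPlugins(r *registry, expected []string, timeout time.Duration) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if r.allBack(expected) {
			fmt.Println("all device plugins re-registered")
			return
		}
		time.Sleep(100 * time.Millisecond)
	}
	taintNode("device plugins failed to re-register within " + timeout.String())
}

func main() {
	// Simulate a plugin that never comes back; after the (configurable)
	// timeout the node is tainted and left for an operator to repair.
	r := &registry{registered: map[string]bool{"vendor.example/gpu": false}}
	waitForPlugins(r, []string{"vendor.example/gpu"}, 500*time.Millisecond)
}
```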
A: It lets the existing pods run, because it doesn't want to disturb workloads, and at this point a cluster administrator, or some sort of automated workflow that the administrator has deployed, is expected to come in, step in, and repair the node. Maybe the node gets repaired and it goes online again, or the node does not get repaired. The expected workflow for removing the device plugin from that node is that there would be a graceful drain happening, and then the device plugin being removed, like this:
A: If a DaemonSet was used as the deployment model, then that node would no longer be part of the DaemonSet scheduling pool, and so a drain, followed by removing the device plugin gracefully, and then the kubelet gets restarted, at which point the device plugin has been safely removed from that node and the node can be put back into the system. So this is one proposal.
A: The other proposal is to somehow figure out a way to dynamically handle all of this. It's not fully fleshed out here, but the idea is, like, okay, we don't add this burden of requiring a cluster administrator to step in, and if the user decides to, say, instruct the kubelet to ignore device plugin failures and just move on, then we thought the kubelet would probably go ahead and do that. But there were some gotchas there which have not yet been fleshed out.
A: So I guess one of the main questions is: what does it look like if we go through the taint plus graceful-drain process? What is the general sense of people who manage Kubernetes in production today: would you prefer having explicit failures that need human intervention, or would you prefer some automatic handling?
A: I don't think so. I mean, you sort of have an external control plane in the form of a device plugin, so it's up to you to set priorities and, like, make sure that you're running device plugins ideally in the guaranteed QoS class, so that the device plugins don't get impacted by OOMs, and you have also, like, set node allocatable and so on. So you have a reasonably stable kubelet deployment which can protect certain critical jobs from OOMs, and that's sort of the requirement.
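As a reference for the "guaranteed QoS class" point, here is a minimal Go sketch: a container whose requests equal its limits for every resource places the pod in the Guaranteed class, which makes a device-plugin pod the least likely to be evicted under node pressure. The image name and quantities are placeholders, not a recommendation.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Requests set equal to limits for every resource puts the pod in the
	// Guaranteed QoS class, so the device-plugin pod is the last candidate
	// for eviction under node pressure.
	limits := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("128Mi"),
	}
	plugin := corev1.Container{
		Name:  "example-device-plugin",              // placeholder name
		Image: "example.com/device-plugin:latest",   // placeholder image
		Resources: corev1.ResourceRequirements{
			Limits:   limits,
			Requests: limits,
		},
	}
	fmt.Printf("%+v\n", plugin)
}
```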
D: So my general feeling is, if we can support dynamically unloading device plugins with a reasonable compromise, I see it will give us more flexibility and also allow us to support a broader set of devices, and I think we should be able to support it with a reasonable compromise. For example, I think we can introduce some special naming convention for the best-effort resources.
D: So, you know, in the design proposal we mentioned that, for the pods that are already assigned the resource, we shouldn't interrupt them; they should keep running as if the devices are still in good state. But we will remove the resource from the capacity, from the node capacity, so that future pods will not be sent to that node.
A: I see. So one issue I see with that is, for example, say a container needs a restart in a pod which had a device allocated to it. What will happen in that case? Because now the capacity is zero, but we had containers which were already using those devices. So I feel that is a bit confusing, yes.
B: Aren't all bets off once, as a cluster... I mean, as a user, you could remove the device plugins. I mean, you can say that it's reasonable that we won't stop your pods or containers that are using the device, but if they restart, then all bets are off: your device plugin is not here anymore, that's your problem, right? Yeah.
E: A comment, one comment: I think the proposal, as proposed, actually has one assumption, that the node is not shared with some other workload. I'm not sure whether that assumption is right or wrong, because you have to notice that if the node is shareable, then it's not just the ML job, for example; take the GPU as an example, and it's not just the ML job that is running, there's some other job. So are you going to lose the whole resource capacity from that node by force when doing those kinds of things?
A: So the discussion primarily here is: would people prefer having explicit errors that sort of fail loudly and have the cluster administrator step in, either manually or just through some automated process that they have explicitly set up; or would it be to just drop the resource and say that the behavior for the pod, as was mentioned, is sort of undefined once this happens, but the node will keep marching on. I'd probably agree with the latter provided there are enough events and probably warning signals, and so on.
C: From my perspective, I feel like I'm most likely going to just manage this node as an atomic unit, and if any of the devices report as unhealthy or go bad on that node, I'm probably going to take that node out of commission, mark it unschedulable, and look into getting a new node, and not really repair the existing one. Yeah, no, agreed on that, but that's basically what I imagine our playbook would be on this: if any device plugin on that node reports unhealthy, immediately mark it unschedulable and drain it.
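A sketch of the first step of that playbook using client-go: cordoning the node by setting spec.unschedulable, with draining the existing pods left to the eviction API or kubectl drain. The node name and kubeconfig path are placeholders, and error handling is kept minimal.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// cordon marks a node unschedulable, the first step of the playbook described
// above; draining the existing pods would follow separately.
func cordon(client kubernetes.Interface, nodeName string) error {
	ctx := context.Background()
	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	node.Spec.Unschedulable = true
	_, err = client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}

func main() {
	// Placeholder kubeconfig path and node name for the sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := cordon(client, "node-with-bad-gpu"); err != nil {
		panic(err)
	}
	fmt.Println("node cordoned; drain and replace it next")
}
```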
A: I would want, like, very predictable failure models that, like I was saying, I can put in a playbook, and people can go through that and know realistically what happened, rather than having to mash together three or four log streams and event streams and work out the series of events that happened: why did this pod, like, restart, and then fail to start after that, and sort of get stuck, because if the pod gets evicted, then it gets run on a different node.
A: Well, as I said, the pod is, like, sort of stuck in this weird state. I don't know... I'm hoping it's not weird, and that you'll probably have the pod end up fine, but that's an extra, different failure scenario that we have to educate our users about, and educate the rest of the support folks, right? So my gut feeling is that having an explicit failure model is probably easier to explain.
A: That said, like, if we get feature requests in the future where people say, hey, I want this dynamic model and I have this use case for it, then we can possibly extend it for them; we're not excluding that option. But what I'm saying is, okay, we can support a dynamic model, but we support the dynamic model and justify the added complexity once we actually have concrete use cases for it. So...
So.
B
Good
question
here:
I
completely,
understand
the
need
for
and
I
agree
with,
the
fact
that,
when
your
GP
or
your
device
goes
bad
and
healthy
in
schedule,
that
makes
all
sense
me
having
problem
wrapping
my
head
around
the
use
case
or
what
case
would
be
an
example,
and
you
don't
have
been
removing
it
device
plug
in
I
mean,
except
for
dating
the
device
plug
in
what
would
be
the
use
case.
Resonance
I.
A: I suppose, you know, we don't have a use case, and that's probably why this conversation is even happening, because if we had concrete use cases, then we could come to a conclusion much more easily. Correct me if I'm wrong: we don't really have any use case that's been expressed in the community that requests dynamic handling, right?
D: If you mark the whole node as not usable, like with a taint, it seems like it propagates the failure too much. For example, some people are asking about the use case of supporting FPGAs, and because FPGAs have different properties on a single node, they are considering registering multiple resources just for the FPGA. So if one resource goes bad and we just mark the whole node as not usable, for some reason we're turning a small damage into a big one.
A: I guess... oh, so the question was: what if we taint the node? And I was just saying that the proposal is to actually taint the node, so you can have pods that tolerate the taint and continue running there. So we're sort of... I mean, this is all up to the users, right; the proposal that we have now leaves it up to the user to define what the process is. The user can add taints, they can mark their node unhealthy.
A: They can do a whole bunch of things, but it's basically up to the user to sort of define a lifecycle process at that point, and we're just reusing whatever existing lifecycle semantics we have. For the dynamic use case, we do have to add some more features to the kubelet, so I'm still a little bit...
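For reference, a minimal Go sketch of the taint-and-toleration mechanism mentioned above; the taint key is a hypothetical name, not an agreed-upon one. A NoSchedule taint keeps new pods off the node unless they carry a matching toleration, while pods that are already running are left alone.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Hypothetical taint that the kubelet (or an operator) could place on a
	// node whose device plugin went away; the key is illustrative only.
	taint := corev1.Taint{
		Key:    "example.com/device-plugin-unavailable",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// A pod carrying a matching toleration can still be scheduled onto the
	// tainted node (for example, a repair or diagnostics job); other new pods
	// are kept away, and already-running pods are not evicted by NoSchedule.
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}

	fmt.Printf("taint: %+v\ntoleration: %+v\n", taint, toleration)
}
```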
B: My only concern is with the model that you propose, where, once you remove the device plugin, someone actually has to go in and drain the node, etcetera; I mean, that feels like a really weird model. Whereas if it's just, like, device health, then I completely understand and I agree.
E: The management interface... so I feel like we don't understand the on-demand requirements at this moment. So whether we move ahead on all those kinds of things is, at least for now, not the high priority. I think we can wait, and wait for the requirements to emerge.
E: I think, to make things more clear: the device plugin reports its devices as healthy or unhealthy, and some of the cluster management tools or systems attend to that; it is up to their whole system how they act on it. It's not that the device plugin taints the node directly.
A: If the device plugin is not registering, then either the kubelet has to expose the node health, because the kubelet sort of knows that the device plugins did not register, or there should be some other way for other parties to inspect whether a device plugin has registered with the kubelet.
E: This is... I keep asking this question: I think this is part of the whole lifecycle management we need to have. It's kind of like we introduced one way to add a device, you have this kind of way to add a device, but on the other hand it's not symmetric; we don't have the way they are removed. I mean, so... I agree with that; we don't need a handler for the production support, we don't need to handle it on demand.
D: Okay, I guess we are running out of time, so we can continue the discussion in the issue. I see the main thing we need to decide is whether we want to have that as part of the deployment model, to support dynamic resource removal, or whether we just tell users that they need to take these explicit administration steps to remove the resource.