From YouTube: Kubernetes SIG Node 20200609
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: So this is the first one that I talked about a few weeks ago: the GetPreferredAllocation extension to the device plugin API. After some back-and-forth and some feedback, I tried to address all the feedback we had, and the final conclusion was that it wouldn't hurt anything to actually add it. That said, I probably won't have time to add it in 1.19. So it's been merged, and it's something that we will probably move forward with, unless there are huge objections to it before 1.20 comes around. I know Alexandre has some feedback that he's given a few times and that he's added here. So if anyone wants to +1 that, or add any more feedback at all, please go ahead and do so, because it probably won't be happening in the 1.19 timeframe anyway.
A: Probably a combination of both. I don't know; I don't have time, I'll say that much to start with. And then the second piece, I guess, is that I'm not actually sure. I think Derek would need to speak to that, whether we're even allowed to do these things at this stage, since we only just merged this and it's technically after the enhancement freeze and all that.
B: I think in this case, honestly, the scope of enhancements was originally for things that spanned SIGs. I think SIG Node is one of the SIGs within the community that is more structured about making changes, always tracking enhancements. From my perspective, this is an incremental tweak to an existing feature, so I wouldn't have objected to merging it. We might have been scolded, but I don't think it has a broad impact across components.
D: Okay, let's go to the next one, which is pod resource alignment. Is BG on the call?
G: Hey, it's Chris here. So last week Derek left us some comments and also asked us to include how init containers will be handled, and the situation with debug and ephemeral containers. So today I applied those changes, and the example here is a little bit odd, because we did it more in the pod-scope way, to be the same as the pod scope defined. But it generally looks like this, and yeah, I wanted to ask: what is the decision, or the opinion, on that?
B: I don't think that prohibits us from doing more in the future for general-purpose workload scenarios. I think I did put a comment in there that this whole discussion is basically a node-local decision, and the question is whether that node-local decision is container-bounded or pod-bounded. Any time we go to do something that's not a node-local decision but a pod-level decision, we're going to have to figure out a policy to mitigate that. So I didn't really want to block on that.
D: All right, so that covers the enhancements for 1.19. Do we want to go on and move to the presentation next? I think the other ones are sort of later and they take a little bit more time. So, good.
D: Yeah, so I can share, or I think Derek would have to make you co-host. Either way it's fine with me.
H: So the idea here is that this is not a small incremental change; it's more like us reaching toward an end goal, which would be to get handling of topologies that is as flexible as possible, and also to get an algorithm which would give us a chance to efficiently find those good sets of resources. There is a link here to the KEP, to the overall pull request, and a direct link to the file.
H: And if you go to the next slide, please: the obvious question is, if the record is a cost between two resources, then what does that mean, and what is its value? It somehow tells what the distance between two devices is. It can represent latency or it can represent bandwidth, but it somehow tells that resources or devices which have small costs between themselves are somehow close, and we should try to get a resource assignment which has as small a total cost as possible.
H: Since these are just numbers, this is a bit of an abstract idea, so we just normalize them when we count costs. And costs can be one-directional or asymmetric, so it can be expensive to reach from this device to that one but not the other way. What we do is just sum the costs between both directions, to get some sort of overall idea of how they're connected. Next slide.
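The summing trick just described, combining the two directional costs into one undirected number, can be sketched in a few lines. This is only an illustration with made-up device names, not the actual implementation:

```python
def symmetrize(costs):
    """Sum the costs in both directions between each device pair,
    giving a single undirected estimate of how far apart they are.
    Missing directions are treated as zero."""
    devices = set(costs) | {d for row in costs.values() for d in row}
    sym = {}
    for a in devices:
        for b in devices:
            if a >= b:
                continue  # visit each unordered pair once
            total = costs.get(a, {}).get(b, 0) + costs.get(b, {}).get(a, 0)
            if total:
                sym[(a, b)] = total
    return sym

# Asymmetric raw costs: cheap one way, expensive the other.
raw = {"gpu0": {"nic0": 10}, "nic0": {"gpu0": 30}}
print(symmetrize(raw))  # {('gpu0', 'nic0'): 40}
```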
H: But it sort of depends on how we actually want the cost, because the cost can come from multiple places. For example, if the device plugins can tell costs as part of the topology information that they are sending back, then it may be fixed. But if it is something which we ask from the device plugin, then it can be something which is dynamic.
H: So this is how the costs could be described, and we had already thought about some performance enhancements. For example, it turns out that many devices are somehow grouped together, so even though the worst case can have many devices, they share a common cost to everything, and we can trim the graph down.
H: On the right side of the slide there is an example graph that is generated from this. Somebody has the resource costs shown on the left side, and a developer requests one CPU, one FPGA device, and one QAT device; that's what is said in the blue box in the middle. So the graph is a layered graph.
H: So what we do is this: because these connections can be asymmetric, and because, if we ask the device plugins to approximate their cost, they can in principle give whatever numbers, which might not match the number from the other side, we just count them together. So if you look at the graph on the right, the link that you are asking about, coming from the CPU, is 15, so it actually is ten plus five.
H
So
that's
our
best,
like
approximation
of
the
cost,
is
fifty
in
there
and
the
arrows
on
the
graph.
They
don't
mean
that
this
that
these
are
not
now
them
sort
of
them
arrows
between
the
all
the
possible
connections
which
we
have
said
here,
but
we
have.
Instead,
we
have
just
like
built
a
graph
in
this
layered
format.
So
the
idea
is
that
we.
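The total cost of a candidate allocation, summed over every pair of chosen devices, can be illustrated like this. The device names and numbers are made up, loosely following the slide's ten-plus-five example and the extra CPU-to-QAT cost mentioned later:

```python
def total_cost(assignment, costs):
    """Sum the symmetric cost over every pair of devices in a candidate
    assignment; pairs with no recorded cost contribute nothing."""
    total = 0
    for i, a in enumerate(assignment):
        for b in assignment[i + 1:]:
            total += costs.get((a, b), costs.get((b, a), 0))
    return total

# Hypothetical numbers: CPU->FPGA 10, FPGA->QAT 5, plus a direct
# CPU->QAT cost of 70 that also counts toward the total.
costs = {("cpu0", "fpga0"): 10, ("fpga0", "qat0"): 5, ("cpu0", "qat0"): 70}
print(total_cost(["cpu0", "fpga0", "qat0"], costs))  # 85
```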
H: The graph shouldn't have cycles, except for these layer-internal arrows. For example, the FPGA arrows, the green arrows, are in an inner cycle, because if we had a case where there were several FPGA devices in the request, say somebody requested three FPGA devices, then we would need to actually travel inside the FPGA layer of this graph and pick as many resources as we need before going further to the next resource layer.
H
This
is
now
sort
of
implementation,
detail
detail
a
bit
but
black
arrows
here
they
represent
sort
of
these.
These
costs,
which
are
not
part
of
the
traversal
path,
the
power
part
of
the
cost
calculation,
so
example.
There
is
a
cost
between
CPUs
and
qat
devices,
which
were
representing
with
that
that
black,
our
route,
which
has
70.
So
it's
just
something
which
we
need
to
sort
of
like
take
into
account
when
we
are
calculating
the
total
costs.
H: The next step is how we get and build all the costs, so this whole thing sort of splits in two. The first phase is: when we do have the costs between the devices, what do we do with them? And this algorithm is an example of how we can find these good resource sets pretty fast.
H: That's right. The background to this is that I made this proof-of-concept implementation of the algorithm. The file which I'm feeding it is on the left, and for the result I used Gonum, which is a numerical computation library for Go. It can actually output the graphs, which is what you see on the right.
A: Sorry; from my perspective, as long as we have a good way of getting these numbers, and we trust simulated annealing or whatever algorithm we're using to traverse the graph, I don't think it would be that difficult to debug it. I'm still waiting to hear how the numbers are generated, though; that's been my feedback multiple times.
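Simulated annealing is mentioned here only as one possible search strategy; purely as an illustration of that idea, and not the KEP's actual algorithm, a minimal annealing loop over candidate device sets might look like:

```python
import random

def pair_cost(devset, costs):
    """Total symmetric cost over all pairs in a candidate device set."""
    ds = sorted(devset)
    return sum(costs.get((a, b), 0)
               for i, a in enumerate(ds) for b in ds[i + 1:])

def anneal(devices, need, costs, steps=500, seed=0):
    """Tiny simulated-annealing loop: start from a random set of size
    `need`, repeatedly swap one member, accept improvements, and accept
    a few worsening moves early on (while the temperature is high) to
    escape local minima. Returns the best set seen."""
    rng = random.Random(seed)
    current = rng.sample(devices, need)
    best = list(current)
    for step in range(steps):
        temp = 1.0 - step / steps  # cools from 1.0 toward 0.0
        trial = list(current)
        trial[rng.randrange(need)] = rng.choice(devices)
        if len(set(trial)) < need:
            continue  # swap produced a duplicate device; skip it
        if (pair_cost(trial, costs) < pair_cost(current, costs)
                or rng.random() < 0.1 * temp):
            current = trial
            if pair_cost(current, costs) < pair_cost(best, costs):
                best = list(current)
    return sorted(best)

# Hypothetical costs: "a" and "b" are close, everything else is far.
costs = {("a", "b"): 1, ("a", "c"): 50, ("a", "d"): 50,
         ("b", "c"): 60, ("b", "d"): 60, ("c", "d"): 40}
print(anneal(["a", "b", "c", "d"], 2, costs))
```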
H: So let's go forward to the next slide, which is how to determine the costs. We were thinking that there are possibly four of these fundamental cases of what kinds of costs can exist in the system. Case one is the costs between system devices, and the idea is that the topology manager is the component which actually takes care of finding these, and this can be had from what is reported by the Linux kernel.
H: Mostly, you can use numactl and friends to get access to these. What we have in the picture is the various components composing the system: we have the memory controllers, we have the CPU cores, the CPU threads, and the CPU sockets, and the numbers here are the costs between them. The costs themselves are just something which I added here for demonstration purposes; these are not real data, but tools like numactl can report the real distances.
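numactl prints the kernel's NUMA node distance table, which is one concrete source for these costs. As a sketch, assuming a simplified layout of the distance section that `numactl -H` prints, such a table could be turned into a cost map like this:

```python
def parse_distances(table):
    """Parse a NUMA distance table (node ids in the header row, one
    row per node, as in the `node distances:` section of numactl -H)
    into a {(node_a, node_b): distance} map."""
    lines = [line.split() for line in table.strip().splitlines()]
    header = [int(n) for n in lines[0][1:]]  # skip the leading 'node'
    dist = {}
    for row in lines[1:]:
        a = int(row[0].rstrip(":"))
        for b, d in zip(header, row[1:]):
            dist[(a, b)] = int(d)
    return dist

sample = """\
node 0 1
0: 10 21
1: 21 10"""
print(parse_distances(sample))
```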
H: The idea here is that the devices are connected within this PCIe bridge topology. The three dots on the top just refer to the rest of the system, where at some point this will be connected to the CPU cores. But here, for example, you can see that it would make sense to have devices bar0 and foo0 go together, and devices bar1 and foo1 go together, in this case.
H: Right; in principle this could be, because the other question is whether we want to have this bottom-up or top-down, in a way. But there's also the case where two devices don't have anything to do with each other, so there will never be a single byte going from, let's say, device foo to device bar. And if that's the case, then...
A: So if you work this out, I guess the way I'd put it is that if you left this calculation to be done by the topology manager rather than the plugin, then you'd have a lot more branches in your tree, because you couldn't rule out two devices that actually have nothing to do with each other just because they happen to be connected on the PCIe hierarchy. Yes.
H: It can just be a physical wire between two devices, two GPUs, whatever, or it can be a network connection or something else, but the fact remains that the only components in the system which understand this are, again, the device plugins. So if there is such a connection, the device plugins need to somehow be aware of it; the devices need to be configured to use it, or something, and the device plugins need to communicate the lower cost.
H: So now, when we have them, the current thinking is that the pod might be able to choose policies. That might be one way to actually use it: if you don't want to have close devices, then you just select some maximum-cost policy or whatever. So that could be one way to handle it, but right now the default thinking is that it would make sense to choose the closest devices in, I guess, 99 percent of the cases.
H: The last slide is about the changes to the device plugin API. It's something which I haven't tried out in practice at all, so it is just a thought experiment at this point, but it's clear that there needs to be some sort of interface to the device plugin so that it can pass up this cost information.
H: So one thought that we had would be to extend this TopologyInfo message so that it would actually have a map of costs between the devices, and then the second option would be a method interface which we could call, and it would then return this map of the costs.
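Neither option exists in the device plugin API today, so purely as a companion to the thought experiment, the two shapes, a static cost map carried with the topology information versus a callable method, could look roughly like this (all names are hypothetical):

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Option 1: a static cost map shipped once alongside the topology
# information the plugin already reports (field names hypothetical).
@dataclass
class TopologyInfoWithCosts:
    numa_nodes: List[int]
    costs: Dict[str, int] = field(default_factory=dict)  # peer device id -> cost

# Option 2: a method the kubelet could call, so the plugin can
# recompute costs dynamically on every request.
class DevicePluginWithCosts:
    def get_costs(self, device_id: str) -> Dict[str, int]:
        raise NotImplementedError("plugin-specific")
```

The trade-off sketched in the discussion: option 1 is cheap but static, option 2 allows dynamic answers at the price of extra calls.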
H: I'm not exactly sure which one would be better, but it might be that this TopologyInfo is actually more static compared to using the method to ask for the costs, and it could be a performance optimization either way. So it is something which needs to be worked out, for example a helper library for finding out the cost, or how the device is actually connected to the system; there is also already something like that which is used within our device plugins collection.
H: That could be used as a basis for this work, and then we can have this implementation in a somewhat backwards-compatible way: actually use this NUMA information to ask for the cost information, or some approximation of the cost information. Of course, we would lose many of the more advanced use cases, but most plugins would already get a somewhat good resource assignment using that. And that's the presentation, I guess.
H: But what I still want to add is that, as you have heard, this is something of a forward-looking exercise. So if you go and review the KEP, all kinds of comments are of course welcome, but especially these: is this the direction where we want to go, do we want to actually handle topology on this level, and, if we do, then there is this sort of added complexity which comes from it.
A: Yeah, so I have just a couple of questions and some feedback. In general, on the entire proposal: I read the KEP a long time ago and then forgot a lot of it, but I went back through it today, and I think you presented it very well. It makes a lot of sense, what you guys are proposing and what your ideas are. The only concern I had back then, which I also have now, is having the plugins themselves generate these costs.
F: You don't need to report the cost if you don't expect interaction between the devices. So if you have devices which know that they can transfer data between each other, for example an FPGA and a network card, the plugin knows that this connection exists and that this connection can be used, and it reports it. If two devices, say a GPU and this FPGA, don't have anything in common, we don't report it. So those parts of the graph are not considered within the calculation; the cost is reported only for those scenarios.
F: So practically, we have the information about which PCI buses and which CPUs are involved, and out of this detailed information we do a rough approximation of which NUMA node it is. This is something that a standard Linux kernel has, and that, like, 99% of us have in our systems, which can be used as a basis for this cost-information prototype.
F: For InfiniBand, which might have, say, Mellanox-specific issues, it can be RDMA, again vendor-specific, and so on and so forth. So for a common card we can provide a library which can be used, but if it is some special out-of-band connectivity between devices, I think it's the vendor's responsibility to provide the libraries, yeah.
A: Yeah, I guess I just still can't wrap my head around how I would sit down and write my plugin based on this in a generic way. You know, I'd somehow have to know about all the possible devices that I could ever connect to, and then write the way that I could calculate costs if they happen to be connected on the machine that I've now deployed the plugin on.
A: ...have the basic affinities like NUMA and whatnot set up, and then only over time add these affinity cost calculations to your plugin, when you do in fact do it on a one-off basis: you've got a very special-purpose machine and you have some patch to the plugin that knows how to calculate the costs better on that. And at least then the plumbing is in place in the kubelet so that you can give it these costs, and it can do the right thing. Yeah.
F: It's a special card for 5G, so it includes an FPGA, it includes two network cards, and some special ASIC functionality, like a radio encoding and decoding part. All of those, we know they are connected together, and usually we need to get this whole pipeline together, but it's multiple different resources announced by multiple different device plugins.
A: Yeah, I think framing it that way helps me come to terms with it a little bit more: we put this machinery in place that allows the plugins to generate costs amongst any devices on the system. You don't have to use it, but it's there, and if you don't choose to use it, then the algorithm will still be able to give you...
A: ...you know, NUMA affinities, whatever internal affinities you would have calculated as if we had just extended the topology manager that we have today. But at least now the machinery would be there, so if you do want to build some of this custom stuff into your plugin, what's built into the kubelet can actually handle that and give you the affinities that you're looking for.
H: I think that's sort of the way forward, maybe, but the first part, which I would most want to see, is that everybody rushes to the KEP, adds comments, finds issues there, and then especially thinks about the big picture: is this sort of something which we would like to see? Is it something which we should spend time on?
H: I think that is the thing which matters to me the most, because the implementation details are something which we can work out or figure out. But it's true, what was said in the beginning: the device plugins will necessarily get more tricky, I believe, and complexity will be added, but then we will maybe have better-performing systems.
A: Well, the one specific piece of feedback I'll give, independent of not being comfortable with the fact that the plugins might have information about other devices on the system, is that right now there is no way to cover the case of, specifically, a GPU wanting affinity with a network card. And I know, at least at Nvidia, that's something that we really want to be able to do, and there's no easy way to extend what we have today to make that possible.
D: Okay, just in the interest of time, we've got about nine minutes left. I think we've gone through the slides and the KEP, well, not the KEP, but the slides and the presentation. So what I've got is just another request to folks that are interested to please review the KEP, and just the sort of general question: is this something that folks in the community think would be useful and helpful?
G: I think we can skip that one, because it is not necessary to present it. It was regarding the discussion two weeks ago, when we were talking about 5G deployment and UPF on the edge, far edge, etc. If you guys want, I can present it maybe next week, when we will have more time.
K: ...disable the scheduler load balancing on the specific CPUs, and I just wanted to know if it's a feasible solution to do it in the CPU manager, because, again, for me it looks like the most appropriate solution. So, for example, in case you have some annotation asking to disable load balancing, and you have the static CPU manager policy and it's a guaranteed pod, then once it allocates the CPUs, it will also disable the load balancing for this specific allocation.
F: That's actually architecture-dependent and hardware-dependent, so again, do we really want to have architecture-dependent components inside the kubelet? Because the kubelet is universal between bare metal and cloud. That's one thing. Second thing: do we understand correctly what you are actually trying to achieve by doing this, like some kind of scheduling?
F: You have real time, where we have a problem which potentially needs to be solved in different ways. The OCI spec, when the container starts, can specify a real-time quota for the processes, and runc is able to put it into the scheduler parameters for a container. But again, this information is lost between the kubelet and what OCI expects, so you might want to expose these real-time parameters up to the kubelet.
F: Well, what I'm saying is that we have an architecture level where the kubelet represents high-level resources. It communicates down to the CRI level, which again abstracts resources. The CRI plugin, CRI-O or containerd, it doesn't really matter, converts it to OCI parameters, and the OCI runtime actually does the real tweaking of the kernel.
F
So
would
it
be
like
run
see
over
in
a
kind
of
like
me,
my
runtimes
doesn't
really
matter,
but
like
well
component,
which
interacts
with
a
kernel
interfaces
is
a
level
below
watch
what
I
see.
What
in
his
proposal
is
both
like
well
high
level
think
will
like
from
one-handed.
Do
it
does
with
CRI
approach
to
container
level
and
another
hand?
It
goes
directly
and
tweak
something
in
the
kernel
which,
for
me,
doesn't
make
sense
so
like
we
have
like
two
channels
of
a
communique
into
eternal
for
the
same
process.
F: What you can easily do, if you really need to, is make a small change, for example, and pass the annotation. Practically all the pod annotations will be visible in, like, runc, and on the create-container request in runc you can just parse those annotations and do things. If you want to use annotations, you don't need to change anything in the kubelet for that; you just do it in runc, or you can even define your own runtime.
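The annotation pass-through described here works because the OCI bundle's config.json carries the pod annotations down to the runtime. A hypothetical wrapper check, with a made-up annotation key, might look like:

```python
import json

# Hypothetical annotation key; not a real Kubernetes or runc convention.
DISABLE_LB_KEY = "example.com/disable-cpu-load-balancing"

def wants_lb_disabled(config_json):
    """Inspect the annotations an OCI runtime sees in config.json and
    decide whether this container asked to disable CPU load balancing."""
    cfg = json.loads(config_json)
    value = cfg.get("annotations", {}).get(DISABLE_LB_KEY, "false")
    return value.lower() == "true"

cfg = json.dumps({"annotations": {DISABLE_LB_KEY: "true"}})
print(wants_lb_disabled(cfg))  # True
```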
A: That's the same as if he just implemented it as a plugin for his kind of one-off use case. So I think the bigger question is: is this something that would be useful in general to everyone, and is it then worth the effort to try and push it into the OCI spec and down from there, instead of just putting some one-off kind of annotation-based, I don't want to say hack, but hack in place?
D: Okay, just a quick time check: it's two minutes past the hour, and I know probably several of us have another meeting to get to. So I think you've got some different feedback and a couple of options to explore. Does that sound good to you? Maybe.