C
Okay, so I'll see how this goes. I am the cloud native architect for resource management, not for all of Intel; I'm not even sure what my org is currently, because we just reworked, but Sasha is my counterpart in SATG. So, just to make sure we're all talking about the same subject: when you're talking about resource management, there are two parts.
C
So here's the general picture, which is basically just a spread of a heterogeneous system and what you might want scheduled on there. We'll go through those physical resource plugins, but currently you ask for a certain number of cores and that is what drives the scheduling, so Kubernetes is not great at handling heterogeneous clusters at this time.
C
The second part is once you get to the node: what do you do with your resources? How you schedule your resources determines both the performance of your workload and how long it runs, and this is also getting added attention from the sustainability forums that are starting up. They're all over the place; there's one within the CNCF.
C
I know it's NUMA specific, but you can do this with any sort of resource arrangement. So if you have your memory, your CPU and your XPU in different zones, you have the UPI bus toll. I call this the toll, you know, there's a troll living under the bridge, and the UPI bus is stealing all your time.
C
So my team has a few different projects. We have Telemetry Aware Scheduling over in SATG, which I'll also run over, as well as GPU Aware Scheduling, which is part of that. There's power management, and then there's the kubelet piece that we would like to get done. That is a really big piece that I would like to understand community needs for. So, Telemetry Aware Scheduling: why? We want to avoid scheduling on unhealthy nodes.
C
I think I did this slide, yeah. So the Telemetry Aware Scheduler is currently an extender, and we take telemetry data to aid scheduling and descheduling decisions in Kubernetes. We use policies to enable rule-based decisions on pod placement, and these are powered by metrics collected from the nodes. You can use Prometheus; you can also use other metrics collectors.
C
So this is the general layout of how this works. There's Prometheus, there's the Prometheus adapter, there's the custom metrics API; basically, if it goes into the custom metrics API, we can pull it. Then there's the Telemetry Aware Scheduling scheduler working with the Kubernetes scheduler, and then you can have TAS policies.
C
So if the metric name, if it's equal to one, then don't schedule there. And then this one governs where it is scheduled: it prioritizes nodes based on a comparator and an up-to-date metric value, so if the temperature is low, in this case, you schedule there. Then there's deschedule: if a pod with these policies is running on a node and the policy is violated, it can be descheduled and rescheduled with the Kubernetes descheduler.
C
So basically, if your temperature is too high or your amount of RAM is less than some amount, then it will deschedule. We also allow labeling. This is less about the scheduling, but it may be used by your particular scheduler. In this case we basically make labels based on these rules, so you have card zero equals true and card one equals true, and this is used partially with GPU Aware Scheduling.
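To make those rule shapes concrete, here is a minimal Go sketch of the three kinds of decisions just described: filter out nodes where a metric matches, prioritize the rest by an up-to-date metric, and flag violating nodes for descheduling. The type and strategy names are simplified illustrations, not the actual TAS CRD fields.

```go
// Illustrative only: not the TAS implementation or its CRD types.
package main

import (
	"fmt"
	"sort"
)

type Rule struct {
	Metric   string
	Operator string // "Equals", "GreaterThan", "LessThan"
	Target   float64
}

type Node struct {
	Name    string
	Metrics map[string]float64
}

// violates reports whether a node's current metric value trips the rule.
func violates(n Node, r Rule) bool {
	v, ok := n.Metrics[r.Metric]
	if !ok {
		return false
	}
	switch r.Operator {
	case "Equals":
		return v == r.Target
	case "GreaterThan":
		return v > r.Target
	case "LessThan":
		return v < r.Target
	}
	return false
}

func main() {
	nodes := []Node{
		{"node-a", map[string]float64{"health_metric": 0, "temperature": 40}},
		{"node-b", map[string]float64{"health_metric": 1, "temperature": 70}},
	}
	dontSchedule := Rule{Metric: "health_metric", Operator: "Equals", Target: 1}
	deschedule := Rule{Metric: "temperature", Operator: "GreaterThan", Target: 65}

	// "dontschedule": filter out nodes that violate the rule.
	var candidates []Node
	for _, n := range nodes {
		if !violates(n, dontSchedule) {
			candidates = append(candidates, n)
		}
	}

	// "scheduleonmetric": prioritize remaining nodes by a metric (lower temperature first).
	sort.Slice(candidates, func(i, j int) bool {
		return candidates[i].Metrics["temperature"] < candidates[j].Metrics["temperature"]
	})
	fmt.Println("schedulable, best first:", candidates)

	// "deschedule": flag nodes whose pods with this policy should be evicted.
	for _, n := range nodes {
		if violates(n, deschedule) {
			fmt.Println("deschedule pods with this policy from", n.Name)
		}
	}
}
```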
C
This is just info on this, and you can submit PRs for changes; this is open source. We do have future work for TAS before I want to release it and try to put it into the community, which is specifically to move from an extender, because we're currently a Kubernetes scheduler extender, to a Kubernetes scheduler plugin, which plays a little bit better with the current scheduling decisions.
C
So we're currently in that work, but once that's done we do plan to try to push it upstream into the community. These are more links; I can send this after, but there are white papers on this, we have a power-specific example, and then there is a recent KubeCon talk and demo done by Denisio and Madalina on my team.
C
I'll go over the use case. The node has two GPUs and each has a certain amount of memory, and you want to make a replica set of three where each replica needs five gigabytes of memory. You end up with one of those pods being split, because you can put one in each GPU, but then you still have three and three left over. So this is basically keeping you from scheduling across GPUs, but there are other ways to do this.
C
There are other pieces of this, but with this we're using the Intel i915, so that's a deeply Intel-specific GPU resource. We can choose a number of millicores here, we can choose an amount of memory per each, and we can choose how many. So this tells you how many GPUs you want and then what the spread of the memory is for each particular one.
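As a rough illustration of the request shape described here, the following Go sketch builds a container spec with Intel GPU resource limits using the standard Kubernetes API types. The exact resource names (gpu.intel.com/i915, gpu.intel.com/millicores, gpu.intel.com/memory.max) are assumptions based on the Intel GPU device plugin and GPU Aware Scheduling, and may differ in a given deployment; the image name is a placeholder.

```go
// Requires the k8s.io/api and k8s.io/apimachinery modules.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	container := corev1.Container{
		Name:  "gpu-workload",
		Image: "example.com/gpu-workload:latest", // placeholder image
		Resources: corev1.ResourceRequirements{
			Limits: corev1.ResourceList{
				// How many GPUs the pod wants.
				corev1.ResourceName("gpu.intel.com/i915"): resource.MustParse("1"),
				// Fraction of a GPU's compute, expressed in millicores.
				corev1.ResourceName("gpu.intel.com/millicores"): resource.MustParse("500"),
				// GPU memory the pod needs on that device.
				corev1.ResourceName("gpu.intel.com/memory.max"): resource.MustParse("5Gi"),
			},
		},
	}
	fmt.Printf("%+v\n", container.Resources.Limits)
}
```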
C
So this is that particular project. There's NFD with the GPU plugin, because you really do need NFD to do the node feature discovery, so you know what's running.
A
Okay, I think one question that I had was on the GPU, so this is for any...
D
So our team, who is doing the support for GPU, is involved in this scheduling part as well.
C
Yeah, so this is scheduling only. And then we have the last project, the one that's Intel specific; well, GPU Aware Scheduling and Telemetry Aware Scheduling aren't Intel specific, but we are using them for Intel. This is a power manager, which provides limited control over the pod. In Kubernetes currently you don't really have any power over the configuration of the CPUs assigned to the pod.
C
So if you wanted to lower the CPU frequencies or raise the frequencies, and you do want that all the time in performance or sustainability environments, you can't do that. So the Intel Kubernetes Power Manager is designed to expose and utilize the Intel-specific power management technologies. Currently we have granular control over the configuration of cores: we can change the frequency of all the cores in a shared pool, and we can lower power consumption by controlling the frequencies of the shared-pool cores. And then these are the particular features we have.
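For context on what changing the frequency of the cores in a shared pool means at the lowest level, here is a minimal sketch assuming a Linux node with the cpufreq sysfs interface. The real Power Manager drives this through an operator, CRDs and its own library, not ad-hoc code like this.

```go
package main

import (
	"fmt"
	"os"
)

// setMaxFreqKHz caps the maximum frequency (in kHz) of one core by writing to
// the cpufreq sysfs file for that core. Requires root on a Linux node.
func setMaxFreqKHz(cpu int, khz int) error {
	path := fmt.Sprintf("/sys/devices/system/cpu/cpu%d/cpufreq/scaling_max_freq", cpu)
	return os.WriteFile(path, []byte(fmt.Sprintf("%d", khz)), 0644)
}

func main() {
	// Lower the frequency cap of a "shared pool" of cores, e.g. cores 4-7.
	sharedPool := []int{4, 5, 6, 7}
	for _, cpu := range sharedPool {
		if err := setMaxFreqKHz(cpu, 1_500_000); err != nil { // 1.5 GHz
			fmt.Fprintln(os.Stderr, "cpu", cpu, ":", err)
		}
	}
}
```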
C
These are SST-BF, SST-CP and then frequency tuning. Currently, I'll advise everyone to wait about a month, maybe less, until we release a new version of the Power Manager, because we're changing from the library we were using, which was a Python library, to a Go library that we've also built. It's easier to deploy, it's faster for us to get functionality in, and it's open source. Those are also shared here.
C
So I can share those links also, and we also have a white paper on how to use these. I don't know, we like it. It's currently an operator and it'll remain an operator. In the future I'd like us to add a gRPC interface to it, so you can control cores from outside the Power Manager, through the Power Manager basically, so that it doesn't have to be through the pods.
C
And then this last piece, which is a community project more than an Intel project, is that with the current state of the kubelet we have some restrictions that we can't handle today. You can't mix pinned with shared cores. You can't choose which NUMA zone a task spreads to, so the topology manager does the packing across NUMA zones differently.
C
It can't handle affinity of anything below node level for a pod, so it doesn't support CPU-less or memory-less nodes, and there's still a maximum of eight NUMA zones, which, as you start looking at the tiling that parts with a lot of cores are doing these days, plus the fact that there are multiple sockets, is a pretty big limitation at this point.
C
And part of the challenge to this is that the kubelet has a set of resource managers that have to be addressed every time we add a new feature to the kubelet. The current solutions, including CRI Resource Manager (CRI-RM) and CPU Pooler, work by turning off the kubelet functionality entirely, which can have unintended consequences if you're assuming something's working. And we still cannot schedule cores by pods, or across specific NUMA zones or affinities.
C
...or CRI-RM, while still being native to the ecosystem. And if you want info on CRI-RM, Sasha's here, so he can give it; he probably has a presentation in his back pocket he can go through. The future is: we want to finish getting this RFC through and developed, create a KEP, start work, and get the kubelet remodeled following the specifications, which I have links to. We want to plug our resource managers into the new model, which will be more native.
C
And if we have time, we're at the half hour, we can go over the RFC for the CPU landscape exploration doc, where we went through all of the different issues and then customer requests, etc., and we put them all in this document so that we knew what we were missing. So that list that we have up there mostly comes from customers or from those issues.
C
So at the beginning, this basically goes through the fact that Kubernetes was initially written with a simple model of node resources and how they would be configured. This has worked well for the generic case, but now we have a wider range of use cases, which is why a lot of you are in this group: you have your own specialty use cases for HPC or AI, or any other specialty case.
C
So we wish to move to a kubelet resource plugin model similar to how specialty resources are handled within the device plugin model. Here's what the kubelet looks like currently: you have the kubelet, you have the topology manager, and then you have the hint providers. The providers are your CPU manager, your device manager, your memory manager, and everything has to go through the topology manager. So anytime you make changes, you have to make sure all of these places work correctly.
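A simplified Go sketch of that flow: the kubelet asks each hint provider for NUMA placement hints and the topology manager collects and merges them before allocating. It mirrors the shape of the kubelet's internal interfaces but is not the actual kubelet code.

```go
package main

import "fmt"

// TopologyHint says which NUMA nodes an allocation could come from.
type TopologyHint struct {
	NUMANodes []int
	Preferred bool
}

// HintProvider is the role played by the CPU, device and memory managers.
type HintProvider interface {
	Name() string
	GetTopologyHints(container string) []TopologyHint
}

type cpuManager struct{}

func (cpuManager) Name() string { return "cpu-manager" }
func (cpuManager) GetTopologyHints(container string) []TopologyHint {
	return []TopologyHint{{NUMANodes: []int{0}, Preferred: true}}
}

type deviceManager struct{}

func (deviceManager) Name() string { return "device-manager" }
func (deviceManager) GetTopologyHints(container string) []TopologyHint {
	return []TopologyHint{{NUMANodes: []int{0, 1}, Preferred: false}}
}

func main() {
	providers := []HintProvider{cpuManager{}, deviceManager{}}
	// The topology manager collects hints from every provider before deciding
	// where the container's resources land; changing one manager means keeping
	// this merge step, and every other provider, consistent.
	for _, p := range providers {
		fmt.Println(p.Name(), "->", p.GetTopologyHints("my-container"))
	}
}
```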
C
So the exploration doc is listed here. There is some commentary on it; I would like more commentary, just to make sure that when we start the KEP we know what we're doing as far as who's working on what, so we make sure we're handling everything. The cases I have in there are already in there. So you can't... oh, the other one.
C
So solutions to any one of these challenges require a related solution to optimize memory. If we change where the cores are, now we have to check the memory. So if you touch the CPU manager, you'll now have to touch the memory manager, and you still have to touch the topology manager, no matter what you do.
C
So our design proposal is to make a pluggable resource hub, basically, instead of continually retrofitting functionality onto the existing model: to pull all the resource managers that are currently in there, topology manager, CPU manager, device manager and memory manager, out into a plugin and then work backwards from there. We're still going to need to also handle the runtimes.
C
So whatever plugin we do has to be both, because we do want to roll those managers out, probably into a gRPC piece, but we still also want to make sure you can route them through the runtimes, because there are projects, including CRI-RM, that route the resources through the runtimes.
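Purely as an illustration of what rolling the managers out into a plugin could look like, here is a hedged Go sketch of a resource-manager plugin contract. No such kubelet API exists today; the names are invented, and the real proposal would presumably be a gRPC service in the spirit of device plugin registration.

```go
package main

import "fmt"

// ResourceManagerPlugin is what the kubelet (or the runtime) would call instead
// of its built-in CPU/memory/device/topology managers. Invented for illustration.
type ResourceManagerPlugin interface {
	// Admit decides whether the node can satisfy a container's resource needs
	// and returns an opaque allocation the kubelet or runtime applies.
	Admit(container string, requests map[string]int64) (allocation string, ok bool)
}

type exclusiveCPUPlugin struct{ freeCPUs int64 }

func (p *exclusiveCPUPlugin) Admit(container string, requests map[string]int64) (string, bool) {
	want := requests["cpu"]
	if want > p.freeCPUs {
		return "", false
	}
	p.freeCPUs -= want
	return fmt.Sprintf("pin %d exclusive cpus for %s", want, container), true
}

func main() {
	var plugin ResourceManagerPlugin = &exclusiveCPUPlugin{freeCPUs: 8}
	if alloc, ok := plugin.Admit("db-0", map[string]int64{"cpu": 4}); ok {
		fmt.Println(alloc)
	}
}
```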
C
So this particular one will go through the goals. We want to be able to plug resource managers into the kubelet to allow customization of resource requirements. We want to be able to export resources to expose them to the scheduler; that piece may be more complicated, because now we have added annotations.
C
We want to make it simple to expand resources to those currently not envisioned; so when you're talking about memory in the memoryless nodes, or CPU-less nodes, there are other components there. We want to make it simple to expose attributes about resources. Non-goals: we don't want to break any existing use cases.
C
So whatever solution we add, there should be full support of default behavior, and we don't want to change default behavior. We don't want to create any more latency than there is today for scheduling. That's, I thought, maybe something we do in the future, or look at after we get this done, since there is still quite a bit of latency in scheduling.
C
This is the first time I've done this; it has been done before, I'm sure.
C
We do have some resources to work on this, but we would like community help, you know.
D
C
Testing, yeah. Basically we're looking currently at pulling out all of the current KEPs that were already put in for the CPU manager and device manager, all of the managers, and then looking at those particular tests.
D
Well, I can say a few words, because there are a few additional things besides the CPU management activities. If you can stop sharing, so I can share my screen.
D
Yeah, that's good. All right, so I'm just going to reuse a couple of slides from the presentation that me and one of our team members, Antti, did on HPC and batch workloads at the last KubeCon. So Marlo already mentioned our project here.
D
The history of the project is that we tried to create kubelet resource plugins about three or four years ago, and at that time the community was not ready.
D
Now it looks like the community is a bit more receptive, but meanwhile, to validate all the ideas and to see that what we are proposing is actually working, we needed to have some solution, and we came up with an intermediate step, and this intermediate step is the CRI Resource Manager. It works as a normal container runtime, so the kubelet sees it as a container runtime, and whatever it does is absolutely transparent towards the kubelet. It doesn't reinvent the wheel.
D
So in the backend it still uses containerd or CRI-O or whatever you prefer to use, but what it does is allow you to have a dedicated set of policies on how you are managing the resources. We have policies related to hardware, so all the scenarios Marlo just mentioned, like the limit of NUMA nodes or memory tiering in different setups. All of these things we have tried; we know how to work with them, so we have tests with huge machines, like 32 sockets and so on.
D
We had scenarios with different memory tiers, like how you bind to this memory: PMEM, CXL memory, which is upcoming in the hardware, and so on. We also tried to work from the perspective of not only hardware but also the application. So, for example, if you have a set of containers which need to work together, let's say your application plus a service mesh container, then when the data is passed between those two containers...
D
...you don't really want to run across L3 cache domain zones or, even worse, memory domain zones and so forth. We support container affinity and anti-affinity, so, for example, your database should not be affected by the CPU consumption of a backup container or something similar.
D
So we provide to the user a set of knobs for how to opt in or opt out. And actually the main difference between the team where Marlo is working and my particular team is that my team is focusing specifically on what is happening within the node: all the details, all the knowledge of the hardware, and all the combinations of how it is really going to work.
D
So, as I mentioned, right now it's implemented as a kind of proxy between the kubelet and the actual container runtime, but we are working together with the containerd project and the CRI-O project to implement the thing which is called NRI, the Node Resource Interface. It's also a plugin interface, similar to what the kubelet community is now thinking of implementing.
D
So right now the communication between the kubelet and the runtime is kind of imperative: the kubelet dictates how certain things need to be implemented, like the CPU set, a bunch of other things, or passing a set of parameters and so on. The problem with that is that it was okay five years ago, when we had only runc as the runtime; in the current set of runtimes we have VM-based runtimes like Kata.
D
There the kubelet's assumptions are not necessarily true, so some information is available only in the runtime. So the thing my team is trying to do is make sure that certain information is properly passed between the kubelet and the runtimes, and to unify how it's done. Besides the CPU, we have several other activities, like NRI, which I already mentioned, but we also have a few things which are related to class-based resources or quality-of-service kinds of resources. So it's cache, it's memory bandwidth...
D
Well, to some degree memory tiering can be represented as quality-of-service scenarios and so on. And another thing is the device manager. Like what Marlo just mentioned about the GPU scheduling: it's good, it's a way to utilize the GPU based on the current device plugin API. But the problem is that the current device plugin API contains a lot of, I wouldn't say design mistakes, but inefficiencies.
D
If we are looking from a current hardware point of view, it worked well if you have single exclusive use of hardware without any knowledge about internal resources, no shared usage and so on. But as soon as we start thinking, okay, let's have one physical GPU or some accelerated device be shared, let's think about the memory on it, let's think about the internal topology of those accelerators, and so on and so forth...
D
...those simply didn't work without different workarounds, and what we implemented with the GPU device plugin for Intel accelerators is also a set of workarounds. NVIDIA has their own for NVIDIA GPUs, we have our very own, but it's all not really extendable. So we are working together with NVIDIA, and recently we also have people from other projects joining.
D
One notable one is Akri, for IoT kinds of devices, network-attached devices. We have those two initiatives: one is CDI, the Container Device Interface, once again on the runtime level, how we attach... sorry, how to attach a device to a container, all the nifty details of how it should be done at a low level. And then the upper part, the kubelet part, is dynamic resource allocation.
D
So this is revisiting how the user can request the device. Going from the previous model of using extended resources, then trying to have all kinds of combinations of those resources, and then GPU scheduler extensions and so on, we are coming to an interface similar to persistent volume claims. So you request a device of a particular class.
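A conceptual Go sketch of that claim-style model: request a device of a particular class, analogous to a PersistentVolumeClaim. These types are invented for illustration and are not the actual Kubernetes dynamic resource allocation API.

```go
package main

import "fmt"

// DeviceClass is published by a vendor driver, like a StorageClass.
type DeviceClass struct {
	Name   string
	Driver string
}

// DeviceClaim is what a pod would reference, like a PVC: "give me one device of
// this class", with the driver and scheduler deciding which node can satisfy it.
type DeviceClaim struct {
	Name      string
	ClassName string
	// Free-form, driver-interpreted parameters (e.g. minimum GPU memory).
	Parameters map[string]string
}

func main() {
	class := DeviceClass{Name: "example-gpu", Driver: "gpu.vendor.example.com"}
	claim := DeviceClaim{
		Name:       "training-gpu",
		ClassName:  class.Name,
		Parameters: map[string]string{"memory": "16Gi"},
	}
	fmt.Printf("claim %q requests a device of class %q via driver %q\n",
		claim.Name, claim.ClassName, class.Driver)
}
```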
D
So this is like a set of puzzles that Intel overall is working on across multiple teams in resource management: the scheduler, the existing kubelet things, the low-level runtimes, and combinations of them.
D
I think I will stop here. If there are additional questions, I can pull out some other slides or some other details.
A
Cool, thank you. I don't know if anyone else has any questions.
B
So I'm curious how the folks in the traditional resource management and scheduler community would interface with something like this. If you had somebody from the Altair or SchedMD community that wanted to pass back to the scheduler, to give it information about what components of a node, what resources on the node, it should have, is there something in what you're proposing here that they would be using, or is that kind of an entirely separate thing in some respects?
D
So there are several things, several ways we can tackle it. Right now the kubelet is discovering the resources and then making assumptions about how those resources are present on the node; that doesn't necessarily match what the runtime actually has and what the runtime schedules. But that's one side of the story; I will come back to it.
D
The second part of the resources is extended resources, and here we have two variants for how they can be announced to a scheduler. One is device plugins, so a device plugin says: I have this amount of instances of a particular resource type. The second variant is that you can patch a node object and say this node object has this amount of this resource allocatable, and then the kubelet will do simple accounting.
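A minimal sketch of that simple accounting for an extended resource: the node advertises an allocatable count, and admitting a pod just subtracts its request. The resource name and numbers are illustrative only.

```go
package main

import "fmt"

type node struct {
	allocatable map[string]int64 // e.g. patched onto the Node object
	used        map[string]int64
}

// admit returns true if the node still has enough of the extended resource
// and records the pod's request against it.
func (n *node) admit(pod, resource string, request int64) bool {
	if n.used[resource]+request > n.allocatable[resource] {
		return false
	}
	n.used[resource] += request
	return true
}

func main() {
	n := &node{
		allocatable: map[string]int64{"example.com/foo": 4},
		used:        map[string]int64{},
	}
	for _, pod := range []string{"pod-a", "pod-b", "pod-c"} {
		fmt.Println(pod, "admitted:", n.admit(pod, "example.com/foo", 2))
	}
}
```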
D
It counts how many pods are using this particular resource. To help with that we have NFD; our GPU device plugin works together with NFD to automate the announcing of those resources, so, for example, the millicore parts of a GPU, or the GPU memory, are handled by the NFD plugin.
D
The thing I mentioned, this dynamic resource allocation, has a similar setup to storage drivers: you have a cluster-level component which works together with the scheduler, and you have a node component, which is responsible for actually attaching a device to a container and working together with the runtimes to do it. So it will be the job of this cluster-level component to talk with the scheduler, to make sure the resources are available and can be consumed for the pod, and to provide topology information on which nodes these resources will be available.
D
Regarding the kubelet and runtime part, it's a long, long way to actually get there, but what we eventually need to do is remove this discrepancy between the kubelet's knowledge and the runtime's knowledge. It means that at some point we need to revisit the protocol for how the kubelet and the runtime talk about the resources.
D
So right now, for these class-based QoS resources, what we have are CRI messages where the runtime tells the kubelet which quality-of-service classes are available, what types of QoS classes are available, and what the potential values are.
D
So then the kubelet can report it into the node status, and the scheduler can consume it to make a scheduling decision. It's a similar model to what we have right now for the native resources. One of the differences is that the kubelet is not discovering it; the kubelet gets it from the runtime.
D
So if we are talking about resource management plugins, regardless of which level, kubelet or runtime, sooner or later we will need to have exactly the same interface. The kubelet, or sorry, the plugin should be able to tell the runtime or the kubelet what kind of additional resources might be available, so that it can be used in the scheduling decision and, obviously, in the node status.
D
It was a huge roadblock in terms of dockershim, because you still needed to take care of the Docker API, which was quite simple. Now that dockershim is removed, we have a bit more freedom in how we can evolve the CRI protocol.
A
I think that's probably a good time actually, five to, so thank you very much, Marlo and Sasha, for coming along. I think a couple of people have dropped, so sorry about that, but we've got it all recorded and we'll share it. So thank you very much for your time. It's really, really good stuff.
C
D
A
Yeah, that's great, yeah, and any slides, just chuck them into the Slack channel.
A
Okay, thanks very much. So yeah, that's probably it for today. Our next session is actually going to be the 6th of July; I think it was in the agenda. Previously it was down for the 29th of June, but it won't be then, because we've already done two sessions in June. So yeah, the 6th of July will be the next time, and we're going to have a session, I think, on Cilium and eBPF. So yeah, thank you. Thank you again, Sasha, Marlo, and see you all next time. Thanks.