From YouTube: Kubernetes SIG Node 20221206
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
A
Good morning, everyone. Today is December 6, 2022, and this is our weekly SIG Node meeting. We have a bunch of agenda topics, and also welcome back, Sergey — I noticed this morning when I opened the agenda. Of course, I also know you've been out and about, but we missed you the last couple of weeks. So welcome back. Maybe we start as usual: Sergey, do you want to update the team on our PR status and KEP status? Thanks.
B
Yeah, we're entering 1.27 development — I mean we will enter it soon — so not much is happening, but I see some PRs being created and not many being merged or closed. I think right now there will be more attention on the enhancements repository and the kubernetes repository, so if you have energy and time, please review what you can. Thank you.
A
Cool, thanks. So next I think we have the demo — I already made both you and the other speaker [co-hosts].
D
Yeah, we're not doing a demo today — I thought I had to put that in there — but we are starting work on a KEP for pluggable resource management in Kubernetes, and I can go through the diagrams I presented here before. I've been speaking with Kevin Klues and Patrick Ohly regarding dynamic resource allocation, maybe piggybacking off some of that work and incorporating it here — basically taking dynamic resource allocation... and here, I'll turn my camera on.
D
Basically, we'd take dynamic resource allocation and use that as a base, instead of the device plugin, which is what we were originally going to use. But I can run through a slide deck quickly to give people an idea of what we're talking about, and then we're looking for people who want to participate in a working group to make sure the design is right as we go forward.
D
There you go. So basically the target customers we've been talking to — and there are more than this — are HPC, ML/AI, telco, and then Wasm-type workloads.
D
Telco includes anything like FlexRAN, that sort of thing, so we were looking at ways to incorporate CPU management there, because they need a mix of shared and static cores, and also isolated cores. So there are only three different types of cores they're looking at — but then you start wandering down the sustainability path.
D
Now you want cores at different frequencies, and you want to be able to schedule them accordingly and start treating cores more like a resource with different states, instead of homogeneous like we do today. Going down that path — we had this discussion back in December about other use cases in addition — there's also still the situation that, with operators today, you still have to talk to the API server.
D
So if you're doing on-node resource management, for instance with power — because we have the power manager as well — you have to talk to the API server, and it's not until the operator is set up that you can do that. Other people are doing this with operators as well, but it's not ideal.
D
It's added traffic. I can go through this full list, but basically the point is to make the kubelet more like a microkernel, with pluggable, special custom drivers for resources, instead of pushing all of that management into Kubernetes. So the plan here — here's what we currently have: the topology manager, memory manager, device manager, and CPU manager. Pardon, I don't have dynamic resource allocation in here yet.
D
I
haven't
updated
that
since
that
went
in
just
recently,
and
we
have
all
the
managers
talking
to
each
other
internally
right
do
you
have
container
manager,
topology
manager
and
all
the
hint
providers
going
in
choosing
topology
today
and
where
we'd
like
to
go
is
basically
have
I'll
skip
to
here.
Basically,
phase
one
is
to
have
a
default
plug-in
that
does
everything
we
do
today.
We
do
have
this
working.
This
was
the
demo
we
we
do
want
to
show
is.
D
Basically
we
can
get
this
working
with
the
plugin
and
then
feed
into
the
Kublai
and
then
have
these
resource
plugins
externally
to
biology
info
for
the
first
round
would
still
have
to
be
passed
through
because
the
topology
manager,
but
the
second
round
basically
goes
and
pulls
that
in,
but
these
were
old,
slides
So.
D
After talking to Kevin on Monday, and Patrick, I think instead we'd like to use dynamic resource allocation. We can also get the different cores externalized to the scheduler, all the way up. But basically, the internals of your resource management plugin look more like this.
D
We
have
the
coupon
at
the
container
manager
and
then
you
have
the
resource
manager,
that's
basically
what
you're
doing
to
plug
in,
and
this
selects
also,
if
you're
doing
on
the
Node
resource
management,
for
instance,
power,
I'm
part
of
sustainability.
That's
part
of
the
reason
that
powers
part
of
this
you're
able
to
directly
pull
in
information
from
the
kublet
as
far
as
the
pods
running,
instead
of
having
to
use
the
API
server.
So
this
reduces
your
load
on
the
API
server
and
your
sensitivity
to
how
they're
doing
proposed
next
steps
is.
D
We
we
do
have
an
issue
open,
so
we've
done
that
already
and
that's
in
the
references
section.
So
we
want
to
publish
a
cat
for
applicable
resource
management
and
submit
code
which
covers
the
plugin
mechanism
and
first
plugins
for
a
community
discussion
and,
first
plugin
being
of
course,
the
default
one
and,
of
course,
at
this
point
would
be
gate
logic
right,
so
you
can
disable
the
current
internals
or
you
can
do
it
externally.
The
demo
we
have
is
actually
using
the
code.
D
That's
already
topology
under
CP
measure
memory
manager
as
is
and
rolling
it
into
a
plugin.
So
it's
not
added
it's
not
duplicated
code
or
added
code.
We
have
end-to-end
compatibility
and
we
want
to
double
check
that
the
performance
impact
of
the
resource
management
versus
standard
kubernetes
is
not
worse,
yeah
and
support
the
current
state
of
device
manager
together
with
the
political
resource
manager.
So
that's
what
we're
trying
to
work
on
we're.
Looking
for
other
people
to
be
engaged
and
Community
Support,
all
these
things
so
today
you've
seen
all
this
before
as
well.
D
Here's the issue we've created, and we should have a KEP started by next week for people to start contributing to. The goal — we're hoping for 1.27, but we'll see. And then we had an initial RFC; I see Sascha is on here — he was also part of that.
D
And
I
can
start
a
working
group
also
with
the
within
slack
that
helps
so
people
can
meet.
We
can
have
further
discussions.
I
know,
I
had
started
this
a
while
ago,
but
read
some
stuff
that
we
have
to
attend
to
First.
D
Yeah, I'll be glad to share, and I guess people can reach out to me on Slack if they want.
B
One point here — what we discussed with you — is that there are two aspects of this work. The first aspect is obviously the technology: how exactly we do it, what kind of obstacles we need to jump over, and what implementation details we need to work out. But the second aspect is the community aspect. We need to make clear that this work is not to benefit specific vendors. It's likely we will have some set of open source plugins that will work for everybody, and I...
B
Think
it's
a
vendor's
interest
to
share
their
plugins
as
well,
so
it
wouldn't
be
absolutely
islands
of
isolation
when
some
people
may
work
with
only
a
subset
of
customers
and
kubernetes,
who
makes
like
will
fragmented
because
of
this
work.
D
I'm hoping in the next week or two we can start the discussion, so I'll put a bunch of times up today and people can enter. I understand we're right up against the holidays, but I'd like to at least get enough going that those of us who do work somewhat during the holidays can get some work done toward where we want to be, and make sure we're not shutting anyone out.
G
I have some questions, because with cgroup v2 memory usage, the current feature gate is in alpha, and I think there are some problems here. There are also some comments that we probably need a pod-level setting for it, but right now we only have the memoryThrottlingFactor in the kubelet, and I think the current factor does not work as expected. In my opinion, the first problem is the one I want to talk about.
G
Can
we
optimize
this
Behavior
to
to
make
it
considering
the
memory
request?
This
is
the
first
processor
here
and
another
problem,
so
is
that
the
defaulting
value
is
for
0.8
now,
but
this
may
make
some
some
parts
like
Java
to
have
some
performance
issue,
because
we
set
the
memory
high
in
crosstalk
review
to
this.
This
would
be
a
problem
if
someone
enable
this
feature
gate
so
I
think
this
this
the
default
defaulting
value
is
not
that
proper.
C
We chose that number as something we didn't have to worry about — it was an initial alpha and we just wanted some number. To move it forward, we're looking for feedback, so this is in a way feedback from you, which is great, and then we can figure out the best way to set it. Should we make it configurable at the kubelet level, or do we really expect that it...
C
It
will
have
to
be
done
at
each
pod
level,
or
can
it
be
tied
to
like
your
QRS
classes,
for
example
those.
I
Yeah, just to provide another comment: I think we had some discussion — we were originally deciding whether to put this on the pod spec or not, and there was a lot of discussion that it would add some confusion if we added it to the pod spec. So we tried to come up with a heuristic, which is the 0.8, but I think it makes sense that it's not always going to work. I like the proposal you put together of taking the memory request into account, making it more like a step function. Because, if I understand correctly, your concern is basically:

if the memory limit is very high, then with the 0.8 the throttling will kick in comparatively low, right? So you want less delta between memory.high and memory.max — is that the concern?
G
I understand — yes. I just started testing this; I did some initial testing before we use this feature gate. So we found the problem, and this may be other customers' problem too, I think.
A
I
from
I
just
saw
this,
the
the
foreign
for
me,
the
first
reaction
is
actually
punishly
guarantee
workload
even
more
right,
because
we
don't
take
the
request
into
consideration,
so
the
the
burstable
can
play
the
game.
So
then,
you
are
kind
of
the
more
encouraged
people
using
burstable,
because
the
the
formula
don't
take
off
the
request
in
in.
A
We
have
the
similar
problem
when
we
first
have
the
eviction
policy
eviction
like
the
okay,
we
are
at
the
system
Network
and
we
pre-detect
this
one
so
which
one
to
kill
so
the
original
algorithm
also
it
is
I-
have
the
problem.
It's
kind
of
more
polished,
obviously
polish,
the
people
even
for
the
burstable,
like
the
we
didn't,
take
off
the
request
in
and
the
way,
but
the
customer.
A
Actually
certain
customer
really
do
the
resource
management
well,
and
the
monitoring
and
put
the
request
number
is
better,
but
people
can
game
in
that
one.
So
so
we
spot
that
problem
with
this
problem.
So
there's
a
yeah,
I
think
the
park.
You
sparked
the
right
problem
here.
Potential
could
be
make
the
user,
especially
for
we
are,
as
the
platform
offer,
the
vendor
right.
G
Oh yeah, yes, that's the problem. So what's the next step — can we update the KEP for this feature? I don't know what the next step is, or which next step is better.
I
I think it's something we want to iterate on in the next release anyway, so this is really good feedback at a good time, and we can try to update it and figure out the best solution forward. One of the other pieces of feedback I received — I did a talk about this feature at KubeCon with a co-speaker — was this:
I
Some folks actually wanted to completely disable memory.high being set in some cases, because they don't want any throttling at all for their memory — basically they only want the previous behavior. The problem right now is that once you turn on the feature, all pods will have memory.high set, and I think some folks want to be able to opt out on a per-pod basis.
I
So that's another thing we should consider. Anyway, I think we can collect the different feedback and figure out how we can update it.
A
So may I suggest, David, that you also put your feedback in — we have the central Google doc for collecting people's feedback, right, since [the authors] initiated that one. Can we put what we heard there, and all the other input — people, feel free to add it. Then we could have the proposal, the enhancement KEP, revised based on how we currently plan to proceed, and then we can debate and discuss over the KEP. But first, let's start putting the feedback into the doc — what kinds of things we've already heard.
H
Hello. We have a small issue regarding the kubelet on Windows. Basically, there's an issue which occurs whenever a plugin has to re-register on Windows, because at the moment the reconciler in the plugin manager looks at the timestamp to figure out whether the current plugin has to be re-registered or not. But time measurements on Windows are a bit less fine-grained than on Linux — pretty much, if you call time.Now on Windows consecutively, you're going to get the same timestamp, and the granularity is somewhere between 1 and 15 milliseconds.
H
So,
basically,
if
the
plugin
has
to
be
re-registered
within
that
window,
it
will
not
because
it
will
basically
have
the
same
timestamp,
and
so
it
will
not.
We
currently
have
a
couple
of
unit
tests
which
are
failing
because
of
this
on
Windows.
H
Pasted
in
in
chat
the
basically
two
plugin
manager
tests,
which
are
failing,
and
you
can
even
see
the
in
the
error
message
that
says
that
expected
tends
to
be
timestamp
to
be
newer
than
the
old
one,
which
is
which
is
which
it
isn't
and
I
have
set.
The
pull
requests
for
this
I
think
it's
linked
in
the
document.
H
Which
proposes
to
change
the
Reliance
on
those
timestamps
again,
basically,
currently
the
reconciler
detects
whenever
a
plugin
has
to
be
registered
re-registered
based
on
the
timestamp
right
now,
I'm
proposing
to
just
use
to
use
instead
uid,
which
is
set
whenever
a
new.
Whenever
the
the
desired
set
of
world
plugins
are
updated,
then
the
reconciler
will
notice
that
those
are
different
and
then
re-register
properly
the
Purity
tests
for
Windows
pass
with
those
changes
for
those
three
tests.
So
if
you
could
take
a
look,
that
would
be
great.
A
But we need to fix this one, if only for testgrid — the testgrid results are always [red] otherwise. So okay, let's fix that as soon as possible, right? Otherwise how are we going to know whether an issue is blocking a future issue? That's all. So, Sergey, you're on for the next one. Thanks.
B
Yeah, I wanted to give an update on the sidecar working group. We started it a few weeks back and we've had three meetings so far. You can find all the meeting agendas and recordings in a document I linked in the agenda.
B
What
we've
been
discussing
in
this
meetings
is
what
were
additional
proposals
and
requirements
for
sidecar
containers.
We
scoped
it
down.
We
cut
out
proposals
that
were
rejected
in
the
past
and
you
know
why
they
were
rejected.
Then
we
formulated
new
proposal
and
on
today
meeting
earlier
this
morning.
For
me,
every
morning
we
just
we
discussed
how
it
may
look
like
in
yaml.
B
We
still
have
open
questions
about
termination
ordering
and
what
kind
of
improvements
we
can
make
on
termination
stage,
but
I
think
we're
pretty
close
to
final
propose
on
how
side
Cutters
may
look
like
in
127.
I
will
I
plan
to
send
update
on
this
effort
later
today.
In
case
there
are
some
people
who
cannot
participate
in
discussions
and
want
to
highlight
some
problems
with
the
proposal
or
tell
us
something
that
about
scenarios
that
we
don't
support
and
have
to
support
with
this
proposal.
A
So, looking forward to the report. I see Michelle joined us — Michelle, do you want to talk about Data on Kubernetes? Yes.
F
I can do it quickly — hopefully it won't be too long. Basically, there's an end-user Kubernetes group where folks talk about running stateful workloads on Kubernetes. There are a lot of database vendors who are building operators for their databases, and there are also end users who are using those operators, and so that forum...
F
Currently
they
give
a
lot
of
talks
and
they
have
they
trade
a
lot
of
best
practices
with
each
other
on
like
how
to
write
an
operator
and
best
practices
like
setting
pot,
anti-affinity
and
pdbs,
and
things
like
that.
F
I
wanted
to
start
an
effort
to
have
more
active
engagement
with
that
Community
from
the
kubernetes
side,
and
so
we
can
actually
because
they're
end
users
and
you
know
they're,
basically
our
customers,
so
I
wanted
to
have
a
more
active
engagement.
Where
we
can
kind
of
understand.
F
You
know
all
sorts
of
requirements
that
they
have
and
other
other
things
they
would
like
to
see
us
have
in
kubernetes
to
support
their
use
cases
more,
and
so
we
reached
out
to
them
already
and
we're
gonna
schedule
in
January
a
session
to
have
a
round
table
basically
between
kubernetes
maintainers
and
the
members
of
the
community,
and
we
can
just
you
know,
start
kind
of
brainstorming
ideas
and
and
other
and
get
their
feedback
on
on
on
kubernetes
and
what's
it
like
and
so
I'm,
basically
making
a
a
world
tour
of
all
the
sinks.
F
To
like
to
see.
You
know
who,
in
all
the
various
sinks,
might
be
interested
in
attending
this
round.
Table
I
started
a
doc
to
just
collect
names
so
that
when
we
end
up
scheduling
the
first
meeting
in
January,
you
know
I
can
reach
out
to
everybody
involved
in
to
get
them
on
the
invite
and
then
I
think
in
terms
of
long
term
I
think
it
I
think
long
term.
F
It
might
be
interesting
to
see
if
there's
enough
ideas
discussed
there
to
perform
a
working
group
around
stateful
workloads
but
I
think
we're
not
we're
not
there
yet.
But
I
think,
maybe
after
one
or
two
of
these
roundtables
we
can
get
a
better
sense
of
you
know
how
many
problems
might
be
worth
solving
in
the
space
so
yeah,
that's,
that's
all
I
want
to
say
if
you're
interested
in
participating.
Please
add
your
name
to
the
doc.
A
Thanks, Michelle. Yeah — it's hard to find a community or company running Kubernetes that doesn't manage databases. But every time I ask for the details — and then ask again with a concrete example: what kinds of things can be configured, how resource management, Kubernetes in certain cases, and also the container runtime interface can address it — it always stays at that level of detail, but doesn't get down to the specifics: what's the problem domain, what's the specification? And another problem for me...
A
It
is
each
different
database
have
a
different
requirement.
So
so
it's
not
abstract
aggregate
abstract
enough
for
us
as
the
signal
to
attack
our
next
or
effective,
we
can
partner
with
the
storage
team,
scheduling,
team
or
the
other
team
to
to
get
work
together
and
include
after
Signet,
so
so
that
so
I'm
glad
we
have
finally
formed
some
of
the
iPhone
connect
is
Workforce
like
this,
so
we
could
talk
too
many
business
vendors
data
for
working
out
vendor
and
figure
out
aggregate
of
the
other
problem
of
abstract
the
problem
and
Define
the
problem.
F
I think, Ryan, you had a question about the doc. I gave access to the kubernetes-dev mailing list — are you on that list?
F
They recently changed the kubernetes-dev mailing list — it used to be a Google Group, and I don't know if it still is, but it's dev@kubernetes.io, which is where I shared it.
B
Yes — ahead of the next meeting's retrospective and KEP planning, I wanted to highlight the perma-beta and perma-alpha feature gates that we have in SIG Node. Some of them are not owned by SIG Node — maybe belonging more to SIG Security or some other SIGs — but I wanted to highlight them in case somebody wants to take ownership, and maybe, if we have time, we can go through them and share some information.
B
If
anybody
has
information,
I
think
up,
armor
was
planned
for
126
GA,
but
maybe
it
was
moved
to
127..
Yeah
Andrew
commented
on
that
next
one
Qs
reserved
it
was
in
Alpha
since
111
and
it's
Mark
is
owned
by
his
SAS
and
I.
Don't
know
the
status
of
it.
There
was
anybody.
B
Oh yeah — if we don't need it, let's remove it, or if we want it, let's progress it to beta.
A
To find out — Mac Dennis [name unclear], yeah, it's owned by that community, which is Google now. Mac Dennis is actually the main one leading that effort. Yeah.
A
I
think
the
macadamus
can
help
to
get
just
like
the
team.
Eau
Claire
right
so
I
asked
him,
but
did
he
actually
try
to
recruit
the
more
people?
So
are
we
trying
to
grow
more
people
for
community?
So
this
is
why
app
armors
take
really
long
time,
because
he
he
don't
have
benefits,
but
he
initiate
that
one.
So
he
went
to
other
people,
maybe,
but
he
can
lead
other
people.
So
this
is
maybe
make
Dennis
is
the
same
thing
here.
So
yeah
yeah.
B
Is
anybody
on
the
call
interested
in
gn's
feature
I?
Think
it's
just
mostly
paperwork,
but
maybe
some
test
promotion
to
what
is
called
conformance
may
be
required
and
stuff
like
that.
B
Anybody,
okay,
if
you
have
interest
contact
me
or
Mike
next
one
custom,
CPU,
CFS
water
period,.
A
If
I
remember
correctly,
so
so
I'm
not
sure,
do
we
still
need
this
one.
Okay,.
B
This one — Francesco wants to take it.
J
Be
happy
to
help
the
API
itself
graduated
to
v1ga
in
120
around
120,
but
you're
right,
you're,
totally
right.
The
kubernet
support
is
still
better,
so
I'm
more
than
happy
to
help
you.
B
I believe topology manager was discussed for 1.26 as well, but I forgot what happened.
A
Yeah — our decision, our agreement, was that graduating the existing one to GA has the higher priority. Some people think it should be deprecated given all the new changes, but so far I don't think any of the new proposals can completely replace the previous one. So I just wanted to mention here that I do think the existing work has the higher priority. Yeah.
A
So can we have — it sounds like even though we made a decision last time, things change, and people might not have been involved in that decision. Can we have a one-pager describing all those features? I remember a one-pager, but the new changes aren't included in it. Can we at least bring it up to date, and then we can have, front and center, the SIG Node strategy we can discuss for resource management work?
A
No,
what
group
could
we
put
all
those
kind
of
things
together
and
the
memory
management,
the
new
node
management,
the
CPU
manager
all
together
and
the
current
status?
And
what's
our
strategy
which
one
to
to
graduate
which
one
it
is
a
new
feature,
I
had
additional
functionality
and
which
one
is
the
new
enhancement.
So
can
we
have
some
something
I
think
that
will
be
beneficial
for
the
community
for
beneficial
for
this
group
and
also
for
our
customer
I
mean
users,
foreign.
K
I just want to highlight one thing here: we already graduated the device manager and CPU manager in this release, so maybe that should be taken into consideration as well. It probably wouldn't make sense to make a different decision now for the other resource managers when we already made the decision to graduate the CPU manager and device manager.
B
Okay, any takers on owning this — the one-pager?
B
Okay,
this
one
is
downward
API
huge
pages,
but
now
your
suggestions
actually
Ryan.
Yes,.
A
That's all for today. Any other topics people want to bring up or discuss here? Otherwise, everyone gets 15 — almost 30 — minutes back.