From YouTube: Kubernetes SIG Node 20200407
B: Sorry — in the GC class I didn't find an immediate data structure giving me the dead containers' IDs. The attachment above this particular comment — I used an attachment because in the log I think I spotted a deficiency. Of course, what Lantao suggested is a lot simpler than my PR.
B: However, if you read my comment, what I found is that for this log symlink, which is built in a utility class, we impose basically a 255-character limit; the suffix, which is ".log", leaves 251 characters for the combined length of the pod name, container name, and container ID. In the current convention, the container ID is the last part.
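The length arithmetic described here can be sketched as follows — a hypothetical Python illustration of the 255-character filename limit; the exact `<pod>_<container>-<id>.log` layout is an assumption made for the example, not the kubelet's actual format:

```python
# Hypothetical sketch of the limit described above: a filename may be at
# most 255 characters, the ".log" suffix takes 4, leaving 251 for the
# combined "<pod-name>_<container-name>-<container-id>" portion.
MAX_FILENAME = 255
SUFFIX = ".log"

def symlink_name(pod_name: str, container_name: str, container_id: str) -> str:
    # The container ID sits at the end in the current convention, so it is
    # the part that gets cut off when the combined name is too long.
    base = f"{pod_name}_{container_name}-{container_id}"
    return base[:MAX_FILENAME - len(SUFFIX)] + SUFFIX

def id_survives(pod_name: str, container_name: str, container_id: str) -> bool:
    # Can the full container ID still be reconstructed from the symlink name?
    return container_id in symlink_name(pod_name, container_name, container_id)

# A 64-char container ID survives only while pod/container names stay short:
assert id_survives("short-pod", "app", "a" * 64)
assert not id_survives("p" * 200, "app", "a" * 64)
```

This is why the assertion mentioned next fails: once truncation kicks in, the ID suffix is the first thing lost.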
B: So if you scroll a little lower, you can see my attempt in the Go test: if I try to reconstruct the container ID from this particular example, basically it would fail — the assertion would fail. So I looked at some clusters at my disposal; the longest such log symlink was 193 characters. So we're not there yet. However, since I have very limited visibility into actual deployments in production — if you read what follows the computation — I see several options.
A: Yeah, so — I apologize, I'm not fully up to speed on what the issue was that was being resolved, but I guess the first question I would have is: is there a gap in node e2e coverage for this function, and is there a way that we can demonstrate this flake, or this race, in a node e2e today?
B: So, prior to last Tuesday's meeting, I actually started writing some tests. It doesn't look good: basically, the GC and the log manager are started by the kubelet, so I feel a realistic and, say, not-bad-looking test would be at the kubelet level. I have some test code which I can attach to this issue. So, in summary, for your question — I think you asked two things. One is why there isn't an assertion in e2e. Indeed, there isn't, because there's no assertion with regard to the presence of the symlink. The second question is how to add this; that I'm still trying to figure out. But in the past week, as you can see, I focused on understanding and trying to implement the maintainer's suggestion, so I didn't spend much time on writing the test.
C: I don't know what Lantao suggested to you, but I think we talked about this one, and I explained it from my perspective. I am concerned because I looked at your PR, and my concern is about the lock. Looking at how you implemented it: you take the lock when you rotate the native log, and then there's the rename, and opening the file and rotating the file, and then there's the garbage collection.

My concern is the lock contention, because there are heavy I/O operations involved here, and I worry about the contention. So that's why I suggested — and, based on your reply, I guess Lantao also suggested the same thing — is there any way we could avoid it?

Basically, make sure, without a lock, what you suggested in your original proposal: at a certain time we make sure the container is removed, and before we remove the symlink we make sure the container died. So what I suggest is to just double-check that the container died. I think you tried to reply that there's actually no way for you to do that without the lock.

Maybe I haven't read his change yet, but based on what he just said: in the garbage collection, or in the log rotation, you don't have the data about the container status, right? You just don't know which container it is — you cannot use that container ID to get the container status.
B: Yeah, so to answer your question: the kill-container path is not isolated to the GC class. If it were isolated, we could maintain some data structure. But as I said toward the end of this particular comment, which is a bit long, suppose we maintain this data structure — does it live in the GC or in a reference manager?

However, we would need to garbage-collect this data structure as well, right? Let's say we have a long-running pod — running for, if not a year, say a month or something — and containers come and go. Of course, at any particular moment we can determine which containers were gone in the recent past. But then we need to garbage-collect this structure as well. Actually, I think you, or Lantao, haven't read my initial proposal.
B: My initial proposal is this: the GC runs every minute, right? So at minute n we take a look, and if some particular symlink is deemed to be unhealthy, we just record it. At the next minute, n+1, we check again: if this symlink file name can still be considered unhealthy, then we remove it. The rationale is that there could be a subsequent rotation of the log; however, a rotation shouldn't happen every minute, given reasonable defaults for max size, etc.
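The two-pass scheme described above can be sketched like this — a minimal illustration, where `is_unhealthy` is a stand-in for whatever check the real GC performs:

```python
# Sketch of the proposal: the GC pass records a suspect symlink on first
# sight and removes it only if it is still unhealthy one pass (one minute)
# later. `is_unhealthy` is a placeholder predicate, not the real check.
suspects = set()

def gc_pass(symlinks, is_unhealthy):
    removed = []
    for link in symlinks:
        if not is_unhealthy(link):
            suspects.discard(link)   # recovered, e.g. a rotation brought it back
        elif link in suspects:
            removed.append(link)     # unhealthy two passes in a row: remove
            suspects.discard(link)
        else:
            suspects.add(link)       # first sighting: wait one more pass
    return removed

# The first pass only marks; the second pass actually removes.
assert gc_pass(["pod_a.log"], lambda _: True) == []
assert gc_pass(["pod_a.log"], lambda _: True) == ["pod_a.log"]
```

The extra state is just the `suspects` set, which is the "very simple additional data structure" mentioned in the options below.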
B: My feeling is that we have several options. The first, as mentioned, is delaying the symlink removal by one minute — the additional data structure is very simple. The second is the introduction of the lock. And the third, the simplest I can think of, is to lift the container ID to the beginning, or to an earlier part, of the symlink's compound file name.
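The third option — moving the container ID earlier in the compound name — can be illustrated as follows; the name layouts are hypothetical stand-ins, not the kubelet's actual convention:

```python
# Hypothetical illustration of the third option: putting the container ID
# first means truncation (255-char limit, ".log" suffix) eats the tail of
# the pod/container names instead of eating the ID.
LIMIT = 255 - len(".log")

def name_id_last(pod, container, cid):
    # current convention: ID at the end, truncated first
    return f"{pod}_{container}-{cid}"[:LIMIT] + ".log"

def name_id_first(pod, container, cid):
    # proposed variant: ID at the front, preserved under truncation
    return f"{cid}-{pod}_{container}"[:LIMIT] + ".log"

cid = "a" * 64
long_pod = "p" * 200
assert cid not in name_id_last(long_pod, "app", cid)   # ID truncated away
assert cid in name_id_first(long_pod, "app", cid)      # ID preserved
```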
B: If you read my comment — I haven't implemented the alternatives I suggested, because some of the alternatives, especially toward the end of the comment, are even more complex than what I'm presenting now. If we had another use case where the GC class needs to know the mapping between container name and container ID with regard to dead containers, then probably that could be justified.
C: I'm not generally concerned about a lock; I just have the concern that the change last week introduced the lock at a very fine grain: basically there's the lock on the rotation of the latest log, and then there's some heavy disk I/O involved — I'd call it heavy because it's based on the resource usage on the node — and then there's another one in the garbage collection.

So that's basically it, and I think of the contention we'd have, because we need to remove those containers. So that's my concern. But today I looked at your code — I saw your updated PR — and I couldn't understand where you were coming from, because I did suggest, to avoid that problem, not to involve the symlink name but to check the container. I can't say for sure, but maybe I'll have some other idea.
A: We don't have a way of measuring how often this happens. I was just trying to work back, and it looks like the original reporter gave a bash script to try to reproduce it. What I'm wondering is: before trying to settle on a solution, should we get a node e2e that does the reproducer, and if we see that node e2e is still flaking, then we go and do a resolution?

If it's not flaking, I don't even know if we actually have the problem. But it seems like the first thing we should do is have the node e2e that runs this reproducer. We have one node e2e for the container log test that basically says: I've run a pod, I'll try to get the container logs — but it's definitely not stressing the use case. So maybe just stressing the use case is the first step I would recommend, because then we can look at saying: we ran this stressor twenty times, and the kubelet was still okay with the change proposed. But I don't feel a deep rush to fixing it without understanding the real-world number — how often this is happening — or a clear reproducer that's not from 2017, if there's more recent research.
D: Lantao's comment on it was that, you know, we should probably be able to check to see if the container is running before we remove the symlink, to fix the issue. I think he's right. It looks like the author just didn't have access to that information. We should be able to get that from the runtime. So yeah, I think we can fix this; we just have to do it without introducing a new lock.
E: I just plan to comment on the issue. I thought, actually, in the GC component we have a function to list all the containers — either list all the running ones or all the containers — and then we just need to check whether the container corresponding to a given symlink is there, and whether it's running, and decide whether to remove it.
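The check-before-remove idea can be sketched like this — an illustrative fragment where the container listing is a plain dict standing in for a real runtime lookup:

```python
# Sketch of the suggested fix: resolve a symlink back to its container and
# remove the symlink only when that container is absent or not running.
# The `containers` dict is a stand-in for a runtime/CRI listing; note it is
# a point-in-time snapshot, so a small race window can remain.
def should_remove(symlink_container_id, containers):
    state = containers.get(symlink_container_id)
    if state is None:
        return True              # container is gone entirely: symlink orphaned
    return state != "running"    # keep symlinks of live containers

containers = {"abc123": "running", "def456": "exited"}
assert not should_remove("abc123", containers)  # live container: keep
assert should_remove("def456", containers)      # exited: safe to remove
assert should_remove("zzz999", containers)      # unknown: orphaned symlink
```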
B: Prior to the removal of an unhealthy symlink, we list the live containers — but wouldn't that basically make for some other kind of race after we've obtained the listing of the live containers? Because it's dynamic, right: we don't know how long any particular one — or some of the live containers — will continue to live.

And we cannot just use the prefix of the container ID to match against either dead or alive containers. That's part of the confusion I wanted to express in my comment. So please allow me to come back to the comment from a minute ago. Looking at my PR: if the scope of this particular new lock — the symlink lock — can be moved inside the for loop, I'll update the PR; after that, maybe it's okay.

My concern with my proposal — alternative three — is that the GC and the log rotation, to my understanding, are independent, right? And of course we are going to add more tests; however, they only exercise a certain type of workload.
A: We were discussing the cluster e2e suite — or we were discussing the node e2e suite — and I guess what I'm trying to do is: one, I really appreciate your passion to try to make the kubelet better, but I'm also trying to balance that with the passion to make sure it doesn't regress. So having the tests first, to demonstrate the problem, I think is really the way — I'd love to pair both our passions together, if that makes sense.

I think that'd be the ideal path forward, right, because then we know that we're not regressing, and we know that we're testing what our users hit. And, in general, any opportunity I have to try to get more assistance in adding more node e2es, I'll take, and I would appreciate the help.
F: Great. So, for everyone that doesn't know, my name is Krzysztof, and I want to give you a brief presentation regarding enhancements toward high-performance packet processing. The purpose of this presentation is to give you a clear insight into what we want to do, because some related documents and PRs will show up at the next meetings as we are working on these enhancements.

So let's move to the agenda. First I would like to tell you what the purpose, what the goal, of these enhancements is; then a little about the background and motivation — this is based on 5G and the DPDK example — and what is required by DPDK to run with Kubernetes in an effective way; what limitations and challenges this brings to us; what changes we propose to be made; and what the current ongoing work is.

OK, let's get started. The objective of the proposed changes is to improve Kubernetes in supporting performance-intensive, high-throughput network applications, and there are actually several use cases for that. For example, containerized 5G deployments have strict requirements that have to be met regarding speed and latency.

OK, a little about the background. As 5G networks roll out, telecom companies try to shift from VNFs to CNFs, which basically means that packet processing won't be done in virtual machines and will instead be done in containerized environments. So, for example, UPF, which is a key component of 5G, must ensure effective packet processing to meet those requirements I mentioned earlier. And how to do this on the application side? The answer here is DPDK — and DPDK stands for Data Plane Development Kit.
F: These are the requirements. The first one is CPU pinning, which eliminates context-switching overhead; in Kubernetes we have the CPU Manager static policy, which supports CPU pinning, so this can be done. Then page locking — Kubernetes doesn't allow swapping, so this is also covered. Then huge pages, to reduce the memory access time; Kubernetes supports huge pages, and there are improvements in this area too: version 1.18, I think, actually offers isolation of huge pages among containers as a feature.
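The CPU-pinning requirement mentioned first hinges on a specific condition: under the static policy, only Guaranteed-QoS containers requesting a whole number of CPUs get exclusive cores. A small illustrative check of that condition (the resource values are made up):

```python
# Illustrative sketch of when the CPU Manager static policy grants
# exclusive (pinned) CPUs: the pod must be Guaranteed QoS, i.e. requests
# equal limits, and the CPU request must be a positive integer.
def gets_exclusive_cpus(requests, limits):
    guaranteed = bool(requests) and requests == limits   # Guaranteed QoS class
    cpu = requests.get("cpu", 0)
    integer_cpu = cpu > 0 and float(cpu).is_integer()
    return guaranteed and integer_cpu

assert gets_exclusive_cpus({"cpu": 2, "memory": "2Gi"}, {"cpu": 2, "memory": "2Gi"})
assert not gets_exclusive_cpus({"cpu": 1.5, "memory": "2Gi"}, {"cpu": 1.5, "memory": "2Gi"})
assert not gets_exclusive_cpus({"cpu": 2, "memory": "2Gi"}, {"cpu": 4, "memory": "2Gi"})
```

This is the same condition that comes up later in the discussion about whether sidecar containers must request integer CPU values.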
F: Then SR-IOV, to bypass the kernel stack and so reduce latency — and there is an Intel SR-IOV device plugin that supports that in Kubernetes. So, as you see, those requirements can be met. And the last requirement, the one we mostly focused on, is the NUMA alignment of resources: to align all of the compute resources to a single NUMA node. This is to prevent inter-NUMA communication and drops in performance on multi-socket machines in Kubernetes.

We have the Topology Manager, which supports NUMA alignment of CPUs and PCI devices, but there is no component for NUMA alignment of memory, so we have some limitations and challenges. Regarding this, as I mentioned, there is no component that manages the NUMA alignment for the memory, and this includes huge pages.

The Topology Manager coordinates the assignment of resources at the container level; there is no possibility to assign the resources at the pod level. And also the CPU Manager doesn't support NUMA-aware CPU sharing. I will show you, based on an example, why this is a problem. So now let's go briefly through those challenges.
F: The first challenge is to run DPDK applications in a single pod in an effective way — and "in an effective way" means that we want to have fast access to the resources, and this can be done by aligning all of the compute resources to a single NUMA node. As I mentioned earlier, there is a possibility to do this for CPUs and for PCI devices, including the NIC, but there is no way to do this for the memory.
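The single-node alignment problem can be sketched concretely — the per-node inventory below is invented for illustration; the point is that memory has to be part of the same placement decision as CPUs and the NIC:

```python
# Illustrative sketch of the first challenge: find one NUMA node that can
# satisfy the pod's CPUs, NIC, and memory all at once. The inventory is
# made up; today's kubelet can align CPUs and devices but not memory.
nodes = {
    0: {"cpus": 8,  "nics": 1, "memory_gib": 4},
    1: {"cpus": 16, "nics": 0, "memory_gib": 64},
}

def aligned_node(need_cpus, need_nics, need_mem):
    for node_id, free in nodes.items():
        if (free["cpus"] >= need_cpus and free["nics"] >= need_nics
                and free["memory_gib"] >= need_mem):
            return node_id
    return None  # no single node fits: resources get split across nodes

assert aligned_node(4, 1, 2) == 0
assert aligned_node(4, 1, 32) is None   # NIC is on node 0, memory on node 1
```

In the second case the pod would end up crossing the inter-socket link for either packets or memory, which is exactly the performance drop being described.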
F: The second challenge is to run multiple interconnected DPDK applications in a single pod, such that those applications can communicate with each other through a NIC or shared memory. The issue here would be: if one of those two containers had been assigned resources from the other NUMA node, then, if they wanted to communicate —

OK, so the example could be — maybe not DPDK, but some applications that need to communicate through the memory; for example, it could be some database or something like that. If they communicated through the memory, and the resources for the second container were from the second node, they would have to use the QPI or UPI channel that sits between the sockets.

Yeah — I'm not sure what the exact purpose of the DPDK case is right now, but we can ask my colleague in the document. I'm aware of situations like that, though: two containers may have to communicate with each other through memory or NICs. And the third challenge, which is connected a little bit with the previous one, is to run the DPDK application alongside the containers supporting the DPDK app, running on the shared pool.
A: I don't know if Kevin or Victor are here, but I thought we were working towards making it not a requirement that the sidecar container needed to request integer CPU values — that the work in Topology Manager and CPU Manager could be extended to the Burstable QoS class. Am I not recalling those discussions? Well, I don't know if Kevin or Victor, or anybody else who's working in this space, is here to speak to it. Okay.
H: It's a different problem. Our current situation with the CPU policy inside the CPU Manager is that it has only one shared pool, and everything that is not exclusive is part of that shared pool. So if you have a multi-socket system, the shared pool will always be covering all the sockets, all the NUMA nodes. So the suggestion to solve that would be to have multiple shared pools, and that's not possible in the current static policy — so it requires new policies. Yes, that's right.
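The single-pool vs. per-NUMA-pool distinction can be sketched in a few lines — CPU-to-node mapping and the exclusive set are illustrative:

```python
# Sketch of the difference being discussed: today's static policy puts all
# non-exclusive CPUs into ONE shared pool spanning every socket; a
# per-NUMA policy would keep one shared pool per node. IDs are made up.
cpu_to_node = {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1, 7: 1}
exclusive = {0, 4}   # CPUs pinned to Guaranteed containers

def shared_pools():
    pools = {}
    for cpu, node in cpu_to_node.items():
        if cpu not in exclusive:
            pools.setdefault(node, set()).add(cpu)
    return pools

# The static policy would offer {1, 2, 3, 5, 6, 7} as one cross-socket
# pool; grouping per NUMA node keeps sharing local to each socket:
assert shared_pools() == {0: {1, 2, 3}, 1: {5, 6, 7}}
```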
F: So these are the three changes that we propose regarding these challenges. The first one is the Memory Manager, a new component whose goal is to identify an adequate amount of memory, or huge pages, on NUMA nodes and provide topology hints, so that the Topology Manager can support the alignment of memory and huge pages to the same NUMA node — to guarantee that the memory access happens on the same node where the CPUs are assigned. This basically depends also on the CPU policy and the topology policy, but this is the general goal of this component.
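What a memory hint provider could report can be sketched as follows — the free-memory numbers and the hint representation are illustrative, not the real kubelet API:

```python
# Sketch of the Memory Manager idea: for a requested amount, report the
# NUMA nodes (as hints) that have enough free memory, preferring
# single-node hints; fall back to a multi-node hint only when necessary.
free_mem_gib = {0: 2, 1: 48}

def memory_hints(request_gib):
    hints = [{node} for node, free in free_mem_gib.items() if free >= request_gib]
    if not hints and sum(free_mem_gib.values()) >= request_gib:
        hints = [set(free_mem_gib)]   # only a multi-node assignment fits
    return hints

assert memory_hints(1) == [{0}, {1}]    # either node works
assert memory_hints(16) == [{1}]        # only node 1 has enough free memory
assert memory_hints(49) == [{0, 1}]     # must span both nodes
```

The Topology Manager would then merge such hints with the CPU and device hints, which is the alignment guarantee described above.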
F: Let me explain the second change here — let me get back. This document was shared with SIG Node last week, slides are still being made, and this will be presented more in detail at the next meetings: a new CPU policy that would limit the sharing of CPUs to the local NUMA node. So we would have a couple of sharing pools rather than the one that currently exists in the static policy. It would also support exclusive CPU assignments, which is the same behavior the static policy has right now, and there is a proposal document — it is still a work in progress, and a slide version will be made.

And then the new Topology Manager policy, whose goal is to support the binding of pods which have different topology requirements to the same node — this goal would actually require a pod spec change, and in the proposal there will be more details about that. The second goal is to support pod-level resource alignment, for all containers, to the same NUMA node.

So if we have two containers, we can say that we want to align them both to one node; this proposal is also still in progress. To summarize: we will generally make changes to those three components, with the Memory Manager being a new component. And actually that's all. If you have any questions, I encourage you to ask right now, or in the document as comments, and me and my colleagues can address them.
C: A little bit of what we ran out of time on last week — is it possible…? You have the Topology Manager pod binding on the same NUMA node, and you also suggest the per-NUMA-node CPU pools, the shared pools, and all those kinds of things. Actually, there could be a conflict between the scheduling decision and the kubelet decision, right? The pod could be scheduled based on your requirement, or your new policy, and the node cannot really run that pod.
I: But we already have this kind of conflict with the Device Manager and CPU Manager: if the Topology Manager receives a request where the devices requested by the pod are available only on NUMA node one, but the static CPUs available are only on NUMA node two — we already have this kind of problem.

True — and we already have a number of people who are trying to think about how we can solve it at the scheduler level, like a scheduler extension or something like this. They want to create some general approach that will be suitable for Kubernetes, for the community, because, again, most proposals will need some change in the node status section — the node API. You need to provide the information about NUMA topology somewhere, somehow; I don't know, so yeah.
H: Last week we actually talked about it, and besides changing node status and making the scheduler aware of NUMA topology resources, another possibility is just for the node to say: sorry, I have a problem with this pod, I cannot run it — and it's not a problem of the node as a whole, it's just a problem for this particular pod; scheduler, please reconsider. This kind of error is missing right now.
F: I just wanted to say that we are aware of it, and the approach right now is to minimize it, and then think about some changes to the scheduler, or another way to achieve this, because, as was said, this is currently a problem also with the CPU Manager static policy. We won't make it worse, but we will bring new enhancements and possibilities with these changes.
A: The other thing I'm curious about is: once these pods are in production and running, what is the production interaction with these pods? Are people doing backups of some database that they're running in that sidecar, using kubectl exec? Where does the next level of problems exist, for how people are doing something in that supporting container, or not? You know what I mean — because that's what I'm trying to figure out: where do I reach a point of —
F: — a question. And I think that the most important thing here is the Memory Manager, so it would be the first challenge — and actually the KEP was done for that a while ago, but it could be discussed also in the document. It would be great to ask my colleague, who is actually active in this area.
I: In general it's possible to configure NUMA nodes on virtual machines. Well, I'm not sure if Google Cloud Engine supports this kind of configuration, but if it supports configuring virtual NUMA nodes, why not: you can configure it, run a cluster on top of the virtual machine, and run multi-NUMA tests on it. So it shouldn't be too hard — I created a virtual machine on my local laptop, just ran a kind cluster on it, and tested the Memory Manager POC on top of it.
A: And so basically there's a cost to this, right? One is the code and complexity cost, and then the other one is just: how do we get a good test signal? And maybe the follow-up I could ask is: if you could sync up with maybe Kevin or Victor and brainstorm on what we could do here better, that would be appreciated.
C: I think you categorized this into three different pieces: the memory management, the CPU policy, and the Topology Manager policy. Yes, they are all related to each other. I just want to note — for example, for your CPU piece you want to suggest the per-NUMA CPU sharing pool. Actually, no matter how you are going to pin CPUs, that's a hint to the kernel — basically, that's a hint to the kernel for how to allocate the memory — and so on for a lot of those kinds of things.

It's a proposal, and we need to really think about whether we really want to support it. I think that even a couple of years ago, when Kubernetes first started on all this memory management, people in the community were demanding support for high performance, and I remember that at the time I kept emphasizing that we cannot really separate those things — CPU, memory, NUMA all have to be taken together — and I look at this one as a whole. So this is just my first reaction.
C: When I saw the proposal — and a lot of the challenges you saw, and the thinking you have — I agree with the doc, and the top challenge is maybe the one we should focus on first. But there may also be a problem with those challenges I mentioned earlier: it's not just related to the scheduler. We have the potential challenge that the scheduler may make the wrong decision about where to bind a pod — and not just that, but also deployment, even if we do a good job.

You have the people who deploy the workload, obviously, for your use cases. Maybe my concern is: if we add those features in, and people don't know how to use those features — it's a complex feature — and even the scheduler doesn't know how to place the jobs and suppresses the pod. If we don't solve that problem, the worst part is what people end up with. In your cases, it looks like — based on your first slide —

the topology is exposed and those kinds of things, and they already have running pods, and those running pods also have different performance requirements, all those kinds of things. And then later you bring a new pod which has some high-performance requirements, with an application already on the node — then that causes a problem, and they say: oh, this feature is not good. So I think that we should start by really thinking more from the Kubernetes level.
C: How are we going to manage those worker nodes? Obviously most of the code — most of the scheduling and the coordination, all those kinds of things — will be on the node side, because that is where it interacts with the kernel, but we need to figure out and focus on the API, the Kubernetes API. Like what you mentioned — we used to talk about this in SIG Node and in the Resource Management Working Group: we talked about resource class and whether or not we should expose those kinds of things.

I worked for many years, before this project, on all the things you propose; I saw those kinds of problems, and we solved those problems, but one top mistake we made was to be quick and dirty at one level. I was working at one level, thinking that's where the project sat, and we just solved the problem there — and in the end it's actually at the cluster-level management that it's meaningful, and so we had to go through all the trouble and redo a lot of work.

That's my concern, the one I'm sharing here. It is a hard problem, but think about it: the workloads landing on the cluster have different requirements. Unless you have a unified workload — and from your use cases I can say that your workload is quite unified — but not all the customers have that. So this is why people say: I only care about my workload; I just care about how I solve this problem on a node, because all the nodes are the same, so I can do the work easily. But if you have heterogeneous worker nodes and you want to manage those kinds of things, this is where it basically becomes meaningful. So we need to solve it at the right level — how to do the application and resource allocation. Yeah.
J: I have a question about the memory management. So I read the proposal — have I understood correctly that it forces single-node assignment for the memory? In my understanding, it's actually not the job of a hint provider to apply restrictions like that; it's the job of the Topology Manager to actually get all the possible variants and apply the topology policy to those. I'd like to understand the concern here.
J: The Topology Manager's work is to actually apply its policy: if it's single-numa-node, okay, it will filter out all the rest; if it's not, then fine. This would make the solution more generic, in my opinion, because workloads are different, so not all of them actually require being run on a single NUMA node. With this approach — to actually provide the hints and let the Topology Manager decide the configuration based on its policy — it would be more generic.
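The division of labor J describes — providers emit every feasible NUMA set, and the policy does the filtering — can be sketched as follows. This is a simplification of the real merge (which takes a cross-product and bitwise-ANDs affinity masks); the hint values are made up:

```python
# Sketch of provider hints vs. policy filtering: each provider reports the
# NUMA sets it could live with; the Topology Manager keeps the sets every
# provider accepts, and the single-numa-node policy then drops any merged
# result wider than one node. Simplified from the real merge algorithm.
def merge(provider_hints):
    return [h for h in provider_hints[0]
            if all(h in p for p in provider_hints)]

def admit_single_numa(provider_hints):
    return [h for h in merge(provider_hints) if len(h) == 1]

cpu_hints = [frozenset({0}), frozenset({1})]           # CPUs free on either node
mem_hints = [frozenset({1}), frozenset({0, 1})]        # memory prefers node 1
assert admit_single_numa([cpu_hints, mem_hints]) == [frozenset({1})]
```

The point of the example: the memory provider never has to "force" a single node itself — it reports both the single-node and the spanning option, and the policy picks.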
I: The problem is that you cannot know where the container will really reserve the memory — on which NUMA node. You'll specify NUMA nodes 0 and 1, and you don't have any information about where this memory will really be allocated. So you would need to reserve the same amount of memory both on NUMA node 0 and also on node 1.

H: Which will work incorrectly in case you have, say, Sub-NUMA Clustering enabled in the BIOS. With Sub-NUMA Clustering you practically have the resources of two NUMA nodes which, performance-wise, are very close to each other, but practically only part of the memory is available on each for your containers to decide.