From YouTube: Kubernetes SIG Node 20200331
Description
No description was provided for this meeting.
A
B
So hopefully I'll be quick. Let me paste the link into the chat — this is a link to my PR. Let me describe the problem. A similar problem has been reported before, and when I looked at the referenced issue — issue 5-2 or 172 — I scrolled through it and found a previous PR, which was abandoned. So let me describe the race condition which we observed.
B
There are two classes involved: one is the container log manager and the other is the container GC. Looking at how container log rotation works: first we rename the current log to a rotated log file, where the filename contains the current timestamp. After that we call CRI to reopen the container log. If this reopening attempt fails, step three is to roll back, meaning we rename the timestamped rotated log file back to the container log. We know that the container GC runs periodically — I checked, I think every one minute. Since the timing of these two goroutines is nondeterministic, between steps 1 and 2, or 1 and 3, there is a very small but non-negligible window during which the container GC can run. The GC does several things, among which is cleaning up stale symlinks. So the container GC may find the symlink to be dangling, and it removes it.
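A minimal Go-style sketch of the two goroutines involved may make the window clearer. The helper names and the interface below are illustrative placeholders, not the actual kubelet identifiers; ReopenContainerLog is the CRI call being referred to.

```go
package main

import (
	"os"
	"time"
)

// criRuntime stands in for the kubelet's CRI runtime client; this interface
// is only a placeholder for the sketch.
type criRuntime interface {
	ReopenContainerLog(containerID string) error
}

// rotateLog mirrors the three steps described above.
func rotateLog(rt criRuntime, containerID, containerLog string) error {
	rotated := containerLog + "." + time.Now().Format("20060102-150405")

	// Step 1: rename the current log to a timestamped rotated file.
	if err := os.Rename(containerLog, rotated); err != nil {
		return err
	}
	// Step 2: ask the runtime to reopen the container log.
	if err := rt.ReopenContainerLog(containerID); err != nil {
		// Step 3: roll back. Between steps 1-2 (or 1-3) any symlink that
		// points at containerLog is dangling - the window the GC can hit.
		return os.Rename(rotated, containerLog)
	}
	return nil
}

// evictStaleSymlinks runs periodically (roughly every minute) in another
// goroutine and removes symlinks whose target no longer exists.
func evictStaleSymlinks(symlinks []string) {
	for _, link := range symlinks {
		if _, err := os.Stat(link); os.IsNotExist(err) {
			os.Remove(link) // may remove a symlink that is only mid-rotation
		}
	}
}

func main() {}
```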
B
So this PR proposes introducing a lock between the container GC and the container log manager. I don't think we need to dive into the details of the PR at a SIG Node meeting, since not many people have looked at the PR yet; I just want to bring SIG Node's attention to this race condition and the proposed solution.
A
Yeah, I haven't looked at your PR, but from a high level: instead of introducing locking here, with the extra complexity that brings, did you think about simply checking whether the container is still running before we remove the symlink? So basically, right before removing, we just always check, and if the container is still running we just don't remove it. Would that simplify the whole logic here?
A
Yeah, because you basically only really want to do this when the container is already gone — it's dead, it's not running — and then you want to do those things, right? So instead of introducing the locking here and there, can we just do one extra check here?
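A minimal sketch of the alternative being suggested, assuming a hypothetical isContainerRunning helper (this is not the actual kubelet code, just an illustration of the extra check):

```go
package main

import "os"

// isContainerRunning is a hypothetical helper standing in for a runtime query
// (for example a CRI ContainerStatus call); it is not the kubelet's real code.
func isContainerRunning(containerID string) bool {
	return false // stub for the sketch
}

// evictStaleSymlink removes a dangling log symlink only when the owning
// container is no longer running, instead of taking a lock shared with the
// log manager.
func evictStaleSymlink(containerID, link string) {
	if _, err := os.Stat(link); !os.IsNotExist(err) {
		return // target still exists; nothing to clean up
	}
	if isContainerRunning(containerID) {
		// The log file may just be mid-rotation; skip and retry on the
		// next GC pass.
		return
	}
	os.Remove(link)
}

func main() {}
```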
B
A
I think — so I kind of understand what the problem is, based on what you described. But what I'm thinking about is: can we simplify this? Is there a simpler way to fix this problem instead of introducing locking here? The lock is small, but it's also more like: OK, we're doing the log rotation, and that exposes these problems introduced that way. So.
B
C
B
A
D
Cool, so — yes, cgroup v2 support was merged recently in the kubelet, and the idea is to have some test cases running which can use cgroup v2-supported infrastructure. So I went over the test-infra and I saw there is a containerd job running, with some periodic jobs running cgroup v2, but it is cluster e2e. So I wrote a very simple skeleton, work in progress.
D
The PR is for node e2e and has just one periodic test case right now, which, instead of containerd, uses a different runtime, and it uses Fedora 31, which comes with cgroup v2 enabled by default. It is not polished and it is not ready — it's a work in progress; I just wanted to gather feedback. I went to SIG Testing last week and they...
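A rough sketch of what such a node e2e skeleton might check is below. This is only an illustrative placeholder, not the actual PR; the one well-established fact it relies on is that a host booted with the unified cgroup hierarchy (as Fedora 31 is by default) exposes /sys/fs/cgroup/cgroup.controllers at the root of the mount.

```go
package e2enode

import (
	"os"
	"testing"
)

// TestCgroupV2Enabled is an illustrative skeleton, not the actual PR: it only
// verifies the node is running the cgroup v2 unified hierarchy.
func TestCgroupV2Enabled(t *testing.T) {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err != nil {
		t.Fatalf("cgroup v2 unified hierarchy not detected: %v", err)
	}
	// A real node e2e test would go on to start the kubelet against a
	// cgroup v2 capable runtime and run the resource-management specs.
}
```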
C
D
C
A
Last time we talked about cgroup v2 — my suggestion is to just start from the node e2e first, because in node e2e we actually have the heavier node-level resource management tests. That way we can watch how the node e2e evolves. Next, we have the resource and stress tests we introduced before, and then...
A
When we first introduced the resource management tests, even the QoS ones, they never really expanded to the cluster level, because it's just harder to make those tests deterministic and harder to graduate them to conformance or performance tests at the cluster level. So we maintain them at the node level. That's why — I'm mostly OK with this one, but can we also have another one where you use containerd too? Because we...
C
A
So, for SIG Node — because we have one goal: to push the community so that each component is compatible with Kubernetes, especially on the node, and SIG Node has always been watching this. A long time ago we asked that the runtimes be made compatible, and that they also keep containerd compatible.
A
F
A
Yes, so now we want to add the node e2e test just for the cgroup v2 work, and we are wondering: can we also add containerd? One question for the node e2e here: make sure that it is compatible, so that SIG Node can push it, and in the next release we can say, oh, we did this, and we endorse those two container runtimes, whichever is being installed in the cluster.
A
F
E
G
I'll share one piece of feedback here on those PRs: don't start with the periodic tests. First start with the pull tests — you know, pre-submit tests — then it's easier to test. Otherwise you will have to wait for the CI jobs to run, or try to get someone to run that test for you. So it's better to start with a pre-submit job rather than a CI job.
F
A
Actually, this is more about cgroup v2 conformance for Kubernetes. Another thing I want to mention: even if we add those as pre-submit tests later — the reason we started from the node e2e first is that it is simpler for everybody, I think; you don't need to bring up a whole cluster, and it also saves resources to start from that.
A
Otherwise you create a three- or four-node cluster and then you run into some problem, and it could be a totally different problem — probably many other problems that have nothing to do with cgroup v2. So if we start from the node e2e we can limit the scope; node e2e is designed just for node features, and it's also easier to add more tests there. Then, later, we can add the pre-submit test.
A
F
A
H
Yes, so I just want to have a short talk on that, because we just created a document that is an attempt to explain in detail what the memory manager is about. We did our best to provide a clear view of the memory manager by providing many illustrations, and the main point is that we have a warm invitation to the community: we would like to obtain some review, some comments. I saw that Francisco...
H
There are some comments from Francisco in the document and we replied to them. Another thing — the reason I'm asking about comments is that our colleague from Kuryakin cannot in fact attend these meetings, because he is in a different time zone. So if you could leave comments in the document, he can also address them. We also plan to prepare some demo, some proof of concept, so maybe a demo for the next...
H
...meeting, where we will show the concept. Also, a colleague is working on the proof of concept and he is attending the meeting, so probably he will do the demonstration. I think the demo will be for next month, and we will show the ongoing progress. So I think, for now, that's all from our side.
A
Sure, yeah, thank you for putting everything in the doc; let's carry on the review feedback in the doc, and please keep the review incremental. I will nominate a reviewer from our side, and I hope that Eric can nominate someone from Red Hat, because he and his team also added a lot of the memory management things. I will also nominate someone from my side, because we also put a lot of effort into memory management, and please include someone from Intel for the topology management.
H
A
I will put in some comments there and follow up through those comments. Because he has a conflict today and cannot attend the meeting, I'll put in a comment to make sure he can nominate someone representing him and his team, especially for huge page management. From our side — because we implemented the initial version of the QoS and the initial version of the memory management, and also helped folks on the topology management — we want to be involved. And everyone in SIG Node and the community, please feel free to comment on the doc. So, yeah.
A
I
H
I
A
G
A
For example, in a cluster, after upgrading the master — after nine months, all the kubelet versions in the cluster basically wouldn't need to be patched. With this change, even after nine months we would still need to patch the older versions, right? So the master may already be deprecated, but Kubernetes still has this problem. Actually, even before this change we had this problem, like the kubelet keeps lagging behind the master versions it is connected to.
A
We support nine months, but — for example, at a given time one version of Kubernetes, which is on the master, stops being supported, while the kubelet could be two versions behind. So there is always this fuzzy area: for a kubelet on an older version, do you support it or not, right? We always have this fuzzy area. So now it's more like, even after a certain point — we used to be connected in certain ways — thinking about: even if Kubernetes already claims...
A
...it only supports down to a minimum version, we kind of assume customers run versions further behind as well. Even if we encourage the operators to upgrade, there are potentially customers who don't. So that's the fuzzy area; but if it spells that out, I don't think there is anything extra.
G
A
Beyond that, there's a fuzzy area I can raise with the community — I think that part is unclear. But actually it is bound to each vendor: Kubernetes basically sets the standard, and each vendor decides. A certain vendor may say that after a certain period of time you have to upgrade your node versions, to manage to get rid of those skews, so certain vendors never even allow a two-version skew. So from my...
A
But then that's at the Kubernetes level, right — it's not like upstream is dictating it. And yeah, in general I love this proposal, because I have been supporting this kind of skew in Google's internal data centers for a long time. So I understand that people don't want to have to upgrade their working environment, and we don't want to force operators to upgrade so stringently, so I support this.
I
OK, well, sounds good. We're moving slowly on this because of all the disruption to our schedule: the original plan was to actually implement this policy against 1.19, so that 1.19 would be the first release that was supported for four releases. We're going to push that back now, just because, with what's going on, it's very hard to get hold of people and for people to pay attention to things that are not top priorities.
I
A
I
A
I
G
Yes, so I added this item like two or three weeks ago but I couldn't make it. This one is about the dockerless KEP. The KEP is ready, the pull request is ready, it's green. Basically, what it does is just like when we added the providerless tag: when you're building the kubelet — or when you're building the API server or anything else — if you add the providerless tag, then all the cloud providers get removed.
G
We created new repos called moby/ipvs and moby/term, so there's a lot of background work that already got done. Where we are stuck right now is getting an approval for the KEP and getting somebody to look at the PR and give us a thumbs up. It's an extremely simple PR: all it does is add build statements right at the top of the files saying "dockerless", that's it. So it's absolutely non-invasive, and the reason for doing this is two use cases at this point.
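For reference, the mechanism being described is Go build constraints. A minimal sketch of how a dockershim source file might be gated is below; the package and function names are illustrative, not the actual Kubernetes sources.

```go
// +build !dockerless

// Illustrative sketch, not the actual Kubernetes source: with the build
// constraint above, compiling the kubelet with `-tags dockerless` drops this
// file (and the docker client code it would pull in) from the binary.
package dockershim

import "errors"

// NewDockerService stands in for the docker-backed runtime wiring that a
// dockerless build excludes; files guarded with `+build dockerless` would
// supply stubs instead.
func NewDockerService() (interface{}, error) {
	return nil, errors.New("placeholder for the docker-backed runtime service")
}
```

Building with something like `go build -tags dockerless` then selects the stub files instead, which is what makes the change non-invasive for default builds.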
G
The first use case is kind: kind, you know, runs a bunch of things inside containerd, and the kubelet that runs in kind doesn't really need docker at all, so it will reduce the size of the kubelet and make sure that kind can run stripped of dockershim completely. That's one use case. The second use case is Cluster API: the Cluster API image builder also uses containerd, and it doesn't need docker at all, so there's no docker in the Cluster API image.
G
A
J
I just have a question — actually a doubt regarding a node e2e test. As part of my PR I'd like to prove the failure scenario: how to create an error from the networking plugin. Basically the issue is this — we discussed it last week in SIG Node: currently the kubelet, when it deletes a pod, removes it from the API server before getting a confirmation that the network addresses are actually cleaned up.
J
So in order to test that — because I was asked to write a node e2e test to make sure we don't break this thing again — I was looking at the e2e test suite, mostly the node ones and a few other things, but I couldn't get an idea of how I could create an e2e test where I can simulate the failure of the network plugin, because those things are not generally exposed.
J
A
Basically, I'd just suggest you let the test create the pod, monitor the pod, run the pod, and then let it terminate. After the pod has terminated and been removed, just check — inspect through the runtime, the CRI, and whatever else — inspect the container and verify that all the associated CNI resources are gone, right? That's basically what I had in mind.
J
Right, that would be the positive test case. But if I have to do a negative test case — where we are making sure that everything is cleaned up before the pod is actually removed from the API server — then I have to ask: how do I make sure there is a failure in the network cleanup while the pod is still not removed from the API server, which is what is happening currently?
J
Because in the third point he was mentioning adding an e2e test to demonstrate the fix, so I was assuming that to demonstrate the fix it's basically: currently, if I check that the pod is still present even though there is some networking cleanup still to happen, then I do...
J
Yes, so for CNI I have a way, because I can temporarily remove the binary after creating the pod and then delete the pod. That will make sure that the containers get killed, but the network addresses are not cleaned up — the CNI call still fails, which is basically the issue that is happening. Then I can check the API server and see that the pod still exists. But if the networking failure is a non-CNI thing, that is where I'm trying to see how I can do that.
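A rough sketch of that CNI-binary approach, written as a plain Go test: the plugin path and the helper functions are placeholders, and a real node e2e test would use the e2e framework and client-go instead of these stubs.

```go
package e2enode

import (
	"os"
	"testing"
)

// TestPodNotDeletedWhileNetworkCleanupFails is an illustrative sketch of the
// negative case: break CNI teardown, delete the pod, and assert the pod object
// is still visible in the API server until cleanup succeeds.
func TestPodNotDeletedWhileNetworkCleanupFails(t *testing.T) {
	const cniBin = "/opt/cni/bin/bridge" // assumed CNI plugin path

	createTestPod(t)

	// Simulate a network-plugin failure by moving the CNI binary aside.
	if err := os.Rename(cniBin, cniBin+".bak"); err != nil {
		t.Fatalf("could not disable CNI plugin: %v", err)
	}
	defer os.Rename(cniBin+".bak", cniBin) // restore so later tests still pass

	deleteTestPod(t)

	// With the fix, the pod should remain in the API server while the
	// network addresses have not been confirmed as cleaned up.
	if !podStillInAPIServer(t) {
		t.Fatal("pod removed from the API server before network cleanup finished")
	}
}

// The helpers below are stubs for the sketch only.
func createTestPod(t *testing.T)            { t.Helper() }
func deleteTestPod(t *testing.T)            { t.Helper() }
func podStillInAPIServer(t *testing.T) bool { t.Helper(); return true }
```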
J
A
F
So you need a scaffold version of a container runtime that includes failure cases for things like — yes, yeah. We've done some failure-type efforts in that area, around scaffolding CNI and some more containerd runtime integration tests, but we haven't exposed that at the cri-tools validation level or put it into node e2e testing. That's the interesting problem. We probably need to get some groups together, maybe through SIG Testing, to talk about how we do this whole scaffolding and testing.
F
I'm very sympathetic — you know, this is tricky. So I think what we've got is a problem of scaffolding, at the integration layer, failures for networking and other reasons. When we're talking about e2e tests, and more specifically node e2e tests: how would you do that testing when, you know, CNI is down? They don't really have a way to scaffold that up.
E
So the stops are retried, right — when we return a failure from stop and the CNI call fails — and I think we have some issues in that area. What we are doing is making that more resilient, like retrying for some time and so on. One possible thing there: maybe it might be worth bubbling it up as a custom error in CRI saying, hey, my CNI is down. Maybe we retry for some time, or it just says CNI is down.
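A minimal sketch of what "bubbling it up as a custom error" could look like on the runtime side, using a sentinel error the kubelet could test for. The error type here is hypothetical — it is not part of the CRI today.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrCNIDown is a hypothetical sentinel error a runtime could wrap its
// sandbox-teardown failure with when the network plugin is unavailable.
var ErrCNIDown = errors.New("cni plugin is down")

// stopPodSandbox stands in for the runtime-side teardown that calls CNI DEL.
func stopPodSandbox(id string) error {
	if err := cniTeardown(id); err != nil {
		return fmt.Errorf("tearing down network for sandbox %s: %w", id, ErrCNIDown)
	}
	return nil
}

// cniTeardown simulates the plugin failure for this sketch.
func cniTeardown(id string) error { return errors.New("plugin binary missing") }

func main() {
	err := stopPodSandbox("sandbox-123")
	if errors.Is(err, ErrCNIDown) {
		// The kubelet could retry teardown and keep the pod in the API
		// server until the network addresses are confirmed released.
		fmt.Println("CNI down; will retry sandbox teardown:", err)
	}
}
```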
F
A
H
A
SIG Node could work on that. For error handling at the node level, Kubernetes never really implemented a powerful fault-injection framework; hopefully SIG Node can pick up that work. We do have tests that exercise things like a forced node restart or reboot, but that's just one form of fault injection, and we also have things like forced pod deletion and container restart. We do have those, but it's not a framework for injecting the different failures in the network.
F
A
J
C
A
C
A
You mean from the container runtime side? Yes, exactly. The problem with the container runtimes today — for docker, the most common problem I observed, I believe, relates to the container ID; later docker releases fixed this problem, but many people still rely on older versions of docker and they see the problem. It is more like an inconsistency when we ask for the container status: the runtime thinks the container doesn't exist, or thinks the container is still running, when sometimes it isn't.
A
C
My line of thought about this is actually about the memory manager proposal. Imagine this situation: we have a pod requesting X gigabytes of memory. The memory manager says, OK, fine, we have this memory available on a particular NUMA node, so it tells the container runtime: OK, fine, start this container pinned to this memory node. But what actually happens when the container starts is...
C
Let's say the amount of free memory on this node is not sufficient, or the amount of kernel pages which are locked and cannot be migrated or evicted by the kernel is so big that we cannot actually start this container. So we need to report back to the kubelet: sorry, it was fine that you made this topology decision, but we cannot fulfill it, so either revisit the policy decision or actually move the pod from this node somewhere else.
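To make the scenario concrete: pinning a container's memory to a NUMA node ultimately comes down to the cpuset cgroup's mems setting. A minimal sketch, with an illustrative cgroup path (the real kubelet/runtime integration goes through the CRI and the runtime's cgroup driver):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// pinMemoryToNUMANode writes the allowed NUMA node into cpuset.mems for a
// container's cgroup directory.
func pinMemoryToNUMANode(cgroupDir string, numaNode int) error {
	memsFile := filepath.Join(cgroupDir, "cpuset.mems")
	// Once this is set, allocations for the container must be satisfied
	// from this node. If the node's free memory (minus unmovable kernel
	// pages) is insufficient, the container can fail or be OOM-killed at
	// start time even though the topology decision looked fine.
	return os.WriteFile(memsFile, []byte(fmt.Sprintf("%d", numaNode)), 0644)
}

func main() {
	// Illustrative path only; not a real pod cgroup.
	if err := pinMemoryToNUMANode("/sys/fs/cgroup/cpuset/pod1234/ctr", 0); err != nil {
		fmt.Println("failed to pin memory:", err)
	}
}
```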
A
This
is
dangerous.
We,
this
is
a
little
bit
under.
So
the
reason
it
is,
basically,
you
suggest
no,
the
have
we
more
intelligence
and
reject
the
allocated
of
the
part
and
the
the
reason
why
Sigma
is
is
dangerous.
Basically,
is
just
know
that
disagree
with
the
crash
level
are
for
scheduling,
decision
and
then
reject
it,
and
then
an
app
could
be
scheduler
because
they
are
not
look
at
the
same
amount
of
information
schedulers
you
may
be
thinking
about
this.
Noda
is
the
good
note
and
all
maybe
schedule,
nurses
think
about
this.
C
A
C
So the situation is that the scheduler made a decision based on available resources, but those might already have changed — because, say, another pod started, or existing pods consumed more memory, and so on. We need some way for the node to say to the scheduler: sorry, I am not a good candidate for that, even if you were right when you scheduled it, because something changed on me.
F
We can certainly augment that. For networking, right now we're adding into the sandbox info structure the CNI result from the latest time we tried to set up (or reload) the pod network, and you'll see that, for each pod, returned back in the status if you export the extended result information. But we probably need to, you know, sit down and come up with more formal definitions of the response values that you can expect for different types of failures.
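A sketch of how a client could read that extended runtime info today via the CRI's verbose sandbox status. The request/response fields shown are part of the CRI API, but the socket path, the sandbox ID, and the keys inside the info map (where a CNI result would appear) are runtime-specific assumptions; `crictl inspectp` prints essentially the same data.

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	// Socket path is a placeholder for whichever runtime endpoint is in use.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.PodSandboxStatus(context.Background(), &runtimeapi.PodSandboxStatusRequest{
		PodSandboxId: "<sandbox-id>", // placeholder
		Verbose:      true,           // ask for the extended, runtime-specific info map
	})
	if err != nil {
		panic(err)
	}

	// The keys and JSON layout inside Info are runtime-specific; a CNI
	// result, when present, shows up here rather than in the typed fields.
	for k, v := range resp.GetInfo() {
		fmt.Printf("%s: %s\n", k, v)
	}
}
```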
F
A
Actually, for people who have attended SIG Node — or maybe the Kubernetes community from a long time ago, even the first KubeCon meetings — I suggested we start to implement usage-aware scheduling. If people have attended SIG Node long enough, a lot of these problems could actually be addressed by usage-aware scheduling. Right now we always try to do it at the node level; for example, even the recent in-place VPA proposal still largely bypasses the scheduler, and so that's the concern for me.
A
Anyway, with this we add more complexity at the node level. We try to avoid this cascading issue and other kinds of things; we try to make sure that once the node admits a pod, it runs it and reports that back to the API server. Otherwise there's a lot of extra complexity, because you're trying to avoid having another scheduling entity — the node kind of becomes a second scheduler. Just think about that kind of scheduling.
A
We have been doing this kind of usage-based scheduling work internally, and customers can arbitrarily put any number in the resource request. We even told customers that on the node we would penalize them if the request is not accurate. That also introduced a lot of resource-affinity problems for us, because the request does not really reflect the actual usage — you can always over-ask — so there are a lot of problems.
A
That is just because the scheduler is not usage-aware. I just want to say we do support this at the node level — even before Kubernetes, when we first implemented it, we already put it at the node level, and the team actually implemented usage-awareness initially in the node-level implementation; we started to put those things in. But the Kubernetes scheduler never really respected that. So then the Kubernetes scheduler looks at that node...
A
F
A
F
Passing all this through to the pods isn't — yes, you may have some pod-specific issues. For example, this pod had its own CNI plugin but other pods didn't; then you might have a situation where all these other pods can run fine on the node, but not this one that requires some hardware-level stuff.
C
A
The point is that the scheduler, or whatever allocator, has to respect the decision once it has been made. As was said earlier: OK, could the node have more intelligence? We definitely can't solve the problem that way today — if the node has more intelligence and rejects pods, you end up with something even more dangerous: you could end up ping-ponging the workload, or maybe with a cascading issue. That's the dangerous path.
C
A
So that's why we pushed back on those ideas: we would otherwise embed a lot of logic in the node, and it gets harder for people — for us. Obviously we have so many ideas; node people and kernel people, we have so many ideas we want to expose, we want to give to customers, and we want to boost the performance and make the whole system more efficient to support different types of workloads. But we need to involve the cluster level together with the node. So that's the problem. Yeah, yeah.