From YouTube: Kubernetes SIG Scheduling Weekly Meeting for 20230907
Description
Kubernetes SIG Scheduling Weekly Meeting 2023-09-07T16:58:20Z
A
Okay, welcome everybody to another SIG Scheduling meeting. Today is September 7th. As you all know, this meeting is recorded, so please don't share anything that you don't want shared, and please adhere to the CNCF code of conduct.

A
We have two items on the agenda. The first one is by Kathy: a KEP proposal for a generic scheduler cache extension.

A
She's here; I can see her.

A
All right, yeah, take it away. Do you need to share your screen?
B
Okay, sounds good. Let me share my screen, hold on a second. Okay, yeah, here — okay.
B
Can you see it? Okay, good. So I'm going to go through the generic scheduler cache extension for the scheduler plugins and why we need to do this. Okay, first slide: the motivation. As you know, each out-of-tree scheduler plugin needs to store some resources in a cache, especially if those resources involve complicated specifications or accounting models.
B
Each plugin has to develop a local cache for that, and in addition to developing the cache, it also has to implement an independent informer and the event handling logic. Every scheduler plugin has to implement that, which is duplicated effort. And there's also another problem with the resource data, because there are two caches: one is the core cache kept by the Kubernetes scheduler, the core scheduler.
B
Another
is
the
cash
by
each
Auto,
Trade
scheduler
plugin,
and
then,
if
there
is
some,
sometimes
the
resource
data
fetch.
B
You
know
between
the
you
know
two
type,
one
as
I
identified
in
the
slide
in
the
diagram
on
the
on
the
right,
so
like
the
T1
tampon,
a
T1
and
by
the
core,
scheduler
cache
and
also
a
type
added.
You
know
it
could
be
another
10
point
at
T2,
because
these
are
different,
different
flow,
different
process.
B
If these two times differ, the data fetched will be inconsistent — not the same at the same time point — which could lead to scheduling or preemption problems. So those are the two issues.
C
Okay, hold on a second — by inconsistent. The inconsistency can happen even within the in-tree scheduling framework. That is why, at the beginning of every scheduling cycle, we take a snapshot. So I'm not fully understanding, because given that inconsistency, it's not recommended that at an arbitrary point in your scheduling cycle you go to the cache to fetch the latest, freshest data. It's not recommended because we know this kind of inconsistency can happen; that is why we snapshot at the beginning, and the same applies to out-of-tree plugins.
B
Yeah, I think that makes sense. We understand it could also happen even without the out-of-tree plugins. But still, I think if we can have one extended cache — extend the current scheduler cache — this will reduce the probability of inconsistency. But that's not the key point. The key point is that each scheduler plugin has to re-implement a local cache and also the event handling logic.
B
Yeah
we
we
met
this
because
when
we
developed
tried
to
develop
a
decile,
we're
scheduling-
and
we
found
out
that
yeah
we
have
to
do
this.
You
know
we
have
to
develop
a
cash.
The
story,
it's
all
this
data,
yeah
I,
think
this
is
just
an
example
of
that
we
have
to
you,
know
re-implement
this
virtual
logic
to
implement
the
informa,
and
then
we
find
out
that
inconsistency.
Yeah
problem
so
I
will
hand
it
over
to
Marissa
to
go
through.
You
know
our
proposal
with
brief
proposal.
A
One second: can you actually spend more time on the use case, like what kind of informers you use, and whether you use CRDs or not? Because before we do any significant refactoring, we would want to look at the use case — whether the use case makes sense — just to make sure that the investment is worth the complexity.
B
And
so
for
the
maybe
Teresa
can
talk
a
lot
more
detail
on
that
on
the
use.
The
use
case
is
for
the
on
for
the
dcio
on
So,
currently
in
kubernetes.
We
know
that
we
have.
You
know
CPU
memory,
aware
scheduling,
maybe
maybe
some
some
large
page
and
some
storage,
but
we
do
not
have
you
know
this
IO
aware
scheduling
and
isolation.
We
do
not
have
Network
at
all.
B
We
also
don't
have
any
cash
or
any
memory
bandwidth
or
we
are
scheduling
and
isolation,
so
we're
thinking
about
to
add
those
functionality
to
fill
that
Gap.
So
when
we
develop
this
I
also,
these
are
all
aware,
scheduling
and
with
you
know,
just
like
follow
the
auto
tree,
scheduler
plugin.
We
have
to
develop
our
you
know
to
implement
our
own
cache
so
that
to
duplicate
the
logic,
does
that
make
sense.
B
We
got
this
requirement
from
customers
saying
you
know
they
have
no,
it's
a
neighbor
problem
because
there's
no
disable
where
scheduling
and
isolation-
and
you
know
the
the
workloads
are
scheduled
onto
the
same
note-
would
compete.
But
these
are
all
resources
and
then
there's
those
workloads
service
were
impacted
because
of
that
by
the
neighbor
problem.
E
Yes, it's stored in the CR, and we use an informer to list-watch and update the latest resource information from this CR. The idea comes from that: every scheduler plugin needs to add an informer to watch the resource updates and write the latest info into its local cache, so why don't we use one informer to take care of all of this?
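The pattern E describes — every out-of-tree plugin re-wiring its own informer-style event handlers into a plugin-local cache — can be sketched minimally in Python (the real scheduler and client-go are Go; all class and field names here are hypothetical stand-ins, not actual Kubernetes APIs):

```python
# Illustrative sketch only: the boilerplate each out-of-tree plugin
# currently repeats — an informer-like watcher mirroring CR data into
# a plugin-local cache keyed by node name.

class LocalCache:
    """Plugin-local cache keyed by node name."""
    def __init__(self):
        self._by_node = {}

    def update(self, node, info):
        self._by_node[node] = info

    def remove(self, node):
        self._by_node.pop(node, None)

    def get(self, node):
        return self._by_node.get(node)


class CRInformer:
    """Stand-in for a client-go informer: fans events out to handlers."""
    def __init__(self):
        self._handlers = []

    def add_event_handler(self, on_update, on_delete):
        self._handlers.append((on_update, on_delete))

    def emit_update(self, node, info):
        for on_update, _ in self._handlers:
            on_update(node, info)

    def emit_delete(self, node):
        for _, on_delete in self._handlers:
            on_delete(node)


# Every plugin wires the same boilerplate:
cache = LocalCache()
informer = CRInformer()
informer.add_event_handler(cache.update, cache.remove)

informer.emit_update("node-1", {"disk_io_bw": 800})
print(cache.get("node-1"))  # {'disk_io_bw': 800}
```

The proposal's claim is that this cache-plus-handler wiring is identical across plugins, so a single shared informer and cache extension could absorb it.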
A
Okay, so I have some questions. Maybe we can leave them until after you go to the next slide, but one immediate question is: why does it matter? First of all, the informers are all shared, so it's still the same informer. And again, even if you have a single cache you can still have inconsistencies — such as, you might have a pod — sorry, I guess you care about a node, I'm not sure.
A
Let's
say
you
have
a
node
before
we,
you
have
a
corresponding
IO
CR
and
you
could
have
a
case
with
or
without
a
share
cash
where
the
node
exists
in
the
cache,
but
the
CR
doesn't
exist
or
vice
versa.
The
CR
exists
and
the
node
doesn't
exist.
So
all
of
those
things
can
still
happen,
even
if
you
have
a
single
cash.
E
As there are two caches coexisting in the current implementation — between the in-tree plugins and the out-of-tree plugins — there is a time interval between time T1 and time T2, as shown on the second slide just now. So I think our proposal can reduce the likelihood of a scheduling failure in that time interval. Although the probability is very low, we can help reduce that probability in that interval.
C
Yeah, as Aldo said, if the inconsistency is coming from two totally independent objects' sync issues, then even with a natively shared cache for both the in-tree core API objects and the CR, this kind of issue cannot be resolved. For example, you have a node object and a node-related disk IO CR, but they are not guaranteed to always be in a consistent state. That is the point we have to judge clearly: whether or not this can fundamentally be resolved.
A
I
guess
my
my
point,
my
point
would
be
if
we
are
just
reducing
the
probability,
I,
don't
think
it's
worth,
because
we
will
be
adding
likely
some
complexity
and
we
are
not
even
going
to
solve
the
issue.
We
are
just
reducing
the
probability
that
to
me
doesn't
sound
like
a
good
investment.
B
All
right,
so
we
cannot,
we
won't
I
mean
it
will
still
exist.
The
inconsistency
right
just
reducing
the
probability.
It's
not
very
strong
reason,
I,
think
you
know
the
the
other
reason
is
that
you
know
we
would
like
to
reduce
to
increase
the
developer
velocity
to
reduce
you
know,
like
you
know,
the
each
developers
so
why
we
need
to.
If
for
each
scheduler
plugin,
we
need
to
develop
a
separate
cache.
B
We
need
to
repeat
duplicate
the
effort
of
you
know
creating
that
cash
creating
the
handling,
creating
the
event
handling
logic.
A
So
again
doesn't
sound
that
variable
or
that
it
would
reduce
the
developer
speed
for
one,
but
I
think
we
are
maybe
just
like
thinking
too.
Theoretically,
maybe
just
let's
just
go
through
the
proposal
and.
A
Yeah
I
I'm
I'm
not
seeing
major
benefits
over
just
having
your
own
cash
for
your
own
objects
in
the
plugin,
which
it
might
be
more,
maybe
a
little
bit
more
I'm,
still
even
having
a
hard
time
thinking.
That
is
a
lot
of
effort,
but
even
if
it
is
complexity
that
doesn't
affect
the
rest
of
the
existing
scalar
code,
which.
A
Yeah,
it
might
slow
down
the
development
of
the
scheduler
as
a
whole.
Well
it
just
optimizes
for
your
use
case
of
a
of
out
of
three
plugin.
So
that
would
be
my
hesitation,
but
please
please
go
ahead
through
The
Proposal
yeah.
B
It's not just nice for our plugin; it's for all the scheduler plugins. If we have this cache extension, then each new scheduler plugin in the future does not need to implement its own local cache. But that way we can see the effort and the benefit, and whether it's a strong case. Okay, Teresa, sorry, go ahead.
E
Okay, let me proceed with the proposed design. The core idea behind this proposal is to add a cache extension to the current scheduler cache, which is highlighted in red in the diagram on the right-hand side. Let's take the disk-IO-aware scheduler plugin as an example: the available disk IO, and the supported disk IO options like block size and read/write ratio — these attributes and their accounting method will be included in this extended cache.
E
At the Filter hook point, the in-tree plugins would fetch the allocatable and consumed resources from the scheduler cache, while the disk IO plugin would retrieve the available disk IO bandwidth from the extended cache. In this case the scheduler takes care of watching the pod and node events to update both the in-tree and extended resources, so the out-of-tree scheduler plugin does not need to use another informer or maintain its own cache, which simplifies scheduler plugin development.
E
Here is a reference design for our proposal. This class diagram illustrates the modifications to existing classes and interfaces in the current scheduler, highlighted in blue; the new classes and interfaces introduced by our proposal, highlighted in green; and some new classes that are supposed to be implemented by vendors or scheduler plugin developers, highlighted in gray.
E
So
from
the
left
hand
on
the
Node
node
in
first
chart
is
proposed
to
be
extended
to
hold
the
extended
resource.
It
is
a
map.
The
map's
key
is
stands
for
the
resource
name
and
the
value
keeps
the
when
the
specific
specific
resource
information
and
the
extended
resource
interface
is
an
interface
that
manages
the
resource.
E
Accounting
method
should
add
port
and
remove
Port
methods
and
the
schedule
plugin
developers
or
the
resource
vendors
can
customize
their
resource
attributes,
and
then
there
are
accounting
methods
by
implementing
the
extended
resource
interface
and
the
extend
resource
handle
is
an
interface
to
manage
the
extended
resource
and
to
keep
the
external
resource
up
to
date.
The
run
method
of
the
interface
is
to
retrieve
the
resource
info
and
and
initialize
this
resource
in
the
extended
cache
I
mentioned
the
resource
vendor
here
is
because
that
it
is,
it
can
be
in
our
scenario.
E
It
can
be
an
independent
row
different
from
the
scheduled
plugin
developer,
meaning
that,
because
the
resource
vendor
knows
best
about
their
products,
the
resource
vendor
can
Define
the
can
implement
the
extended
resource
interface
and
the
external
Source
handle
interface
to
define
the
resource
attribute
and
the
result
how
the
source
is
accounted
and
the
schedule
plugin.
In
that
case,
the
scheduler
plugin
developer
can
use
the
extended
resource
handle
straight
away
without
the
need
to
know
too
much
about
how
the
resources
counted
for
making
their
scheduling
decision.
E
So finally, the scheduler plugin would instantiate the ExtendedResourceHandle and retrieve the extended resource through this handle to get the information. Overall, I think the change to the current scheduler is very limited, but it can largely simplify scheduler plugin development. In the backup section there's a reference implementation and a simple example to demonstrate our approach, if you're interested.
B
Yeah, so you'll notice that we did not limit where the extended resource will come from. It could come from a node annotation, or it could come from telemetry collected by the platform and fed back in as real-time resource information — it could come from that too.

B
We need to collect real-time resource and telemetry information and do scheduling based on that information, because it's not a fixed value. The information depends on attributes and characteristics of the workload, like block size and read/write ratio — something like that. So it's not fixed.
C
I
I
just
want
maybe
trying
my
best
to
to
summarize
the
motivation.
He
may
be
just
correct
me
from
a
while,
so
you
seems
to
want
to
say
the
requirement
is
that
once
some
senior
object
has
very
close
direct
relationship
with
the
objects
that
is
cached
in
the
entry,
not
interested
in
the
scheduling
framework,
which
is
you
know,
if,
like
you,
have
customer
resources
object
that
has
closer
relation
with
that.
Then,
if
you
define
the
different
Handler
in
Alpha
tree,
then
that
can
make
is
sort
of
inconsistent.
C
That
is
why
you
want
to
add
a
hook
to
the
scheduling
framework
so
that
when
the
note
a
part
gets
updated,
you
can
have
immediately
changed
in
one
synchronizer
call
to
update
the
cache.
So
everything
for
Notepad,
as
well
as
you'll,
see
objects,
are
always
in
sync,
so
I'm
trying
to
assimilize,
but
not
sure
if
it's
the
real
motivation
that
you
want
to
introduce,
let's
just
scan
your
framework.
Is
that
true.
B
That's one motivation, but I think another motivation is that each scheduler plugin developer does not need to re-implement the event handling logic.
C
Well, a separate event handler — if there's no syncing issue, no inconsistency — could still be the better design compared to having everything in the scheduling framework. For example, if you have a totally independent CR object that has nothing to do with node objects and doesn't need to be updated with a node update, I don't see great benefit in putting them all together in the scheduling framework. So my point is: does this proposal have a strong case?
F
Some comments about that, regarding this diagram. Let's assume what a general out-of-tree plugin will implement — suppose it's a disk IO plugin handling an additional resource. First, it will implement a local cache to save the disk IO information, and this information may come from the CR or from some other metrics. That is one thing the current plugin has to implement.
F
Secondly,
the
plugin
needs
to
be
incremented
even
100
match
new
informal
to
to
get
to
the
notification
for
the
node
or
the
port
change,
so
that
if
the
new
node
added
or
new
node
deleted
on
the
port
ad
deleted,
then
it
can
update
the
local
cache
and
the
two
to
to
update
the
node
capacities.
So
this
is
the
second
scene,
the
the
plugging
you
to
be
done
under
the
third
thing
is
that
the
plugin
used
to
implement
the
the
filter
or
or
the
score
machine
like
what
is
the
other
plugin
needs
to
be
done
yeah.
F
This is the current implementation if we want to implement a scheduler plugin. But in this case, as you just said, when we start a scheduling cycle we want to avoid the inconsistency issues, so before a scheduling cycle starts we create a snapshot. Then a snapshot is created for both the in-tree plugins and the out-of-tree plugin.
F
One is used for CPU and memory, and the out-of-tree plugin's is used for the disk IO, for example. But suppose we have an event like a pod delete, and the pod delete reaches the informer of the in-tree plugins for CPU and memory.
F
It
gets
the
positive
so
that
it
released
the
CPU
memory
for
the
for
the
port
for
this
node
with
results
this
part,
but
at
the
same
time
the
auto
chip
plug
in
the
informal
does
not
get
the
notification
for
download
notification,
so
it
do
not
remove
the
disk
IO
information
from
the
node.
So
at
this
point,
I
to
the
time
t
the
the
schedule
of
one
scheduling
cycle.
F
You
start
it
creates
a
snapshot
and,
at
this
time
the
the
snapshot
quit
time
it
actually
with
a
contest,
and
this
contest
it
is
to
see
for
CPU
memory.
It
does
not
have
the
Pod,
but
for
this
aisle
it
has
a
port,
so
this
maybe
create
some
of
the
inconsistent
issue.
So
this
is
what
we
see
if
we
using
different
informal
for
the
node
part
for
both
entry
and
the
auto
G
plugin.
This
may
cause
some
inconsistent
just
when
you
search
the
snapshot,
yeah.
F
It... to create — yeah, I...

F
Yes, what I mean is the scheduler will still work, but maybe this time it doesn't, and the pod will go to another scheduling cycle, okay. But if the times are different — for example, one is T1 and another is T2 — and when you create the snapshot it has a time T, where T is between T1 and T2, then you will get content that includes different pod information for the resources on this node.

F
So at that point it may cause some inconsistent scheduling issues, I think, if the node state doesn't quite match, yeah.
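F's timing window can be made concrete with a toy sketch (Python, purely illustrative — the real caches are Go structures inside the scheduler and the plugin): two independent caches process the same pod-delete event at different times, so a snapshot taken in between disagrees with itself.

```python
# Toy illustration of the T1/T2 window: the core cache and the
# plugin-local cache handle the same pod-delete event independently.

core_cache = {"node-1": {"pods": ["pod-a"]}}
plugin_cache = {"node-1": {"io_pods": ["pod-a"]}}

# T1: the core scheduler's informer handles the delete...
core_cache["node-1"]["pods"].remove("pod-a")

# ...then a scheduling cycle starts and snapshots both caches at a
# time T, before the plugin's informer handles the same event (T2 > T):
snapshot = {
    "core": dict(core_cache["node-1"]),
    "plugin": dict(plugin_cache["node-1"]),
}

# The snapshot disagrees with itself about pod-a:
print("pod-a" in snapshot["core"]["pods"])       # False
print("pod-a" in snapshot["plugin"]["io_pods"])  # True
```

A single shared cache updated in one synchronized event-handling path would close this particular window, which is the narrow claim the proposal makes; as noted earlier in the meeting, it does not remove inconsistency between genuinely independent API objects.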
A
We have another topic on the agenda, so maybe we can wrap this up here, I guess. Ultimately we have to go through the KEP to decide. Make sure that in your KEP you describe the use cases and the sources of information. Because if I think about the case where the information is in another CR, then this doesn't fix anything. I think what you're trying to do is maybe getting some metrics — perhaps that's one use case, one source of information — where you get the event handler for pod or node create, let's say, and then you go and do a query into some metrics framework.

A
Maybe that's one of your use cases, I don't know. What I'm trying to say is: please describe what sources of information your out-of-tree plugin might use, so we understand why you need this extra cache. Because if it's in another CR, then for sure this doesn't help; if it's somewhere else, maybe it makes sense. So please make sure that's included in the proposal, so that we can make a more informed decision.
B
Okay, in the KEP, right — okay, we'll add clarification on that part. So we submitted a KEP; we also made a KEP to the sig scheduler-plugins repository, right?
D
So my question is: as of now, all the pre-filter plugins run serially, and at least for the default pre-filter plugins, I can see that if we run the pre-filter plugins in parallel, I don't think there will be any race condition as such.

D
So maybe just a question: has this path of parallelizing all the pre-filter runs been explored previously or not? If yes, why has it not been taken into account, or why has it been rejected, if it has been?
A
So, for example, the NodeResources plugin or the pod topology spread plugin, as soon as they start, they kick off parallel computations across all the nodes, and then some of them might actually take some locking to access a shared resource. So we already have that structure — we already have parallelism at that level, within each of the pre-filter plugins, and they are highly parallel.
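The within-plugin parallelism being described — fanning per-node work out across workers, with a lock guarding shared state — can be sketched like this (Python stand-in for what is Go in the real scheduler; the per-node computation and accumulator are illustrative):

```python
# Sketch of per-node fan-out inside a single pre-filter-style plugin:
# work is parallelized across nodes, and a lock guards the shared
# accumulator. Layering cross-plugin parallelism on top of this mainly
# adds contention on such locks.

from concurrent.futures import ThreadPoolExecutor
from threading import Lock

nodes = [f"node-{i}" for i in range(16)]
totals = {}
lock = Lock()

def score_node(node):
    score = len(node)  # placeholder for the real per-node computation
    with lock:         # shared-state access must be serialized
        totals[node] = score

with ThreadPoolExecutor(max_workers=8) as pool:
    pool.map(score_node, nodes)

print(len(totals))  # 16
```

If every plugin is already saturating the available cores this way, running the plugins themselves in parallel adds little and can collide on the same locks — which is the hesitation voiced next.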
A
So I'm not sure parallelizing everything will help overall; it might actually even slow down the calculations. Maybe you don't even have more cores to do more parallelism, or if you run more of those routines they would start colliding with each other, because they want to grab a lock to do the same thing.
A
It's worth exploring, but I doubt it will bring benefit. Abdullah, Wei — do you have anything to add?
C
Yeah, I think the same way. We usually only have a couple of pre-filter plugins, and not every one of them goes through all the nodes; some of them are just simple checks. So I'm not sure parallelizing them brings many gains.

C
Compare that to the complexity it brings in. Also, in some edge cases a pre-filter can fail quickly and then the following plugins can abort. If you just run them in parallel, you have to add maybe a cancel function to pass that cancel signal to the already-running pre-filters, so that's additional complexity, I don't know. And I think unless we have a clear requirement and clear metrics for your scenario that you can give us, it's hard to justify.
A
Okay, if you're willing to explore, by all means, but ideally we should run the benchmarks. There are some benchmarks already in test/integration — what's it called — test/integration/scheduler_perf.
A
With that, we have one minute left. We have about three or four open KEPs right now, and we have some time until the KEP freeze, so we haven't assigned reviewers yet to each of the KEPs. I think by the next meeting we might have maybe a more complete list of KEPs, and then we can do a rundown on all of them in this meeting. But yeah, any last-minute questions?
A
Right, thank you all for joining. We'll see you in a couple of weeks. Bye.