From YouTube: SIG - Performance and scale 2023-06-29
Description
Meeting Notes:
https://docs.google.com/document/d/1d_b2o05FfBG37VwlC2Z1ZArnT9-_AEJoQTe7iKaQZ6I/edit#heading=h.tybh
A: ...yourself as an attendee, okay — and feel free to add any topics. All right, so let's talk about v1.
B: Yeah, so I think the three bullet points that are left are: creating a section in the KubeVirt documentation — the PR is already out for that; then I think you would like to find a place in the user guide to also add that — there is an issue I have created for it, and I have some ideas. We can talk about that once I walk through all of this. Okay.
A: Let's start with the documentation. I reviewed this; it looks good to me. I think everything came out nice. Okay, so we've already got... we just need approvals on this. That's what we need. I don't know if I...
A: All right, we'll wait — but this looks good, I think. I talked with the maintainers yesterday about this, and hopefully they get a chance to look at it too. If we don't hear anything — okay, I don't have approver rights on the docs — so if we don't hear anything in, I think, a day or two, let me ping them and we can get this all wrapped up. I think this is good to go.
B: Yeah, sure. So next, I'm going to put the link to the user guide in the chat for reference. If you can pull that up — yeah. So if you look at the virtual machines section, there is an index of topics here. I'm just thinking we could get an item here that says "release v1 perf and scale benchmarks", or just "benchmarks", and then under that we can stub in the release v1 data.
A: Okay, that makes sense. I was thinking the same thing a second ago — we could do that. We can have the link and stub something for v1, and then we can update it. I also wanted to see what was here, so I was thinking: what about our tooling?
B: Yeah, that makes sense. I think my overarching idea was that if you go to the welcome page, there are quite a few sections there. I mean, this section says "user guide", but there is no section specifically for users. So what I wanted is to have a benchmarks section somewhere on the welcome page, so that it's clearly visible to anyone looking for it.
B: And one more thing: I think there is a mailing-list thread that talks about breaking out these operations and virtual-machine tabs into per-SIG documentation pages. So maybe we could even get a SIG-scale documentation page in this guide.
A: Because I was clicking on these, kind of hoping there'd be a subsection here. Like, if we did benchmarks for scale, and then we had benchmarks for v1 and we had tooling — those two subsections would have worked. Yeah, okay — maybe let's go see them. I still think tooling makes a lot of sense somewhere in here, and then we'll find the right place to reference it, you know. All right — well, what else?
A: Okay — oh, we can sync. So the release is likely to be pushed a week, because it turned out it was set to be July 4th. There was some agreement that this needs to be pushed a week; I just don't think it's been publicized yet. So I think it's going to be the 11th — that's going to be the day. I'll sync with you; we'll have a separate meeting about this.
B: Did you get to hear whether this will be a separate blog, or the blog with the release?
A: So it's going to be the blog with the release, and we can do a separate post where we talk about the stuff — the tooling, how we did this, the methodology — where we go into more detail. I think what would be good is to focus on some of the high-level stuff and highlight what's in that document, and then cover any sort of in-depth stuff in a separate blog. That would actually be a nice follow-up for v1.
B: Okay, got it. So we need to come up with two things: one, concise documentation to showcase our v1 release performance benchmarks; and two, an in-depth explanation of what those numbers mean, for anyone who wants to dig in.
B: Okay, sounds good. And I think those were the only items I had for v1 — one more thing I have: we talked in the last meeting about creating a separate folder where we can walk through the release-v1 charts.
B: So I have created a subdirectory, release-v1, and we should be able to get data there. The URL will look really nice: it will say ci-performance-benchmark/release-v1/&lt;job-name&gt; and then the data, and that will be our index.html.
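The path layout B describes can be sketched as a tiny helper. The base URL and job name below are placeholders, not the project's real values; only the release-v1 subdirectory and the index.html convention come from the discussion:

```python
def benchmark_page_url(job_name,
                       release="release-v1",
                       base="https://example.org/ci-performance-benchmark"):
    """Build the pretty URL for one job's published benchmark page."""
    # e.g. <base>/release-v1/<job-name>/index.html
    return f"{base}/{release}/{job_name}/index.html"
```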
B: So I think I'll have to do some digging. The last time I searched, the setting that renders the HTML page was only turned on for the main branch.
A: Okay, yeah — maybe that could be the reason. I'm also thinking about it historically: if I'm going to this repo, I want to see everything from all the releases. That's kind of our pitch — it's not really so much that people open the branches, right? So yeah, okay, I just wanted to float that possibility.
A: Okay. And then, what was the link to the... I was actually fumbling around trying to find it yesterday — where's the link to the GitHub page?
B: Oh, okay, yeah — I can share the links, but that data is obsolete. We still need to... I mean, we need to wire up automation to get that data published. Okay, give me one second; I'll just try to find that link and share it with you.
A: Okay, that's fine. So what I could do — actually, I was showing your PR to the maintainers yesterday, and I mentioned that there was a page for it, but that's all right if you don't have it. So here's one of the recommendations from Roman. Roman made the suggestion that we could take — where's your PR... here, this one, I'll just use this as a reference — the graphs that we have... I don't know.
A: Let me do this — you've got an old page here that you rendered on your personal fork. There we go. So he was recommending that we take some of the data points and link them to PRs. And what he said was that there is already the flake finder in Prow.
A: It has this ability to associate PRs with data points, so there might be a way we can do this. I don't know how reasonable it is — I don't know if you can actually embed links into these points or something like that — but we might be able to get the PR.
A: Yeah — well, later, if you get it, just throw it in the meeting notes so I can reference it in the future.
B: What I was trying to say is that putting the PRs in there is a very interesting idea. The only challenge is that we would have to find a way to get all of the PRs for that day, because we don't run this post-PR-merge or pre-PR-merge, right — we run this on a daily basis, so multiple PRs could have gone in.
B: Yeah, I'll add that to the post-v1 list I have.
B: Shang and Chang did some experiments with the flow-control API, and it looks like with Kubernetes 1.23... It might be good to file an issue for this and bring it up in the SIG Scale Kubernetes meeting — that's what I was thinking. I don't know if people have thoughts on this.
A: Yeah, I think it makes sense. We've got to understand what's going on — this is bizarre. When you messaged me about this — in my mind, you're limiting the ability to list to once per, I don't know, a minute or something, and the rest of the requests get queued and then some get rejected.
B: I don't know — yeah, so we had a theory about that, but that should not be a problem, right? Without the API Priority and Fairness changes, if we did something like eight concurrent calls every five seconds, that was being served well by the API server with the same number of objects. So that suggests one call is not enough to, you know, bring down the API server. It's this consistent pressure that the API server sees that is somehow blowing it up.
B: That's something I was wondering as well. What happens is there is a dispatcher which will hold the requests in a queue, and as soon as one is done it will release the next. So temporarily, if the Go garbage collector has not run and the old data is still persisting in memory, waiting to be garbage collected, then the new request will allocate new data — and then you have this situation where, until the garbage collector runs, all of the data is being held in memory, and then it OOMs.
A: So I wonder too: if you reduce the pressure — you still do the one list with the restrictive policy, so we reduce the pressure — I wonder what the limit would show. If we're saying, you know, instead of 5,000 secrets, a thousand secrets, I wonder where we see this. It would be interesting, comparing that with and without, again.
A: So what I'm wondering is whether it's lower. The theory here is: let's say one list with a priority-level config would imply that, okay, we're not able to handle it, right? So then let's lower the pressure and see where it is — see what amount of pressure we can take, versus without it, where we're saying we can handle this level of pressure, whatever that amount is. I wanted to see if there's actually a quantifiable amount.
C: Yeah, so how I see it — just to summarize: previously we didn't have APF defined in our clusters, and then we discovered a limit of how many requests a cluster can hold, and then we set APF settings based on the limit we discovered, right? So the expectation is that if we stay under the limit, the API server should be okay.
C: But if we go over that limit, the APF setting should protect our control-plane pods against failure.
C: Now, what we have discovered is that even if we set this APF at the limit and we go over the limit, the API server can still experience failure, regardless of whether we have this APF defined or not.
C: We were able to push the cluster to handle a certain number of requests at a certain number of objects in the cluster without issues. That's based solely on the capability of our API server — for example, how much memory we allocate to it. So the expectation is that once we have this APF defined, it will kind of set a cap on...
C: ...on how many requests our API server can handle — and thus the eventual expectation is that even if we go above the request level we have defined in APF, our control-plane server should still be able to survive, right?
C: Right — actually, we should be able to see a pass for both of them, because if we are consistently staying within the capability of our API server, then we don't need to have an APF defined.
A: So here's the test: with 25,000 PVCs, you do 30 list requests, and the restrictive priority-level config means we're only allowing one list request. This should pass — we expect this to pass, because it means we're protecting our API server, like we were saying. And then without it we have no protection: this should fail, because we're going to overwhelm it, we're going to go over the limit. But the results you saw were the opposite, right?
A: So the test is: you had 25,000 PVCs, and you're applying pressure equal to 30 list requests at the same time. The priority-level config is only allowing one list request to go through; it's queuing however many it can, and then it's rejecting the rest to protect the API server, right? Without it, it allows all of them through.
C: So what I observed was — the number of list requests was different, it was not 30, it was something else — but the main idea was: if I have a very restrictive priority-level config defined for PVCs, what ended up happening was that after I issued these requests to the API server, I saw a significant memory peak on the API server.
C: So from the client side... without it, it's the same. Okay — in both scenarios the API server was overwhelmed, right? But the expectation is that once this priority-level config is defined, our clients should receive a lot of 429 responses saying the requests were rate-limited, right? But my client didn't see those rate-limit responses, and I still saw a big memory spike on the API server side.
A: So then maybe we've got too much pressure, because if this one also saw the spike and it caused an OOM, then this number is too high — we've got too much pressure. I understand that this one should protect us, but it's not going to in the case where... it's going to allow one list request here.
C: This one — yeah. So the expectation is: say we have a rogue-client scenario, and we want to prevent these rogue clients from spamming our API servers. So we have this priority-level config defined for these clients, right? Yeah.
C: So the test was — for my list-client setting, I had something like a list every second, with 25k PVCs, at 10 concurrency. That's quite extreme. So under this condition, with or without a priority-level config, they both failed.
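To put a rough number on the pressure C describes, here is a back-of-the-envelope estimate. The per-object size is an assumption (a serialized PVC is on the order of a kilobyte), so treat the outputs as orders of magnitude only:

```python
OBJECTS = 25_000             # PVCs in the cluster (from the discussion)
BYTES_PER_OBJECT = 1_024     # assumed average serialized PVC size
LISTS_PER_SECOND = 1         # one full LIST per second per worker
CONCURRENCY = 10             # ten workers listing at once

def bytes_in_flight():
    """Worst case: every concurrent LIST holds a full response at once."""
    return OBJECTS * BYTES_PER_OBJECT * CONCURRENCY

def bytes_per_second():
    """Serialized response volume produced per second."""
    return OBJECTS * BYTES_PER_OBJECT * LISTS_PER_SECOND * CONCURRENCY
```

Under these assumptions that is roughly a quarter of a gigabyte potentially buffered at once, and again per second — before counting etcd reads, conversion copies, or protobuf/JSON overhead — which makes the observed memory spikes less surprising.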
A: I think what you need to do then — let's try this test, because we need to get a difference: do something like 15k, and then do 50 list requests per second. That's going to help you with the concurrency; it's not going to help you with this part if there's too much pressure.
C: Yeah, but you know, I was expecting that having this priority-level config would be some sort of bulletproof protection. The expectation was that once we have it defined, we should be able to reject client requests regardless of how many objects there are or the rate of requests — we should still be able to protect our API server, regardless of the number of objects or the rate that the...
A: ...client — well, I think this is the tricky part: this stuff is only going to protect you from incoming requests. It can only get you at the API layer; it can't get anything at the memory-use layer, at least not that I'm aware of. So if you do one list request and there are a hundred thousand PVCs, that will OOM it — and it could be just one request. The inverse of that is where the priority-level config will help.
C: Then I guess that would be a separate discussion — maybe not for this one. Then we have to think about how we tell our tenants, because previously we were advertising a rate limiter that's able to protect our control-plane server regardless of our clients' behavior. But there's a gap, indeed.
A: Yeah, I think that's the gap: you have to limit this stuff. We have to be cognizant of the things that cause a lot of pressure, and then limit them — not just rate-limit them. We can rate-limit them, but we need to limit the number of these things, because this sounds like we've gone over the pressure, and no matter what we do it will break the server.
C: So previously I had 25k PVCs, right, and I was able to list 25k PVCs at a rate of every five seconds, with 10 concurrency, for one hour, without any issues — without any priority-level config defined. So I don't think it's the number of PVCs that matters, because...
C: Five — no, four... for your last bullet points, both passed. No — five list requests per second? No, it was eight list requests every five seconds.
C: Yeah — so at this point, that's why I was trying to say: as of now, I don't really see how this priority-level config changes anything. Maybe with this priority-level config defined we can stretch the amount of requests we can send to the API server a bit. For example, previously the safe point was 25k PVCs at a list request every five seconds — that was the safe point.
C: Maybe with this priority-level config defined we can push the safe point further — maybe. But regardless, I still see that under extreme cases, with an extreme number of list requests sent to the API server, we can still have breakage.
B: So yeah, the reason I was trying to have this discussion was this observed case. I think it would make a good bug against the API Priority and Fairness implementation, and we can get some discussion going in SIG Scale.
C: The other thing I was talking about — my intuition on this problem is that the API Priority and Fairness component is part of the API server itself, and we don't really know if the APF component on the API server can scale very well, right? So if they're coupled together and this component doesn't scale well, then my intuition is that while this component is reacting to or queuing all these requests, it can itself use a lot of memory.
C: So if that's the case, then if the API server is spending too much memory or too many resources assigning the priority for each request, we can have a problem.
A: Yeah. So I think what you're saying is: the theory is that this is using memory, and it's sort of tipping us over the limits because we have a lot of pressure, right? Could be. So maybe what you can do is try... then you could prove that. This would be the test: maybe we go back down a little bit and we jack this up really high — this would be, like, maybe 100. This should absolutely break.
A
We
should
be
certain
about
that.
This
should
not.
So
maybe
that's
what
we
need
to
do.
We
need
to
hit
some
extremes
here
and
well.
I,
don't
know
if
I
this
is.
Maybe
we
do
like
50
I,
don't
know
what
I
don't
know
how
the
memory
translates
yeah.
So
like
it's
a
little
bit
less
pressure,
so
we
get
a
little
bit
less
memory
when
we
load
all
the
secrets
or
all
those
PVCs
into
the
memory.
So
we
get
a
little
bit
of
breathing
room,
but
then
we
increase
this.
A: Yeah, I think it's worth a try before we go to the SIG Scale group, because we should see this work first. You know what I mean? I feel like this is working, but let's just confirm it works the way we expect, and then work backwards and see what could be going on here.
C: Yeah. So currently, in all of the testing I have done, each of the tests shows the same result regardless of whether we have this priority-level config defined or not, right? So I think the case you're trying to make is that you want to find a combination where we have a failure without it but a success with it.
C: Yeah — so that would confirm that once we have this APF defined for objects, we get some benefit for the clusters. But I don't think it's bulletproof at this point.
B: Yeah, I think the priority-level config not knowing the footprint of an incoming request is a well-known problem in the implementation, and the way they are approaching it is that they have given a fixed cost assignment to an incoming list request...
B: ...in the hope that, once they learn more from experiments, they will be able to fine-tune the implementation to have more variable and more realistic assignments.
B: So from these experiments we can probably give them data points on what a reasonable estimate of an incoming request could be — a good heuristic — and improve that. But that's a long way into the future. Those are certainly possibilities with this, yeah. Okay.
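The fixed-cost limitation B describes can be sketched as two competing cost estimators. The numbers are illustrative, not the API server's real "seat" accounting:

```python
FIXED_SEATS_PER_LIST = 1

def fixed_cost(object_count):
    """Current-style estimate: every LIST costs the same."""
    return FIXED_SEATS_PER_LIST          # ignores how big the response is

def size_aware_cost(object_count, objects_per_seat=1_000):
    """Hypothetical estimate: charge by collection size."""
    # one "seat" per 1,000 objects returned (illustrative heuristic)
    return max(1, object_count // objects_per_seat)
```

Under the fixed scheme, a LIST over 25,000 PVCs is charged like any small LIST, so APF admits far more work than the server can actually absorb — which is consistent with the memory spikes seen in the experiments.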