From YouTube: 2021-08-05 Kubernetes SIG Scalability Meeting
Description
Agenda and meeting notes - https://docs.google.com/document/d/1hEpf25qifVWztaeZPFmjNiJvPo-5JX1z0LSvvVY5G2g/edit?ts=5d1e2a5b
B
So the default values for the max in-flight settings in Kubernetes are, I think, around 200 and 100, but I was wondering: for GKE or other big clusters, how do you tune those values? Do you do it based on CPU cores, or something else?
A
Yes, I think Maciej will know it better. I'm not sure how much we can actually tell, but yes, more or less we are tuning it with the size of the master.
C
I see, okay. Yeah, I don't know how much of it is public; in that case I could take a look at it. The problem we have in OpenShift is that the default values are very high, something like 3000 and 1000, and even though we have P&F (API Priority and Fairness) enabled, it doesn't help, because the cluster gets killed anyway; the limit is just so high.
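A side note on why such high limits defeat P&F (background, not from the meeting): when API Priority and Fairness is enabled, the two max in-flight values are summed into a single total concurrency limit that P&F divides among priority levels, so defaults like 3000 and 1000 still permit up to 4000 concurrent requests overall. A sketch of the configuration described above:

```shell
# With --enable-priority-and-fairness=true, the apiserver's total
# concurrency is the sum of the two flags (3000 + 1000 = 4000 here),
# which P&F shares out among priority levels rather than reducing.
kube-apiserver \
  --enable-priority-and-fairness=true \
  --max-requests-inflight=3000 \
  --max-mutating-requests-inflight=1000
```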
C
Yeah, the other problem is: if we tune it too low, then requests may be starving, because the concurrency limit is so low and we don't have borrowing in P&F yet. So that's the other concern. But I think right now, especially in OpenShift, we need to lower this value, because most of the clusters are very small, and I don't think there's any cluster on earth that can have 4000 requests in flight. That's just huge.
B
Yeah, cool, that's good to know, so I think we'll implement something similar on our side. Do you know, roughly, per CPU core, how many requests you allocate for the limit?
B
I see. Yeah, I think on our side we might do some testing with a mix of different requests, see what a baseline, say a four-core master node, can handle, and then maybe work out a value or a heuristic based on that. I was thinking of around 100 per core, but we'll see.