From YouTube: 2021-07-22 Kubernetes SIG Scalability Meeting
Agenda and meeting notes - https://docs.google.com/document/d/1hEpf25qifVWztaeZPFmjNiJvPo-5JX1z0LSvvVY5G2g/edit?ts=5d1e2a5b
A: So, hi everyone, this is the SIG Scalability meeting on July 22nd, and today we have three topics to discuss, I think. One has been on the agenda for a few meetings; does anyone know what it is about?
B
Is
it
about
the
priority
and
fairness
performance
benchmarks
right
this
one,
a
side,
note
marshall!
It
might
be
easier
if
you
presented
the
meeting
with
the
agenda,
so
we
know
oh
yeah,
sorry
for
that.
B: 1.22 is released and then Abu is going to work on it, so yeah. Maybe I can rephrase that.
A: Okay, so I see that an item is being added now; so, do you want to start with the update?
D: Sorry, yep, I'm here. I'm just... I wasn't sure. I have a few pending PRs on the priority and fairness side, basically adding support for LIST with width. We couldn't make it; our plan was to put it into 1.22, but for some reasons we couldn't push it into 1.22. So I'm working on those PRs, and once I have a grasp on these PRs I'll probably start looking into the performance tests for P&F.
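For context, the API Priority and Fairness (P&F) feature being discussed here is configured through FlowSchema and PriorityLevelConfiguration objects. A minimal illustrative sketch of a FlowSchema routing LIST/WATCH traffic from one service account to an existing priority level might look like this (all names and values are hypothetical, not from the meeting):

```yaml
# Illustrative only: route LIST/WATCH requests from a specific
# service account to the built-in "workload-low" priority level.
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: FlowSchema
metadata:
  name: example-batch-lists    # hypothetical name
spec:
  priorityLevelConfiguration:
    name: workload-low         # an existing priority level
  matchingPrecedence: 1000
  distinguisherMethod:
    type: ByUser
  rules:
  - subjects:
    - kind: ServiceAccount
      serviceAccount:
        name: example-controller   # hypothetical service account
        namespace: default
    resourceRules:
    - verbs: ["list", "watch"]
      apiGroups: ["*"]
      resources: ["*"]
      namespaces: ["*"]
```

The performance benchmarks being planned would measure how the apiserver behaves under load with configurations like this in place.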
D: Oh, sorry, this is not about the job migration; I was referring to the performance benchmark tests for P&F. Okay, okay.
D: No, I still haven't got to it yet. I'm working on a few PRs on the feature, and once they're done I'll start working on it. At that time I'll reach out to the group for some directions and help.

A: Okay, okay, that's great.
D: But that's definitely something I want to do, because it would be a good way for me to get more familiar with SIG Scalability, the test framework, the repo, and all that's going on. So I'm looking forward to it.
A: Yeah, so actually, regarding that: I think we recently merged some kind of getting-started tutorial where you can check how to use ClusterLoader, how to write simple tests, and how to run them. So I think this will be a good starting point for you when you want to see how to use ClusterLoader, basically.
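A minimal ClusterLoader2 test config, of the kind that getting-started tutorial covers, looks roughly like this (the test name, tuning set, and the referenced template file are illustrative, not from the meeting):

```yaml
# config.yaml - illustrative sketch of the ClusterLoader2 config
# format: create one deployment per namespace at 1 qps.
name: simple-test
namespace:
  number: 1
tuningSets:
- name: Uniform1qps
  qpsLoad:
    qps: 1
steps:
- name: Create deployments
  phases:
  - namespaceRange:
      min: 1
      max: 1
    replicasPerNamespace: 1
    tuningSet: Uniform1qps
    objectBundle:
    - basename: test-deployment
      objectTemplatePath: deployment.yaml   # hypothetical template file
```

It would then typically be run against an existing cluster from the kubernetes/perf-tests repo with something along the lines of `go run cmd/clusterloader.go --testconfig=config.yaml --provider=gke --kubeconfig=$HOME/.kube/config`.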
A: Okay, so I guess now we can go to the update about the job migration.
C: I never know whether to say good morning or good evening. Everyone, I'm Arnaud, I'm a co-chair of SIG K8s Infra. Let me turn on my camera... yeah, that's better. So I just want to give an update on the job migration I'm working on. Right now I've reached out about most of the SIG Scalability jobs.
A: I think so. I remember that some time ago I was replying to one of the issues saying that even our previous job, the kube-scheduler one, was failing constantly. If that's the case, then let me get back to it and I'll double-check; but if the original job is also failing, then I wouldn't worry too much about it.
C: The next thing I did, I think yesterday, was to create a new bucket in k8s-infra that will be used by us for the Golang build. So, basically, what's happening... I'll put the link to the issue in the chat, but the builds are failing and I'm not sure why.
A: Oh, so I think the idea is that for Golang it doesn't really matter too much what Kubernetes version it is. I mean, there might be some small differences, but basically all we are interested in is the performance of a binary that is built with a newer version of the compiler. So basically it's fixed. It would probably take some time to migrate it to a newer version of Kubernetes, and, you know, we would probably need to tweak some thresholds and things like that.
B: Update the job? I'm not sure, because that's the baseline; we are doing an A/B test, right? So we run the baseline and then we compare with the... I hope we pin the version, but I'm not sure; it might be at the head of the 1.14 branch, and that would be bad, because then this test is completely not useful. So yeah, we should follow up on this.
A: Okay, so this is the link that you posted, right? Okay, so I guess, yeah, right now we probably don't want to debug it here, but let's take it offline and make sure that someone looks at it.
B: But we are actually doing what we are supposed to do: we are testing what is currently some dev 1.18 version, so it's actually fine; these jobs serve their purpose. We can obviously update from 1.18 to something newer but, as Marcel said, it doesn't really matter, because we want to basically pin everything we can pin, for example the Kubernetes version. The test is not for Kubernetes; it is using Kubernetes to test Go, because that's a great benchmark for performance.
B: We create a lot of goroutines, so any changes there around memory management and such are usually visible in our scale tests. But anyway, we can still take an action item to basically update this. That would be great; it probably doesn't make sense to do it for every minor release, but every three minors or so we should bump to the newest available.
B: Okay, yeah. I don't know... maybe you can open an issue basically asking the question that you asked: why is this bucket currently used just by some node tests? Then it'll be easier for us to take a look.
C: That's all for me... I mean, I'm sorry, my last question is: I created a document that basically keeps track of all the quotas that need to be raised for the 5k job, so I'm not sure if I need to basically raise any other quotas. Let me find the issue.
A: Give me a second... So you increased some quotas, but I think you are already running the 5k tests, right? Yep. And they all pass, right? Yep. Then I would say that probably all the quotas are correct. Okay, that's right. I don't know, maybe you have a different opinion; I don't know, Matt, Wojtek?
E: We could take a quick look at the results of the recent test runs to double-check, but yeah, I would expect that if it's passing, it should be fine.
E: Yeah, we should probably create a second project for presubmits to avoid interactions with those, but the job is supposed to be more or less a copy of that one, so the same quotas should apply.
A: Okay, I think those have even lower quotas, right? Like, for the regular pre-submits we only run 100-node clusters, right?
E: ...it's for changes that are thought to be something that may cause significant problems, so it shouldn't impact pretty much anyone.
C: Maybe I need to talk with my colleagues about this, because basically the idea was to have one dedicated project for any job running 5k nodes. So if we need to have a second project, it's not really that difficult; it's not really complicated either, we just need to be sure we don't...
B
It
also
like,
if
like
for
any
reason,
this
is
problematic.
I
don't
know,
like
cost
reasons
like
so
let's
have
this
discussion,
because
I
think
it
would
be
useful
to
retain
this
pre-submit,
but
this
optional
one,
but
I
think
it's
like
lower
priority
than
migrating
other
preceptors
and,
as
latex
said,
we
are
using
it
okay,
so
this
can
be
basically
last
item
to
do
like
if
you
are
blocked
on
this.
For
some
reasons:
okay,.
H
Hey
just
a
quick
question
about
these
optional
pre
submits,
so
is
it
limited
by
some
group
or
something
like
that
or
anyone
can
invoke
it
against.
B: But anyway, it's using a fixed project, so if you manage to run it twice it will basically fail, because you will have conflicts: one pre-submit will create nodes and the other will delete them, or create the cluster, or something like that. So, but...
A: Okay, so I think we can go to the next topic. I actually put in two different items, but I feel like the solution could be one for both of them.
A
So
recently,
on
slack,
there
were
few
people
asking
about
like
scalability
updates,
so,
for
example,
the
story
is
that
someone
was
on
kubecon
2000,
whatever
18,
I
think,
and
they
asked
like
what
are
the
differences
between
you
know
what
was
happening
then
and
now
in
terms
of
scalability
limits,
and
I
don't
think
we
have
currently
any
good
documentation
to
show
like
what
are
the
scalability
limits,
changes
over
kubernetes
versions,
and
I
was
wondering
what
do
you
think
about
it?
A
If,
if
we
should
change
the
change
it
somehow
and
how
to
make
it?
Actually,
you
know
in
a
way
that
we
don't
forget
to
update
scalability
limits
because
they're
in
my
opinion,
there
were
some.
There
was
some
work.
Obviously
that
was
extending
those
limits,
but
at
the
end
I
think
we
didn't
document
it
well.
H
Yeah
I
was
actually
talking
to
matt
a
couple
weeks
ago.
I
think
about
the
same
thing.
It
was
a
slightly
different
format
of
this,
like
not
exactly
with
the
limits,
but
even
just
if
someone,
because
a
few
times
I
got
pinged
and
someone
asks
what
were
the,
what
were
the
scalability
improvements
or
like
some
kind
of
a
catalog
of
things
which,
like
scalability,
implements
that
went
into
release
and
when
I
actually
go
take
a
look
at
the
change
log
for
the
release.
H
It
seems
like
it
only
mentions
like
one
of
the
biggest
things
right,
but
it
seems
like
we
do
actually
make
a
lot
of
small
small
improvements
here
and
there.
Maybe
some
are
not
that
significant,
but
something
just
would
still
be
good
to
know
like
in.
Let's
say,
if
there's
some,
for
example,
some
some
change
in
watch
behavior,
which,
for
example,
affects
the
number
of
events
we
process
or
something
like
that.
I
I
don't
know
if
that
today
is
counted
under
our
or
that's
mentioned
under
our
change.
Log.
H
A: So I guess it depends on what kind of details we want to include there. Because, like you mentioned, something like event-processing or event-sending speed might not really be something that represents what clusters can run, I think.
A: Like, for example... I think my internet connection is not really great right now, but I'm glad you can hear me. So, for example, I can imagine that you have some service or whatever, and instead of sending 100 events you are sending one event, right? So let's say that the event...
A: ...throughput is the same for the cluster, but then we are just sending fewer events. And of course there are many dimensions, those small dimensions that we could probably try to measure, but is it more valuable than updating the actual scalability limits that we are testing, regarding the number of pods, nodes, namespaces or services, whatever?
H
Yeah,
I
I
guess
that's
a
good
point,
what
you're
saying
that
limits
are
probably
more
customer
like
more
user-centric
and
it's
more
relevant
for
people,
but
I
I
yeah
I
so
I
I
won't
contradict
that
actually,
but
I
think
there
are
some
improvements
which
may
not
be
measured
by
our
limits.
For
example,
there
may
be
a
little
bit
more
engineering,
specific
improvements
which
we
know
that,
like
some,
which
which
might
help
with
some
kind
of
scenarios,
so
I
know
I
just
want
to
bring
this
up.
B
Yeah,
so
I
agree
with
marcel
that
like
actually,
this
might
be
two
different
things
sometimes
like
in
some
cases
it
might
be
the
same
right.
Like
the
you
know.
The
improvement
that
comes
to
my
mind
is
immutable
secrets
or
config
maps
done
by
voitec
right.
B
So
this
actually
like
improved
the
the
number
of
secrets
or
config
maps,
increase
the
number
of
superconfig
maps
supported
in
the
cluster,
but
sometimes
there's
like
really
low
level
improvements
or
like
improvements
that
don't
touch
like
any
particular
limit,
but
are
improving
a
bit
in
general
right
like
something
like.
I
don't
know,
watch,
bookmarks
or
or
things
around
that
right
so
or
like
some.
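For reference, the immutable Secrets/ConfigMaps improvement mentioned above surfaces as a single field on the object; a minimal sketch (the name and data are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config   # hypothetical name
data:
  setting: "value"
immutable: true   # kubelets stop watching it for changes, reducing apiserver load
```

Because the kubelet no longer needs to watch immutable objects for updates, clusters can support many more of them, which is exactly the kind of limit increase being discussed.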
B: We have a lot of people asking about this, so we need a way to actually maintain an updated list of limits. I think it's tricky. I am personally really supportive if you want to take this offline and figure something out; and the same for your idea, Shyam, about tracking the improvements.
B
I
also
think
there
is
value
in
that,
like
even
just
for
for
us
to
know,
to
have
a
lock,
basically
to
be
able
to
buy
some
some
governance
stuff
right,
like
doing
community
updates
like
this
quarterly
right.
So
this
is
basically,
you
show
like
six
cavities
this
and
this
right,
but
we
also
need
a
process
here
like
because,
like
just
creating
a
catalog
or
like
a
document,
I'm
worried
that
we
might
just
like
start
writing
there,
something,
but
after
a
few
weeks
we'll
forget
about
it
right.
B
So
exactly
yeah
that
will
work
and
yeah.
E
Yeah,
I
think,
yeah
I
definitely
like
for
limits.
I
think
it's
it's
not
questionable.
I
think
we
we
just
need
need
them
for
for
improvements
themselves.
I
also
see
value
in
having
that.
I
think
my
biggest
my
biggest
concern
here
is
like
to
keep
the
discipline
of
like
really
keeping
them,
keeping
keep
keep,
keeping
them
documented,
so
yeah
I
like,
I,
wouldn't
start
them
without
having
a
way
of
well.
E: Maybe "enforcing" is too big a word for it, but somehow ensuring that anything we do, any improvements we make, will actually get documented.
A: So there is one more issue that we currently have on the on-call rotation, which is that we have some experiments going on, and there is a document that says what the order of enabling experiments is; but then, you know, on-call changes every week and it's quite hard to see at what point which experiments are enabled. It could be possible that they are basically failing the tests. So there is a document, but there is no really easy process to track it.
A
So
as
an
example,
I
would
like
to
talk
about,
for
example,
huge
services,
so
at
some
point
we
enabled
huge
services
in
most
of
the
test
tests,
but
not
all
of
them.
So
this
this
basically
had
a
few
issues
like
one
one
is
that
it
pollutes
our
tests,
because
we
have
more
con,
more
flags,
more
conflicts
to
to
maintain.
A: But second of all, what we would like to do is basically enable this in all tests and then later clean it up; and then I think a third point could be to actually update the scalability limits. So, since we already have a document that says in what order we should do it, what I was thinking about is: if you want to create some experiment in the perf tests, what you do is basically create an issue, and you have this checklist.
H: Yeah, I think those are good ideas. Maybe one of us should follow up on what we want to do here with the process, like maybe a loose proposal. I'm okay if you guys want to discuss this over a Slack thread to begin with.