From YouTube: 2021-09-30 Kubernetes SIG Scalability Meeting
Description
Agenda and meeting notes - https://docs.google.com/document/d/1hEpf25qifVWztaeZPFmjNiJvPo-5JX1z0LSvvVY5G2g/edit?ts=5d1e2a5b
B: Okay, maybe I will start. This is the SIG Scalability meeting, and today is 30th September 2021. We actually don't have any topics on the agenda, but if you have any questions, you can proceed with them.
A: Okay, yeah. My name is Kaifi Kathy Jung, and I'm working at Intel. I have a question regarding the scalability of the in-place pod vertical scaling feature: how is that going? Because we did some testing and we found that the latency is a little bit high, so we're thinking about how we can contribute to that.
B: Can you maybe paste it in the chat? But it sounds like vertical pod autoscaling, so I would say it's probably the autoscaler.
A: Okay, I see. I actually have another question. In your agenda there is an item about performance benchmarks. Is there any document on what kinds of benchmarks there are, or what kind of performance testing you are doing — like testing the QPS, the latency, something like that?
C: We also have some specific tests for generating high QPS and validating — I can't remember exactly, but we are generating, I think, thousands of QPS and seeing how the system behaves, something like that. I think that's mostly it. And for P&F (API Priority and Fairness), we don't have any benchmarks yet, but this is going to change, hopefully soon.
C: No — so, basically, we are creating a cluster, let's say with 100 nodes. Then we are trying to create, I think, a thousand pods; those pods are gathered in some Deployments, some DaemonSets, stuff like that, and we are creating some Services based on those. Then we are running some rolling updates, and then we are deleting everything.

C: All this time we are watching the latency of the system, and we are checking if it's within our SLO — it's one second for most of the calls, and for some list calls it's higher. So basically, this is what we are doing. And just to clarify, we are also doing the same in the 5,000-node case, and in the 5,000-node case we are creating, I think, 150,000 pods in such a cluster.
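For illustration, the SLO check described above (checking that call latency stays within roughly one second) could be sketched like this. The sample data and function names are made up, not the actual test code:

```python
# Rough sketch of a latency SLO check like the one described above.
# Sample data and function names are illustrative, not the real test code.

def percentile(samples, p):
    """p-th percentile of latency samples (nearest-rank method)."""
    ordered = sorted(samples)
    k = max(0, round(p / 100.0 * len(ordered)) - 1)
    return ordered[k]

def within_slo(samples, threshold_s=1.0, p=99):
    """True if the p-th percentile latency is within the SLO threshold."""
    return percentile(samples, p) <= threshold_s

# 99 fast calls plus one slow outlier still pass a 99th-percentile SLO.
latencies = [0.05] * 99 + [2.5]
print(within_slo(latencies))  # True
```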
C: Yeah, I would hope there should be some document describing that, but I'm not aware of anything like that, so I think the only source of truth is the source code, unfortunately. So I guess maybe an action item for our SIG would be to make it clearer — some documentation describing that, at least at some high level.
A: Yeah, that would be great. I don't know what testing you are doing and what results you are getting, so I think it's good for people to have an idea: for this codebase, how large a cluster we can support, what's the QPS, what's the latency.
B: With the QPS, you actually shouldn't think of it as QPS, because there are different kinds of requests. For example, I can imagine a cluster that is doing 1,000 QPS just getting ConfigMaps and Secrets, but then you can have a second cluster of the same size that is, for example, listing all the pods. Those requests are much heavier, and because they are much heavier, you will not get 1,000 QPS.
B: So QPS is not usually the value that we are interested in. Actually, we merged this question about QPS into one of our documents — I will post it soon, maybe after the meeting, when I find it. We also have a second document where we are kind of saying, okay, there are some Kubernetes limits and, for example, we are supporting up to 5,000 nodes.
B: Yeah, so this meeting has an agenda file, and I will add an action item for myself: after the meeting I will find those two documents and just add them here.
B: And actually, yeah — do you want to talk more about the P&F benchmarking?
B: Okay, so do we have any more questions?
A: Okay, so where can I find out, for example, whether you have any new features planned for your group — any new features that your group will work on? Is there any list or document which —
B: — has that? So actually, as SIG Scalability we have multiple different goals. Sometimes it's also some improvements in terms of scalability, but, for example, the first and most important one is protecting against scalability regressions.
B: So we are constantly running those CI tests and basically analyzing all the regressions, but also sometimes we add some features. I would need to think, but I believe recently we had three major improvements, kind of. We often work with other SIGs to improve some particular parts of Kubernetes. As an example, there is Priority and Fairness, and we were quite heavily involved in Priority and Fairness, also from the design perspective — basically improving it.
B: The other thing could be immutable Secrets. If you have regular Secrets, what happens is that when the Secret is mounted to a pod, the kubelet gets this Secret from the kube-apiserver and then opens a watch. With immutable Secrets, the kubelet just gets the Secret, but it will never get an update, and this, for example, also reduces the number of watches on the API server. So these are two improvements I can think of right now that were made quite recently.
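For reference, marking a Secret immutable is a single field in the manifest; the name and data below are made up for illustration:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-config        # hypothetical name
type: Opaque
immutable: true               # kubelet fetches this once and never opens a watch
stringData:
  setting: "example-value"
```

Once `immutable: true` is set, the Secret's data can no longer be updated; to change it, you delete and recreate the Secret.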
A: Okay, got it. So basically, this group is concentrating on running the CI tests, or whatever performance tests, to make sure everything works when it scales out, something like that — is my understanding correct?
C: That might be very time-consuming, but I just want to say that we do contribute to new features, usually developed by other SIGs. I think the P&F is currently a good example of a topic that we as a SIG are contributing to. So it's not that we're only running tests; there are also changes to Kubernetes that we are involved in.
B: Basically, based on those tests, we are either finding regressions or finding places in Kubernetes where we can improve. But we are also consulting with the other SIGs: in case they come up with a new feature, then as SIG Scalability we are consulting with them to make sure that this new feature will be scalable.
F: Okay, got it. Okay, thank you. Hey guys, quick question on that. I saw Marcia found this interesting issue with the one-second — for example, the exempt calls' latency going up. I had a question there. So, Marcia, do you know how come we didn't catch that in 1.20 itself? Was it hidden, or were our tests not sensitive?
F: Okay, so this is the issue I checked — the one reported here.
F: Nice deep dive there. Yeah, I think what I didn't understand is how we caught this in 1.23 and not in 1.21. Was it a regression in 1.23, or...?
C: Yeah, so I think... no, I'm not sure.
F: Or, like, in 1.20 and 1.21 — back then, did our tests also cover APF, or did we enable it recently for our 5,000-node tests?
C: I think they were covering P&F, yeah, that's true. But maybe I misunderstood: are you referring to the last comment, or one of the others? Because I think there are two changes in the issue you linked. The first one is that basically we are seeing some lock contention; it is causing, let's say, a regression where some calls are slowed down. And then, in the last one...
C: So then there is this fix which improves those locks, and then we are seeing that the latency of the exempt calls is going down. But on the other hand, we are seeing that the latency of some calls is growing — for example, in the node-high priority level, the latency there is growing, I would say even significantly — and this also can be considered a regression. So which one are you referring to?
A: I have a question regarding the latency: is it end-to-end latency? How do you measure latency from the client side — from when the client sends the request until the client gets some notification or query result confirming that the workload has been deployed? Or do you have different latencies for different segments?
B: So this latency that Magic and Xiaomi were talking about is the latency of pure API calls.
B: But apart from that, we also do have some SLOs for different latencies that maybe you would be more interested in — for example, pod startup latency. Pod startup latency basically measures the time from when you create a pod until it's running on the node, taking into account that the pod needs to be stateless.
B: So actually, the API server has two main kinds of requests. One — you can poll; basically, like you said, you issue get requests. But apart from that, the API server also has watch requests, and they basically work in a way that you open the watch and say that you are interested in, for example, this particular pod, and then whenever anything changes for this pod, you will get an update.
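The difference between polling with get and opening a watch, as described above, can be shown with a small toy model. This is not the real client-go or API server code; all names here are made up:

```python
# Toy model contrasting get (poll) and watch semantics.
class ToyApiServer:
    def __init__(self):
        self._objects = {}
        self._watchers = {}  # key -> list of callbacks

    def get(self, key):
        """Polling style: the client must re-issue gets to see changes."""
        return self._objects.get(key)

    def watch(self, key, callback):
        """Watch style: register interest once, get pushed every update."""
        self._watchers.setdefault(key, []).append(callback)

    def update(self, key, value):
        self._objects[key] = value
        for cb in self._watchers.get(key, []):
            cb(value)

api = ToyApiServer()
events = []
api.watch("pod/example", events.append)  # "interested in this particular pod"
api.update("pod/example", "Pending")
api.update("pod/example", "Running")
print(events)  # ['Pending', 'Running']
```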
B: Okay, so do we have more questions?