From YouTube: 2021-04-15 Kubernetes SIG Scalability Meeting
Agenda and meeting notes - https://docs.google.com/document/d/1hEpf25qifVWztaeZPFmjNiJvPo-5JX1z0LSvvVY5G2g/edit?ts=5d1e2a5b
A
All right, I believe we're recording, so welcome to our SIG community public meeting. It's just the two of us. Someone put something here: OWNERS files. I have no idea what it's about; I'm still catching up after paternity leave. So I will ask whoever added it to talk about this, or maybe to lead the discussion here.
B
To clarify... hi, I can quickly clarify what I meant. So basically, during the writing and review of the annual report for 2020, one of the questions that was asked by the steering committee was how we are keeping...
B
...up. You froze, no?
B
...anything sophisticated; we don't have any real process for doing that. So it's something we can probably discuss here, if...
B
I'm sorry, so yeah. So basically, during the... I'm not sure, maybe I can repeat from the beginning. So, during the annual report review for 2020...
B
Can you hear me now? Yep, okay. So maybe just to repeat, since I'm not sure if you've heard anything. So yeah, during the annual report review, the steering committee asked... one of the questions, pretty much the only one that we didn't have any reasonable answer to, was how we are making sure that our OWNERS files are up to date. And basically I didn't have any reasonable answer; I mean, we are not really doing anything here. I mean, if someone disappears, then we will eventually remove them.
B
But we don't really have any process for, like, adding contributors who did enough stuff to the reviewers, for example, and so on and so on. So it's something that we should probably discuss, if we have any...
B
What are the conditions, for example, under which we can add someone to the reviewers file? Reviewers probably mostly for the perf-tests repo, or for subdirectories in the perf-tests repo, because we don't own much in the main repo, pretty much only Kubemark, and that's not being worked on super actively. So yeah, it's mostly about perf-tests.
A
That's a good question. I was actually thinking about that some time ago, like before my paternity leave. There was a moment when we had a lot of reviews, and I think I was doing most of them, so I started thinking whether we should add more reviewers to help with that. I believe now it's again kind of quiet in the perf-tests repo, but I generally agree that we should have some process for adding reviewers.
A
It doesn't have to be anything formal, but we should just, like, add someone every once in a while, right? So.
C
To that point, I wanted to actually get more involved, and I was wondering what could be a good starting point.
C
I've been contributing to the API server a little bit, but I haven't done much on the perf-tests side, and I don't know much about the scalability tests that we run. I was planning to carve out some time to dedicate to this area, so I was wondering if there's some starting point, or any pointers you have that I could follow.
A
So if you can find an issue like this, that will definitely be a good starting point. I can also add a to-do to take a look at the list of good starting issues, because I was trying to always have some unassigned issues there, so that if someone like you wants to start, there is something to start with. But I haven't checked this in two months, so I will check it.
A
I will add it to the... yeah. The next step, basically: you start working on something in some particular area of perf-tests, and I believe the next step should actually be you becoming a reviewer in that area. So that's how it should work, right?
B
We should probably come up with some more specific conditions. I don't know, say you contribute 10 PRs that are of reasonable size, I mean not typo fixes but something that contributes real logic, then you are becoming a reviewer. And once your reviews become...
B
...really good, and there are no significant additional comments from approvers, you are probably becoming an approver, or something like that. But it's probably on Matt and myself to come up with some more specific proposal of how this policy should look, I guess.
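[Note: The OWNERS files discussed above are per-directory YAML files that Prow uses to pick reviewers and approvers, so adding someone is just a PR against one of them. A minimal sketch, with hypothetical path and GitHub handles, looks like this:]

```yaml
# perf-tests/clusterloader2/OWNERS (illustrative path and handles)
reviewers:
  - new-contributor    # added after, say, ~10 substantive PRs in this area
approvers:
  - existing-approver  # promoted once reviews need no significant follow-up
labels:
  - sig/scalability    # label automatically applied to PRs touching this dir
```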
B
The other thing, regarding starting: there are issues that are proposed, or things that would be useful for us, but it doesn't mean you can't propose something on your own, of course. For example, the example from Q4 last year, actually, was...
B
There were two folks from Ericsson who were missing, basically, reasonable networking tests. Not really at the Kubernetes level itself, but more of this lower-level networking, in terms of throughput and latency of the network when you are running applications in pods. So what is, basically, the overhead of the Kubernetes networking stack for the data plane, let's call it. And they basically came with a very concrete proposal...
B
...of how to do it. It took us probably almost half a year to have that done. I must admit that I was basically the limiting factor on that path, and my reviews weren't super fast.
B
So this is another example where we could benefit a lot from increased review bandwidth. But literally, like two days ago, the tests were set up. I didn't yet have a chance to look into the results, but the tests are passing, which is actually pretty cool.
B
I can paste the link here in the chat, and I will paste it into the notes later too, along with the links to where the test is. But it's basically a great example where someone comes in with their own idea, that we are not testing X at all, proposes how we should do that, and drives it. So yeah.
C
Cool, that sounds great, thank you. And also, I know that we run the scalability tests, the scale tests. Is there a way to go over the results, look at the metrics, see what the load and the throughput are, and try to find any bottlenecks?
B
So you should have access to everything. If you don't, let us know and we will figure it out. But yeah, basically, if you look into TestGrid, the SIG Scalability TestGrid, which is more or less the same link that I pasted, it's just one of the tabs of our SIG Scalability...
B
...dashboard in TestGrid, then you should be able to click into individual tests, and then into their artifacts and so on. So yes, you should have access to everything.
A
Yeah, and this question gets repeated, I believe, every few meetings. So if you scroll down far enough, you will find a place like this. Can you see my screen? Because I can see it. Yes? Yeah, this was some pointers. I know that I was also giving a presentation about this, and maybe it's linked somewhere here. I will try finding it; if not, maybe you can repeat it or something like that. But yeah, the way we do things, everything is open to the public.
A
So all the configs, results and everything are available to everyone, so yeah. I wanted to ask: Wojtek, have you been uploading our meetings to YouTube, or do I need to do that?
B
They should be on the, like, leads list, at least most of them. I think there is one recording missing, from one of the meetings that was led by Shia; I couldn't join, and I think he forgot to record it. Okay, yeah.
B
We do have recordings from the period when you weren't around, yeah.
B
Yeah, I think the point that I wanted to mention, which I already kind of mentioned, is this new networking test that is measuring mostly throughput and latency in different configurations. There are, like, a couple of different tests there, or test cases: TCP, UDP, HTTP, and pod-to-pod and pod-to-service, if I remember correctly.
B
So there is a doc for that, I mean... let me search for it. We merged it into the perf-tests repo, so it's there; the description of the test is definitely there. Let me find it.
B
Good. So yeah, as I mentioned, we set it up last week, but there were some problems with the config itself, and it seems like we finally managed to fix that two days ago.
B
But to be honest, I didn't yet have time to look into the results themselves and see how they look. But at least the test is passing and producing some results, which is pretty cool, that we finally have that. I think the networking characteristic is one of the things that we were missing.
B
I mean, I'm seeing something like 122 kilobytes per second of throughput, which seems pretty low to me, so we probably should look into that a little bit more, but we at least have some starting point here. So...
B
And I'm seeing negative latency, so there are some issues with the test itself. But at least we have the... for example, here I'm seeing negative latencies, so...
B
Yeah, those are more like bugs, I think. The framework itself is pretty cool. We iterated a bunch on the approach; initially the approach was quite...
B
...not Kubernetes-friendly, but we moved to something that is native to Kubernetes, and the setup of the test itself is based on CRDs and things like that. So it's pretty cool, and...
B
It's a native measurement in ClusterLoader. Yes, that's really cool, nice. Do we have perf-dash configured for that, or is that another thing? There were changes to perf-dash; I guess they weren't deployed yet. We probably should do that, so we may not have them yet. I can also...
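[Note: ClusterLoader2 tests in the perf-tests repo are declared as YAML configs whose steps start and later gather named measurements. A minimal sketch of how a network-performance measurement could be wired in follows; the Method name and Params here are illustrative assumptions, not necessarily the merged API:]

```yaml
# Hypothetical ClusterLoader2 test config invoking a network measurement.
name: network-performance
namespace:
  number: 1                             # run everything in one test namespace
steps:
- name: Start network measurement
  measurements:
  - Identifier: NetworkPerf
    Method: NetworkPerformanceMetrics   # illustrative measurement name
    Params:
      action: start
      protocol: TCP                     # e.g. TCP, UDP, or HTTP
      mode: pod-to-pod                  # or pod-to-service
- name: Gather network measurement
  measurements:
  - Identifier: NetworkPerf
    Method: NetworkPerformanceMetrics
    Params:
      action: gather                    # writes throughput/latency artifacts
```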
A
Yeah, so that's a good point, right. So in general, if you have a background in API Machinery and the things around it, and you have any ideas about anything we could do to increase coverage of scalability or performance, for example, you know, adding some benchmarks or things like that, then yeah, that's why we're here, right. So we're open to discussing it and implementing it.
B
Yeah, yeah. So one thing that comes to my mind, which may not be the best idea, but I will throw it out: I remember you were presenting your P&F (API Priority and Fairness) benchmarks back then, I don't know, a quarter ago or something.
B
Maybe contributing something around that, maybe exactly those benchmarks, or maybe something around them, could be useful. Even if we don't want to be running them continuously, but only, I don't know, once per release, to check something at the end, or something like that. I think that...
A
...right now, and we will definitely find resources to run such tests continuously. So yeah, we'll be happy to provide resources for that, and yeah, if you'd like to contribute and share one of your benchmarks and...
B
Given that we are going to invest more in P&F: like, I'm currently working on the KEP modification, or KEP update, or extending the current KEP, to support the list requests and the first part of watch requests. I mean, I think you weren't around during our P&F sync this week, where we were discussing it a little bit.
B
I think there will be a couple of rough edges there, and I don't expect my upload to be, like, the final version of what we will proceed with. But anyway, it's much easier to discuss something more concrete than, yeah, just hand-waving. So yeah, especially in this context, that we will be changing...
C
Definitely, let me look into it. If we have some sort of test to, you know, stress P&F with different FlowSchemas and priority levels, and then come up with a benchmark, I think that would be good.
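[Note: The FlowSchemas and priority levels mentioned here are regular API objects, so a stress test can create many variants of them and drive traffic through each. A minimal sketch, with illustrative names and values, using the flowcontrol v1beta1 API current at the time:]

```yaml
# A hypothetical priority level for load-test traffic.
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: PriorityLevelConfiguration
metadata:
  name: perf-test-level
spec:
  type: Limited
  limited:
    assuredConcurrencyShares: 10   # share of the apiserver concurrency budget
    limitResponse:
      type: Queue
      queuing:
        queues: 16
        handSize: 4
        queueLengthLimit: 50
---
# A hypothetical FlowSchema routing a test user's requests to that level.
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: FlowSchema
metadata:
  name: perf-test-flows
spec:
  priorityLevelConfiguration:
    name: perf-test-level
  matchingPrecedence: 1000         # lower values are matched first
  distinguisherMethod:
    type: ByUser                   # fair queuing per requesting user
  rules:
  - subjects:
    - kind: User
      user:
        name: perf-test-user
    resourceRules:
    - verbs: ["list", "get"]
      apiGroups: [""]
      resources: ["pods"]
      namespaces: ["*"]
```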
A
All right, cool. So yeah, I propose we just take a look, and you can also ping me on Slack if you have any questions about our framework, how we set up clusters, how we run the tests, about ClusterLoader. I can send you some pointers. Think about it, and I believe next meeting we can discuss the plan for running this test continuously. So that's a great idea.
C
...on my plate as well. We're investing in P&F heavily as well, so this blends right in, so I think this is a good idea. I did some testing before, and I can probably revive those tests, and then, if we can bring them into the perf-tests repo, I think that would be great.
B
So I think we have one person internally who would have time to do something. But if we agree on both the list changes and the watch changes, there will be quite a lot of that. So if you would be able to help with any of that, I think that would be...
B
Awesome, yeah. I will try to upload at least the initial version of the KEP as soon as possible, so that we can... because I'm sure it will take us a little bit of time until we converge on what exactly we want to do. But yes, I definitely would like to see both of them happen in 1.22 if possible. So yeah, any help here would be extremely, extremely useful.
C
Oh yeah, like I said, I already signed up for these two tasks. So however way I can help, I should be able to do it.
B
Cool, yeah. And this is something that is kind of in between API Machinery and scalability. I mean, technically API Machinery owns that and stuff like that, but it helps a lot for scalability. So it's like, yeah.
C
Somewhere in between, exactly, yes. Yeah, I can see P&F, kind of, in the long term becoming a very crucial piece of scalability and reliability for Kubernetes, yeah.