From YouTube: 2017-09-14 Kubernetes SIG Scaling - Weekly Meeting
B
Okay, so I think it wasn't really about testing, because we still haven't defined the SLIs and SLOs in the area of networking. I wrote an initial proposal of SLIs and SLOs for the networking area and shared the doc, for now, with sig-scalability and sig-network, and I got quite a lot of feedback on it. The feedback was mostly about how to define them precisely, not really about what exactly we would like to measure.
B
Ok, so basically, for the SLIs I'm proposing, I don't think it makes sense to go into deep detail on how exactly they will be defined, because there is still some back and forth on that. But at a high level, the things we would like to measure are: first, the latency of DNS lookups, that is, lookups to the in-cluster DNS and the latency we see for them.
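As a sketch of how a DNS-latency SLI like that could be probed (this is not the proposal's actual definition, and the cluster-local service name in the comment is hypothetical), one can time repeated lookups and report percentiles:

```python
import time
import socket

def measure_dns_latency(hostname, samples=100):
    """Time repeated lookups of one name; return latency percentiles in seconds."""
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            socket.getaddrinfo(hostname, None)
        except socket.gaierror:
            continue  # this sketch only counts successful lookups
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return {
        "p50": latencies[len(latencies) // 2],
        "p99": latencies[min(len(latencies) - 1, int(len(latencies) * 0.99))],
    }

# e.g. measure_dns_latency("my-service.default.svc.cluster.local")
# (hypothetical in-cluster service name)
```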
B
Second, the latency of propagating changes, say to the service load balancing, by Kubernetes: basically, how fast, from when a pod is becoming ready or not ready or deleted, or something like that, this is actually reflected in iptables, or whatever other mechanism we are using for the internal load balancing.
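Measuring that kind of programming latency amounts to timestamping the state change and polling the dataplane until it shows up. A minimal sketch, assuming a caller-supplied probe stands in for the real iptables check:

```python
import time

def propagation_latency(is_programmed, timeout=60.0, interval=0.1):
    """Poll until the dataplane reflects a change; return seconds elapsed.

    is_programmed is a placeholder callable: a real probe would check,
    for example, that an endpoint's IP has appeared in (or vanished from)
    the iptables rules kube-proxy manages.
    """
    start = time.perf_counter()
    deadline = start + timeout
    while time.perf_counter() < deadline:
        if is_programmed():
            return time.perf_counter() - start
        time.sleep(interval)
    raise TimeoutError("change not reflected within timeout")

# Demo with a fake probe that reports "programmed" on the third poll:
state = {"polls": 0}

def fake_probe():
    state["polls"] += 1
    return state["polls"] >= 3

latency = propagation_latency(fake_probe, interval=0.01)
```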
B
There is also, beyond that, an SLI that is really cloud-provider specific, so probably there will be no SLO around it: how fast we can create external load balancers. That's basically just so we know how fast it is; it's not really about enforcing anything. And the last thing is basically the overhead of Kubernetes networking. At a high level, it's how much latency is added to requests by all this iptables stuff and all those concepts that are Kubernetes-specific.
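One plausible way to estimate that overhead (an assumption on my part, not something stated in the meeting) is to compare request latency through the service VIP against hitting the pod IP directly; the difference approximates the cost of the proxying path:

```python
import statistics

def added_overhead(direct_latencies, via_service_latencies):
    """Median latency the service path adds over hitting the pod IP directly."""
    return statistics.median(via_service_latencies) - statistics.median(direct_latencies)

# Toy numbers in milliseconds, purely illustrative:
direct = [1.0, 1.1, 0.9, 1.0]      # requests straight to the pod IP
via_vip = [1.3, 1.4, 1.2, 1.3]     # same requests through the service VIP
overhead_ms = added_overhead(direct, via_vip)
```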
A
Okay, very good. I made a couple of high-level notes here, and I'll get the doc into the notes as well. Is there something specifically motivating the networking work?
B
Basically, we know that there are some limits: for example, with a large number of services, iptables won't work. We already know that, and we would like to formalize at what scale it works and what users can expect from the networking stack in Kubernetes. Currently, in theory, networking can be totally down and our current SLOs will still be satisfied, which is obviously not right.
B
Yeah, I think both matter. I've heard from customers personally that they were using thousands, or even tens of thousands, of services and it doesn't work for them, so it probably depends on which customers we actually look at. I think both are important, and it's hard to say which one is more important.
C
Our scalability tests, the next item on the agenda for the meeting: I have linked the GCE scale performance and scale correctness jobs. In the release-master Testgrid dashboard they have both been continually failing. I found that Shyam has created an umbrella issue to track all the issues related to the failing correctness tests.

D
Yep, that's right.

C
Okay.
D
There are still a few failing ones, and the thing is, some of them are actually flaking: a few tests have passed once and are not passing again. I did not file issues for them yet, because I'm not very sure about those, the ones which passed at least once. So I think the correctness tests are in reasonably good shape, but we are having real regressions in the performance ones, like density. The density test has been failing.
D
The load test itself is passing, but the density test has been failing for a few runs already, and we recently figured out one issue: we were using n1-standard-1 nodes, which are small, for the five-thousand-node cluster. They have less memory, about three and a half GB or something, and we were seeing a lot of OOMs.
D
It was happening across all nodes, because the kubelets and the NPDs, the node problem detectors, are actually sending a lot of events, a huge QPS of events and node status updates, which was pretty much thrashing the API server and causing high memory usage on the master, on the API server. So increasing the node size to n1-standard-2 actually helped: we think it helped with some of the latencies, which got better, and it also improved the resource usage of these components.
D
But there is still one more regression we have to figure out. I've been trying to debug it for a couple of days already, trying to find out why some of the calls still have high latency. Most prominently, the node status and delete-pods calls are having high latencies, and it's also flaky: some calls are having high latency at some times, and other calls at other times.
D
There's one more, which is the pod startup latency showing up, but that seemed to be a flake that happened for one run and didn't happen for the next ones, I think, so it's fine now. I can probably close that issue down. The problem currently is only with the API latencies.
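For context, the API-responsiveness SLO being referred to here is percentile-based: roughly, the 99th percentile of API call latency, measured per resource and verb, must stay under a threshold (commonly 1 second for non-list calls). A sketch of such a check, with illustrative numbers:

```python
import math

def percentile(latencies, q):
    """Nearest-rank percentile of a list of latencies."""
    ordered = sorted(latencies)
    rank = max(1, math.ceil(q / 100 * len(ordered)))
    return ordered[rank - 1]

def violates_slo(call_latencies, threshold_s=1.0):
    """True if the 99th-percentile latency for a call type exceeds the threshold."""
    return percentile(call_latencies, 99) > threshold_s

# Illustrative: five slow calls out of a hundred push p99 over the 1 s threshold.
samples = [0.05] * 95 + [2.5] * 5
slow = violates_slo(samples)
```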
C
So it's okay that he's running it against master right now, because until we have the release branch, master is the authoritative signal, but once we branch for the release, we stop looking at master. Okay, so I would ask you to work with Shyam or somebody to make sure that the 1.8 versions of these jobs exist; we still need to be testing against the release branch.

D
Cool, yep, I'll change it.
C
By next week, I would ask you to show up to Monday's burndown meeting, because I anticipate that's when we're going to start getting individual status updates for the release-blocking issues.

D
Okay, cool.

C
Right, that's all I have as the release representative. Thanks, guys, and we're good.