From YouTube: 2017-09-07 Kubernetes SIG Scaling - Weekly Meeting
C
There were a bunch of regressions that he spotted. At least some of them were configuration issues, like the kube... the fact that the large Kubemark runs were failing was because they were misconfigured. But we still see some problems in real clusters, and that part is still not debugged, so I'm looking into it now.
D
Yeah, so we fixed this Kubemark issue, thanks for that: you pointed out this problem with the master size. For the correctness tests, I filed seven or eight issues already for the failing ones, and quite a lot of them actually got fixed, or at least people sent fixes for four of them, and people responded on at least two others. And we were having a couple of regressions.
D
The pod startup latency is 216 seconds or something like that, and I'm looking into this. There is another issue with API call latencies: starting from about a week back, the API call latency increased roughly threefold, which is also probably some regression. I'm looking into it; it should hopefully get fixed by Friday, the end of this week, yeah.
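As a rough illustration of spotting this kind of API-latency regression, here is a minimal Go sketch (not part of the meeting) that scrapes the apiserver /metrics endpoint and prints the p99 lines of the request-latency summary. The proxy URL is an assumption, e.g. a local "kubectl proxy"; the metric name matches apiservers of that era.

    // Sketch: dump p99 apiserver request latencies from /metrics.
    package main

    import (
        "bufio"
        "fmt"
        "net/http"
        "strings"
    )

    func main() {
        // Assumes a local proxy to the apiserver, e.g. kubectl proxy.
        resp, err := http.Get("http://127.0.0.1:8001/metrics")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        scanner := bufio.NewScanner(resp.Body)
        for scanner.Scan() {
            line := scanner.Text()
            // apiserver_request_latencies_summary reports latency in
            // microseconds, labeled by verb, resource, and quantile.
            if strings.HasPrefix(line, "apiserver_request_latencies_summary{") &&
                strings.Contains(line, `quantile="0.99"`) {
                fmt.Println(line)
            }
        }
        if err := scanner.Err(); err != nil {
            panic(err)
        }
    }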
B
Can you explain that for a second? Are you planning to do a label? You know, we did a "run N times" label to detect flakes, which people don't really use properly, or haven't used. Are you planning to use a labeling system to enqueue a PR and then, you know, evaluate whether or not it's worthy of a 500-node kick?
D
Well, I mean, for now the plan was basically that whoever is reviewing should identify that the PR could have scalability impact, and just manually run it against the PR. There is no automatic kicking it off based on labels, but that sounds like a good idea, probably, yeah.
B
I think you'd probably want to do a batch, right? Like, you'd want to batch them up at the end of the week or something like that, where you'd have one label that says kubemark-500, or some scale-tests label or whatever you want to call it, I don't care. But then it could batch them all up and run them as part of your periodic runs. That way, it's exercised before it actually gets merged.
D
Yeah, that's actually a good idea. We could probably add kubemark-500 to the secondary tests before merging, during the batch merges, but not actually run it for those in the first cycle; it would be part of the batch before submitting. Yes, I will discuss this with the test-infra folks; they probably know more about it.
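A minimal sketch of that batching idea, assuming a hypothetical label name and job list (none of this is actual test-infra configuration): a labeled PR skips the expensive job in its own presubmit cycle, but any merge batch containing one picks up kubemark-500 before submitting.

    // Sketch: decide which presubmit jobs a merge batch should run.
    package main

    import "fmt"

    // PR is a minimal stand-in for a pull request in the merge queue.
    type PR struct {
        Number int
        Labels []string
    }

    // Hypothetical label marking a PR as scalability-sensitive.
    const scaleLabel = "needs-kubemark-500"

    func hasLabel(pr PR, label string) bool {
        for _, l := range pr.Labels {
            if l == label {
                return true
            }
        }
        return false
    }

    // jobsForBatch always runs the cheap defaults; it adds the expensive
    // kubemark-500 job only when some PR in the batch carries the label.
    func jobsForBatch(batch []PR) []string {
        jobs := []string{"unit", "integration", "kubemark-100"}
        for _, pr := range batch {
            if hasLabel(pr, scaleLabel) {
                return append(jobs, "kubemark-500")
            }
        }
        return jobs
    }

    func main() {
        // Illustrative PR numbers and labels only.
        batch := []PR{
            {Number: 1, Labels: []string{"lgtm"}},
            {Number: 2, Labels: []string{"lgtm", scaleLabel}},
        }
        fmt.Println(jobsForBatch(batch)) // [unit integration kubemark-100 kubemark-500]
    }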
B
I had a question for Wojtek: you guys are deploying, you guys are managing such large single-master clusters, which seems crazy to me. But has there been pushback on your side, or is there still a push to deploy HA configurations?
A
Aaron and I met with the CNCF folks yesterday, and they were telling us that they're going to step up to provide some account budget to run tests on AWS, and we were kind of revisiting the provider config stuff. You know, this is the five-node HA master cluster, and I was lobbying them really hard to get testing going around that configuration, and they seemed to be saying "we're gonna do it." I'm not sure about the timing on that, but they're...
A
So anyway, that was my only update. If there are no other topics for today, we can call it a wrap. Anyone? Anyone? All right, I think we're done for today. See at least some of you at the community meeting. Thanks.