From YouTube: 2017-01-12 Kubernetes SIG Scaling - Weekly Meeting
Description
Public meeting recording of the Kubernetes Scalability SIG.
D
Yeah, so... okay, so... oh, confusing interface on mobile... okay. So, yes, between those two groups, I think the cluster ops group is probably the better one to reach out to, and Rob would be the guy there. On the lifecycle side, we're building up the tooling, but we're not quite to the point where... you know, upgrades are sort of bleeding edge there, with stuff like kubeadm, so it's going to be more manual documentation for now. The cluster lifecycle group is more focused on the tooling.
C
I do think we should probably, once you have the docs in place... there should probably be a group of folks on this SIG that can run through it and validate and check it, because we have to get this super right. If this goes out the door in a bad way, it's going to taint people's, you know, experience.
A
Exactly what I think: this needs involvement from cluster ops or cluster lifecycle, because, I mean, somebody who's written, like, all the playbooks... we went through upgrade testing for release 1.5, right? How does the etcd3 versus etcd2 question factor into upgrade testing for this next release of Kubernetes? What are the scenarios that need to be tested?
A
They'd be paranoid, and better at coming up with the scenarios, but then whose responsibility is it to implement how to deal with those scenarios? Maybe that falls back on our groups. Since upgrade testing got a lot of coverage, and it was really painful for the duration of 1.5, and I don't think etcd3 was even in the mix there, I want to make sure we're prepared for that.
C
The official support route for etcd3 is that it has etcd2 internally inside of it, and they're still going with etcd3. So 2.3.7 is the last known version of etcd2, but every version of etcd3 that's being created actually has the v2 engine inside of it.
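For context: the v2-inside-v3 point above can be checked directly, since etcd serves a plain /version HTTP endpoint on its client port. A minimal Go sketch, assuming a default local endpoint at 127.0.0.1:2379:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// etcd answers GET /version on the client port with its server version.
	// The address below is an assumption for a default local setup.
	resp, err := http.Get("http://127.0.0.1:2379/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// etcd 3.x replies with JSON like {"etcdserver":"3.0.15","etcdcluster":"3.0.0"};
	// etcd 2.x uses a different response shape, so print the raw body rather
	// than assuming one schema.
	fmt.Println(string(body))
}
```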
A
You know, exactly, and it's... I keep trying to find the actual documented deprecation policy for alpha versus beta versus GA features, as applied to features generally, not just API stuff, and it seems like the support for etcd2 is something that needs to follow that kind of policy. So I'm trying to figure out how many versions of Kubernetes need to continue to support etcd2, as the default client or as a potential configuration. All right, you say it's just etcd3...
C
Yeah, given the fact that... I know the paranoia around cluster updates, not from myself but from my customers. Our customers' view of it is that they're very leery of changing things, and there's only a specific outage window that exists for them, because it's not all managed environments, right? GKE is vastly different from the plethora, the cornucopia, of different customers and environments that exist, so they're usually much more leery.
A
So it sounds like, Tim, you want you and Wojtek to sort of own the issue and the docs on this etcd stuff, and then maybe, it sounds like we're thinking we want to run the scenarios that we would need to support past either... it should be cluster ops or cluster lifecycle, I'm still genuinely confused which is which, but one of those two groups, to get a real-world cluster operator view from people who are leery about upgrades, just their paranoia, as I alluded to earlier.
D
The difference between them, as you stated, is that cluster ops is focusing on the best practices and guides, because, until the unified tooling comes... at this point, sort of where things are right now, cluster ops is the right place to go. The lifecycle group is only getting started on the upgrade story, in terms of tooling to do upgrades automatically.
A
Okay, but so: someone runs this, potentially, past cluster ops and tries to get a test plan together earlier, like well before the launch, so that we can maybe anticipate the risk of upgrade testing being a hot-button issue, and I want to make sure that etcd3 is included. Is that the summary of what I heard?
B
Things are going pretty well, I think. According to the pager, this week folks were kind of busy putting out fires, but... I know there was some minor problem: the node problem detector that we use in our internal tests had some bug in it, probably, and a fix for that went in, I believe, this week. I think it is needed for Kubemark; the refactoring PR has been done for quite some time.
B
He's in DC right now; he decoupled the provisioning, etcd, and all the master setup stuff from the cluster scripts. So this will be a clean interface that someone would need to implement in, like, a few scripts, two functions actually, for whatever cloud they want to run on, or on-prem, or anything, to be able to start a Kubemark master, assuming they can already start regular Kubernetes nodes there.
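The transcript doesn't name the actual scripts or functions, so the following is a hypothetical Go sketch of the kind of clean provider interface being described: a couple of hooks a cloud or on-prem integration would implement so the generic Kubemark code can start a master anywhere.

```go
// Hypothetical sketch only: none of these names come from the refactoring
// discussed above; they illustrate the shape of a two-function provider hook.
package kubemark

// MasterConfig carries the knobs the generic Kubemark code would pass through.
type MasterConfig struct {
	NumHollowNodes int    // how many hollow nodes the cluster will simulate
	MachineType    string // provider-specific instance size for the master
	EtcdVersion    string // e.g. "2.3.7" or a 3.x release
}

// Provider is what a cloud (or on-prem) integration would supply so that a
// Kubemark master can be started on its infrastructure.
type Provider interface {
	// StartMaster provisions a machine, brings up etcd and the master
	// components on it, and returns the address clients should use.
	StartMaster(cfg MasterConfig) (masterIP string, err error)

	// TearDownMaster releases whatever StartMaster provisioned.
	TearDownMaster() error
}
```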
A
Okay. You know, even though I don't necessarily have comments on those PRs, I have been keeping track, and it seems that you guys have been super productive on pushing Kubemark forwards, so I'm excited to see that progress.
E
In addition to the repos, put up the designs on them, to document any ideas about what the solutions would look like, etc., and maybe decide whether we own this project, and then, you know, start working on it.
A
You can paste the link in Slack or in the meeting notes; that would be helpful. And, I mean, maybe a follow-up question for me: I know that Tim, like, Red Hat has sort of been beating a drum about the fact that controllers are generally really chatty and inefficient. Is the job controller just the particular problem child here, or is this part of a broader effort to kind of look at, you know, all the controllers and tune them down?
C
We've already done the broader controller work. I mean, David, Wojtek, me, Rob... we've already fixed a lot of the broader controller problems with the caching and the watch updates. There used to be this core problem that they would relist too often. I don't know the details of the job controller, but I do know that if you're trying to institute any generic batch processing engine on Kubernetes without owning certain aspects of it, it will always be woefully inefficient. This is a state space that is highly, highly, highly tuned.
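The caching and watch work referred to here is the pattern client-go exposes as shared informers: list once, then keep a local cache in sync from a watch, instead of re-listing the apiserver on every pass. A minimal sketch of that pattern, assuming a local kubeconfig path:

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig; the path is an assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A shared informer lists once, then keeps a local cache in sync via a
	// watch, instead of re-listing the apiserver on every reconcile pass.
	factory := informers.NewSharedInformerFactory(client, 30*time.Minute)
	pods := factory.Core().V1().Pods().Informer()

	pods.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("pod added:", obj.(*v1.Pod).Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// After the cache syncs, reads hit local memory, not the apiserver.
	cache.WaitForCacheSync(stop, pods.HasSynced)
	select {} // keep watching
}
```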
A
Double-checking... a quick comment from me: I know Samsung said, when we were discussing our 2017 goals, that we were looking at coming up with a guide for, sort of, cluster sizing stuff. We don't have anything to share this week; we may have something to present to the group by next week. Other than that, that's pretty much all the updates I'd add from my side.
A
This might be a broader discussion at the community chat. Joe, Tim? Maybe you guys have a better opinion on this, because there's been a fair amount of drama over this whole proposal thing, and I'm hoping we're going to get a chance to talk about it a little bit more in-depth today, but what this group's consensus on it is, I'm not quite clear.
C
I don't really have any strong opinions on it. I think it depends upon what company has the ability to put resources behind a proposal, and I think ownership would be pretty clearly demarcated based upon what the work is. Our SIG is, you know, usually about optimization and tuning, and potentially architecting some pieces, and if it falls underneath those sorts of buckets, then I think we'd probably have ownership over it. But, you know, it'd probably cross into other SIGs as well, so it could be, like, co-ownership in most situations.
A
It's kind of... I'd take guidance from the community to see what we as a community would like to do to push stuff into this release, and then, from there, the decision we make on that can inform how we as a SIG would like to deal with, you know, design proposal ownership and all those responsibilities. By the way, I don't even want to talk about the PM rep thing; that's something else I'm going to follow up on with Bob and Tim and Joe, to figure out what we want to do about that. Okay.
A
I'm not even sure I really know, right? I tried to sum it up in the email I sent, where it sounded like the PM group would like a representative from each SIG to attend weekly PM meetings, so that the PM group understands which features are being implemented by whom and what their statuses are, and I think that is ass-backwards. I think that if the PM group is the group that owns the feature and cares about it, they should be showing up to SIG meetings to understand what the SIG is doing to work on their feature.
A
This is pretty much the way that things have worked inside of Google, as I understand it: you know, the PM is sort of the advocate for the feature; it's their job to chase it down. So let's just take this to the community meeting and see where things evolve from there. But that's why, like, I don't think it's worth a broader discussion in this meeting at the moment, because it seems like it's still being sorted out. Yeah.