From YouTube: Kubernetes Community Meeting 20170706
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
In this meeting, updates from: Release managers for 1.8, 1.7, and 1.6
SIG updates from SIG Scheduling, SIG Instrumentation, SIG PM, SIG Docs, and SIG Testing
B: All right, welcome everybody to the July 6, 2017 meeting of the Kubernetes community. I'm your moderator, Jaice Singer DuMars. I am the co-lead for various SIGs, including SIG Azure, and I am happy to take you through. It will undoubtedly be a short meeting because of the holiday week in the United States this week. We had a demo scheduled, but unfortunately that person cancelled, so there's no demo this week; we're just going to go through the release updates. And right now, where are we in the release process?
C: So, 1.7.0: first of all, amazing. I wish that I had been on the community meeting last week, but holy moly, what a performance. So proud of our community and all the hard work that went into the release team. So if you know one of the release team members, give them a hug, or some other acceptable kind of thanks for their work. So for the 1.7 release we have...
G: As for what's going on for 1.8: we haven't finished planning for that, but we have a few of the items kind of lined up. One of them, probably the biggest one, is finishing the work that was partially started for 1.7 on priority and preemption. The idea is that we're introducing this concept of a priority class on each pod, and it turns into a numeric priority that the scheduler can then use to preempt lower-priority pods if the cluster is full. So this solves a bunch of problems.
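As a concrete sketch of the mechanism being described: PriorityClass landed as an alpha API in 1.8 under `scheduling.k8s.io/v1alpha1`; the names and values below are illustrative, not from the meeting.

```yaml
# Illustrative sketch only: a priority class, and a pod that references it.
# The scheduler resolves priorityClassName to the numeric value, which it can
# use to preempt lower-priority pods when the cluster is full.
apiVersion: scheduling.k8s.io/v1alpha1
kind: PriorityClass
metadata:
  name: critical-addons
value: 1000000          # higher number means higher priority
globalDefault: false
description: "For cluster add-ons that must always be able to schedule."
---
apiVersion: v1
kind: Pod
metadata:
  name: addon-example
spec:
  priorityClassName: critical-addons   # resolved to the numeric priority above
  containers:
  - name: app
    image: example/addon:latest        # placeholder image
```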
G: One problem that it solves is that we've been using this kind of hacky thing called the rescheduler to make sure that critical add-ons like DNS and Heapster can always schedule, even if your cluster is full of user pods. That will be replaced with this different mechanism, where those kinds of pods will have higher priority, and then, if they can't schedule, they can evict lower-priority pods in order to get scheduled.
H: The one thing that people probably want to be aware of, especially if you're on SIG Cluster Lifecycle, is the proposal that Klaus had made with regards to changes to DaemonSets. I don't think it's going to be executed on entirely in the 1.8 cycle, but we'll try to get a proposal cleaned up. The basic premise is that we want to move the functionality for scheduling of DaemonSets from the controller manager into the scheduler. I think it might go through a couple of phases and iterations to get there, but that's the current thinking.
H: It just needs to take into account some of the constraints that currently exist inside the current handling of the system. Right now, people are using DaemonSets in ways to get around the scheduler because of conditions that exist, basically bootstrap conditions. I'll put a link inside of the community meeting minutes, so folks who are interested can take a look at the proposal.
G: Yeah, I think some issues like that were recently brought up, so I think there will be some more discussion about that before anything is actually executed on, but people should definitely take a look at that. I don't have the issue number here at the moment, but that's a good one. There's a doc for it, but I don't know where it is offhand, and there's also an issue for it.
G: Let's put it this way: it will do things like spread pods from over-utilized nodes to less-utilized nodes, and things like that, to try to make the cluster layout healthier by moving pods around, of course while respecting things like pod disruption budgets and so on. This is something we've talked about for a long time. It solves a bunch of problems. For example, say you have a multi-zone cluster and one of the zones goes down: all the pods get rescheduled to the remaining zones.
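Since the rebalancer described above must respect pod disruption budgets, here is a minimal sketch of one (`policy/v1beta1` was the API version in that era; the app name is illustrative):

```yaml
# A disruption budget the rebalancer would have to honor: it may not
# voluntarily evict pods of this app below 2 available replicas.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: frontend-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: frontend
```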
G: It also, like I said, if you get an imbalance in the load across nodes for other reasons, can help balance that out. Avesh from Red Hat has been working on that, and he may have something in 1.8 for that. And by the way, I found the issue for the thing Tim just talked about, the DaemonSet pods being scheduled by the default scheduler: it's 42002. So anyway, I think those are the main things. Tim, anything else? Oh.
H: The one last bit I think that's worth mentioning is that me, Andy, and Bobby will probably be taking a look and trying to rationalize the world of component config, config maps, and how we are dealing with file handling inside of the scheduler. There are currently three separate ways that data can be slipped in to the scheduler, and we would like to rationalize that into a concise, clean mechanism that aligns with what other components are doing in the system.
G: If you don't do anything, then it will continue to just be five minutes, but if you want the pod to stay on the node longer or shorter when the node becomes unreachable from the master, or the kubelet starts reporting not ready, then you can select how long you want the pod to stay on the node, and that is on a per-pod basis. So we're hoping that can get into 1.8; it was alpha in 1.7, but hopefully we'll get it to beta in 1.8.
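The per-pod knob being described here is a toleration with `tolerationSeconds` against the not-ready/unreachable taints. A minimal sketch (the taint keys carried an alpha prefix around 1.7, so treat the exact keys as era-specific):

```yaml
# Pod spec fragment: keep this pod on an unreachable or not-ready node for
# 10 minutes instead of the default 5 before eviction.
# Keys are era-specific (alpha-prefixed in the 1.7 timeframe).
tolerations:
- key: node.alpha.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 600
- key: node.alpha.kubernetes.io/notReady
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 600
```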
G: And then the last thing was being able to configure the scheduler policy using a config map. Since kind of the beginning of time, since before I joined the project, the scheduler policy could be changed. By scheduler policy I mean choosing which predicates get used by the scheduler, and also what the weights of the priority functions are; those could be modified using a configuration file on disk that the scheduler would read when it starts up. There was work done in 1.7 that we're hoping will be finished and merged in 1.8.
G: That allows the scheduler to take that configuration from a config map instead of from a file on disk, so that the user can change it easily while the system is running, without having to change a file on the master node. This is kind of related to what Tim was talking about with the component config. But anyway, that's kind of the last feature that I can remember for 1.8.
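For context, the scheduler policy in question is a JSON document selecting predicates and weighted priority functions. Wrapped in a config map, the idea looks roughly like this (a sketch; the exact ConfigMap name and key the scheduler reads were still being settled at the time):

```yaml
# Sketch: scheduler policy delivered via a ConfigMap instead of a file on disk,
# so it can be changed while the system is running. Names are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: scheduler-policy
  namespace: kube-system
data:
  policy.cfg: |
    {
      "kind": "Policy",
      "apiVersion": "v1",
      "predicates": [
        {"name": "PodFitsResources"},
        {"name": "MatchNodeSelector"}
      ],
      "priorities": [
        {"name": "LeastRequestedPriority", "weight": 1}
      ]
    }
```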
I: Yeah, so we had quite a few things happening since the last time we updated here. Heapster 1.4 was released; it now supports Elasticsearch 5 and Graphite, and retention policies are supported for InfluxDB. And the most important thing is that the metrics API is now supported by Heapster, which lays the groundwork to actually allow different backends to feed things like the HPA, so you don't actually have to run Heapster anymore.
I: There's also ongoing work, as far as I understand, currently around Prometheus: building tooling to make it really easy to just write your own metrics API server, so that you can back it with Prometheus for custom metrics. You don't have to run Heapster to actually collect data if you already have some mechanism in place to do that for you. And then we have kube-state-metrics, and we are kind of shooting to get that to 1.0, because it's pretty widely used by now.
I: As an outlook for the next few months: yeah, we just want to wrap up this tooling around building your own metrics API servers, and it should move into the Kubernetes incubator. And we are kind of investigating right now what an API could look like that supports historical metrics, because the metrics API right now just gives you basically the current state, and for some use cases, like idling and the HPA, that's not enough.
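To make the HPA connection concrete: the consumer these custom metrics API servers feed is an autoscaler spec along these lines (`autoscaling/v2alpha1` in that era; the metric name and targets are purely illustrative):

```yaml
# Sketch: an HPA scaling a deployment on a custom per-pod metric that would be
# served by a custom metrics API server (e.g. one backed by Prometheus).
apiVersion: autoscaling/v2alpha1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests_per_second   # illustrative custom metric
      targetAverageValue: "100"
```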
B: Great, thank you so much for the update. We had a request in the chat for somebody from SIG Instrumentation to update the community repo description, just so that it's more, I guess, accurate, or longer. I didn't read it myself. So if you could add to that, that'd be great. Yep.
J: I'll put in a good word for SIG Testing. Not that I have anything prepared to present today, but we have issues representing the majority of what we've been working on, post-1.7 and for the rest of 2017, in the test-infra repo. We're going to do a final scrub of that at our meeting next week, Tuesday at 1 p.m. Pacific, and I plan on presenting to the community a little more formally what our roadmap is. We would love additional help on pushing some of this forward.
J: Sure, we're all aware that we want testing to be better. I think there are some awesome proposals for work in the community repo around improving our CI signal, and I'm sure they'll talk more about this during the 1.7 retrospective tomorrow. So anyway, if you want to talk more about testing-related stuff, or come help out, SIG Testing meets Tuesdays at 1 p.m.
B: I also wanted to add something too, and Aaron, correct me if I'm wrong about this, but SIG Testing seems to be a really great place to get involved, especially if you're just getting into the project, because there are a lot of things that need to be done that don't necessarily require as much deep coding experience. Am I correct in that, that there's a lot of help needed on things that are not necessarily writing code?
J: We could use a lot of help around actually documenting how things work. I feel like every time somebody comes to the project for the first time, they ask how the tests work, we point them at docs, and they're like, yeah, but 90% of that is cruft. We could really use some help in actually chipping away at the cruft. And if not SIG Testing, contributor experience is also a great place to share these stories, so we can collect actionable feedback. One other thing, the type of thing that another technologist threw out there: I want to get non-Googlers on the test rotations.
J: What's the best way? Why don't you come show up at the SIG Testing meeting, and I can post a link to an issue where I have asked the questions that anybody who's interested in doing test-infra oncall would be interested in: What are the responsibilities? What do I need to know to do this? What credentials or tools do I need to have to do this? This is definitely a journey of personal and professional discovery, and if anybody wants to pitch in more, that'd be great.
J: To Mark's question on what rotations there are: my understanding is that there are actually three different rotations. There's a user support rotation; Brian Grant [unintelligible]. People often ask questions, and the response is to direct them to Stack Overflow or to Slack, or to post a link to our troubleshooting guide on the website.
J: That takes a whole lot of time, and sometimes it seems like people who are experienced in the use of Kubernetes, but maybe don't know the nitty-gritty of how to review a pull request and look at lines of code, would be super valuable in helping us triage off user support. However, at the moment, I think very few people have volunteered there. That's yet a third effort, which I guess maybe contributor experience would be the SIG for. That leaves two other, more technical, rotations. One is the build cop.
J: Correct me if I'm wrong, people, but the build cop is the one that's responsible for making sure that the build doesn't break. They can start and stop the submit queue if needed, and they can manually merge things; they have superpowers, but those need to be used with great responsibility. Test infra is basically: what's going on with the tests? Why aren't the tests passing? Where do the tests run? So, things like the maintenance of Prow, and understanding why.
J: Also being able to expand quota, or doing janitorial tasks on tests that are failing because we've run out of subnet quota in a given Google project. But I am literally making all of this up on the spot, because there's no document for what a test-infra oncall person's responsibilities are. There is a document for the build cop's responsibilities, but I think that could also stand to be fleshed out.
A: I was going to say that trying things until they don't work, and then trying to get feedback on them, is fundamentally how we are improving Kubernetes. So we really need people to try things that look like they may or may not be able to do them, and to learn and ask questions and get support from the people who have been with the project a lot longer.
F: Information like that, helpful places to concretely point new contributors and new testers, is really awesome. So, as Jennifer Rondeau mentioned, there is a larger conversation about where to point new contributors, and that kind of information is really helpful. So I expect that we will see action on that, hopefully within the next 30 days; we'll see something concrete go into the new contributor guidelines to make contribution easier for people who are doing the documentation.
B: Great, thank you so much. Okay, moving on to announcements: tomorrow, at the same time as this meeting, we will be having the Kubernetes 1.7 release retrospective. This is a chance for us, as a community, to rally around what happened during that release and learn from it, and hopefully get some great concrete suggestions to make 1.8 even better and move our community forward.
B: So if you could, please click on the document that is linked in the agenda, go through, and add your items to the list: what you think could have gone better, what you think went well, and what you would like to see change for 1.8. The more bullet points we have in there ahead of time, the less awkward silence there is with me sitting and waiting for people to write things during the retro, which is always appreciated. So, yeah, I'm really excited to see what comes up from that. Again, I just want to thank everybody.