From YouTube: Kubernetes Community Meeting 20161215
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Demo: end-to-end monitoring of Kubernetes; release updates for 1.5 and 1.6; SIG introspection discussions - SIG-Scheduling, SIG-Autoscaling, SIG-ClusterOps; Q&A
A: Good morning and good local time to everybody. This is the Kubernetes community meeting for December 15, and we are going to start out with a demo from one of our special interest groups, which is escaping me right now; too many windows. Frederick, why don't you go ahead and introduce yourself? My apologies.
B: [...] and I have already set up a namespace for our monitoring things. Let's just have a quick look at that, and we can see that there's nothing running so far. Can everybody see my screen? Oh, thank you. Okay, so we have nothing running so far. Now we take our one-button-push solution, we create everything, and then we can have another look at how it's starting up all our components.
B: So after a few seconds we can see that everything has spun up, and we can have a look at Prometheus. What we can see here is that the configuration we have in place automatically discovers all the components within Kubernetes: it finds the etcd cluster, the controller manager, kube-dns and all the other components, and starts to scrape those. As Prometheus is collecting all of this data, it continuously evaluates all the alerts that we have in place.
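The service-discovery step B describes can be sketched in a few lines: endpoint addresses come from the Kubernetes API, and relabeling-style rules decide which of them become scrape targets. This is an illustrative sketch only, not the demo's actual configuration; the `prometheus.io/*` annotation names follow a common community convention and are an assumption here, as are the endpoint values.

```python
# Illustrative sketch of annotation-driven scrape-target selection, similar in
# spirit to Prometheus' Kubernetes service discovery plus relabel rules.
def scrape_targets(endpoints):
    """Return host:port targets for endpoints whose pods opt in to scraping."""
    targets = []
    for ep in endpoints:
        ann = ep.get("annotations", {})
        if ann.get("prometheus.io/scrape") != "true":
            continue  # pod did not opt in; skip it
        # An explicit port annotation overrides the discovered endpoint port.
        port = ann.get("prometheus.io/port", str(ep["port"]))
        targets.append("{}:{}".format(ep["ip"], port))
    return targets

endpoints = [
    {"ip": "10.0.0.3", "port": 10252,
     "annotations": {"prometheus.io/scrape": "true"}},        # controller manager
    {"ip": "10.0.0.7", "port": 53,
     "annotations": {"prometheus.io/scrape": "true",
                     "prometheus.io/port": "10054"}},         # kube-dns metrics port
    {"ip": "10.0.0.9", "port": 8080, "annotations": {}},      # not opted in
]

print(scrape_targets(endpoints))  # ['10.0.0.3:10252', '10.0.0.7:10054']
```

The real Prometheus configuration expresses the same filter declaratively as `kubernetes_sd_configs` plus `relabel_configs`, which is what lets the demo pick up new components without edits.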
B: So here we have a pretty exhaustive list of alerts already in place, but that is of course always improvable, so we invite everyone to contribute to it. But we already have a pretty good list here.
B: This Prometheus instance also automatically discovers Alertmanager instances that are running in the Kubernetes cluster. That means that when alerts actually fire, notifications will be sent out accordingly. In addition to that, I spoke about some pre-built dashboards, and we have a couple of those. For example, for the entire Kubernetes cluster we have a general overview of what's going on, and we can also drill into individual dimensions; in this case we're looking at the cluster as a whole.
B: You can also look at individual nodes. Here we have the master, for example, which doesn't have much memory left anymore, and on the other side a worker, which does have a bit more memory left. In addition to that, we can also look at individual deployments; let's look at this deployment, for example. By default, without even having instrumented the applications themselves, we can already get a pretty good overview of everything that is running within Kubernetes, and in case we're not covering everything with a deployment yet, we can dig into it.
A: Okay, well, thanks Frederick for the work and the demo. We did get one question in chat, which is: where is the code for this demo available?

B: Yes. [...]
A: Very cool. All right, any other questions from people? All right, well, thanks for the SIG Instrumentation update; this looks like it's going to be a lot of fun and a good help for monitoring going forward. So, we are on to releases. I don't think there's a 1.4 release; I haven't heard about a 1.4 update. Do you know of one, Saad?
C: Yeah, 1.5.0 went out on Monday. There was a security issue identified with that release, so we spun up a 1.5.1 that went out less than 24 hours later.

C: We're recommending that everybody use 1.5.1 instead of 1.5.0, unless you're careful with, basically, the flags that you pass in for authorization and authentication. The blog post for 1.5 is live, the docs are live, and we're going to hold off on 1.5.2 until after the holidays unless there's a pressing need to issue one. [...] it looks like you may have something, but yeah.
A: All the excitement, which is nice, after the 1.5 milestone. As always, many many thanks to our 1.5 release team, with Saad, and Caleb Miles from CoreOS, and Dims from Mirantis as well. Thank you all for all the work you put in getting us to a 1.5 release, and then a 1.5.1 right on its heels, which is exciting but important.
D: So I made a PR with the proposed timeline for 1.6. It follows the same schedule and the same kind of format as previous releases, and it'll take about a quarter for 1.6. We're obviously going to be using the time at the end of December for planning and potential bug fixes; we expect it to be quiet, we expect people to take vacations and enjoy the break. So we start the coding phase on January 3rd, which is the first week of January and the first day after the holiday, and have a seven-week code period.
D: In that time, just like during this release, we'd have a feature freeze date, so you have till the third week of January, January 24th, to propose any features that you would like to be in 1.6. They need to be in the features repo. I think we had a few issues, or maybe one issue, with that this time: any features for 1.6 should be in that repo, which makes it really easy to have them in a centralized place.
D: February 27th would be our code freeze date, and that's when we move into the code freeze period. We'll work on bug fixes, and some SIGs will probably be focusing on bug fixes the entire way, but just as a general rule of thumb, no more features are going to be going in at that point. And then the release would be scheduled for March 22nd. On the PR itself, it looked like a lot of people were okay with it.
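Laid end to end, the dates quoted for the 1.6 cycle line up as follows. The milestones come straight from the discussion; the arithmetic is just a sanity check (February 27 is in fact just short of eight weeks after January 3, so "seven-week code period" is approximate).

```python
from datetime import date

# 1.6 milestones as stated in the meeting.
coding_start   = date(2017, 1, 3)   # first day back after the holidays
feature_freeze = date(2017, 1, 24)  # features must be in the features repo by now
code_freeze    = date(2017, 2, 27)  # no new features after this point
release        = date(2017, 3, 22)  # target release date

print((feature_freeze - coding_start).days // 7)  # 3 weeks to propose features
print((code_freeze - coding_start).days // 7)     # 7 full weeks of coding
print((release - coding_start).days)              # 78 days, well under a quarter
```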
E: Yeah, just a brief note, if it makes sense here. This is a direct request for the feature owners, or the people who are going to be the feature owners in 1.6. There are lots of features that are currently in alpha or beta stage in 1.5, and if you're willing to continue development of those features in 1.6, please set the appropriate labels and milestones in the features repo, or just ping me directly and confirm that these features are going to be developed in 1.6.
E: I will update the milestones and the labels. At the current moment, between the releases, I'm performing maintenance work in the features repo: I'm cleaning up some old labels and adding new ones, which will actually make our feature submission process more clear. That's all for me.
A: Awesome. Off we go; we will go on to SIG Scheduling. These are the continued discussions of the 1.6 self-assessments for the SIGs. I'll put the helpful questions into the notes again, but basically this is the big one: is 1.6 going to be stabilization for your SIG? Have you done your introspection, and what are your thoughts around the work that is being done by your SIG? So: David Oppenheimer or Tim St. Clair, of Google and Red Hat.
G: Can you hear me?

A: We can hear you just fine.

G: Okay, this is David. So Tim and I and the folks in the scheduling SIG discussed this at the last couple of SIG meetings, so here's a three-minute summary. First, in terms of the metrics of current health: Conor Doyle from Intel ran code coverage for the unit tests we have in the parts of the repository that we're responsible for, and reported the coverage we had. [...]
G: He used whatever tool gets invoked when you run make with the KUBE_COVER parameter; apparently there's a tool, and if people are wondering about it, get in touch with Conor, because it seemed pretty straightforward to get the test coverage automatically computed for the unit tests in different directories. Does our SIG own code? Yes, in a bunch of different directories in the kubernetes repository and also in the kubernetes incubator.
G: We talked about how much time we spend responding to user questions. There's not a lot of traffic on the SIG mailing list, and we don't pay attention to Stack Overflow, but we do get a lot of issues on GitHub. We generally only respond to the ones tagged with the sig/scheduling label; people will still file issues with scheduling or resource management questions, problems, or suggestions and not tag them with the SIG, and those we essentially never see. So we talked about how we might improve that, although that's a tricky problem that I know all the SIGs have. There were about a hundred open issues, which was less than we kind of expected. One of the things we've already started doing, and are going to continue in 1.6, is closing the old, irrelevant, or non-actionable issues; Tim St. Clair started working on that, to try to get the count down lower, and we're using the new labels that Brian Grant set up. We also have about a dozen test flakes that we're responsible for. Historically we've had about a 75/25 feature-to-stabilization ratio, and that's trending more towards 50/50: more toward stabilization, fewer new features than in the past.
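The untagged-issue problem is mechanical: a SIG that triages through a label filter never sees an issue that lacks its label. A small sketch of that effect (the `sig/scheduling` label name matches the convention mentioned above; the issues themselves are invented for illustration):

```python
# A SIG triaging by label filter only sees issues carrying its label.
issues = [
    {"number": 101, "title": "scheduler panics on node drain",
     "labels": ["sig/scheduling", "kind/bug"]},
    {"number": 102, "title": "pods stay pending on large clusters",
     "labels": ["kind/support"]},            # scheduling-related but untagged
    {"number": 103, "title": "tolerations ignored for tainted nodes",
     "labels": ["sig/scheduling"]},
]

def sig_view(issues, sig_label):
    """Issue numbers a SIG sees when it filters by its own label."""
    return [i["number"] for i in issues if sig_label in i["labels"]]

print(sig_view(issues, "sig/scheduling"))  # [101, 103]; issue 102 is invisible
```

This is why routing untagged issues to the right SIG needs either triage by a human or tooling that applies labels automatically.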
G: [...] That's part of tolerations, but other than that, everything else is either moving things to beta (for example, node affinity, pod affinity, and taints and tolerations are all going to be moved to beta) or working on design: we're planning to work on designs for preemption and for resource allocation for batch workloads, not implementing anything, just working on the design. Everything else is just technical debt, like updating the OWNERS files and improving documentation; we have some concrete ideas about areas where the documentation for scheduling and resource management related stuff should be improved.
G: So: improving logging, in addition to the events, and closing those inactive, duplicate, and non-actionable issues like I mentioned before. And the last thing is (I think somebody mentioned this at the last SIG meeting) that together with the node team we're forming a temporary working group on resource management, to try to prioritize 2017 features related to resource management that span node-level and cluster-level scheduling.
G: So, things like how you handle NUMA scheduling, other features for performance-sensitive applications, scheduling disk as a resource, and things like that. We're going to work on prioritizing and understanding those features during 1.6, but again, that's not something that will be implemented. So that is the quick summary from SIG Scheduling, which...
A: ...is awesome. Thank you for the update, and to you too, Tim, and the rest of the team, for going through the hundred open issues and relabeling them. I know they were relabeled and then re-relabeled after the labeling changes, so it was extra awesome that that was done and then tidied up in the new paradigm. As you all know, this project is nothing if it is not moving fast and trying to improve things all the time. So thanks for keeping up with us.
G: So there is an issue where we are having a discussion about something that's vaguely related to preemption: a mechanism which is kind of like resource arbitration between different batch frameworks. It's not an issue about preemption per se, in the Borg preemption sense, although that is kind of a piece of it, and we need to consider both of these issues together. If you are burning to make some comments about preemption in general, I can open an issue for preemption.
G: Actually, Dawn has mentioned that to me several times, and there is actually an issue open for that, specifically at the node level: ordering the evictions, keeping important pods pinned to the node, and some mechanism for that, which might be related to scheduler-level preemptions.
F: You may know the answer to this: there's some notion of criticality that was added for DNS, right, for the pod to go through the scheduler. Do you know if that mechanism also works for daemon sets?
A: All right, there was a note in chat that Dawn wants that approach for things like kube-proxy as well. Okay, fantastic. That was SIG Scheduling, and we are on to SIG Autoscaling: Solly Ross from Red Hat.
H: So the horizontal pod autoscaler generally has pretty good unit test coverage, somewhere between 75 and 85 percent depending on which directory you look at, so that's pretty good. We realized that none of us were actually looking at Stack Overflow, so that's something we need to work on. In general, in terms of stabilization for the cluster autoscaler:
H: For this release, one of our big goals in general is to get more insight into the status of both of the autoscalers and to allow users to figure out what's going wrong. So in that regard there's a focus on stabilization with the horizontal pod autoscaler. We are bumping it to a new API version, so obviously that's a new feature, but the purpose of it is to stabilize some of the custom metrics work.
H: In terms of our open bug reports, the count is a bit high, but again, a lot of the bugs we are seeing are just people who are confused or lack insight into what's going on with their HPA, so we're hoping that our work on providing more insight is going to help.
H: In terms of flakes, the HPA has a couple of flakes in the e2e test suite and there's a bunch of cluster autoscaler flakes, but we're hoping some of the insight work will help with those too, so it all kind of ties back to that: how do we get visibility into the status of the autoscaling? Also, as I was looking through our issues I discovered we have some unlabeled ones, but factoring in unlabeled issues...
H: ...we have on the order of magnitude of about 100 open issues, roughly split between questions and feature requests, for example for the HPA logic to be more complex when dealing with certain failure cases. So we'll be looking towards solving some of those in the future; for others it's just about getting better at answering the questions, or saying no, we're not going to choose to do that right now.
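Much of the user confusion H mentions comes down to the HPA's core rule, which is a simple proportional controller: desired = ceil(currentReplicas * currentMetric / targetMetric), with a small tolerance band so tiny deviations don't cause churn. A sketch of that rule; the 10% tolerance here is illustrative and not necessarily the controller's exact default:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     tolerance=0.1):
    """Proportional HPA-style scaling: move the per-pod metric toward the
    target, ignoring deviations inside the tolerance band."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # close enough to target: no churn
    return math.ceil(current_replicas * ratio)

print(desired_replicas(4, current_metric=200, target_metric=100))   # 8: load doubled
print(desired_replicas(4, current_metric=105, target_metric=100))   # 4: within tolerance
print(desired_replicas(10, current_metric=30, target_metric=100))   # 3: scale down
```

The ceil means the controller rounds up rather than undershooting capacity, which is one of the behaviors users most often ask about.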
H: Okay, so custom metrics. There are kind of two parts to custom metrics: there is the actual HPA API object, which will definitely be in 1.6, and then there's the backing API that allows the HPA to retrieve custom metrics, which we're targeting for 1.6 as well. Hopefully we'll have some initial implementations on top of some of the monitoring backends, but I suspect that more implementations will trickle in over 1.6 and 1.7, and we'll look to stabilize it further, because it's going to be alpha in 1.6.
A: Neat, Solly, cool. All right, any other questions about SIG Autoscaling? All right, let's move on to SIG Cluster Ops. We're going through things a little bit quickly today, and we lost one of our agenda items about docs in 1.6, so we will have extra space if there are any other SIGs that want to go through their reflection questions. Rob, Cluster Ops: go ahead.
I: So if somebody's interested in doing that, we'd be happy to have it. There's been some fluttering around with some related SIGs with some overlap, and we would entertain people who have operational concerns, standalone or otherwise, participating with us, because there's time in our agendas. We're still trying to meet every week, and we have a ton of backlog of work around producing documentation; I'd love to see a new flood of demos and discussions around infrastructure, reference architectures, and things like that. So we're seeing all this activity.
I: Cluster Ops' goal is to be a center point for it, so that we can actually discuss it, document it, and figure out best practices around operating. From a history perspective, Cluster Ops works around taking the releases that are out and figuring out how to build operational practices, reference architectures, documentation, and commonalities between different operating environments and practices. Our goal in having those conversations is to be a place where people can come together to talk about operating and improving Kubernetes.
I: So if people need to say, hey, we're trying to figure out whether this is the right thing or not, we would be very happy to give them an audience at Cluster Ops. That's why we have weekly meetings: we meet every Thursday at, I guess, one o'clock Pacific, three o'clock Central, so three o'clock for me. I'm hoping to do a...
I: We've produced some really interesting maps of functionality for Kubernetes, we're about to do some networking work, and we've written a document about Kubernetes upgrades the manual way. All of those things we're refining, meeting, and having discussions about, so there's a lot of good stuff happening; it's just that we could use somebody else to pick up and carry the ball and carry the meetings.
A: There's a volunteer in chat, Jason Singer DuMars, so you two should talk offline about that, and thank you for the framing of what Cluster Ops is doing as well; that's awesome. Did you go through the questions today? I know a lot of them are kind of tangential to you, because you don't really own much code, right?
I: No; we haven't really had a good meeting in a couple of weeks because of KubeCon, re:Invent, and a whole bunch of other stuff, so we're definitely behind. For us, the hardening release is a chance to catch up and actually produce and collect documentation from that perspective.

F: Thanks. Okay, are they Google documents?

I: Yep.
I: Yeah, our goal was to use the Google docs while we were doing collaborative editing, so I just pasted the one that has the links. We'd love to get this into something more source-controlled; the question I would have about that is the graphics we're producing. The model that we sort of settled on was that we would produce graphics for the documentation first, because it made it easier to actually document stuff.
I: ...and I know that some people have been using these methods, using the drawings. Please do; they are CC-licensed drawings, and we're building them as community resources, so feel free to pull them in and make suggestions. Our goal is to demystify Kubernetes, and the drawings are the first step.
A: Okay. Other SIGs? I didn't see anybody drop into the notes that they wanted to go through the introspection questions, so last call on SIGs... all right, I'm not hearing anything, so I'll go on to open discussion topics. Does anybody have anything they want to discuss?
J: Yeah, I want to add a second note on that. Our eventual goal here is to extract a lot of the commonalities across things like kubernetes-anywhere, kops, and all the other tools into kubeadm, and then break that out so that something that's more of a full lifecycle tool, like kubernetes-anywhere, can use it. I'm not sure exactly what the status is there; I know there is work for kubernetes-anywhere to actually use kubeadm.
F: [...] and I are working on setting up some finer-grained notification mechanisms. We will need help from the SIG leads to fully implement them, and the reasons why will become clear when we send a note to the SIG leads mailing list, so be watching for that, and please follow the instructions when they come your way.
A: In our grand effort to distribute control and decision-making within this project, there's going to be another request out to the SIG leads (I know many of you are on here) about hosting this meeting going forward. At this point we have twenty-five or twenty-six SIGs, give or take; it ebbs and flows on a seemingly daily basis. But that means that each SIG would be asked to host this meeting twice next year.
A: So I will be sending out a mail about that, and I will certainly be helping in the background, getting the agenda together and such, but ultimately the hosting and the ownership will be with the SIG leads, to keep this moving forward as a way to keep you all engaged and to get the right topics and the right discussions happening. Big thumbs up from Joe. Anybody else have thoughts? Hosting the meeting is fun. Oh, Joe, you're so sweet; it really is, actually, I have a grand time doing it.
A: And there's work going on even as we speak (I just got a chat about it) to get every SIG a playlist inside the Kubernetes YouTube channel. So even if you're self-hosting your videos, you can send us a link and we can add it to the playlist; we will be centralizing where all of the SIG videos are going.
A: So you can still self-host; if that's easy enough for you, that's great. Otherwise we will be able to host them for you, upload them, and put them into the Kubernetes playlists, whether they are external playlists or internal playlists. So, progress; we're always working to try and make us better. As I said, it's a really fast-moving project.
A: So thank you all for continuing to offer suggestions, to offer improvements, and to work against those improvements, which is even more helpful, and for keeping up with us as we change things on the fly. This is a bus we are trying to fly, so let's make it go; we're putting the wings on as we go, to mix my metaphors. Anyway, any other topics? A quick general question from chat: someone just started using Helm and asks what the consensus on it is in Kubernetes. Quick take from the audience: is it good, or should they not bother? Helm is great, says chat.
A: Okay, doesn't sound like anyone objects. So: the special interest group that works on Helm is SIG Apps, and they have weekly meetings as well, and Jason says it's the best thing since sliced bread. On the whole, we do think that we are supporting the project, and it is beyond incubated; it is now in our repo, and we want to keep getting feedback and suggestions on it. Where documentation is lacking, let us know, or help us make it better.
E: I have one brief note. I also want to send a message to the SIG leads of Kubernetes regarding their roadmaps for 2017. As most of you may remember, we discussed this roadmap during the developer summit in Austin and the previous Kubernetes community meeting here, so I'd like to request the SIGs to start working on their roadmaps and sharing them. As far as I know, SIG Apps has posted one, and I have linked it here as a sample.
A: Next week's community meeting is actually going to be the 1.5 retrospective and the confirmation of the 1.6 dates, and the community meeting on the 29th is cancelled, because you should all be resting and relaxing and spending time with friends and family instead of with us; unless you count us as friends and family, in which case we could put on hats and do something, but I think you should probably go spend time and chill with the people that are closest. All right, anything else? Oh...