From YouTube: Kubernetes Community Meeting 20160414
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Demo of AppFormix e2e Testing on Azure, The Life of a Release Note, 1.3 blocking features report, Community Sites Efforts
A

Good morning, this is the Kubernetes community hangout. Today is April 14 and we are recording this event. As our demo today we have Travis Newhouse giving us an update, or a demo, of AppFormix and what they've been working on with e2e testing on Azure. We also have an update from David McMahon about the Kubernetes release cycle; he's our recently joined engineer who is in charge of release automation, and so he's going to talk about the life of a release note.
A

It looks like a decent-sized agenda, but without the long and wide-ranging discussions that we've had over the last few weeks; we'll take a break from those and pick them up again with the broader community effort proposal, as well as a few other proposals, in upcoming weeks. So let's get this started with Travis. Hi.
B

Today I want to talk about some work we're contributing to Kubernetes. The first thing we've focused on is expanding the end-to-end tests to the Azure cloud. First we had to take a look at the cloud provider scripts that existed; they were out of date and had recently been pulled out of the tree. So we have reimplemented those. They're based on SaltStack, which is the same model used right now by Google Compute Engine and AWS in the cluster directory.
B

We have had some discussion and are looking towards the future, which I think is going to be based on something called config manager; the plan is to move away from SaltStack. But in the short term we wanted to get something up and running quickly, and we do have a pull request out. It's gone through a few iterations of review.
B

Like I said, we wanted to get something started, and we really wanted to get the end-to-end tests running in some fashion so that we can evolve this federated testing process. So this was sort of a prerequisite: to have Azure be able to deploy at all. Again, that code is out for review; it's gone through a few iterations, and I think the latest is the question of how it should be implemented, if SaltStack is the way we want to go forward.
B

We are patching things in SaltStack until the time comes when those are merged in. We have 14 e2e scenarios that are passing right now, and in the last few weeks, since we've got it up and running, we have discovered two issues and fixed them. They're somewhat minor, but this is the kind of thing that we see as a benefit of having the test coverage. And so we are working with the SIG Testing group on evolving this federated testing process.
B
Right
now,
what
we
do
is
we
publish
results
to
Google
Cloud
Storage
bucket
we're
trying
to
figure
out
the
best
wave,
the
github
and
ultimately
present
them.
You
know
on
maybe
some
testing
dashboard
and
we
are
also
we're
still
working
through
getting
the
format
just
right
in
this
bucket
we're
trying
to
follow
the
same
format
that
I
think
the
google
jenkins
jobs
use
for
the
ete
runner,
and
so
that
that's
really
where
we're
at
with
the
CI.
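The per-build results layout the Jenkins e2e runner publishes can be sketched as below. This is a hedged illustration only: the `started.json`/`finished.json` file names follow the convention the Kubernetes test jobs used for their results buckets, and the `write_result` helper and its fields are assumptions, not the actual runner code.

```python
import json
import time
from pathlib import Path

def write_result(bucket_dir, build_number, passed):
    """Write e2e results for one build in a started.json/finished.json
    style layout, one directory per build number (illustrative sketch)."""
    build = Path(bucket_dir) / str(build_number)
    build.mkdir(parents=True, exist_ok=True)
    # Record when the run started and how it finished.
    (build / "started.json").write_text(
        json.dumps({"timestamp": int(time.time())}))
    (build / "finished.json").write_text(
        json.dumps({"result": "SUCCESS" if passed else "FAILURE"}))
    return build
```

A dashboard (or the PR munger) can then read any bucket that follows the same layout, which is the point of agreeing on a common format.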
B

We'll re-run these as needed on new releases. And then lastly I just want to share another contribution that we're kicking off, which is what we're calling Kube Health. The goal here is to provide a tool that can do real-time monitoring of the Kubernetes control plane, so that you can watch all of the different services and know if they're up and running and if they're having satisfactory performance: the time to launch a pod, is the API stable, is it up and responding?
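The kind of signal described here can be sketched as a simple timed HTTP probe. The API server does expose a `/healthz` endpoint, but the helper below is an illustrative assumption, not Kube Health's actual code:

```python
import time
import urllib.request

def probe(url, timeout=5.0):
    """Probe a control-plane health endpoint (e.g. the API server's
    /healthz) and report (is_up, response_latency_seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            healthy = (resp.status == 200)
    except OSError:
        # Connection refused, DNS failure, timeout: the service is down.
        healthy = False
    return healthy, time.monotonic() - start
```

Running this periodically against each control-plane component, and tracking the latency over time, gives the "up and responding, with satisfactory performance" signal the speaker describes.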
D

Quick question, this is Liam, Travis. Can you quickly talk about how that project, Kube Health, either builds on top of or sits next to Heapster and cAdvisor, sort of where they all have their place?
B

Sure. I think the idea is to provide a simple signal that says your services are up and running. cAdvisor is going to give you resource consumption of your various containers, but it's not necessarily going to tell you: what's the time it takes to perform an operation? Is my Kubernetes cluster still healthy? Can I spawn up new pods? Are the various services up and running on each of my nodes? Okay.
E

Hi everybody. I've been working on the release note bits, as you've seen, I guess. First I want to say we had been collecting release notes earlier with a simple release-note label, up until the 1.2.0 release. And since 1.2... oh, hold on, let me just make sure I'm sharing here, which it doesn't look like I am.
E

Okay, everybody's got that on the screen, or you can see that? Thank you, okay. So, as I was saying, everybody's been using the release note process up till 1.2, but I think around that time we decided we really wanted to come up with something a little bit better, because if you notice, the 1.2.0 release notes are quite a bit more detailed. So we wanted to capture that stuff earlier in the process.
E

So what's the first step of the life of a release note? Of course, we create a master PR. We've all seen this familiar page before; the important thing here is that the title is the release note, and that shouldn't be news to too many people. We're looking at ways to improve that; there's actually an open issue right now where we can capture more detail in the PR itself through the use of a template. But for now, the title is the release note.
E

One nice benefit of GitHub, unlike other source code control systems like Perforce, for example, is that these changes aren't immutable, so we can actually edit these things. I encourage people, when they do submit a PR, before or after: if it is a release note, make sure that your title is exactly what you want to communicate at release time.
E

One of the things that has come up, and we know this, is that reviewers need to manage these labels for now. That's an unfortunate side effect of GitHub ACL control. We're looking at options for this, and at the end of this presentation there are some issues that are open; we want to collect ideas and come up with some kind of resolution there, because we certainly would like everybody to be able to manage these labels.
E

So, moving on: you can remove the release-note-label-needed label, and then you're going to add one of the release note labels that we've got in place right now. Those are simply release-note, which lands your change in the "Other notable changes" section of the release notes at release time, and release-note-action-required, which lands it in the "Action required" section; those are things like flags and other disruptive changes that might happen.
E

You want to raise attention to those more than just an "Other notable change." And of course, if you're just making a little change that nobody needs to be told about, release-note-none is certainly a reasonable one to put. Release-note-label-needed will be pulled off automatically once one of these other ones is added, but you can also remove it by hand. We'll also add new labels as needed, as we get a better handle on how we want to structure the release notes.
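The label rules above can be restated as a small sketch. This is a hypothetical helper, not the actual munge bot code; the label names are the ones named in the talk:

```python
RELEASE_NOTE_LABELS = {
    "release-note",                  # lands in "Other notable changes"
    "release-note-action-required",  # lands in "Action required"
    "release-note-none",             # nothing to communicate
}

def update_labels(labels):
    """Drop release-note-label-needed once any real release-note label
    is present, mirroring the automatic behavior described above."""
    labels = set(labels)
    if labels & RELEASE_NOTE_LABELS:
        labels.discard("release-note-label-needed")
    return labels
```

So a PR keeps release-note-label-needed until a reviewer picks one of the three real labels, at which point the placeholder goes away on its own.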
E

So for this one we added the release-note-action-required label; I skipped by that too quickly. The next consideration is: do you want to cherry-pick this to a branch? That introduces the cherry-pick-candidate label. Now, all of this is happening on the master PR; we did remove all of the need for these things to be managed on the secondary, cherry-picked PR. So hopefully that simplifies things a bit.
E

You add a cherry-pick-candidate label and a milestone; both of those are needed, or the label will be removed, because otherwise we don't know where it's going to end up once you do cherry-pick something. Once you say you want a cherry-pick candidate on your PR, how do we handle things? Well, if it's just after the branch and it's for the .0 release, this is handled by the branch manager and/or Eric Paris.
E

They do a lot of this work behind the scenes, and you're pretty much done; they batch these things up and they're put to the branch as needed. If it is after the .0 release, then you do this through the manual process, which I hope you're all familiar with: the cherry-pick pull script. And I should say, if you run the cherry-pick pull script after the cherry-pick-approved label lands on the original PR, then everything is taken care of.
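The flow just described can be summarized in a sketch. The function and its return values are illustrative only, not a real Kubernetes tool:

```python
def route_cherry_pick(labels, has_milestone, dot_zero_released):
    """Decide how a cherry-pick request on a master PR is handled,
    per the process described above (illustrative sketch)."""
    if "cherry-pick-candidate" not in labels or not has_milestone:
        # Both the label and a milestone are required; otherwise the
        # label is removed, since we don't know which release it targets.
        return "label-removed"
    if not dot_zero_released:
        # Before the .0 release, the branch manager batches picks
        # onto the branch behind the scenes.
        return "batched-by-branch-manager"
    # After .0, you run the cherry-pick pull script yourself once the
    # cherry-pick-approved label lands on the original PR.
    return "manual-cherry-pick-script"
```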
E

You can monitor your cherry-pick status here; this is another Eric Paris creation. It shows what cherry picks are in process and what the status is. And then, of course, at release time the changelog is where your feature is going to end up, so this is where you'll actually see the result of all this. Again, you'll want to edit this, because it's going to have your name attached to it; the better the description, certainly the better for everybody.
E

So keep that in mind when writing PR titles. Here are some links and resources: the release notes; the original proposal, which went out a month or so ago, maybe a couple of months ago; and of course the cherry-picks document, which details a lot of these labels and how they're used throughout the process. And one last link there.
E

The Kubernetes release repo, the new repo I've been working to populate. There's an open pull request on it right now bootstrapping it with some tools, but there's nothing in it currently. If you do have any questions about this, hopefully some of these future enhancements will answer them; these are the open issues that hopefully cover some of the concerns that you have, specifically better ACLs over labels. And real quick:
E

If there are any inconsistencies in the docs or the bots, if you go through the process and find yourself asking "why does it work this way?", well, then it's probably broken, because the docs and the bots should sync up. And if they're not, please do file an issue so we can fix it.
A

The cherry-pick labels got me circularly routed in my head, yeah.
E

Well, so yeah, let me just back up here. (Don't crash... please don't crash.) Let's go back to... yeah, so there are the three release note types that are currently... here we go, everybody see that? Yeah. So release-note-none is the one for things that we don't want to communicate. But one of those labels is needed; release-note-label-needed is the one that is always going to be there until one of these other three is added. Got it?
F

Yeah, I'm here, can you hear me? Yes? Great. So as part of the 1.3 milestone, and making it better than 1.2, we agreed that we'd come back every couple weeks to get an overview of how the main blocking key features are going. This time around, I admit I was out sick the last two days, so I scrambled a little bit this morning to get status on a few of these that I didn't know.
F

So first off, for PetSet, there's a very long PR that's been out for maybe four months about the design; that's #18016. Basically the latest is that the design is mostly agreed to, so that's good. There was a lot of discussion back and forth there through the first few weeks of the milestone, and prototyping is starting, but there's no code submitted yet. So that's PetSet. For Ubernetes, there is a task list that was created as a GitHub tracking bug.
F

That's #23653, and it has a whole bunch of items in it that are targeted specifically to 1.3, so that's the actual scoping down into what will release in 1.3. I know that there's been a lot of discussion about Ubernetes at the SIG Federation meetings, but that's actually the extent of my knowledge; I didn't get a chance to catch up with Quinton or anyone else. So if anyone knows more details on the Ubernetes status, please do chime in.
F

Nope? Okay. And then for the last two. In terms of scalability, the key thing here is working on adding protobuf support, which looks to add 5 to 10x speedups in some of the API server paths. That design is approved and merged, and much of the code is merged, but not all of the code for the protobuf switchover.
F

The plan is to finish that up and enable it in testing in the next week or two, and then start to see how far that gets us towards our 1.3 scalability goals. There are also some optimizations in the scheduler, in how things are scaled up and also scaled down. And some of the folks working on scalability are also making sure that we work with a one-node or two-node cluster just as well as we do with thousands of nodes.
F

So that's scalability; basically we'll see where we are once the protobuf changes land. And then the last sort of headline item here was federated testing. There is a full design out, or rather there will be, for how to get testing from people outside of Google and how to aggregate all those results. They're planning on reviewing that design at the SIG Testing meeting coming up next week, so I urge everyone that's interested in that to definitely attend.
F

There's also been a lot of work at Google to move some of the Google-internal dashboards that we have to public App Engine apps, and to make sure that the places where we store test results are publicly accessible. You probably don't see that yet, but it is in flight and moving; no external awareness yet, though, still working on that. And then the last item is that you will see a couple of additions that the test team has put into the submit queue page.
F

Right now, if you go to the queue and look at PRs, it shows the number of lines of code added and deleted on a per-PR basis, which can be useful for merging. There's also an uptime figure on all of the critical builds, which are under that Google e2e internal tab, so that we can start to get a good idea of which builds are failing most consistently and causing the queue to back up. So there's a lot of work in tracking the data that the test team is producing.
G

Yeah, on the federated testing thing, I'll just say real quick, since maybe some people are aware and maybe some aren't: behind the scenes, the munger that rolls through GitHub and comments on every PR, whether or not builds passed, is now reading from Google Cloud Storage buckets instead of Jenkins. It's reading publicly accessible results. Hooray! That's a big step forward towards federated testing.
G

I want to give a huge amount of credit to Marek and Eric Paris and a bunch of the whole testing team for actually pushing that forward. A lot of stuff has been steadily marching forward, and I think it speaks very well of the progress here that this happened without really breaking the submit queue or anything.
G

Well, so I guess a quick question, since I'm pointing out how there are things that don't have the 1.3 label attached and I don't have write access: who's the appropriate person? Do I just find a Googler and bug them to get 1.3 labels, or is there a person to talk to to triage this stuff?
F

It's a great question. I would suggest you nudge the on-call person; anyone who's in the on-call rotation should have the ability to add and subtract labels, since they're helping bucket things into teams and SIGs and priorities.
A

As David mentioned, we're really looking both at automated ways to make labeling possible for people that aren't in the "core maintainers have access to everything" group, and at how to build a progression so that people can work toward and understand what expectations there are in moving up with permissions and responsibilities within the community.
F

Actually, I would like to talk about that for one minute. Heading into 1.3 we talked about needing to do a better job of tracking the list of features, and we said: cool, we'll put milestone 1.3 on them, we'll label them kind/feature, and then that will be the spanning set of, I don't know, a few dozen max features going into 1.3. I went to go do that and hit some trouble: there's not really a single GitHub issue that covers each feature.
F

There are some that are very long PRs that describe the design, and there are some that are task lists that are kind of like the feature but maybe extend beyond 1.3; they don't encapsulate just what we're doing in this milestone. So I think I have not done a great job in enforcing one GitHub issue per feature, bucketed into the milestone.
F

As Brian just alluded to, Eric Tune sent out a proposal, I think it was Monday, called the feature workflow proposal, where he had some ideas, like I said, about using a separate repo so that we could then open up label access and get a bunch of additional benefits. That is one reasonable solution to this, so I'd encourage anyone to take a look at that and comment on it. What do you guys think?
A

All right, well then, I will move on to SIG business. None of the SIGs have jumped up yet to say they want to say things, but I will have a couple of announcements, one for SIGs and one just generally. We're looking to do a few blog posts over a few weeks from the different special interest groups about what their charter is, what they're doing, and what they hope to contribute to and engage with in 1.3.
A

But I am looking for other volunteers. You can contact me and Bob Hrdinsky to figure out what the broader scope is for it, or if you have time to volunteer to write a post and want to get your name on the Kubernetes blog with the vision of the SIGs you're working on, that would be super awesome. You can make the blog post, we'll run it by the SIG broadly, people can review and comment on it, and then we can get it published.
C

So in SIG Scale we have been working on trying to assemble priorities around scalability and performance targets for 1.3. We had a pretty good discussion about it over the last couple of weeks, and there's a document that's in draft form; we're currently in the process of sorting and assigning, getting people to step up, so I think we have some more work to do. A couple of the Google folks, Clinton and Wojtek, are working on a more detailed performance target draft. So there's a bunch of work going on there.
C

It's kind of in flight. I think we're going to have another longer meeting on it next week, so maybe the week after I could give a longer update to the team here around where we stand. There are a few things on the list that, I'll say, feel very important, but we don't have anyone specifically stepping up to work on them; some of them are kind of small.
G

I can give a quick update on SIG Testing, but I think I already gave the biggest priority item around federated testing. Yeah, as TJ alluded to, we have a sort of roadmap issue that lays out the sketches for how we're going to accomplish federated testing; that's basically priority number one of this group, and what's already been implemented is basically, like I said, the pull request piece.
G

That gives us a full month of actually trying to use federated testing prior to cutting a release, which was sort of the end goal here. And as I said earlier, SIG Testing meets weekly, Tuesdays at ten-thirty Pacific; any and all are welcome. The agenda's open, folks are welcome to make suggestions, and we look forward to seeing you there.
G

I guess the other tiny thing that's happening here as part of federated testing: the ideal is to be able to say that a container is the unit of execution. So instead of a really intricate set of flags that are passed to an e2e test binary or a Ginkgo binary, you can just run a Docker container, and that will magically run the tests and put them where they're supposed to go, without depending too much on the internal Google Jenkins server.
A

Okay, all right, any other SIGs? Otherwise I think I'm going to pick on Mike Metral here in a second. Any other SIGs? So, Mike, I am going to pick on you: do you have a moment to talk about the community sites conversations that are going on? Sure? Awesome, thanks. You're not a SIG, but you are definitely a special interest group, and we could make you a SIG.
H

Sure. So I believe sometime early last week, a group of folks decided to try to chat about the various community sites that are popping up. Some of them include Kube Rocks, which is trying to be more like the Hacker News kind of vibe for Kubernetes, and CoreKube, which is aimed at being digestible but, I guess, a little bit more of a deep dive into certain topics; every little component has its own niche. But it's really about trying to figure out how we can all collaborate effectively, and how we can leverage as much of the information from each other that's out there.
H

It's really just about kind of co- and self-promoting each other, because there's such a wealth of information out there for so many vastly different people. So we welcome anyone who has any opinions as to how we should consolidate, whether we should consolidate, and specifically what each of us should be focusing on if, in fact, we should remain independent, because there's always room for lots of these different types of content sites. So this is brand new.
A

We were seeing a variety of people that were interested in writing public content and pulling together resources for the broader user community, and this is sort of the starting point of: should they consolidate, should they differentiate, what is the vision? It's basically about the community sites. And I picked on you because you showed up first as I was scrolling through the list, but we have Michael Hausenblas working on this; I'm not going to be able to think of everyone, I'm sorry.
A

There's the Setpoint Cloud team, which we heard from with a demo a couple of weeks ago, and I'm not sure of the name of the person who runs Kube Rocks. Anyway, there are a variety of different sites, and we're just trying to get this consolidated view of where they differentiate and how they can help each other. It's a really neat idea, and this all started very grassroots, so I'm super happy that you guys have an opportunity to do this.
A

We do, and we can make that more prominent. Right now it's a collection of how to engage with us, as opposed to promoting any particular resources, but we are talking about doing more promotion of different resources and such, including systems integrators and people who work in and around the Kubernetes ecosystem. Putting up these community resources would be a great solution too. And hey, because of all the work that John Dillon did, we can still say pull requests are welcome.
D

Yeah, I mean, pull requests have always been welcome to the docs, right? It's just a lot less hard now; it's differently hard. But okay.
A

All right, well, I have one last announcement, which is that the May 5th meeting will be in the Asia-friendly time zone, so 5 p.m. Pacific time, which is actually East Coast US unfriendly, and my apologies for that. It is also Europe unfriendly, but we're trying, about once a month, to include people who might be interested from Asia more directly.
A

So that meeting will happen off cycle, and if we get attendance and we see some traction with it, it will continue; if we don't get a lot of attendance or don't see a lot of traction with the Asia-friendly times, we may not continue to do them once a month. We will see, but we have to start once and see what happens. So that'll be May 5th, off its normal time zone, and I will continue to mention this for the rest of this month.
D

Get earlier feedback on things that are going in, whether it's additional documentation or testing or better upgrade support or whatever it is. Start a thread on the community's Google group; that's probably the best way. I can't monitor Slack all the time, unfortunately, but yeah, it'd be great to get some feedback on that.
J

So, hi everyone. I was wondering: we've been looking toward monitoring Kubernetes clusters, and I understand that there's an integration with InfluxDB that gathers several metrics about the cluster. But we're kind of thinking in terms of getting alerting, in terms of, say, if CPU goes above a given threshold, or memory, in any of the nodes.
J

You know, within the clusters. And I was wondering what the best practice is in terms of setting up alerting, to be aware of situations where the cluster is doing badly. We've also been integrating with Prometheus as a monitoring framework, and I wonder if there are maybe thoughts about marrying those technologies with Kubernetes.
D

That makes sense, yeah. So Heapster actually supports multiple different storage backends; I believe there's support for Hawkular and things other than just InfluxDB. Heapster's main responsibility is collecting the resource metrics from cAdvisor on all the nodes. I think it does make sense to potentially unify that with other monitoring; it's something we just haven't had time to do. We do actually use Prometheus.
D

Actually, I was just looking at a contrib example for users to use Prometheus to monitor their metrics. Prometheus has support for the Kubernetes service discovery mechanism to discover user applications, but we have been experimenting with using it to monitor Kubernetes itself as well. I'd actually like to see a...
J

I'm glad you're pointing that out, because I did look into it a few months back, and it was pretty hairy to get Prometheus to play nicely with the Kubernetes discovery system. So I didn't have very much success there, but I can take a look at it again and see.
D

Yeah, and I just took a quick look yesterday, coincidentally, and the Prometheus documentation does explain how to plug into Kubernetes discovery. I haven't tried it personally, but, like I said, I would love to extend the current example. Yeah, take a look in contrib; we agreed a few minutes ago to move that to where all our other examples are. Oh, perfect.
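For a sense of what marrying the two looks like at the data level: Kubernetes components expose their metrics in the Prometheus text exposition format, which a scraper reads line by line. Below is a minimal, illustrative parser for a tiny subset of that format (real deployments would use Prometheus itself or its client libraries, and the sample metric names here are just examples):

```python
def parse_metrics(text):
    """Parse a tiny subset of the Prometheus text exposition format:
    lines of '<name>[{labels}] <value>', skipping blank and comment lines.
    Labels containing whitespace are not handled by this sketch."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # '# HELP' and '# TYPE' metadata lines
        name, value = line.rsplit(None, 1)
        samples[name] = float(value)
    return samples
```

Alerting of the kind the questioner describes (CPU or memory above a threshold) is then a rule over these scraped samples, which is exactly what Prometheus's rule engine provides.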
A

Prometheus is one of the projects being considered for the Cloud Native Computing Foundation, which is where the Kubernetes project lives; it's being considered as a project that we would like to see as part of it. So there may be future work in that direction, and as people work with Prometheus and Kubernetes, document it, and have experiences, they can give feedback on both sides. Then we will see more ability to make them better and closer. Perfect.
D

Okay, yeah, a monitoring SIG or something like that. We have a number of examples: Datadog, New Relic, Sysdig; there are a couple of others that I'm blanking on. But I would like to make it more obvious to users what their options are in terms of monitoring solutions that have been integrated with Kubernetes, and what the different target use cases are for each of the tools, because they are a little bit overlapping, but many of them are actually quite different. Is there...