From YouTube: Kubernetes Community Meeting 20170629
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
DEMO: k8guard
Releases covered: 1.7, 1.6
SIG Updates from: SIG Node, SIG Docs, and SIG Autoscaling
A: Welcome to the Kubernetes community meeting. Today is Thursday, June 29, and as always we record and publish this on YouTube. But for more excitement, today it's also being live streamed, so if you've always been hoping to be a YouTube star like Kris is hoping to be a YouTube star, she's getting you two ice creams today. So we're going to kick off our agenda, and this is Kris Nova, our moderator. Hey!
C: We wanted an automated way of suppressing the violations in our company clusters. And by the way, the name of K8Guard is not pronounced "Kate's Cart"; it's K8Guard, as in Kate, the guardian angel of Kubernetes. That can be confusing, sorry about that. So the kinds of things K8Guard can identify and take action on are violations like image size.
C: We had a few cases where people had a star in their ingress, and that basically routes all the traffic from that cluster to that specific pod, which you never want. Or if somebody mounts a Kubernetes host volume into their container, that's also a big no. Another violation: we have an annotation that says who the owner of a namespace is, and that annotation holds an email or chat IDs. If a namespace doesn't have that annotation, we can detect that too, and you can configure it.
C: All of the violations I mentioned are configurable. You can say what the approved image size is, what the approved annotations are, and what the rules for ingress are. You can ignore any of them, or you can say: I only want these types of violations to be activated. When we first started developing this, we started with the most simple, basic thing: we just wanted something to discover all the bad things.
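The configurable checks described above can be sketched in a few lines. This is an illustrative sketch only; the rule names, config keys, and thresholds below are invented and are not K8Guard's actual configuration.

```python
# Hypothetical sketch of K8Guard-style configurable violation checks.
# All keys, names, and limits here are made up for illustration.

CONFIG = {
    "max_image_size_mb": 1000,               # approved image size
    "required_annotations": ["team/owner"],  # namespace owner annotation
    "ignored_violations": set(),             # e.g. {"IMAGE_SIZE"} disables a rule
}

def find_violations(resource, config=CONFIG):
    """Return the list of violation types a resource triggers."""
    violations = []
    if resource.get("image_size_mb", 0) > config["max_image_size_mb"]:
        violations.append("IMAGE_SIZE")
    for key in config["required_annotations"]:
        if key not in resource.get("annotations", {}):
            violations.append("MISSING_ANNOTATION")
    # A wildcard host in an ingress would route all cluster traffic
    # to one pod, so flag it.
    if "*" in resource.get("ingress_hosts", []):
        violations.append("WILDCARD_INGRESS")
    if resource.get("mounts_host_volume"):
        violations.append("HOST_VOLUME_MOUNT")
    return [v for v in violations if v not in config["ignored_violations"]]
```

Ignoring a rule is then just a matter of adding its name to `ignored_violations`, which matches the "only activate these types" behavior described in the talk.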
C: We didn't want to take action on them in the same microservice, because we wanted to develop and test that part separately. So we came up with two microservices, Discover and Action, where one discovers the bad things and the other takes action on them. Later we also came up with a third microservice, called Report, that generates human-readable reports out of all the violations and actions that happen in our clusters. So this is the overall design.
C: So we have three microservices: Discover, Action, and Report. Report reads off of a database (Cassandra) and generates a UI for you. Action takes action on the violations: that could be scaling a deployment down to zero, suspending it if it's a job or a cron job in Kubernetes, or deleting a pod.
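The per-kind actions described above amount to a small dispatch table. A minimal sketch, with kind and action names assumed for illustration rather than taken from K8Guard's code:

```python
# Illustrative dispatch for the Action service's behavior described above:
# scale deployments to zero, suspend jobs and cron jobs, delete pods.
# Names are assumptions, not K8Guard's actual code.

def action_for(kind):
    """Map a violating resource kind to the action taken on it."""
    actions = {
        "Deployment": "scale-to-zero",
        "Job": "suspend",
        "CronJob": "suspend",
        "Pod": "delete",
    }
    # Unknown kinds fall back to warning only rather than acting.
    return actions.get(kind, "notify-only")
```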
C: I can show you an example; this is an example of the chat. I have masked all the names here because it's public. This is our chat application, and we tag the people who committed the violation and tell them: this is your warning count, this is the source of the violation, this is the type of violation, don't do it. If it's the last warning, it's in red, because after your last warning we're deleting your pods. And we also send them an email.
C: We shame them in email anyway. So, back to the design: the Discover part also has an API that you can consume and build integrations on top of. Here I have a Minikube version of it. This is an API response: if you hit, say, the deployments endpoint, it gives you all the bad deploys. This API is provided so you can build on top of K8Guard. And this is the Report; it's human-readable and human-searchable.
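Building on top of the Discover API mostly means walking its JSON. A sketch of tallying violations from a response, assuming an invented response shape (the real endpoint's schema may differ):

```python
import json

# Sketch of consuming a Discover-style JSON response: a list of bad
# deployments, each with its violations. The shape is invented here
# for illustration; check the real endpoint's schema before relying on it.

sample_response = json.dumps([
    {"name": "frontend", "namespace": "shop",
     "violations": [{"type": "IMAGE_SIZE"}, {"type": "WILDCARD_INGRESS"}]},
    {"name": "worker", "namespace": "batch",
     "violations": [{"type": "MISSING_ANNOTATION"}]},
])

def violation_counts(raw):
    """Tally violation types across all bad deployments."""
    counts = {}
    for deploy in json.loads(raw):
        for v in deploy["violations"]:
            counts[v["type"]] = counts.get(v["type"], 0) + 1
    return counts
```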
C: We also collect metrics. I'm running out of time, so I'm trying to squeeze in as much information as I can. We collect two kinds of metrics: violation metrics and performance metrics. If you go to the API, the /metrics endpoint gives you more than 30 metrics you can collect. One example would be the percentage of bad deployments over time, or the percentage of bad images over time.
C: These metrics are provided on the /metrics endpoint and can be scraped by something like Prometheus, and you can generate Grafana dashboards, or any kind of dashboards, from them. The other kind of metrics we collect is performance metrics. While we make all these API calls to the Kubernetes API, to see if something is good or bad, or to describe an image, or just to describe a pod, we also record how long it took for each API call to come back. So over time you have a good idea of how slow your Kubernetes API is.
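That performance-metrics idea, recording the latency of every Kubernetes API call, can be sketched with a small wrapper. The wrapper and the stand-in describe-pod function below are illustrative, not K8Guard internals:

```python
import time

# Sketch: wrap each API call and record how long it took, so latency
# can be tracked over time. `latencies` and `timed` are illustrative.

latencies = {}

def timed(name, fn, *args, **kwargs):
    """Run fn, recording its wall-clock duration under `name`."""
    start = time.perf_counter()
    try:
        return fn(*args, **kwargs)
    finally:
        latencies.setdefault(name, []).append(time.perf_counter() - start)

# A stand-in for "describe pod" against the API server (hypothetical).
def fake_describe_pod(pod):
    return {"name": pod, "status": "Running"}

result = timed("describe_pod", fake_describe_pod, "web-1")
```

Exported to a /metrics endpoint, those recorded durations are what would feed the "how slow is your Kubernetes API over time" dashboards mentioned above.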
C: We open-sourced K8Guard under the Apache 2 license, and we have a website for it. I think it's one of the most developer-friendly projects out there, because you can easily try it in Minikube in a few steps: if you follow the docs, it builds all the microservices and deploys everything, including Kafka, Cassandra, and Memcached, into a Minikube. I actually have it on my computer; I just built everything and deployed everything in a few minutes, less than a minute, I think. So I encourage you.
C: If you want to give it a try, go to k8guard.github.io and click on "Try it". It will guide you through how to build the project for either Minikube or docker-compose, and it will also give you all the documentation you need about what kinds of violations K8Guard is capable of detecting. I think I've run out of my ten minutes.
C: So, these three microservices: if you go to GitHub, to github.com/k8guard, you'll see all the repos. But don't let it scare you that there are so many repos and you don't know what to do; just start from the main k8guard repo, and we have a Makefile that will guide you through all the commands you need. We encourage contributing back to K8Guard; this is our first release.
A: Looking forward to checking it out; it looks really cool. Thank you.

C: One thing I'd like to mention: currently the K8Guard Discover service has two modes, an API mode and a cron-job mode. The API mode, as you saw, returns a JSON response. But you also have a messaging mode, which currently runs as a cron job: every 30 minutes it goes through, discovers all the bad things, and puts them into a Kafka topic, and the Action service consumes that.
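The Discover-to-Action handoff can be sketched with an in-memory queue standing in for the Kafka topic: Discover publishes a message per violation, and Action consumes them later, so the two services stay decoupled. The message fields and the scale-to-zero action here are assumptions for illustration:

```python
from collections import deque

# Stand-in for the Kafka violations topic described above.
topic = deque()

def discover(resources):
    """Cron-style pass: publish one message per violating resource."""
    for r in resources:
        if r.get("bad"):
            topic.append({"name": r["name"], "violation": r["why"]})

def action():
    """Consume pending messages and return the actions taken."""
    taken = []
    while topic:
        msg = topic.popleft()
        taken.append(("scale-to-zero", msg["name"]))
    return taken

discover([{"name": "web", "bad": True, "why": "IMAGE_SIZE"},
          {"name": "db", "bad": False}])
actions_taken = action()
```

With a real broker the queue would be a Kafka topic, which is what lets other integrations consume the same messages independently.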
C: That would then be real-time: as soon as there's a violation, it would immediately send it to the same Kafka topic that Action is consuming. I just wanted to make clear that this is not real-time currently, but we could easily add a real-time integration on top of it. Another thing that I would like to mention...
C: Another thing: we have designed it so the violations are very extensible. If you come up with your own violation, you can easily add it through the code. Currently it's not pluggable; hopefully in the future it will be pluggable. We accept PRs, by the way; we're very easy about accepting PRs, and we'll be happy to review them and merge them back into the main repos.
C: The technology we're using: Golang for the development, plus Kafka, Cassandra, Memcached, and Prometheus. And another thing I'd like to mention: you do not have to have all of this. If you want to deploy this, you can basically run only Discover; you don't have to have Report or Action. You can have only the Discover API that shows you the bad things, and you don't need Cassandra if you don't need Report or Action.
C: You only need everything if you deploy all of the microservices. If, for example, you only want to know about bad things, you can just deploy the Discover API, and it doesn't need anything like Cassandra or Kafka, if you are scared of those things. And by the way, for all of this I have an example in Minikube: if you want to just deploy the Minikube version, you get Cassandra, Kafka, and everything included in it.
C: We wanted something that was highly available, and for the way we wanted to scale out the Action service, Kafka was the best fit, because we want the same message to be able to be consumed by different clients. Say somebody wants to build an integration that also consumes the same messages we discover; they can build their own integration. Kafka makes that easy because it accepts different client IDs, as opposed to something like RabbitMQ.

B: Thank you, yeah. That's a great use of Kafka.
B: Really awesome, thank you. Okay, let's go ahead and move on if nobody else has any more questions. Thank you again; that was a wonderful demo. It looks like next on the agenda is release updates, and I'm not seeing anything for 1.8, so I think we should jump into 1.7, and it looks like we have Dawn Chen, the release manager, with some updates.
D: So this is boring this time. Basically, we cut another release candidate this morning and hope this is the one. We have been waiting for the final test results: last night we cut one release candidate, but a PR was missing from it, so tonight we had to cut another one. We also still have the end-user documentation needed here, and maybe [inaudible], yeah.
D: We also have the SIG leads contributing to the release notes, and right now each team is at the final stage, polishing the release notes to make the language more user-friendly. So we're pretty much there; everything's ready and waiting for the last CI signal, and then we'll just make the call. It's awesome.
D: For SIG Node in this release, we enhanced the CRI, the Container Runtime Interface, and we introduced container metrics to the interface at the same time, and in the community the container runtime integrations made more progress. So in 1.7 we have the CRI integration.
D: It has [inaudible] CPU support, and [inaudible] in the image handling, and we also enhanced the user-space eviction management for the node, which makes the node more robust when resource starvation issues happen. And we also, give me one second...
D: We also added more node-related security enhancements, for example limiting the kubelet's access to secrets and other objects based on its node. Those are the security enhancements on the node side. We also have more runtime support in the image handling, and it fixes a lot of issues in our system. Just one sec, sorry.
D: We also added support for the shared PID namespace, which is another active development area for the protocol, and [inaudible] we added a lot of support into this release, to make the node components more extensible and pluggable. So in this release we actually [inaudible].
D: For the next release, we talked about how we are going to [inaudible] and continue to enhance the CRI, and we talked about how we are going to finish the items on the resource management side for the node. Those items can be found in [inaudible], along with what is going to be executed.
D: So we have the work plan. For example, we're going to add CPU core allocation on the node to boost some of the use cases for high-performance clusters. Also, in Q3 we are going to add more to the node problem detector, and also the remedy system, and we kicked off [meetings] to discuss and design those.
G: Documentation was basically a big flat list under user guides, and the actual topics themselves were more or less freeform, without much in the way of structure or guidance or standardization. So the thing that SIG Docs has been primarily concerned with for the past year is introducing that standardization: creating a taxonomy of the different types of docs that we want for Kubernetes, the ones that are best for the project, and then migrating the existing flat list of unorganized topics into that structure.
G: We want to increase our involvement in the other SIGs in some way or another, because right now the interface between the other SIGs and SIG Docs is when some member of, say, SIG Networking or SIG Storage sends a pull request against the docs repo, and all the negotiation and guidance about whether or not the doc fits, and the overarching strategy, happens at the pull-request level. I would prefer SIG Docs to be involved earlier, or vice versa.
G: This would require a large investment of resources into SIG Docs that we're currently not staffed for. The other way would be to do it in reverse, which would be to designate members of other SIGs to liaise with SIG Docs. However we decide to do this, if indeed we decide this is the best way to serve the project's documentation needs, I feel like we need to open this channel of communication.
G: The thing is, though, that most contributors to SIG Docs can usually only give us a couple of hours a week, and for somebody from SIG Docs to be a liaison to another SIG is going to be a much larger job than that. So if we can get contributors to SIG Docs who are willing to give us some more time, that is a way we can use them. Does that make sense?
G: We also want to come up with a better, more aggressive timeline for when documentation-related information flows from engineering SIGs to SIG Docs, because right now it flows at exactly one point, as a large pull request to the docs repo. We need to set some standards about what happens if you're developing a feature for a release.
G: Ideally, in a completely ideal world, I would see SIG Docs staffed with a whole bunch of technical writers from across the community, and they would actually be generating the documentation. But SIG Docs has four full-time technical writers at best, which is definitely not enough to cover a product with the massive surface area that Kubernetes has. So what few resources and full-time technical writers we do have are primarily concerned with organization and curation, and the actual content generation comes from the SIG engineers themselves, which isn't the best for professional-quality documentation.
G: Mm-hmm. I don't even necessarily think we need to have draft documentation early, but what we do have to have is some kind of, almost like a user-impact PRD for every feature: just write up a little piece of documentation that says, this is how this feature is going to impact the user; this is what the user is going to...
I: There are some comments in the chat about requiring documentation for release before features get merged. That's really, really hard right now, because features are usually sprinkled across many PRs, and we don't really have a good way of blocking them or alerting on them after whatever time we decide. We're going to be experimenting with feature branches as a solution, so we can actually make go/no-go decisions, not just based on documentation, but also on whether features are ready, whether they have adequate testing, and so on.
G: It's been a long-standing ask to have documentation versions for previous versions of Kubernetes, apart from the latest version. In Q3 we are migrating the production web server for kubernetes.io from GitHub Pages to Netlify, which will let us host branches of the documentation apart from master. This means we can host the branches for previous releases of Kubernetes, such as 1.6, 1.5, etc.
G: We do not maintain previous versions of documentation, or actively make pull requests and bug fixes against, say, the 1.6 branch or the 1.7 branch if 1.8 is the current version; once a release is finalized, that's what we've got. SIG Docs probably doesn't have the resources to maintain four or five versions of documentation at the same time, considering how much we're stretched with one version, but it's better than what we have now, and I think it will be pretty serviceable.
E: We're changing the meeting cadence for SIG Docs from one-hour biweeklies to weekly half-hour meetings. Next week, since it's a holiday, we're not meeting, but then the following week will be one hour, and after that we'll start going to the half-hour format. And then, related to that: there seems to be interest from contributors in the community in helping write the docs, so I've been talking to some people at Heptio.
J: I'll cover what went into this release and anything we're doing for the next release. What went into this current release was a bunch of improvements to Cluster Autoscaler, probably the biggest of which was improved support for heterogeneous deployments, in which Cluster Autoscaler, instead of randomly choosing a node type to scale up, will actually figure out what it needs based on the pods and then use an internal pricing model to figure out, all right...
J: ...what's the minimum size that's needed to support them. We also landed an improvement for the Horizontal Pod Autoscaler called status conditions. These are like the status conditions on pods and nodes; they indicate the current state of the Horizontal Pod Autoscaler in terms of things that could potentially be causing it not to behave like you would expect: everything from "hey, we can't scale because we're having a problem connecting to the metrics API" to "hey, we would..."
J: ..."we would be scaling you up by 30 replicas, but you've hit the cap of 10 that you put in your Horizontal Pod Autoscaler, so maybe check that out and increase your cap." Hopefully this will make it easier for people to see the things that are blocking or currently affecting their horizontal pod autoscalers, which is something that people have struggled with in the past.
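Reading those status conditions can be sketched as a small helper. The condition types mirror the ones the HPA exposes (AbleToScale, ScalingLimited); the helper and the sample status are illustrative, not the actual controller code:

```python
# Sketch of surfacing HPA-style status conditions to explain why scaling
# is not behaving as expected. The helper and sample data are invented.

def scaling_problems(status):
    """Return human-readable reasons scaling may be blocked or capped."""
    problems = []
    for cond in status.get("conditions", []):
        if cond["type"] == "AbleToScale" and cond["status"] == "False":
            problems.append("cannot scale: " + cond["reason"])
        if cond["type"] == "ScalingLimited" and cond["status"] == "True":
            problems.append("capped: " + cond["reason"])
    return problems

# Example status: able to scale, but capped at the configured maximum.
sample_status = {"conditions": [
    {"type": "AbleToScale", "status": "True", "reason": "ReadyForNewScale"},
    {"type": "ScalingLimited", "status": "True", "reason": "TooManyReplicas"},
]}
```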
J: Those are the two major improvements. Looking towards the future, we'd like to try to stabilize, or move to beta rather, Horizontal Pod Autoscaler v2, with its support for status conditions and custom metrics. Work is also ongoing on the initial concepts for vertical pod autoscaling.
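The node-type selection described earlier, fit the pending pods and then pick by price, can be sketched as a toy. The node types, capacities, and prices below are made up, and the real autoscaler's simulation is far more involved:

```python
# Toy version of "figure out what it needs based on the pods, then use a
# pricing model": among node types whose capacity fits the pending pod's
# requests, pick the cheapest. All data here is invented.

NODE_TYPES = [
    {"name": "small",   "cpu": 2, "mem_gb": 8,  "price": 0.10},
    {"name": "large",   "cpu": 8, "mem_gb": 32, "price": 0.38},
    {"name": "highmem", "cpu": 4, "mem_gb": 64, "price": 0.50},
]

def pick_node_type(pod_cpu, pod_mem_gb, node_types=NODE_TYPES):
    """Cheapest node type whose capacity fits the pending pod, or None."""
    fitting = [n for n in node_types
               if n["cpu"] >= pod_cpu and n["mem_gb"] >= pod_mem_gb]
    if not fitting:
        return None
    return min(fitting, key=lambda n: n["price"])["name"]
```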
H: Okay. So about two weeks ago I sent out an email asking for some clarification on triage guidelines for issues that have the needs-sig label, which corresponds to issues that have no SIG owning them. We're down to one, from 1100 about two weeks ago. So I just wanted to call out huge congratulations to the community, and to thank everybody who has put in the time to triage this.
H: This is from Socko; it's slightly out of date, and I'm not sure why. It shows, for example, that there are 39 open issues that don't have a SIG, but it will give you an idea of roughly where the work is distributed. So, perhaps surprisingly, perhaps not: SIG Node, followed by SIG API Machinery, followed by SIG CLI, seems to have the lion's share of the issues. I'm sure this isn't actually representative of where these things belong, but this was the result of a lot of people taking their best guess.
H: One of the really obvious ones, which I asked the mailing list about and we think we will have an answer for, is: what do we do with issues that are GKE-specific? Well, we think we ought to have a SIG GCP, and eventually that will exist, but in the meantime we have a set of labels that belong to that. Some of the other stuff was hairier.
H: There are a lot of things that live in the controller manager that don't seem like they apply to any one specific SIG. Some of them were workload-specific, and I think workload management maybe falls under SIG Apps, but there were a couple of other things, like the node controllers. Well, that's got the word "node" in it: is that owned by SIG Node, or is that about the lifecycle of the cluster, etc.?
H: I think these are questions that we're going to require SIGs to help answer as they go through and look at the issues that have been assigned to them, and things we could probably escalate to the steering committee if and when that moves forward. Like I said, I plan on sending out an email; we could start a thread on kubernetes-dev about this. Or there was a...
H: There was a doc where we were going to try and codify what guidelines we followed when we were triaging, including what things we want to do and what the priorities are. And real quick, before you jump in, another thing that came up yesterday, and I believe in email, that has been talked about: some users come to this community with a very component-centric view, not a very SIG-centric view, so it's clear to them that they have a problem with kube-proxy.
I: Historically, that's where things have landed, but you know, I think most of them are at least obvious to someone. Were there comments on that? What I envisioned doing is a couple of things. One is: we've talked about putting SIG ownership into the OWNERS files in the tree, so that would be one mechanism by which, for kube-proxy particularly, we can make it clear.
D: [inaudible] To me, each SIG knows what it cares about, and kube-proxy is the example: it should have at least a set of integration and functional tests that others can enhance, so the SIGs can get a better signal.
D: It is majority-owned by SIG Network, but certain [tests] can't provide a clean signal because they're not stable enough. [inaudible] When a particular test fails, we could trigger those kinds of things, apply a label, and [inaudible] for kube-proxy.
D: So if that fails, then we can say, ultimately, whether the problem is in kube-proxy or in the infrastructure, since part of it is owned by SIG Node and part by SIG Network, so both teams need to connect and exchange information. [inaudible] So people can triage based on that, even if it's a little bit of a negative signal. [inaudible]
D: We talked about this at one point, but what I'm saying is that it is often not specific to one component; kube-proxy is an example. [inaudible] It is an integration problem, not a problem in kube-proxy itself, and the gap here is that kube-proxy issues cut across components.
Think
once
know
the
problem,
and
so
we
we
quickly
the
most
in
the
past
will
be
be
the
most
of
the
tragic
and
thank
you
would
find
the
problem
then
hand
over
blackboard,
so
the
any
of
those
integration
problem.
We
need
a
clear
and
others
we
need
also
the
remote
has
for
those
companies
make
most
premium
smokier.
But
what's
that
campaigns
we
need
at
the
present
time
and
it
had
the
true
group
of
people
working
together
inside
the
negative
blog.
The
progress.
H: One other great learning I had as a result of this is around metrics, or measurements. In the email I sent out, I said how many issues were open with needs-sig about two weeks ago, but there's no graph I could show you that shows things over time. I really wish I could; I'm sure it would look pretty awesome, because anecdotally I saw that number dropping by like a hundred a day. And I don't actually know everybody who helped triage it down, because some people could use /sig commands...
H: ...and other people could apply a label manually, and thus far I haven't found a consistent query that I can run against the GitHub archive data. So this is kind of a cry for help, if anybody knows how to do really good, interesting analytics stuff like this. We were talking about the fact that there is an event coming up within Google, and specifically about what cool things we could do during that contributor time, and in my opinion this is one of the chopping-wood-and-carrying-water things we can do during that time.
H: That can also be fun once we sort out what goals we want to accomplish and how we could measure that definition of success, because I feel like this assignment of things to SIGs will help us for the better. I saw the open issue count go from about 4400 to about 4200 in the time we've done that, so I know that closing some of the really stale issues that have been open for a year has had some effect, but it would be great to really define...
H: We should define what we think success looks like, measure that, go do the policy change, and then see if we had the effect we wanted. Because it feels like, a number of times in attempting to steer this project, we're kind of flying blind: you think it's doing things for the better, but it's really tough to produce graphs that say...
H: ..."yes, it went the direction we thought it would go." The only thing I know how to do at the moment is go play around with BigQuery and the GitHub archive data of events that are generated on a day-by-day basis, but maybe there's more useful stuff out there, and I would certainly love to hear about it. This is a contributor-experience thing.
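The analysis wished for above, triage progress over time, could start from GitHub-archive-style label events. A sketch with invented sample records (the real archive schema has many more fields):

```python
from collections import Counter

# Sketch: given GitHub-archive-style issue events, count how many times
# the needs-sig label was removed per day, to plot triage progress over
# time. The event records are invented samples for illustration.

events = [
    {"type": "unlabeled", "label": "needs-sig", "created_at": "2017-06-14"},
    {"type": "unlabeled", "label": "needs-sig", "created_at": "2017-06-14"},
    {"type": "unlabeled", "label": "triage/support", "created_at": "2017-06-14"},
    {"type": "unlabeled", "label": "needs-sig", "created_at": "2017-06-15"},
]

def triage_per_day(events, label="needs-sig"):
    """Daily counts of removals of the given label."""
    return Counter(e["created_at"] for e in events
                   if e["type"] == "unlabeled" and e["label"] == label)
```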
H: Do you have any news to share about that? I mean, at the moment, as a consumer of that, I can sort of go log in and click on various panels. That breakdown of issues by SIG is really the only thing I have personally found useful for getting feedback, but the staleness of the data has made it difficult for me to rely on it, and I am as yet unaware of how to explore it or add my own charts or graphs, that sort of thing.
A: A big thank you to George, who has been piloting an attempt to livestream this on YouTube today. We had 15 viewers, three of whom were commenting while we were having this discussion, so we will publicize this more widely and see what we can do about having the live stream, as well as the Zoom connection, as well as having it available as a recording for all posterity and all your friends to watch. So thank you; thank you, George, for doing that and taking it on as a project.
B: You're welcome. And then I have an announcement, and this is actually a call to arms too: I will be stepping down from the organizer position of SIG AWS, and that will leave us with two organizers. So if anybody knows anyone who is looking for a wonderful opportunity to step up, join the SIG, get involved, and possibly help out, that would be great. Please point them my way, or to the rest of the organizers, and we can get them on board and ramped up.