From YouTube: Kubernetes Community Meeting 20170504
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Demo - Improving etcd operations with Kubernetes; Releases; SIG Service Catalog; SIG Federation; Improve test maintenance; Code of Conduct update.
All right, cool. Thank you, Sarah. All right, hi everybody! This is Doug Davis running today's May Fourth edition of the Kubernetes community call. The call is being recorded, so obviously, if you don't like that, you're free to drop off. Okay, I'll try to take notes during your talk, Brendan. Sorry, Brendan, I always mix that up before we start talking. All right! So, first up on the agenda we actually have Brendan giving a demo. Is that your screen we're seeing there? Okay, yeah.
So what I want to walk through is some work that we've been doing with a piece of software called the etcd operator, and how we've used it to run etcd clusters on top of Kubernetes. Some of this is pretty straightforward: Kubernetes makes scaling stateless applications easy. You say, "I want more copies of my pod running," and Kubernetes kind of just makes it happen. But running more complex applications, like a database, is a lot harder.

It's easy to create a single database: I can take a StatefulSet and run, say, MySQL on it. But it's quite a bit harder to run a distributed database, where you have all these concerns around resizing the cluster, upgrading, configuration, backup, and healing. You can run a fairly static distributed database inside StatefulSets, and it works pretty well, but you quickly start to need some lifecycle management.

If you think through the analogy of running WordPress with MySQL on Kubernetes: etcd is Kubernetes' primary data store, so we have to make sure that there's a system in place that is able to take really good care of etcd. And our whole idea is that there's no better system for taking care of distributed systems than Kubernetes. So let's use Kubernetes to manage the etcd cluster that is relied upon by Kubernetes.
So we've created a resource, an etcd cluster third-party resource, and you're able to specify: hey, I want an etcd cluster of this size, with this version, with these backup policies, and so on. And this piece of software called the etcd operator goes through and ensures that whatever you specified actually happens. So if you've specified that you want 3.1 to be the etcd version, and one of the members is on 3.0.9...
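As a point of reference, the declarative spec being described might look roughly like the following manifest. This is a sketch modeled on the etcd operator's published examples of the time; the exact apiVersion, kind, and field names varied across operator versions, so treat them as illustrative rather than authoritative:

```yaml
# Hypothetical etcd cluster third-party resource: declare the desired
# state, and the operator reconciles the running cluster to match it.
apiVersion: etcd.coreos.com/v1beta1
kind: Cluster
metadata:
  name: example-etcd-cluster
spec:
  size: 3              # desired number of etcd members
  version: "3.1.0"     # desired etcd version; older members get upgraded
  backup:              # illustrative backup policy fields
    backupIntervalInSecond: 1800
    maxBackups: 5
```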
...the operator will actually upgrade the cluster safely. You can find it on GitHub if you're interested, and it makes it really easy to run etcd clusters on top of Kubernetes. Now, when I originally gave this talk at KubeCon a long time ago, one of our hopes was to use it to run the high-quality etcd cluster backing the Kubernetes API server itself, using Kubernetes. I'm happy to say that this now works very nicely and we have it all running, so I wanted to show you the update.
The nice thing here is that, because we have all the health checking and everything else in Kubernetes to rely upon, we can go look at things node by node. Once we go to the node list, the node actually ends up in a NotReady state, because it's being shut down and destroyed. Then, when we introspect the etcd cluster, the pod on that node disappears along with the member that was running on it; the operator actually removes it from the cluster. So now we're in a degraded state, and the etcd operator has scheduled a new pod.

But that pod is unschedulable, because we don't have a new node labeled as a master node available yet. Once the AWS auto-scaling group brings up a new node, and it flips from NotReady over to Ready, Kubernetes is able to schedule the pod; the pod starts its etcd container, joins the cluster, and the etcd cluster is healthy once again. So the big advantage here is essentially that we're able to use Kubernetes itself to drive all of this recovery.
That's the demo. I had to record it, because it's painfully boring to watch AWS tear down a machine and stand a new one up; we would have been here for ten minutes just watching spinners. So I edited it down; I recorded it this morning on a live cluster.

One of the natural questions is: whoa, how safe is this, and what states can we end up in, compared to just running etcd statically on your clusters? The bedrock the whole system is founded on is this: etcd trusts your filesystem and the fact that files land on disk, and we trust the kubelet rescheduling static pods. If we can trust those, we can trust the system to recover. If we can't trust the filesystem, we're kind of in trouble anyway, so I think it's a safe bet that Linux filesystems will continue to do their work correctly.
The way this whole cluster was set up is using a piece of software we've built called the Tectonic Installer, which installs plain upstream Kubernetes across whatever platform you want. It's Terraform-based, and you can find it as tectonic-installer. It uses components from the cluster-lifecycle work (bootkube) to actually deploy the cluster, and we've integrated the etcd operator as an option inside of bootkube; Terraform then stands up the AWS infrastructure, including the self-hosted cluster.

There have also been discussions, which we presented this week and the week previous, about kubeadm potentially using the etcd operator in the future. There are mailing list posts about the etcd operator, and there's an ongoing conversation on the SIG Cluster Lifecycle mailing list if you're interested. That's all I've got; I probably have time for about one question.
All right, so here's the status for 1.7 release management. First, some updates: we finally filled our CI signal lead role; Chong from CoreOS volunteered to take it, and now we have the entire team built out. I'm looking forward to working with everyone, and hopefully we can improve our release process and experience.

Yesterday we had the first SIG Release 1.7 team kickoff meeting, and we reviewed the 1.6 release: the 1.6 retrospective document and what the community wishlist looks like. We have not yet locked down all the work items, but from my perspective, personally, as the lead there are two problems I want to help solve. First, I want the group to have a single place that helps the release team determine how healthy the release is...
...no matter whether it's an alpha release or a beta release. We want an easy way to figure out the release signal: an aggregator of all the signal from the flaky tests, from each of the SIG groups, from the community, and from issues, all those kinds of things. We've talked to an internal Google team that may help us generate more signal based on the existing tooling.

The second problem is that each SIG contributes certain features and ideas, and the release team doesn't always know about them, so we get surprised sometimes: a PR may have no issue associated with it, or there is an issue but it isn't labeled for the release, and even members of the SIG itself may have no idea why something was included, which causes issues. So I'm working on a proposal for that.
The other discussion we had is that the release team will try to figure out a longer-term structure where each SIG has a representative, either a lead or a delegate, so that we can all say: okay, we know what's going on with our SIG and what is included in each release. That should help with those two problems. Once we've identified the problems and the results, we'll see what other gaps there are in the tooling, and the SIG Release team is working on a proposal; I'm going to send it to the community for discussion.

Another thing, going back to the 1.6 patch releases: I'm also trying to keep pace with those.
For the patch releases, we make sure the fixes meet the cherry-pick criteria, and for each patch release we still go through all the open test failures and triage them, figuring out which ones are release blockers and which are non-blocking. Of course, the criteria are not as strict as for a final release candidate. For example, yesterday I filed an issue to get the GCE serial test suite passing again; it was failing more than 55 percent of the time. Thanks to the SIG Scheduling team, who took really quick action on that for 1.6. We're going to cut the patch release today.

We also talked about how to simplify our documentation, like the first stage of the release process, and Andrew will help with some of that, so expect some changes there; we want to make a lot of improvements. That's all from 1.7.
All right, we'll go back around if the other release folks join later. Let's move on to Service Catalog, and that's me. We're coming up on our alpha six release, v0.0.6; hopefully it'll be ready for next Wednesday. We're on a weekly release cadence right now. For the most part, the big-ticket items for Service Catalog are pretty much in place, and as a result we're looking at actually going to our first beta release right around May 17th, so v0.1 should be out there.
That includes creating the secrets that you can then bind to your applications. We had a very large dependency on Jesse's work for PodPresets, which, thank you very much, went into 1.6, so we're going to start leveraging that very soon. And we do support storing all of those resources in etcd as well as in third-party resources, for those who want to do that.

We're also working with the Open Service Broker API working group, over in foundation land, to make sure that any changes or issues we run into are reflected back in terms of spec updates. And as they work on spec updates for new features, we're prototyping those features in Kubernetes itself, to show them off as proof-of-concept kinds of things, because you need to actually have them implemented before they get adopted into the spec.

One question that came in: with the secrets that are created when you do service bindings, are those just being saved the same as any other secrets? Yes.
For SIG Federation, I'll start with testing; we made significant progress in Q1 on Federation testing. We have CI, which has been green for many weeks now, thanks to a few focused efforts by people in the SIG. What we're going to be doing over the next few weeks is moving to a shared build configuration between the various contributors, because until now only two or three people have been managing the CI. The pre-submit tests have been running on every PR; these are not made visible yet through GitHub, but after a bit of remaining work, once that's checked in and enabled, every PR that's updated will go through the Federation pre-submit tests like the other ones. Also, integration testing has been significantly improved.
The more we can move to integration tests, the better, in terms of being able to test more while having fewer infrastructure-related failures. We've had a big refactor, led by a contributor at Red Hat: because we'd accumulated a lot of tech debt over time by adding more and more controllers to Federation, he's written a factory-type controller and has been able to take a lot of the existing, simpler controllers, like secrets and config maps, and reimplement them on top of his abstraction. There are also integration-testing components built around this generic controller, so we get integration tests essentially for free, and we expect to roll all the other, more complex controllers onto this abstraction as well.

With regard to the SIG itself, there's a change in scheduling: we used to meet bi-weekly and we're going to move to weekly. The main reason is that we kept going back and forth between administrivia updates and getting behind on design reviews. So now every even week will be a design review, where we announce the topics in advance and people are expected to read up on them, and every other week we'll keep the usual format. Also of note, there's some cross-SIG collaboration, led by Michiel, to try to nail down an authentication story for Federation.
I'm going to add my two cents by just briefly mentioning that we've got quite a lot of pretty interesting Federation-related features scheduled for 1.7 as well. Among them, off the top of my head: federated StatefulSets, federated jobs, and federated horizontal pod autoscaling. Those are the most interesting; there are about six, and the others are super cool too.
Okay, cool. So, just to give a little background on how I got suddenly interested in improving some of the process around our tests: I found a nice, polite email from Erick in my inbox a couple of weeks ago that said, "Hey, I'm deleting a bunch of tests," and I was kind of like: is this a good idea? Should we just delete a bunch of tests because they're failing? So we had a chance to sit down and talk about that, and the short answer is: yes, for some jobs it doesn't make sense to keep them; they've always failed and they're basically worthless. But there's a bigger issue here, which is that there are a lot of testing signals that people just don't pay attention to, and because nobody cares about them, they're probably useless; we run tests that nobody cares about.
Unfortunately, I don't have a very good handle on what people actually care about. So that's what led to this "let's send out a warning and delete the jobs that look like nobody cares" approach, which does clean up some of the signal, but also runs the risk that we lose coverage, because people haven't been paying attention either to their tests or to the warning emails. So one thing we'd like to do is make sure people have plenty of opportunity to review those deleted jobs and bring back the ones they care about. Another thing we want to do is improve our systems for managing test and job creep, so that we don't continually get back into this situation, and we think there are a few simple things we can do to improve those processes. In broad strokes, there are three things we think we can do. The first is that we want to start requiring explicit ownership of both tests and test jobs.
Just a couple of short definitions for the purposes of this chat. A test, say an e2e test, is one of these named things that you can select on: when you write an e2e test in Ginkgo, you have a Describe function that has a name on it. A job is one of these configurations that spins up a cluster with a certain set of parameters and runs a certain set of those tests. We want to require explicit ownership of both of those categories of things.
The next step we'd like to take is to make all the test results that each group cares about highly visible to that group, because our hypothesis is that people aren't paying attention to tests that might be relevant to them because it's extremely tedious to actually go into the testgrid tabs and find the very specific pieces you care about. So what we'd like to do, based on this ownership information, is just generate dashboards for each SIG that contain the things they care about.
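The per-SIG dashboard generation described here can be sketched in a few lines. The data shape and the SIG names below are illustrative assumptions, not the actual test-infra code:

```python
from collections import defaultdict

def build_sig_dashboards(jobs):
    """Group test jobs by the SIG that owns them.

    `jobs` is a list of dicts like {"name": ..., "owner": "sig-node"}.
    Returns {sig: [job names]}: the raw material for one dashboard
    per SIG, so each group only sees the results it cares about.
    """
    dashboards = defaultdict(list)
    for job in jobs:
        # Jobs with no declared owner land in a catch-all dashboard,
        # making missing ownership visible instead of silently ignored.
        dashboards[job.get("owner", "sig-unowned")].append(job["name"])
    return dict(dashboards)
```

The catch-all bucket matters for exactly the problem discussed above: it surfaces the tests nobody has claimed, rather than letting them rot unseen.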
I think this sounds like, yeah, I think this will be a great start. Part of the problem right now is that it's unclear whose test something is, and if we can clarify that, I think it will be really helpful. And Aaron has some additional things to say after this as well, which I think are relevant.
Just a quick comment on that first: I think we all want better ownership of tests, and we probably want to do that at the SIG level rather than the individual level, to handle the cases where people are out on vacation or whatever. I also really kind of hate some of the horrible regular expressions we have to use to articulate which subsets of tests to run, and I pine for some way of having richer metadata around the tests. That's one place where we could definitely go down the path of having yet another mechanism, one that could be a lot more effectively utilized by the existing owners, much as we have OWNERS files used to auto-assign people to approve or LGTM changes.

Okay, so I'm going to talk, but I want to make clear that I didn't really do much of this work. Erick has done a bunch of it, along with a number of folks within the contributor experience working group.
We would rather encourage the idea of good test hygiene than have to continually bring down a hammer. If you've been around this community long enough, you know that pretty much every release cycle we enter something akin to a fix-it week, or we throw up our hands and discuss how bad the submit queue has become or how flaky everything is. Erick made this fantastic analogy: it's as if we don't brush our teeth every day, and then we're shocked.
Okay, so we now have a document in the community repo that describes a definition of what a flake is, and the concept that we would like to accomplish: an SLA of not flaking too much. We're refining how we're going to measure this, and we're in the process of creating infrastructure to measure it and start displaying it. Initially this was proposed with the idea that we should just raise the bar on the submit queue, so that nobody would be able to submit any PRs or have them merged unless we as a community had gotten the flake rate down to an acceptable level. That caused a little bit of wailing and gnashing of teeth, both on the PR where we discussed the issue and on the Google Groups thread that initially proposed all of this. If you're at all interested in the historical context of how we got to today, I encourage you to go look at those, but effectively...
...I think where we're at is this: once we get to the point where we're measuring and displaying this metric, we want to start small, with some more human-oriented enforcement mechanisms, to get people to properly pay attention. To that end, you may have noticed that Erick has been sending out flake reports and top-test-failure daily reports, so I'm just going to take a look at one of them right here. He does a fantastic job of calling out where we are this week and how we've improved since last week. You'll notice that we're only calling out the top three things; we're trying to start really small, to see if we can focus our attention on the top three on a weekly basis and actually have an impact, actually make progress. I had a piece of color on this that Erick told me that just kind of blew me away, about how important this list is.
So, some of these top hits: I've been calling this list the "greatest hits" internally to myself. Right now it covers the test flakes and test issues in Kubernetes, and what was interesting to me was when Erick showed me that sometimes four or five issues will be causing tens of thousands, or multiple thousands, of test errors, while the majority of test failures that we see are in the tens. So we can really stabilize and increase the confidence that we've got in our system by addressing these greatest hits.

Yes, I agree completely. Maybe to that point: this metric, the greatest-hits flake report that Erick sends out, is separate from the testgrid report, where we talk about PR consistency, which I believe basically means the probability that, given a push to a PR...
...all of the tests will pass. It's up to 90 percent, which is great, but if you invert that, it kind of means that one out of ten PR pushes hits a spurious failure, which, with a contributor base as large as ours, isn't the best experience.
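The arithmetic behind that inversion is worth spelling out: 90 percent consistency means one in ten pushes hits a spurious failure, and even extremely reliable individual tests compound badly across a big suite. A toy calculation, where the per-test reliability and suite size are illustrative numbers rather than real measurements:

```python
def suite_consistency(per_test_pass_prob, num_tests):
    """Probability that every test in the suite passes on one push,
    assuming independent tests with an identical flake rate."""
    return per_test_pass_prob ** num_tests

# A 90%-consistent suite fails spuriously on 1 in 10 pushes.
flaky_push_rate = 1 - 0.9  # 0.1

# Even 99.99%-reliable tests compound: across 1000 of them, the
# whole suite comes back green only about 90% of the time.
whole_suite = suite_consistency(0.9999, 1000)
```

The point of the sketch is that suite-level consistency degrades exponentially in the number of tests, so a handful of slightly flaky tests dominates the experience for every contributor.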
What Erick showed you earlier is the triage tool, whose data is gathered by clustering test failures. You can get to it from the test-infra triage page, and it's basically a view that aggregates all the different failures that have happened and tries to cluster them together, so you can see which particular jobs or tests a given big failure has hit, with little sparklines showing how frequently they happen.
Finally, all the numbers and stuff are really useful; we all really like graphs. We're not quite at the point where we have graphs or dashboards of all this, but we're starting to get to the point where we're taking all of the test results that are stored in Google Cloud Storage, extracting them using something called Kettle (Kubernetes extract/transform/load of test results), and then using BigQuery, triggered by Prow, to generate metrics about those test results on a repeatable basis.
So, for example, if I take a look at the weekly consistency table, you can see the wonderfully sized BigQuery query that's used to generate these metrics, and the latest results for it. If you look back up here a little bit, and Erick can correct me if I'm wrong, this is where we get that percentage consistency you see in the emails and the flake reports: on a week-by-week basis, how is our consistency improving, and how many commits happened to have gone in that week.
The work that you just discussed goes without enough credit, and all the work that Erick, and probably even more people, have been doing feels like it's beginning to really pay off and to escape just a small group of people, now that we're asking and calling for more people to get involved. Providing the context in the community meeting is brilliant and really cool. One of the things I'm kind of asking is: is the right next step for SIG leads to provide context within their SIG, via an email or a signal that this is going on, to make sure people pay attention? Again, to stick with my simple example, the greatest hits: how do we make sure we jump on these things quickly? How do we spread this message even more widely than just this meeting?
I'll defer to Erick, who has been great to collaborate with on this, but I know that one of our hopes is that, in combination with getting SIGs attributed to tests and jobs more directly in this test consistency data, you get to a dashboard that is automatically generated, which we can start using on a SIG-by-SIG basis: what are the top failures that each SIG has to deal with, and a tool that the release team could use to drive flakes down.
That's sort of the question here, and that's where we're going to get eventually. What I'm digging for today is this: what I hate is when I'm developing, working on something, and only later find out, oh look, this is one of the larger bugs that's holding up a lot of other things, and here's the context for why it's important. So it feels to me like this effort needs almost a full-court press on communication, because it's important and it feels holistic, and I don't want to overstate that. I can't speak to how actively these emails are being read and acted upon; that's likely why there is a section included where we highlight things that have stalled or seem to have
no action. I would encourage, and be thrilled by, SIG leads who are actually reading my emails making it a point to raise these particular issues within their SIGs. We could also, through our handcrafting, include those particular SIG Google Groups directly, but that's just as easily lost to the noise. Again, I feel like this is a human problem, so if there's anything to take away here, I'm asking you as a human: please pay attention to this tiny human effort while we work to automate it.
We didn't have any notion of status; we don't have a separate issue tracker, we have GitHub. So we've talked about maybe enabling status via labels, so that if you have a list of flakes, you know what the next step is to deal with each one: whether it needs triaging, whether it needs fixing, whether it needs a code review, or whether it's done. That's not really something you can do in an automated fashion; you need people to go look at the list, figure out where each item is at, and try to move it forward.
For example, if we can't bring up nodes, or can't bring up pods, then clearly any test that relies on being able to successfully launch pods is going to fail, and you end up with these cascading failures. It seems that defining some basic dependency graph might address a lot of these problems. For example, you run the pod tests first, and if they fail, you don't bother running anything else, because we know the rest are going to fail and just generate noise for the downstream tests. All that downstream failing is just noise; it overwhelms the labeling, and we end up with hundreds and hundreds of auto-generated issues. This seems like it could be a relatively small effort with a very large payoff. And there are other examples: if you can't get services to run, then clearly everything else that relies on a service is not going to work, and you don't need to generate those test failures and triage bugs, and so on.
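The dependency-graph idea being suggested could look something like the sketch below: gate each suite on its prerequisite so a broken foundation produces one failure instead of a cascade. The suite names and the graph are made up for illustration; real test scheduling would be more involved:

```python
def run_suites(suites, deps, run):
    """Run test suites in order, skipping any suite whose
    prerequisite did not pass.

    suites: ordered list of suite names
    deps:   {suite: prerequisite suite}
    run:    callable(name) -> bool (True means passed)
    Returns {suite: "passed" | "failed" | "skipped"}.
    """
    results = {}
    for name in suites:
        prereq = deps.get(name)
        if prereq is not None and results.get(prereq) != "passed":
            # Don't generate noise for a failure we already understand.
            results[name] = "skipped"
            continue
        results[name] = "passed" if run(name) else "failed"
    return results
```

With pods failing, services and everything downstream come back "skipped" rather than as hundreds of independent failures, which is exactly the noise reduction argued for above.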
Yes, I definitely think that getting all this information out to people is a work in progress, and if there is any feedback or any suggestions you have about how we can make sure this information is seen by all the relevant people, that would be great: reply to me, reply to the email, or leave a message in the contributor experience working group or sig-testing channels on Slack.

We're definitely still iterating on what the right level of visibility is. Something else I've been trying: when a particular SIG is included in the email, I obviously send the email to that SIG, and then I'll also go into that SIG's Slack channel and post an even more concise summary of what's happening and how that SIG is relevant, with a link back to the report. And definitely, over time, we're hoping to automate some of this, because it is sort of handcrafted and glorious at the moment. But if we can use this to have SIGs, or the leaders in SIGs, start acting on this information, even if it's just acknowledging it by saying, "Hey, this information is useless to me," but even better if it's, "Oh cool, this is super useful, let's go fix this,"
that would be great. The only other thing is consistency: I'm distinguishing consistency from stability. I'm calling stability the pass rate, whereas consistency is whether the tests give the same answer every time. For PRs, stability is kind of a hard metric, since I might actually push a bad change; a test failure might not actually be a problem if it's failing because my PR is bad. So as long as every time I push the same change it gives me the same answer, even if that answer is a failure, it's consistent.
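That stability/consistency distinction can be made concrete with a toy formulation. This is not the actual metrics query, just the two definitions as described:

```python
def stability(runs):
    """Pass rate across all runs: the fraction that passed."""
    return sum(runs) / len(runs)

def consistency(runs_per_push):
    """Fraction of pushes whose repeated runs all agreed, whether
    all-pass or all-fail. A consistently failing push still counts
    as consistent: the failure is real, not a flake."""
    agreed = sum(1 for runs in runs_per_push if len(set(runs)) == 1)
    return agreed / len(runs_per_push)
```

For example, a push that is green twice, a push that is red twice, and a push that flips between green and red give a stability of 0.5 but a consistency of two out of three: only the flip-flopping push counts against consistency.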
Really digging in on a few things like that makes far more of a difference, I think, than sitting here trying to solve all the problems for all the people. Start small, even if it's the scenic route and maybe not the most beautiful way of doing things. That's great, really, really great. Thanks.
So, unless you've been under a deluge of GitHub notifications, you may have seen two posts to kubernetes-dev this week about some concerns about someone joining our community. I would like us all, as a community, to continue to take the high road on this. There have also been press people asking questions about it.

In all of this process, we have found some flaws in our code of conduct and our code of conduct policy. The policy right now is that if we have a code of conduct violation, it should be escalated privately to me or to Dan Kohn of the CNCF. In this case, the concerns were not escalated to me or Dan; they were escalated publicly. So again, for the happiness and health of our community, and to keep this as low-drama as possible...
...I want to remind everyone: if anyone comes to you about something that is a code of conduct concern, for the immediate term, please have them reach out to me or Dan. Going forward, we need to establish a code of conduct committee and define what we want to do for enforcement. In this case, our enforcement was to request that this person not be assigned to our project, and we have worked through that negotiation and made that possible at this point. So please point people back to the public statements.

Questions? Clarifying questions, forward-looking questions? What can I help with?
A comment: I have to say I am super impressed with the speed and efficacy of the response to this, and that kind of sincerity about our code of conduct is what is going to make this community feel good to people who traditionally feel a little bit marginalized in other communities. So I just want to thank everybody who was involved in that rapid response. Thank you so much; it means a huge amount to people, especially those who feel like they don't have a full voice.
Thank you, Jason. In fairness, since we had no code of conduct committee, myself and Dan alone could not make this decision; the governance bootstrap committee helped us work through it. And I say "decision," but really it was a path that we took, because there was no decision made by the Kubernetes community. There was a request made to have this person not participate. We made no decision saying this person could not participate; we made a request that they not.
Any other clarifying questions or comments? All right, cool. Thank you very much, Sarah. The last section on the agenda is notices; there's no name next to them, so I'll take them myself. There was some cleanup done of the feature backlog: about 50 features for version 1.7. If you are a feature owner and there was a request for an update made, please go check your feature and make the appropriate updates.

Let's see, this week... I'm sorry, next week in Boston there's OpenStack Summit, and we do have a Kubernetes day there, so if you're up that way, please think about joining. The email that was sent out includes a link to the agenda, so you can follow that to get more information. Also, a brand new SIG Release workgroup has been created, and they're looking to figure out which days are best to meet, so there's a survey they've started; again, the link for that is in the agenda.
B
E
The number was around fifty. There was a question, there is always the question, of what is a bug and what is a feature, and I have to admit I have stood on both sides of the fence. When I'm a release manager I play the hardliner: no, that looks like a bug to me, get it out. And then when I'm a developer I say, hey, I think this is a feature.
E
J
B
J
I think there are other confounding factors here: we're trying to get more formal about feature processing, right, so it's not apples to apples. And what does it mean for a feature to be in a release? Because features tend to actually march across multiple releases as they mature, as they move from alpha to beta to GA, right. So I think, you know, if we want to track these metrics, we should probably get a little bit crisper about that in general.
J
J
F
F
We said 9:00, okay, 9:00 Eastern last Monday, so we already passed that. But going back to the earlier 1.7 discussion: one of the things I want to try to propose is that even when something was not filed in the features repo, if you find a feature request coming in through the main repo, there should be some way to route it into the feature process so we can try to figure it out, and we need each SIG group to drive those initiatives.
F
F
G
E
To Joe's point from what we talked about earlier, about tracking one feature as it goes from alpha to beta: a lot of projects I have worked with basically make it a new feature to move. Say foo is a feature: foo comes in as alpha and goes in the features repo, and then when foo goes to beta, they create a new feature entry to track the act of moving into beta, and then again for GA.
J
I think that's the process, and I don't know what the latest is on that process. I mean, when you created a new feature, it was like, here are the criteria that you need, and we have been going through and sort of keeping that one feature alive across the different phases.
J
K
G
B
C
Sorry, I had a general question; I don't know if this is the right time for it, but that's okay. I'm just wondering if there are any plans to support some kind of inter-service dependency, meaning, you know, in order for service B to start, I need to make sure that service A is actually up and running. I was wondering if there is anything in the works in terms of Kubernetes.
I
There's nothing in the works as far as I know. There's a long discussion in the issues, and I don't expect any quick action on that one, frankly. I think eventually we want to add some sort of service-level readiness notion, and have the concepts in AppController, which is the Mirantis thing, move into the API. But I'm not aware of anyone prioritizing cashing out the design issues, because it is going to be pretty significant API and controller changes.
C
I understand; I'm just, you know, asking because in my experience we keep seeing that as a recurring issue when launching a whole bunch of services that make up an application. And I think, especially given the microservices landscape of this day, it would be nice to have some kind of composition: you would be able to run kubectl on a directory or with a virtual service definition, and it would just, you know, order the launch.
I
In general, I would prefer that orchestration not happen at instantiation time, but for the controllers to understand the dependencies. Just one simple example: if a container needs to be started in a pod and it has a dependency on a config map, and the config map doesn't exist yet, it would be pretty easy to wait for the config map to get created before actually starting the container. So I would prefer, in general, for dependencies to be expressed like that. And one thing that I do know was going in...
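Since Kubernetes had no built-in inter-service dependency at the time of this discussion, the simple case raised here (service B should not start until service A answers) is often approximated with a wait loop run before the dependent process starts. A minimal sketch in Python; the URL, the `service-a` DNS name, and the `/healthz` path are illustrative assumptions, not anything specified in the meeting:

```python
import time
import urllib.request
import urllib.error


def wait_for_dependency(url, timeout=60.0, interval=2.0):
    """Poll a dependency's readiness URL until it answers HTTP 200 or we time out.

    Returns True once the dependency responds, False if the deadline passes.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # dependency not reachable yet; keep polling
        time.sleep(interval)
    return False


# Example (hypothetical in-cluster DNS name):
#   if not wait_for_dependency("http://service-a/healthz"):
#       raise SystemExit("gave up waiting for service A")
#   ... start service B ...
```

This mirrors the controller-side preference voiced above only loosely: it is client-side orchestration at startup time, which is exactly what the speaker says they would rather avoid in favor of controllers understanding dependencies natively.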