From YouTube: Kubernetes Community Meeting 20181018
Description
We have PUBLIC and RECORDED weekly video meetings most Thursdays at 5pm UTC. https://contributor.kubernetes.io/events/community-meeting/
C: [...] re-elected steering committee member — we'll leave it at that for now. So, as this is the weekly community meeting and it is Kubernetes, we follow the Kubernetes code of conduct, which is available online on the community website and basically boils down to: don't be a jerk. This is not being live streamed on YouTube at the moment, but it will be — it is being recorded and will be posted to YouTube later.

C: So please keep in mind that anything you say here will be recorded and available to all of the internet for all eternity. Today we're going to have a demo from Marco Ceppi about kubetest — a different kubetest than the one I'm familiar with, but equally as cool — and we'll have an update from Aish about the current release cycle. I don't believe we have any pressing updates from any of the patch release managers.
D: This was designed out of a problem that was born in some software we're working on. I work for a company called Vapor — vapor.io — where we're working to build kind of cloud-edge infrastructure and kind of next-gen internet stuff. Super cool. But one of our biggest problems is that as we develop applications that are cloud native, but that are also running in a distributed fashion at sites that may have flaky internet or that may be unresponsive at times, we had a difficult time building integration and validation tests for them.
D: It popped up a lot. Our code is written in Python and Golang, so we took the Python Kubernetes API client and wrapped a pytest fixture around it. So what kubetest is, is really a Python framework that plugs into pytest and gives you a bunch of helper methods and functions and classes that make it easy to deploy manifests and manipulate those manifests in an automated-testing fashion. I'm Marco Ceppi — that's me. This is also me.
D: I work at Vapor. What this ultimately boils down to is pytest marks and fixtures. You can do things like "apply manifests" marks that wrap and decorate a test function. You're then given a kube object which allows you to retrieve those deployments, iterate through deployments, iterate through pods — through really all the API objects — and then script, effectively, test assertions, and manipulate and manage those objects: delete them, tweak them, change them, and re-test and re-assert.
D: So here's a very simple working example where we deploy, from a config directory, an nginx YAML file, which is a deployment of nginx — I'm sure most of you are familiar with that. We then retrieve those deployments and make sure it's actually deployed. We get those pods and make sure there's at least three of those pods, and then we iterate through all the pods to hit each pod over HTTP directly and verify that it's responding in the right way.
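(For reference, a minimal sketch of that example in kubetest's pytest style, along the lines of the project's README — the manifest name nginx.yaml, the deployment name nginx-deployment, and the response string are assumptions for illustration:)

    import pytest

    # Apply the manifests under ./manifests (here, an nginx deployment with
    # three replicas) into a fresh, per-test namespace before the test runs.
    @pytest.mark.applymanifests('manifests', files=['nginx.yaml'])
    def test_nginx(kube):
        # Wait for the objects the marker registered to come up.
        kube.wait_for_registered(timeout=30)

        # Retrieve the deployments and make sure ours actually deployed.
        deployments = kube.get_deployments()
        nginx = deployments.get('nginx-deployment')  # name assumed from the manifest
        assert nginx is not None

        # Make sure there are at least three pods backing it.
        pods = nginx.get_pods()
        assert len(pods) >= 3

        # Hit each pod over HTTP directly and verify it responds correctly.
        for pod in pods:
            resp = pod.http_proxy_get('/')
            assert 'Welcome to nginx!' in resp.data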
D: Obviously this isn't exactly what you want to write for a test — things like the number of pods deployed, and things like deployments existing, are all really Kubernetes features and functions. We're not using this to test that Kubernetes does what it says it does; this is more an example of the kind of primitives and the kinds of objects that you can get and manipulate. Really, your tests would likely be focused around deploying a set of manifests or pods, doing something to those, and then verifying that the end result in that deployment existed.
D: So I'm going to do a quick demonstration, talk through this project a little bit more, and then I should have time to answer a few questions. So let's do a quick demonstration — I've got a terminal over here; you all should see my terminal shortly. Inside this directory I've got a very simple flat file. Normally these would either be with a project, or potentially a superset of projects; this is the very simple nginx example. I've got a deployment.
D: I'm gonna run my pytest command somewhere up here, simply passing in the config. Again, the documentation goes into more depth about how you can manipulate these certain flags and where default values are inherited from; I'm just going to get it from my current kube context here. This spins up the pytest framework, and in that process it's executing the job inside of test_nginx. For each test it will effectively create...
D: ...a namespace — kubetest-test-nginx with a random timestamp, to avoid collisions. Inside that namespace we now have those three pods running; it executes those tests, returns back, and then tears down the environment. There are other things you can do — you can make sure you reuse certain environments between tests and things of that nature — and in fact there are quite a few things you get to do with this beyond that.
D: So, for example, when it comes to things like failures, kubetest will automatically pull the log files from all the pods and display those inline with the output of the test fixture failure, which makes it easy for you to dive in, debug, and see what's going on at the appropriate level. And there are tons of different examples of what kinds of objects you can manipulate here. To wrap up: what you could use this for today are things like testing the resiliency of services during rescheduling.
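(A rough sketch of that kind of resiliency test, under the same assumptions as the earlier example — the delete/wait helper names follow kubetest's object API as I understand it, so treat them as illustrative:)

    import pytest

    # Kill a pod out from under a deployment and assert the controller
    # reconciles back to three replicas.
    @pytest.mark.applymanifests('manifests', files=['nginx.yaml'])
    def test_pod_rescheduling(kube):
        kube.wait_for_registered(timeout=30)
        deployment = kube.get_deployments()['nginx-deployment']  # name assumed

        pods = deployment.get_pods()
        assert len(pods) == 3

        # Simulate a pod failure by deleting one pod directly.
        victim = pods[0]
        victim.delete(options=None)
        victim.wait_until_deleted(timeout=60)

        # The deployment should reschedule back to three ready pods; re-assert.
        deployment.wait_until_ready(timeout=60)
        assert len(deployment.get_pods()) == 3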
D: On the roadmap: adding support for things like rendering out Helm charts before doing deployments, instead of just doing straight YAML manifests, as well as doing things like dynamic image injection — either using an image and then copying code up from local, to speed up test iterations, or being able to build Docker images and deploy them from the local repos themselves instead of using predefined ones that are already in a registry somewhere. It's a GPLv3 project: pip install kubetest; the source is github.com/vapor-ware/kubetest; and the docs are at kubetest.readthedocs.io. Any questions?
D: Yeah — we're not, like, super wed to the name either.

C: I just had a couple of people ping me privately about it. So — this looks really cool, and SIG Testing does have a couple of sub-projects around the idea of making it easier to write tests against Kubernetes. This seems like it's really cool, and maybe you and I should talk about how we could get more community adoption of, or support for, this.
D: Yeah. This was born, again, out of a real need for us. We simply wrapped the Kubernetes Python library — which is fantastic, but it is kind of auto-generated from Swagger and doesn't translate as simply as we want for iterating over things. So this is just kind of a smoothed-over version of that API, then bolted on to pytest, which is our framework of choice. But we're really open to those interested in kind of expanding the scope of that, and helping to make that portion of testing objects against Kubernetes easier.
C: Cool.

D: Our projects are split down the middle: we've got Python projects and Golang projects. The first thing we needed testing for was a Python project, so that's written in Python with pytest. I don't think we're going to limit ourselves to Python, but it's our focus at the moment. So, ideally, we'd add other libraries, bringing this kind of fixture and language to those projects as they come up.
C: OK, cool — any other questions for Marco? All right then; thank you so much for your time, Marco. I see a link to the slides in there, and maybe we can drop a link to the repo that you mentioned at the end into the notes. Moving on, we have release 1.13 updates from Aish — sorry — yes, I see you standing up and waving.
F: Cool — yet another week in the 1.13 cycle. As for updates: we got our first alpha this Monday, so the link for the alpha binaries is up there, and we are planning to cut our second alpha this coming Tuesday, that is, the 23rd. As for other dates, the enhancement freeze is coming up again — that's also next Tuesday, the 23rd.
F: Currently we have about 37 enhancements in the features repo, so, again, a request to show your issues some love by adding labels and any status updates as to what's pending for each of those enhancements to land in 1.13. Kendrick, Aaron, and I have been pinging the issues to find out what's pending. The whole idea behind this is to keep an open channel, early and continuously through the cycle, with feature owners, so that if there's anything that is at risk of slipping from 1.13...
F: ...we can detect it early in the cycle rather than later, in an effort not to destabilize the release too much. So we'll be following up in the upcoming weeks as well. But thanks to the feature owners who responded — a few of the enhancements have already been pushed out of 1.13, and we continue to follow up on the other things as well. Towards this, Aaron has kindly taken the list of enhancements to SIG Architecture for their review again.
C: To that end — if anybody's super interested, please show up to SIG Architecture. This isn't necessarily a formal thing; I recognize the architects' time is super valuable, so I intend to time-box our discussion to 15 minutes. But I think the idea was: let's have some really technically minded folks on the project take a look at things prior to feature freeze, and maybe have another pass at it prior to code freeze, just to sort of make sure that we are meeting the whole stability-and-reliability theme of this release.
F: And moving on to CI signal: earlier this week, when Josh and I were putting this report together, our sentiment was "oh my god, all the test-failure issues are open, we are not seeing resolution, we need to bring this up." But in the last 24 hours, thanks to all the owners who've been fixing tests, we saw a lot of issues being closed — thanks to SIG Scalability and the API Machinery folks for closing a bunch of test issues. We had a brand new report come out early this week.
F: We do still see a few new issues being opened — not many of them — but there are a few long-standing ones that still need to be driven to resolution. So please, if there's an issue on your SIG or on your feature, please consider it a priority as we proceed through the release. We also have a beta cut in about 15 to 16 days, so here in the update we've listed a bunch of issues that could potentially block beta...
F: ...if they are not addressed by then. Mainly it's the upgrade dashboard that continues to be red, for multiple reasons. There are a few fixes in progress where we are splitting out the jobs so that we can get more CI signal — thanks to Justin for working on that — and there's also the long-standing DaemonSet failure.
C: OK, cool — thank you very much for that informative update. I see a note here in the meeting notes that we have a release of 1.11 planned for next week; maybe that's something I should have said earlier. So, moving on: next on the agenda is Graph of the Week, and this is my item, so I will try to share my screen. But first, the TL;DR on this is that I would like to propose we do something other than Graph of the Week going forward. The reason I say so is that I kind of feel like this...
C: ...slot has outlived its usefulness, inasmuch as I think we've shown a ton of different graphs and I don't know how much use people are actually getting out of it. And so I thought: perhaps a better use of this time was to turn it into a time slot dedicated to contributor-experience folks sharing, like, contributor tips of the week.
C: Maybe things focused around our automation, maybe some cool graphs that show you what's going on with your SIG, or what you could do. So this is the issue I opened with suggestions on what to do with this; if there are no real objections, I'm going to update the community meeting tab going forward. That said, I do have one last Graph of the Week, to sort of sunset this, and that's — with my best Seinfeld impression...
C: ...what's the deal with repository groups? I don't know how many people actually look at repository groups, or the dashboards on devstats, but there are a number of dashboards that are broken up by repository groups. So what I'm doing here is pulling up two copies of those stats, one on the left, one on the right. The one on the left is the old-school way of doing this: I'm going to click on the repo-groups thing to see every dashboard that's grouped by repository groups, and I'm looking for "GitHub stats by repository group."
C: It shows this really pretty, shiny graph with a lot of different colors. This is showing me the 24-hour moving average for all repository groups — showing me just, like, the number of unique reviewers — and the repo groups are things like node, service catalog, CSI, clients, storage. This is somewhat useful. But then there's one like "project" — and what does "project" mean to me? So I thought: what if we instead regenerated all these repo groups, and did them based on the repos that were called out in sigs.yaml?
C: So — you know how I've been nagging people to, like, enumerate all of their sub-projects and get every single repo into sigs.yaml? It's gonna let us do things like this. We've already used that file in the past to auto-generate OWNERS files that automatically label PRs with the appropriate SIG, and so now I can start to look at the GitHub stats by SIG. We still have this big, like, ugly green thing here called kubernetes — that's the bulk of the contributions.
C: But since we keep talking about how we want to move development out of tree: if I remove kubernetes, we can start to see sort of what activity is happening for individual SIGs, based on the sub-projects they claim to own in sigs.yaml. So you can see, like, Cluster Lifecycle is pretty active when it comes to the 24-hour moving average of reviewers.
C: You can also see that SIG Contributor Experience happens to be pretty active. So this is a pretty fun dashboard to step through, just because you can sort of see the percentages across the different SIGs. Again, this doesn't necessarily have the ability to capture SIGs' contributions to sub-directories of kubernetes — that's not quite the way...
C: ...devstats works. But, you know, maybe you can start to see whether or not the number of reviewers you have is good, or — maybe, you know — is your SIG actually closing a whole bunch of issues in your repos, things like that. Like, you can see Docs generally has this nice bump here, kind of around their sprints, stuff like that. So that's, hopefully, one way of making devstats more useful. If I show you devstats again, it'll be because I think there's a good contributor-experience reason for it. So that's my share on that.
C: Cool — moving on. So the segment I wanted to replace this with is contributor tips, and I'm gonna try and keep this to 5 to 10 minutes — somebody just, like, give me a hand signal if I'm going off track here. So, again, gonna share my screen for this. I wanted to talk about one of the good ways that we can allow people to work on this project if you haven't done much work on it.
C: Let's talk about the "help wanted" and the "good first issue" commands. So I'm gonna kind of do this live, but basically: if I were to search for all issues in the kubernetes org that have the label "good first issue" — boom, I get all of these things that should be ideal for people who are new to Kubernetes to work on. So let's just click on a random one; hopefully this is good.
C: It's a documentation update. Documentation is a great thing for new contributors to work on, because they often bring fresh eyes and a fresh perspective that those of us who have been working on the project for a long time no longer have — it's incredibly valuable. So you can see here that somebody put "good first issue"; the bot responded to that by saying "cool, we've marked this as suitable for new contributors," and it automatically applied the good-first-issue label.
C: We consider this to be the easier, nicer, kinder version of a help-wanted label, and so when we say it meets the requirements, we provide a link that you can click to see what we mean by "good first issue": there's no barrier to entry; the solution for how to solve the issue is clearly explained; there might even be context and examples; and it's basically ready to test. It should suit somebody with very little experience with Kubernetes, or maybe even with open source projects.
C: That's sort of what we're looking for. Help-wanted is the slightly more difficult version of that, where we're saying, like: we guarantee that there will at least be a SIG member who would be willing to help shepherd you through the solution of this issue. So there's that. This is something you can do as anybody — you don't necessarily have to be a community member, you don't have to be the author of the issue; pretty much anybody can do this.
C: What you can do is find issues that are good candidates for help-wanted, and tag them as such. So I really think this is a great thing for SIG leads to do if they want to help grow their SIG. So again, I'm going to do this live — I'm just clicking through random things on test-infra. These aren't always the greatest low-hanging-fruit things, but let's just, for example, say renaming the trigger files is something that's really, really easy.
C: So I would click — notice I'm, like, not the assignee; I created this — I'll just put "good first issue" here and comment.
C: And boom, the bot responded with these things. So you'll probably hear about this from Paris later, but I've heard that she put out a contributor survey, and, like, one of the most appreciated things by new contributors were these labels. So if you're a SIG lead, it's really good — it's a really nice thing — if you can kind of take a look at which of these issues or PRs have your SIG's name attached to them, to make sure that they actually are relevant and meet these criteria.
C: Another little rant I'll give as I stop sharing my screen: typo fixes are often really good, super-super-low-effort, like, tracer bullets, if you will, to just verify that everything works. They are good fixes, but they're really not often high-value contributions to the project. And I don't want to get into this slippery slope of, like, measuring contributor value, or reducing that to a number and then focusing really hard on that number...
C: ...but I do think that if all you're going to do is provide a bunch of typo fixes across the project, perhaps that's not the most value that you could be providing, as opposed to, like, identifying test failures, and so on and so forth. So I'll have an example of something you could do instead that would really help out SIG Testing, when I get to the SIG Testing update. I think that's all I had there — any questions?
E: Awesome. So, hey, I'm Frederic; together with Jethro I lead SIG Instrumentation. So let's get right to it. One of the things that we have been working on is a lot of performance improvements in kube-state-metrics — which, if you haven't heard of kube-state-metrics...
E: ...basically, what it does is take Kubernetes objects — let's take, for example, a deployment — and generate metrics out of that Kubernetes object: for example, the expected replicas of that deployment and the actually available replicas. Those are extracted and exposed as Prometheus metrics. Then Prometheus can go ahead and scrape those kinds of metrics and we can do alerting on them — let's say, if the expected number of replicas is not actually available, then we want to get an alert, or something like that.
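(To make that concrete, here is a minimal sketch of what those exposed metrics look like if you fetch them yourself — kube_deployment_spec_replicas and kube_deployment_status_replicas_available are the real series for the replica example above, while the in-cluster service address and port are assumptions:)

    import requests

    # kube-state-metrics serves plain Prometheus text-format metrics over
    # HTTP; Prometheus scrapes this same endpoint on an interval.
    resp = requests.get('http://kube-state-metrics:8080/metrics', timeout=10)
    resp.raise_for_status()

    # Compare desired vs. actually-available replicas per deployment --
    # the same pair of series you would alert on in Prometheus.
    for line in resp.text.splitlines():
        if line.startswith(('kube_deployment_spec_replicas',
                            'kube_deployment_status_replicas_available')):
            print(line)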
E: Now, these are incredibly high-value metrics, but they're also really expensive, because they have really high cardinality, and that has kind of two problems. One is that Prometheus actually has to deal with that cardinality — Prometheus is actually very, very much optimized, so it can handle fairly high cardinality. But the bigger problem that we were seeing is that kube-state-metrics had to produce so many metrics that it itself was OOMing, and had super-high response latency and just generally super-high resource utilization.
E: On resource utilization we have something similar — and even more important here is that we're not seeing spikes anymore; we have a very consistent and more predictable memory consumption. And this one — the response latency — was the one that we actually had the biggest difficulty with, because Prometheus is a pull-based mechanism, right?
E: So it's scraping metrics at a regular interval, and if the response takes longer than the scrape interval, then Prometheus essentially times out the request and just drops it entirely. In super-large Kubernetes deployments we've seen this for some time, and so those people weren't even able to use kube-state-metrics — which is a shame, because it has these wonderful, high-quality metrics.
E: So we again here have something from, like, at least a 2x to a 10x improvement on this. So yeah — basically, overall, this effort has already been successful, even though it's still ongoing. These were just the very first optimizations that we've done, and we've gotten better in pretty much every dimension, and gotten a lot more predictable, with fewer spikes.
E: ...implementation period. So the general gist of this KEP is: about two and a half years ago we created a guidelines document for metrics exposed by Kubernetes, and back then we already had some violations of it, but we took it just as "that's the state of things, and we'll fix it later" — but we never actually did fix it.
E
So
now
we
want
to
make
sure
that
we
go
through
all
the
metrics
exposed
by
Trinities
and
essentially
make
sure
that
they
actually
follow
the
guidelines
that
we
have,
that
we
have
documented
and
then
as
a
follow-up
to
that,
and
that
this
is
more
kind
of
like
a
nice
to
have.
The
cap
primarily
focuses
on
actually
fixing
the
metrics
we
have
and
as
a
nice-to-have.
E: ...what we want to add is some automation, so that we can make sure that, when new metrics are introduced, we don't build up this backlog of things again and have to do this kind of cleanup task again, but instead just have, like, a CI job that tells us when metrics were introduced that do not follow the guidelines that we have. So if you're interested in this, do check this KEP out. And then my last topic is the Prometheus adapter for the metrics APIs in Kubernetes.
C: So I think maybe this is a good time for a public reminder that SIGs do actually have, like, a pre-published schedule for when they're supposed to update — this might be really surprising to some of us. I think it's linked here in the meeting notes, though I don't have it immediately in front of me. Jorge, do you think you could help me out and just give a shout-out to who's supposed to give an update next week?
C
This
is
something
I
as
a
host
I
went
through
in
paying
individual
six
a
week
ahead
of
time,
so
we're
shouting
y'all
out
on
the
community
and
I
would
expect
whoever's
the
next
host.
The
next
week's
community
meeting
will
be
pinging
again
but,
like
I,
feel
like
this
isn't
the
first
time
we've
had
six
kind
of
slipped
and
it's
really
kind
of
important
to
share
with
the
community
but
you're
working.
So
in
that
regard,
get
ready
for
a
completely
prepared
in
the
last
five
minutes
slide
back
from
sake.
C: ...Testing. I did actually get some content ahead of time, but you all know I like to do stuff a little bit showy and flashy here, with demos, and I did not quite have that much time. So I'm gonna share the slide deck that I had linked in the meeting notes, using this super awesome template I got from, I think, Tim Pepper, where he tells me, like, what content I'm supposed to fill in.
C: So, as usual, I like to start off with how friendly a place SIG Testing is on Slack. And I was trying to figure out what our update was gonna be, because I wanted to maybe just show a single slide of emojis showing that we have killed Jenkins and we have killed mungegithub — and then I realized I didn't know what the right emoji was for mungegithub. So there we go. In terms of what we did last cycle...
C: ...so, like I said, we did kill off mungegithub — and Jenkins has been long gone for a while — but I just kind of want to underscore how big of an achievement this is. Like, two years ago we were fully reliant on Jenkins and this thing called mungegithub, and there was really only one instance of those per repository that we managed.
C: We now have a single instance of Prow managing over a hundred and forty repositories over five different GitHub organizations, and they all work pretty much the same way. We don't want to necessarily force the exact same workflow on everybody, so there are some repos where, like, "approved" magically appears because you happen to be in an OWNERS file, whereas in repos like kubernetes/community, "approved" doesn't magically appear — you have to explicitly approve, even if you are in an OWNERS file — that way, just, like, maybe you want a second pair of eyes or something.
C: It also means everybody uses the same workflow, where Tide will automatically merge PRs if all of the tests are passing and all the appropriate people have said "yes, it looks good." We also started a new sub-project — the sub-project is called kind; Ben can tell me how to pronounce it. It's basically running Kubernetes in Docker.
C: We created a new tool called gopherage, which is basically for manipulation of Golang coverage files. Why might we want to do this? I'll probably have, like, a lot more to say about this at KubeCon Shanghai during the conformance testing sessions, but a preview is that we run Kubernetes on GCE instrumented with code coverage, so that we can get line-based coverage.
C: That coverage file is the result of line-based coverage from all of the kubelets, API server, scheduler, controller manager — all that good stuff — merged together into one thing. And then we get this nice HTML report out of that, that shows us, roughly speaking, what our code coverage is on a per-package basis, and we can drill down. We're looking to improve this; we're curious, like, what the delta is between unit-test coverage and e2e code coverage in Kubernetes.
C: So look out for more there. I forget if we talked about Peribolos last time. Peribolos is this tool that lets you use a config file to automatically manage the configuration of GitHub orgs. We are using it today on the kubernetes org, so there's this lovely repo right here where, if I want to do something with the kubernetes project — like maybe I want to join a kubernetes org, or maybe I want to add an integration to my GitHub repo or something — this is the repo to go to to do that.
C: For membership requests, you file an issue, you fill out this template, and then somebody opens up a pull request — sorry, not that one, but let's say a pull request like this one here — where somebody is literally just adding a person's name to a YAML file. When this gets merged, Peribolos runs, and boom, the person automatically gets the invite.
C: We did a few improvements to our triage tools — not a lot, but some. The main thing is we made sure it was up and running again, and we also have alerts set up to know when it stops running. You can see here we found this nice, large, lovely spike, and it's kind of weird — I wonder what that spike is. If I search for jobs that have "gke" somewhere in them: that started happening roughly around that time.
C: It used to only have a week of look-back; two weeks of look-back might help the release team find more things more quickly. We'll see how useful this is in helping us identify, when I see a test failure: is it failing just for this one job, or failing across a bunch of jobs? Is it the code's problem? Is it the environment's problem? Is it GKE's problem, or GCE's problem, things like that? This should help us find that faster.
C: Finally — this is not something maybe a lot of you have visibility into, but we do manage the configuration of somewhere between 600 to 700 jobs across all the kubernetes repos. The way that we do this is with the YAML files we have — it used to be one massive, gigantic JSON file and has since been split up into YAML files, and part of the reason we did this is so we can spread them across different directories.
C: They basically run this weird program called kubetest with a bunch of flags that do things like: make sure you spin up a cluster with a hundred nodes in this particular GCP zone, focusing on these particular tests, so on and so forth. Plans for the upcoming cycle: our charter is drafted — I just wanted to walk you all through the charter real briefly, at least the TL;DR version of it, if I can click through to view it. So, the steering committee has been trying to push SIGs to enumerate their charters. We're trying to do this...
C
Super
short
version,
where
you
just
say,
what's
in
scope,
how
you
collaborate
with
other
cities
and,
what's
out
of
scope,
so
what's
in
scope
for
us,
is
a
bunch
of
tooling
to
help
run
tests
and
automate
the
project
and
do
interesting
things
with
test
results.
What's
out
of
scope
for
us
is
actively
writing
tests
actively
fixing
tests
or
actively
troubleshooting
tests,
because
we
believe
this
to
be
the
responsibility
of
the
individual
tests
feature
or
sub
project
owners.
C: So we are happy to act as an escalation point of last resort if it becomes clear that somebody's test is actively harming the project, but we don't necessarily believe it's our place to fix everybody's tests because they've refused to write them well, or they're not keeping track of them. So: while we do have an interest in the project's CI signal being very healthy, that's not something that we feel it is our responsibility to actively maintain, because the CI signal is basically the results of individual tests.
C: I would like us to eventually live in this world where we follow the revolutionary concept of refusing to release Kubernetes until everything is great. It's a simple concept — it's kind of tough to pull off, though. We're going to continue to work on kind, to make it more robust for local development. We are working on upstreaming an image to allow you to run conformance tests: thus far conformance testing has been largely driven by Sonobuoy, and they have been running an image built by Heptio.
C: We are working to get that image upstream, so that you can schedule the image in your cluster and it tests your cluster for you, inside of your cluster — the call comes from inside the house, that sort of thing. We're not sure if this is the ideal way to test a cluster, because we believe that conformance tests are really about the end-user perspective, and generally the end user does not live inside the Kubernetes cluster. But it is an easy way to get a quick sanity check.
We
have
a
community
member
who's,
doing
a
refactor
of
the
e2e
framework
inside
kubernetes
repo.
It's
a
pretty
large
effort,
we're
really
thrilled
with
it
we're
glad
somebody
is
doing
this.
It's
an
example
of
a
code
base
that
has
grown
organically
over
time
and
hasn't
had
a
lot
of
owners.
So
we're
really
appreciative
of
this
and
really
trying
to
help
provide
our
knowledge
and
their
bandwidth
where
possible.
Although
I
said
we
are
not
in
charge
of
the
CI
signal,
we
do
have
an
interest
in
maintaining
it,
and
also
in
my
capacity
as
like
a
release.
C: ...team shadow, I'd like for us, as a community, to agree that jobs that have been continuously failing for more than 120 days are jobs that are probably not worth our time. I don't think it's a good use of our money, and I don't think it's a good use of our resources in terms of troubleshooting. I view the fact that they've gone unmaintained for 120-plus days as a sign that nobody really cares about them anyway.
C
We've
also
made
sure
that
the
exact
same
set
of
jobs
are
used
to
block
releases,
whether
it's
the
master
branch,
the
release,
112
branch
or
the
release,
111
branch
and
so
on,
and
so
forth.
I'd
also
like
to
propose
that
we
use
test
grid
alerts
a
little
bit
better,
so
one
example
of
two
groups
that
use
it
today
would
be
the
folks
who
work
at
Google
who
are
part
of
cig
testing.
Here's
an
example
of
us
getting
alerted
if
the
triage
job
stops
feeding
in
new
data.
C: Here's an example of SIG Network having alerts sent to the SIG Network test-failures Google Group if the GCE ingress test fails. This is great. This only really works if your tests are passing and then they start to fail, so that you know they went from green to red and you should do something about it. If your tests are always failing, it's not quite as helpful, because we don't have great flake detection. But we can do things like only...
C
If
the
tests
haven't
have
been
passing
for
I'm,
sorry
only
send
an
alert
if
tests
have
failed
more
than
n
times
in
a
row.
So
maybe
if
it's
a
flake
it'll
pass
the
next
time.
But
if
it's
failed
five
times
in
a
row,
you
probably
want
to
get
an
email
about
this
we'd
like
to
propose
that
all
SIG's
use
Google
Groups
dedicated
to
receiving
these
alerts
and
we're
going
to
propose
that
no
test
should
be
allowed
to
block
the
release
unless
it
has.
C: We'd like to do some improvements to our version of kubetest. So the thing that we call kubetest is something that's responsible for — consistently, in the same workflow — standing up a cluster, running tests against it, tearing it down, and getting results from that cluster. Its user experience could be a lot better, and we're talking about how to improve that. Finally, something that's coming down the pipeline that we are currently prototyping: the test-infra repo is hashing...
C: ...the contents of the tree of a PR, so that when we LGTM it — let's say, like, you push a PR that's got, like, 10 commits, and then you're just adding commits as you're addressing review comments. You address all those review comments, somebody says "sounds great, LGTM," and then you squash the PR down to one commit — and you lose your LGTM. With this new feature, you don't lose your LGTM, because the contents haven't changed, just the commits.
C: If you want to help out or contribute, please come to our Slack channel — we use not as many emojis as some other channels, but a fair number of them. I said I'd give you all a good first issue to take a look at. So: there's this great anti-pattern in a lot of the e2e tests in the project where people write out Expect(err).NotTo(HaveOccurred()) with no message, which is super unhelpful when it comes to troubleshooting, because a lot of our tests end up failing with the string "timed out waiting for the condition."
C: This is incredibly unhelpful for tracking down flakes, identifying failures, and figuring out what went wrong, why it went wrong, and how to fix it. So, rather than submit a typo fix: if you just search through the Kubernetes codebase and find this string, all you have to do is add a little message that says what you were doing when you expected the error not to happen. Were you waiting for a pod? Were you waiting for a service?
C: Were you trying to create a load balancer? Things like that really add a lot of value, and are just as much effort as a typo fix. If you're in another SIG: please fix your broken tests — it would make the whole project move a lot faster. And try using /meow and /woof on your PRs sometimes; it's really great. Finally, those are the chairs for SIG Testing and all the places you can come talk to us. I'm sorry — I'm sorry I ran way, way long on all of that. But does anybody have any questions?
C: Last thing — announcements. So, I am actually the last scheduled host for community meetings. If you would like to sign up to host future community meetings, come on down to the SIG Contributor Experience channel on Slack, or get in touch with Jorge or Paris, who are in charge of making sure the community meeting runs well and smoothly — and it's this great, fun place for us to hang out. Also: tickets are really running low for the contributor summit at KubeCon Seattle.
B: Can you hear me? (Yes.) All right — I might have a lot of background noise. So, the Kubernetes Outreachy internships just went live today. This is a third-party internship, meaning it's not necessarily in-house; it would be through Outreachy, so the potential interns would apply through there, go through their application process, and then they would do a first-time contribution.
B: So if you see an increase in typo fixes and things like that, it's because they're not seeing anything else that they can work on. But the three internships that are live are with the client libraries, with Brendan Burns, and the other one is a developer guide with me and Nikhita and the contributor experience family — like, that's going to be cool. It'll start in December and go until March, so more information about that soon.
H: Will definitely do. I just want to call out SIG Testing and your contributions as well, for all of the amazing things that were on that list. Those have such far-reaching and important impact across the project — people just interact with these things day to day and don't necessarily realize how big those impacts are — but the thousands of hours saved by that work, and the amount of care and time and diligence, is just incredible.
I: To riff off on that, and do a shout-out for Erick Fejta and Cole Wagner, who seem to be almost always on Slack and available to answer questions in a friendly, cheerful way. It just makes the whole community feel great and safe, and they're always happy to explain how things work when I'm trying to go through the docs and bring them up to date. So I really appreciate that.
C
You
know
that
they're
here
right
now,
but
I
will
happily
pass
that
on
when
they
show
up
in
the
office.
Thank
you
very
much
I'd
like
to
take
that
sake,
testing,
love
and
turn
around
to
to
folks
who
have
showed
up
to
a
couple
meetings
and
have
asked
how
they
can
help
out
and
then
have
gone
ahead
and
helped
out
so
Patrick
Olli
I
believe
works
for
Intel
working
on
the
E
to
E
framework
refactor.
Like
I,
said
it's
a
really
huge
effort.
It
is
non-trivial.
C: I also think Matt Hicks has done a tremendous job of doing improvements that sort of overlap between contributor experience and testing. He's the guy who's been responsible for, like, adding labels on a per-repo basis; he's the guy who added that LGTM tree-hashing functionality, which we will be rolling out to other repos soon; and he's also working on flipping around the logic on "not ok-to-test" versus "ok-to-test," because that's been super confusing for a lot of contributors — we'd rather have positive state in labels than negative state.