From YouTube: Kubernetes Community Meeting 20171019
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
A: All right, greetings. Today is Thursday, October 19th, 2017. This is the community meeting. It'll be posted on YouTube, so please be mindful that we're being recorded, and please go on mute if you're not talking. My name is Matt Ruzek; I'll be the moderator today. I work for the scale and performance team at Red Hat, and I participate in SIG Scale.
D: ...so you get all the transitive dependencies; you don't have to pull those in yourself. It makes writing tests economical — really simple and easy — by automatically bootstrapping test stubs that cover things like storage and reconciliation loops and make sure they run, and it makes the development cycle easy by automatically standing up the controllers and server for you. I'll just give a quick note on the status before jumping in: this is just a conceptual exploration, it's not a product, and it's mainly looking at what we could do with tooling right now.
It's built on API extension servers, which are a different technology than CRDs and a little more heavyweight. All right.
So I'm just going to jump in right here. I have a terminal up, and I'm starting out in this empty directory with a boilerplate file. The boilerplate has a copyright header in it, which will be prepended to all the files that are generated. The first thing we do is run an initialization command, and this sets up the initial directory structure.
All the vendor files are right there, and okay — this is a good set of vendored files. The second thing we're going to do is create a resource. Frameworks like Rails have commands that let you create resources with a single command and generate a bunch of stubs for you, so we have something similar here.
I'll
run
that
and
then
the
last
piece
is
to
go
ahead
and
run
the
server.
So
there's
a
bunch
of
stuff,
that's
hidden
individual
commands
under
this,
but
I'm
just
going
to
say
from
the
server
locally.
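The exact commands typed in the demo aren't captured in the recording; as a hedged sketch, the apiserver-builder workflow described above looks roughly like the following (command names and flags are taken from the kubernetes-incubator/apiserver-builder docs of that era and may differ from what was actually run, and the domain, group, and kind are placeholders):

    # initialize the project layout, vendored dependencies, and boilerplate-stamped files
    apiserver-boot init repo --domain example.com

    # generate stubs for a new resource: types, strategy, tests, controller, and a sample
    apiserver-boot create group version resource --group demo --version v1alpha1 --kind Foo

    # generate code, compile the apiserver and controller-manager, and run them locally with etcd
    apiserver-boot run local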
While this is building — it's going to take a little while to generate all the code; it has all the code generators packaged with it — it's going to generate clients and the other generated pieces, then it's going to compile the API server and compile a controller manager, and then it's going to run them along with etcd.
So while it's doing all that, I'll just open up the code that was generated. This directory was generated with the types.go file here, and it just has a simple stub that shows you where you can start dropping in fields. It gives you a simple validation and a simple defaulting function, and these exist just so that when the server's running you can see that they're run — it makes for a nice learning experience.
These actually exist by default, so I could delete them and it would still work. You can see that they're attached to this foo strategy, and the strategy exists with empty defaults for all the storage strategy functions. There's also a test. This test actually works: it goes ahead and stores the object, reads it back out, deletes it, and then makes sure it's gone. So you can add, right to this test here, your defaulting logic tests and your validation logic tests.
It lets you just start filling in your logic right there if you like. This will give you a client set — one for the core Kubernetes APIs and one for your own APIs — so you don't need to set any of that up. There's also a controller test; the empty one just checks to make sure the reconciliation loop runs, and fails if it doesn't.
You can add your own logic there. The last piece this does while it's compiling is generate an empty example for you, which will appear in the reference documentation, so you can fill it out with a specific example that shows how to use your object. You can see here it's finished compiling, it's generated all the code, it's running an API server, and it prints out this little command.
That tells you which kubeconfig to use, and it prints out the API versions — you can see it has our API — and you can run kubectl apply with the sample foo. The goal here is just to make it really simple and easy, so users don't trip over tasks that can be generated; we go ahead and create an empty sample for you, so as soon as your API server is up and running you can go test that it actually worked.
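As a hedged illustration of that last step (the sample file path and resource name below are assumptions based on the "foo" example in the demo, not something captured verbatim in the recording):

    # point kubectl at the kubeconfig the local apiserver printed out, then apply the generated sample
    kubectl --kubeconfig kubeconfig apply -f sample/foo.yaml

    # confirm the aggregated API is serving the new resource
    kubectl --kubeconfig kubeconfig get foos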
All right, let's see what's here, and I'll go ahead and build the docs. I have this over here — you can see they just finished generating. This is what the reference documentation looks like, and on the right-hand side you can see where the sample got dropped in; when you fill out the sample, it goes in there. It also lets you run it aggregated through Minikube. The way I ran it just now was on its own, but I can do this instead.
This will tell it to run again. I'm passing a build flag that says don't build, since I already built everything. It'll ask me for my password, since it's running on port 443, and now it's running using my local Minikube instance, with the local binaries on my machine actually running and Minikube connecting to them. So now I'm running kubectl.
This time you can see I didn't give it a specific kubeconfig; it's just using the default one that Minikube made, and you can see it aggregated — right here you can see the API showing up. It does a couple of other fun things that we're exploring, such as Bazel: support for Bazel builds, using Gazelle to automatically generate the BUILD files, which results in much faster development iterations — down from something like a minute per build to a fraction of that.
So that's my demo. One useful thing right now: you can see in the logs it says it's defaulting fields, and then it prints out that it's running the reconciliation loop for foo, so it's actually kind of an easy way to explore the relationship between the API server and your controller and how they interact.
A: All right — Anthony, can you give us the 1.9 release update?
C: This is Anthony. We're just getting started with the 1.9 release team; I'm trying to get the team together for the first release team meeting next week. I've started a thread on the SIG Release mailing list, so if you're on there, please go look for that and join the release team. Other than that: next week, according to the proposed schedule, we have feature freeze coming up, so I'm going to send an announcement for that.
B: Awesome, because I'm literally still working on my slides, so this will be great, everybody — I'm incredibly prepared. Are the slides in yet? Yeah, yeah, we'll do it live. Back in mid-September I started to draft slides to brag about all the awesome stuff that's happened in Q3 for SIG Testing. I'm going to try to abridge that, and I'm going to try to walk through a couple of demos of things.
Some of what I say may not reflect the state of things today, and if somebody knows better and can correct me, I'd greatly appreciate it. With time at the end, I'll just roughly hand-wave where SIG Testing would like to head in the 1.9 timeframe. With that, I've posted a link to the slides in the doc. If you can't see them, just let me know; I will turn these into something a little more finalized. Okay, I might need to adjust permissions. Yeah. So.
Maybe this is all old hat to many of you. We have a Google Doc with most of our plans — issues that we're going to chew through for 1.9 — and I'm in the midst of turning those into GitHub issues and adding them to the 1.9 milestone in the kubernetes/test-infra repo. So, just walking through a couple of things that are in 1.8: we now have a project called Boskos, which is basically in charge of leasing GCP projects from a pool of empty GCP projects.
B
Many
of
you
probably
weren't
aware
that,
in
the
bad
old
days
about
four
months
ago,
we
had
to
have
a
Google
compute
project
per
job.
So
anytime
somebody
wanted
to
add
a
new
job
to
our
CI
infrastructure.
They
had
to
get
a
bug
there,
any
person
or
figure
out
how
to
get
that,
get
a
new
Google
project
and
then
figure
out
how
to
like
tune
that
make
sure
everything
was
set
up
correctly,
and
we
now
pretty
much
just
check
out
a
project
from
the
pool
run.
B
The
you
know
stand
up
the
cluster
in
there
run
tests
against
it.
If
the
project
has
been
checked
out
for
too
long,
it
gets
reaped
if
the
project
didn't
get
cleaned
up
appropriately
by
the
tests,
because
they
exploded
MIT
run.
There's
something
else:
that'll
go
back
and
clean
that
up
and
projects
are
generally
cleaned
up
before
they
are
put
back
in
the
pool.
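That acquire/clean/release cycle is essentially Boskos's whole job. A very rough, hedged sketch of the lifecycle follows (Boskos exposes a small HTTP API in kubernetes/test-infra; the endpoint paths and parameter names below are assumptions from memory, not quoted from the talk):

    # lease a clean GCP project from the pool for this job
    curl -X POST "http://boskos/acquire?type=gce-project&state=free&dest=busy&owner=my-ci-job"

    # ... stand up a cluster in the leased project and run the tests against it ...

    # hand the project back as dirty; a janitor cleans it before it returns to the free pool
    curl -X POST "http://boskos/release?name=leased-project-name&dest=dirty&owner=my-ci-job"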
B
So
sand
put
this
together.
He
did
a
presentation
in
cig
testing
a
little
while
ago.
One
of
the
things
I
personally
liked
a
bunch
is
that
right
out
of
the
gate,
he
made
sure
there
was
a
dashboard.
So
you
could
see
what
Vasquez
was
doing
in
terms
of
the
number
of
projects
that
are
dirty
or
free
or
available
of
different
types,
and
you
can
sort
of
see
there
little
gaps
here.
So
here,
for
example,
where
something
maybe
kind
of
went
wrong.
Maybe
phosphorus
went
down,
maybe
we're
having.
B
We
were
having
info
related
issues,
but
she
could
pretty
quickly
see
that
like
if
all
the
jobs
were
failing
for
some
reason.
It
might
be
because
the
number
of
dirty
projects
like
went
up
and
then
when
we
fixed
the
problem
and
redeployed
dirty
projects
all
about
clean
and
we
were
sort
of
back
to
steady-state,
I
love
being
able
to
see
steady-state.
Gubernator — you've probably all interacted with this at some point, day in and day out. It displays test results from various jobs. Here's an example link to a Gubernator page where I wanted to highlight a couple of the new things that have happened lately: where did that failure output come from? Well, it came from this particular test. That's cool!
B
If
I
want
see
the
standard,
error
or
standard
out
of
the
tests
as
they
were
running
instead
of
having
to
go
click
on
build
log
and
look
through
like
12
to
16
megabytes
of
stuff.
I
can
just
click
down
here
and
it
will
expand
and
I'll
actually
get
to
see
some
of
the
logs
from
that
particular
test.
I
scroll
down
way
further,
because
that's
a
lot
of
logs
I
can
also
see
all
of
the
tests
that
were
skipped
in
this
particular
run.
I
can
also
see
all
of
the
tests
that
passed
in
this
particular
run.
B
It
can
also
up
top
here
see
that
two
tests
failed
333
succeeded.
Most
of
these
are
improvements
that
came
from
either
Clayton
Coleman
from
Red,
Hat
or
Brian
from
Google.
I
can
also
click
this
button
here
to
see
recent
runs
of
this
particular
job,
although
tester
it
kind
of
does
the
same
thing
and
I'll
get
to
that
later.
Let's look at one that's failed, for whatever reason. Just while I'm here: a number of people will often ask what jobs are required for a pull request to get merged. The job names down here that have the word "required" next to them indicate which of the contexts are required for merge.
B
There's
also
a
context
here
by
the
submit
queue
which
will
also
tell
you
like
what
it
thinks.
The
next
concrete
step
is
so
in
this
case
it
knows
at
least
one
test.
That's
not
green
right
now,
but
it'll
also
update
this
if
it
needs
like
if
the
pull
request
needs
to
be
rebased
or
if
somebody
needs
to
add
a
LG
TM
or
the
label,
so
I'll
click
through
on
the
details
here
to
get
to
a
gerber
nadir
page.
So
all
right,
this
looks
pretty
familiar
right.
B
If
I
click
on
this
pull
request,
number,
then
I
can
sort
of
see
the
history
of
all
jobs
that
ran
for
this
particular
pull
request,
and
so
I
chose
a
for
example,
but
if
I
found
something
that
had
been
test
over
time,
I
would
see
these
results
sort
of
March
out
with
various
greens
and
Red's
as
we
move
on
it's
a
really
handy
place
to
see
results
for
all
jobs
that
were
kicked
off
from
your
pole,
request
all
right
and
I
lost.
My
slides
already
look
at
that
we're
doing
great
here.
So — for those of you who are familiar with hack/e2e.go, it's now just a shim around go-getting kubetest and running it. We've migrated away from as much horrible bash as we can to something that's actually written in Go, and there are different plugins, or deployers, that can be implemented for more than just the scripts that live in the cluster directory in kubernetes. There's one for kubernetes-anywhere, there's one for kops, there's one for GKE — things of that nature.
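A hedged sketch of what driving one of those deployers looks like (kubetest lives in kubernetes/test-infra; the flag spellings are assumptions and the deployer/provider values shown are only examples):

    # bring a cluster up with a chosen deployer, run the e2e tests against it, then tear it down
    kubetest --deployment=kops --provider=aws --up --test --down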
If any of you are familiar with Bazel — Bazel, I still don't know how to pronounce it, sorry — it doesn't have the greatest cross-compilation story right now. It's way, way better than it was three months ago, and we're rapidly hoping to move towards a world where it's the de facto way of building and running Kubernetes and all of the tests and infrastructure around Kubernetes, because it does a fantastic, magical job of caching everything and running only what needs to be run.
B
Everything
and
running
only
what
needs
to
be
run
so
been
the
elder
from
and
g/km
squad
team
put
together.
This
script
called
planter,
which
is
basically
just
a
shell
script
around
running
basil
inside
of
docker,
and
it
definitely
works
on
Linux
and
Windows.
Sorry,
Linux
and
Mac
I
believe
it
also
works
in
Windows,
but
not
like.
You
can't
run
basil
natively
on
windows
and
it's
definitely
exercised
against
kubernetes,
kubernetes
and
kubernetes
tested
through.
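A hedged example of the Planter wrapper mentioned here (the script lives under planter/ in kubernetes/test-infra; the exact path and arguments are assumptions):

    # run any bazel command inside a Docker container so you don't need a local Bazel install
    ./planter/planter.sh bazel test //...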
B
We
are
very
interested
in
data-driven
decisions
and
metrics
about
our
tests.
We
have
a
program
called
kettle
which
is
basically
responsible
for
scraping
all
of
the
test
results
that
are
stored
inside
of
Google
Cloud
storage,
buckets
and
turning
that
into
something
that
gets
stuffed
into
a
bigquery
data
set,
which
we
can
then
use
to
generate
useful
metrics,
Cole
Widener,
recently
updated
the
set
of
scripts
that
we
use.
So
I
can
show
you
this
wonderful
dashboard
and
ask
you
to
help
out.
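A hedged illustration of the kind of metrics that BigQuery data enables (the dataset name and column names below are assumptions, not something stated in the meeting):

    # count failing runs per job from the kettle-populated dataset
    bq query --nouse_legacy_sql '
      SELECT job, COUNT(*) AS failures
      FROM `k8s-gubernator.build.all`
      WHERE result = "FAILURE"
      GROUP BY job
      ORDER BY failures DESC
      LIMIT 10'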
B
So
the
two
graphs
up
here,
I
wish
I
could
better
articulate
the
math
behind
them
and
essentially,
if
you
want
to
know
the
nitty-gritty
of
what
these
mean,
please
come
to
the
sig
testing,
slack
channel
or
YouTube
can
screen
about
statistics
and
test
results,
but
essentially
these
roughly
show
the
chance
that
any
given
single
pull
request
or
commit
will
flake
arbitrarily
and
when
I
say
flake
I
mean
same
commit
passes
or
fails
continuously
so
anytime.
Somebody
like
slams
retest
without
actually
updating
their
pull
request.
That's
going
to
count
towards
this
anytime.
B
Somebody
continually
adds
new
commits
to
their
pull
request
that
doesn't
necessarily
count
as
flaky,
because
their
code
could
have
legitimately
failed
because
of
bugs.
So
a
good
thing
to
see
here
is
that
these
lines
have
generally
kind
of
calmed
down
a
little
bit
and
they're
starting
to
go
down
into
the
right.
So if anybody out there wants to help with the stability and velocity of the community as a whole and wants to know what's the flakiest test that the most people are running into: it's this one — it's at the top and it's at the left. If you want the next one after that, you're going to have to do some math, but it looks like the GCE Bazel job is most often failing on this SIG Apps related test.
B
So
if
you
are
a
member
of
sig
apps
and
know
what
the
test
is,
that
ensures
that
job
should
run
to
completion
when
tasks
sometimes
fail.
That
are
not
locally
restarted.
You
would
be
a
super
awesome
hero.
If
you
fix
this
test,
you
would
reduce
the
number
of
flakes
and
more
of
us
will
get
our
code
merged
more
frequently.
Next
to
tables
down
here
show
the
longest
failing
PR
jobs,
the
number
of
days
they
have
been
failing
in
a
row.
Many
of
these
are
not
actually
blocking
PR
jobs.
Okay — mungegithub. Mungegithub, I have talked about before; it's the thing that sweeps through GitHub, polls things, and then does things to them. It's deprecated: no more mungegithub, only Prow, please. It still does a number of things, so we have this issue here where the functionality that each munger has, we're attempting to re-implement as either a Prow plugin or a standalone program.
B
Excuse
me
proud
that
said,
I
will
go
to
brief
on
real,
quick
and
show
that
we
do
have
metrics
on
munch
github
still,
so
we
split
up
munch
github
into
at
least
for
the
main
urban
eddies
repo.
We
split
that
up
into
the
submit
queue
instance,
which
is
the
one
running
in
green
and
the
misc
commanders
instance,
which
runs
everything
else
so
like
blunderbuss
responsible
for
assigning
people,
approval
handler
responsible
for
putting
that
comment.
B
That
says
who
has
yet
to
approve
your
pull
request,
closing
statements,
putting
the
needs
rebase
lis
labeled
when
a
PR
has
been
updated
or
when
a
merge
happened
that
caused
that
PR
to
need
a
read
like
all
of
that
stuff
is
run,
miss
plungers,
so
that's
less
important
to
us
than
making
sure
your
code
against
merged
as
quickly
as
possible.
So
you
can
generally
see
here
like
this
little
notch
over.
B
Looking
thing
where
turns
out,
kubernetes
seems
to
be
worked
by
a
lot
of
day
job
people,
because
we
have
a
pretty
nice
consistent
Monday
through
Friday
hub
of
merge,
pull
requests,
ok
same
story
with
Jenkins.
If
you
have
any
issue,
I
won't
click
through
to
it,
but
hopefully
we're
doing
the
right
job
of
reaching
out
to
people
who
still
have
jobs
that
are
running
on
Jenkins.
We
would
like
to
end-of-life
Jenkins
by
the
end
of
1
9,
or
shortly
after
the
1
9
timeframe.
So we are highly motivated to help unblock you. Generally speaking, the jobs that are still on Jenkins are jobs that are tricky or corner cases that might not be easy to lift off, and we want to identify those blockers and help accomplish that. I just chatted with SIG Scale this morning to make sure we're on track for moving the scalability and Kubemark jobs off.
B
So
this
thing
proud
that
I
keep
talking
about,
gets
the
thing,
that's
responsible
for
handling
github
events
and
doing
all
the
bot
commands
for
kubernetes
that
you
knew
and
loved.
It's
got
a
pretty
decent
readme
here.
That
explains
the
various
sort
of
micro
services
that
implement
prowl
meat,
now
sort
of
have
an
announcements
section,
because
we
have
folks
from
OpenShift
and
folks
from
sto
who
are
also
using
their
own
deployments
of
Kraus,
and
we
are
starting
to
try
to
make
sure
we
break
things
or
change
things
that
people
are
aware
of
that.
B
If
you
would
like
to
get
started
using
prowl
in
your
home,
there's
a
document
here
that
describes
sort
of
how
to
use
gke
to
turn
up
a
cluster
LUN
prowl
in
your
cluster
modify
plugins
file
to
enable
certain
plugins
for
your
repos
of
choice.
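As a hedged sketch of that last step, per-repo plugins are enabled in Prow's plugins config; the repo name and plugin list below are placeholders, and the exact schema may have changed since this meeting:

    # write a minimal plugins.yaml and load it into the cluster Prow reads it from
    cat <<'EOF' > plugins.yaml
    plugins:
      myorg/myrepo:
      - trigger
      - lgtm
      - approve
    EOF
    kubectl create configmap plugins --from-file=plugins.yaml --dry-run -o yaml | kubectl apply -f -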
I believe this is also being used by the TensorFlow team. I'm probably running long on time here, so I'll just quickly mention the things that were described in Eric's email: if you put a /hold on your PR, a do-not-merge label will be applied to your pull request.
Similarly, you may in the past have put up a pull request that's a work in progress, but you just want to see the bot run tests so you can see how it's doing. You might have prefixed that PR title with WIP, or put it in brackets or parentheses. We have a plugin now that automatically notices that and will apply a do-not-merge label, and it will also notice if you change the pull request title and will remove the do-not-merge label.
B
The
ability
for
prowl
to
send
messages
to
slack
the
most
effective
way
we
found
to
use
that
right
now
is
to
message
the
kubernetes
dev
channel
when,
whenever
somebody
manually,
we
would
anticipate
that
manually,
merging
and
commit
has
a
human
is
a
really
extremely
rare
event.
Generally,
if
we're
doing
this
right,
while
making
sure
that
all
the
automated
testing
works
and
it's
the
automated
testing
that
merges
stuff,
we
are
identifying.
There
are
cases
where
this
falsely
triggered
so
release
coordinators,
who
are
manually
merging
in
or
doing
cherry-picks.
B
That
seems
to
trigger
it
trying
to
revert
a
bull
request
through
github
zui
that
seems
to
trigger
it
and
we'd
appreciate
help
in
reducing
the
noise
of
this
to
make
sure
that
we
do
appropriately
notify
and
say
hey.
This
is
kind
of
unusual
versus
hey.
This
is
just
you
know
the
release
manager
doing
their
job
fast,
forwarding,
a
branch
or
something
I.
Don't
have
too
much
time
to
talk
about
this,
but
I'll
just
say
that
we
are
planning
on
replacing
the
submit
queue.
B
It's
slow,
it's
really
complicated,
there's
a
lot
of
questions
about
what
is
it
doing?
Where
is
my
pull
request?
In
it,
we
have
a
high
confidence,
II
realized,
it
certainly
seems
like
we
could
be
merging
multiple
things
at
once.
I
will
again
just
sort
of
defer
to.
We
have
an
issue
in
testing
fro.
We
have
turned
on
a
replacement
for
the
submit
queue
in
the
testing
for
repo.
It
essentially
uses
github
queries
to
say,
hey.
B
What
are
all
the
pull
requests
that
seem
like
they've
got
the
right
labels
on
them
and
all
the
tests
seem
to
pass
and
it
looks
like
they
are
cleanly
marginal.
Can
we
just
like
gather
up
all
of
those
and
just
get
right,
run
the
tests
one
more
time
and
then
merge
those?
This
describes
how
it's
done
a
batch
as
well
as
serially.
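A hedged illustration of the query-driven configuration that replacement (Tide) uses — the field names here follow later Prow documentation and are assumptions about what the 2017 config looked like, with a placeholder repo:

    # a tide query: PRs in these repos with lgtm+approved and no blocking labels are merge candidates
    cat <<'EOF' > tide-snippet.yaml
    tide:
      queries:
      - repos:
        - kubernetes/test-infra
        labels:
        - lgtm
        - approved
        missingLabels:
        - do-not-merge
        - needs-rebase
    EOF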
B
This
supports,
though,
in
multiple
rico's,
not
just
kubernetes
kubernetes,
with
one
deployment,
and
this
supports
with
multiple
branches
at
launch,
which
should
play
nicely
with
the
desire
to
start
using
feature
branches
for
development
and
once
I
share
the
slides.
You
can
also
click
through
to
the
design
dock,
which
sort
of
describe
the
motivations
for
why
we
think
the
existing
submit
queue
is
not
that
great
and
we
wish
to
remove
it
operationally
test
grade
is
probably
my
favorite
thing
from
this
group.
It
may
look
a
little
different
than
the
last
time
you've
seen
it.
I'm looking at the summary tab right now, which has little icons that show which jobs are considered flaky — which might be normal for a pull request job, because pull request jobs aren't always going to pass, since people aren't always going to submit perfect code — and which jobs are failing consistently. It looks like the kubeadm GCE job is failing, and hopefully there's one that's passing... let's look. Cool.
B
Nobody
TV
has
actually
been
consistently
passing
overtime
and,
if
I
click
through
to
that
I
can
see
it'll
take
a
little
while,
but
this
shows
like
the
past
24
hours,
I
think
test
results
across
all
pull
requests.
So
I
find
this
is
generally
an
easier
method
of
displaying
this
stuff,
as
opposed
to
Google
nature.
What's the general stability of all the tests? Our presubmits are pretty flaky right now. I have in the past seen a lot more green, so if I were to just quickly summarize: our project is a little unstable, or at least more unstable than it was. We're also going to be using — I think Testgrid actually supports it, but I don't know how to use it yet — the concept of having Testgrid email you if your test is failing.
Finally, triage — sort of an alternate view of: if I wanted to fix something, what's the most important thing I could fix? It basically attempts to cluster together failure text from across all jobs and all the different test cases, so you can see it sorted by quantity. This is, generally speaking, the test failures over the past week, and then as I scroll down I can see particular test failures. I'll just pick this one — an "expected endpoints map, got..." failure.
B
One
I
can
see
which
jobs
and
failed
in
sparklines
aren't
really
telling
me
that
it's
separated
to
one
job,
just
most
of
the
GCE
cos
cos
they
have
jobs,
seem
to
be
having
issues
with
this
and
they've
been
having
issues
with
it.
Pretty
consistently
over
the
past
week,
so
this
is
another
place
you
can
quickly
see
like
in
aggregate.
What's
a
test
failure.
When
did
it
start?
Where
is
it
happening?
B
This
might
help
you
quickly
triage
or
isolate
where
to
go
next
for
troubleshooting,
okay,
so
I
think
you
got
the
message
that
for
v19
we'd
really
like
to
end-of-life
all
the
things.
Well,
we
mainly
just
want
to
end
of
life
Jenkins
and
lunch.
Github
we'd
also
really
like
to
end
like
submit
and
of
like
submit
queue
and
turn
that
into
time,
which
is
just
another
proud.
Plugin
we'd
like
to
support
a
very
few
set
of
very
well
scope.
We'd like to have more of a support policy in place, to make sure that if the submit queue is broken, we know what the escalation policy looks like: who do you contact first? How do you know they've acknowledged that contact? How do you know the problem is being worked? How do you know what's happening next? We have been in talks with Jaice DuMars to put together a document.
B
Look
for
things
like
a
status
dockets
that
IO
page
look
for
things
like
more
active
people
in
slack,
look
for
a
potentially
and
label
that
gets
added
to
certain
issues
that
denotes
them
as
really
have
to
drop
everything
on
work
on
this
right
now.
Look
for
a
testing
optional.
In
slack,
where
alerts
will
start
going
from
various
pieces
of
our
testing
for
structure.
B
If
we
notice
that
it's
starting
to
get
unhealthy
so
instead
of
humans
having
to
proactively
look
at
the
submit
queue
and
various
other
dashboards,
we
could
actually
have
alerts
sent
to
that
channel
if
conditions
are
met.
Look
for
just
general
operational
transparency
in
general,
I,
love,
dashboards,
I
love,
seeing
how
it's
all
working
and
what
the
what
the
steady-state
looks
like
and
we'd
like
the
rest
of
the
community,
to
be
able
to
see
that
as
well
and
finally,
we'd
really
like
to
enable
more
support
by
non-googlers
I
mentioned
earlier.
B
That
SEO
and
OpenShift
are
using
this.
So
we
have
a
community
of
people
who
are
knowledgeable
about
the
infrastructure
that
drives
kubernetes,
but
at
the
moment
they
can't
really
actively
support
it,
because
this
is
all
running
on
infrastructure.
Behind
Google's
walls.
We
are
working
with
the
CNC
F
to
find
a
way
to
open
the
pool
to
people
who
are
interested
and
are
trusted
to
actually
help
go
on
some
kind
of,
dare
I,
say,
on-call
rotation
to
make
sure
that
the
project
is
flowing
smoothly.
B: What's going on with your dashboard — I did show you how there was a dashboard that showed the flakiest jobs and the flakiest tests within those jobs, and there was a fix-it within Google a couple of months ago to just sort of ad hoc attack tests. So you noticed I called out SIG Apps as owning the flakiest test — for the flakiest tests, is the idea to have somebody go around constantly saying, why haven't you fixed this?
H: Not thinking necessarily around shaming, just as much as: there are a ton of tests out there that aren't strongly associated with a SIG, and in those dashboards, in terms of what they cover, we don't have perfect coverage or assignment of tests to SIGs, right? So I think just getting a stronger relationship there might actually help move this forward. It's one idea.
E: I think it would be useful to review various metrics at this meeting on a regular basis, and we should figure out what those metrics should be — maybe they'll have to rotate, because there are so many metrics. But between these test metrics and the new DevStats dashboard, we're going to want to start improving on various KPIs of the project, and reviewing them at this meeting on a regular basis is going to be a critical part of that.
H: And I totally agree with Brian; I think that's a great idea. But I think having those things roll up to something that feels more personal — versus, like, hey, there's a test that's failing, you know, not my problem — right? If it lands on a SIG that you're closely associated with, it starts feeling like, hey, maybe that's something I should actually go fix. Yeah.
B: I mean, I think — we don't really yet have a policy in place that mandates that if a test is failing, a SIG has some sort of SLA where they have to fix that test within a certain time, right? We're still taking baby steps on how you even escalate and make sure that the submit queue being blocked entirely gets fixed.
E: I think one thing that usually helps with setting SLAs is also to understand the historical SLIs. So if SIG Testing could actually do some analysis, so we have some sense of what kind of SLA might be reasonable, then I think the next step would be for SIG Testing to actually propose the SLA.
B: I wouldn't mind seeing that applied at the job level first, before we start going to the granular test level. We have some general notion — roughly, in our heads — that a presubmit job that is run for every pull request, and blocks every pull request from merging, is something that probably should run in less than an hour and should probably be pretty non-flaky, for some value of non-flaky.
B
Similarly,
we've
had
noise
in
the
past
about
how
the
cops
job
doesn't
seem
like
it's
a
useful
candidate
for
blocking
all
pull
requests
if
nobody's
gonna
fix
it
within
24
hours
of
it
breaking
right
and
so
like
organically
there.
We
got
collaboration
with
insig
testing
and
cig
AWS
to
make
sure
we
better
understand
that
job
and
it's
more
actively
maintained
but
I
think
just
at
the
job
level.
We
should
start
with
those
essays
before
we
start
going
to
the
individual
test
case
level,
but
I
agree
with
regular.
B
You
know
pointing
out
of
the
top
three
say
like
when
Eric
was
mailing
out
regular
reports
of
the
top
three
flake
iasts
test
cases
and
then
going
out
as
a
human
being
and
talking
with
those
people
was
that
actionable
did
that
did
that
accomplish
stuff?
If.
E: If you can write down some of these guidelines — like, these kinds of jobs should run for no more than an hour, these kinds need to be 99.99% non-flaky, things like that — I think that would be a good start as well for informing decisions about which jobs are made blocking for merges or releases or whatever. Cool, okay. Thank you.
G: And we're thinking maybe we'll just skip to next week, because we had a 50-person — or actually, including the video conference, more like 80-person — two-day session that had a lot of good things going on, and we want to make sure that Service Catalog also has time. So it probably makes better use of this meeting time to give them maybe 10 or 15 full minutes now, and then we can give 10 or 15 minutes to storage next week. Okay.
I: So I'm going to look very underprepared by comparison. This will probably take maybe about five minutes. Just a quick recap, if anybody is not familiar already with what Service Catalog is: Service Catalog is an integration between Kubernetes and the Open Service Broker API. The Open Service Broker API is the descendant of the Cloud Foundry service broker API, and a service broker is an entity that handles provisioning.
Well, it offers a set of capabilities called services. Services can have many plans; a plan is a tier of a service. So the service broker is something that presents a certain set of services and their plans. Using the Service Catalog, you can provision new instances of these services. You can update — like, change plans — if the service is plan-updatable. So say you start out on just a starter plan, maybe with some low performance characteristics or whatever your service is, and you want to...
...after you make the front page of Hacker News or Slashdot, you want to move up to a higher-level plan that gives you more performance. That's something you can do. Once you provision an instance of a service, you can get credentials to use it in your application — that's called binding — and when you're all done, you can unbind from it and deprovision the service instance. So, just to drive that home with a quick example:
The canonical example we point to for this is a service that might be a database-as-a-service. When you provision an instance of this service, now you've got your database that you can use, and when you want to start programming against your database and use it in your application, you make a binding to it. That gives you back credentials that are put into a Kubernetes Secret, and then you can use them just like anything else that's in a Secret. That's essentially what it is from a functional standpoint.
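A hedged sketch of what that provision-then-bind flow looks like as Kubernetes objects (the API group/version, kinds, names, and field names below follow the service-catalog beta docs and are assumptions — they may not match the exact release being discussed here):

    # provision an instance of a broker-offered class/plan, then bind to it to get credentials in a Secret
    kubectl apply -f - <<'EOF'
    apiVersion: servicecatalog.k8s.io/v1beta1
    kind: ServiceInstance
    metadata:
      name: my-db
    spec:
      clusterServiceClassExternalName: example-db-service
      clusterServicePlanExternalName: starter
    ---
    apiVersion: servicecatalog.k8s.io/v1beta1
    kind: ServiceBinding
    metadata:
      name: my-db-binding
    spec:
      instanceRef:
        name: my-db
      secretName: my-db-credentials
    EOF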
Now, it's been about a year since we started this SIG and started working on the incubator repo, so things have gotten quite a bit easier, and things like kubectl have come a long way toward supporting resources that aren't part of the core. So I actually will be able to give a demo sometime very soon.
There's a demo on the incubator repo site, but it's a little out of date; I'm planning to re-record it now that — and here's where I buried the lede —
here's the big takeaway: we're getting ready to publish our first beta release. Recently, in the SIG over the last several weeks, we have started publishing release candidates for the 0.1.0 release. The last one was on Monday. I think it's likely that we'll maybe cut one more either tonight or tomorrow morning, and we're shooting to publish a final 0.1.0 release in the next couple of days.
I would love to have feedback if folks can try it out and report issues or bugs — and that's pretty much it from us.
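For anyone who wants to try it out, a hedged pointer (the chart location and Helm v2 flags follow the service-catalog repo's install docs of that era and are assumptions, not instructions given in the meeting):

    # install the service-catalog API server and controller-manager from the incubator repo's chart
    git clone https://github.com/kubernetes-incubator/service-catalog
    helm install service-catalog/charts/catalog --name catalog --namespace catalog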
I want to thank everybody that has contributed to the project, both inside the SIG and inside the incubator repo. A lot of the things that the API Machinery SIG has done, and a lot of the things SIG CLI has done, have really helped us out, and I just want to thank everybody that's contributed in whatever way, either code or reporting bugs. So thank you very much, everybody — it's really appreciated!
G: So we met last week; it was kind of an interesting SIG face-to-face. We have never really had more than maybe 20 people in a room — we had 50, and we had to cap it at 50. Thanks to Paris for finding a room, which was the reason why we had to cap it. I think the interest in what's been going on lately has been sort of phenomenal.
We probably could have gotten 80 people coming to the face-to-face if we could have found the capacity for them, which is something we're going to address for the next one. The agenda items came from a large range of things that we've got linked in the community meeting notes, but I think the biggest one was CSI and out-of-tree plugins, which we discussed at length.
I think we did take a lot of time explaining to people why it's not done yet and why it can't be done by the end of the year, because there's a lot of interest from vendors and end users in that. Brad, is there anything you want to hit before the end of today, just something you felt was another big highlight?
So there's a rundown of issues, or just topics, and each one — I think this was probably one of the most well-run face-to-faces that we've had. Usually people run out of time, and this time we managed to cover everything. A lot of this had to do with the fact that we set up a polling system for what people wanted to talk about before the agenda, and had voting done all within about five minutes of the meeting. We discussed local persistent storage.
Block storage was presented by Erin Boyd, and this again is about how we are not just relying on mounts and the POSIX API, but also looking into how to get raw block devices into the system and how to do this securely, which is probably one of the biggest challenges. Snapshots, I think, finally got over the hump of use cases that has been challenging it for the longest time.
Jing and a whole bunch of people have been working on this for a while — Jing, Brad, and more — and I'm hoping that in 1.10 we'll be able to get it in alpha. I think we finally found the tight use case, where we easily could have made a complicated API for snapshots.
Again, you can see here on the community list there's a bunch of YouTube links. I know that some people who couldn't make it have sat down and listened to all two days' worth of talks — so, much to my surprise, I know at least a few people, some of them not connected to anyone who was in the room, who actually did listen to the 16 hours of discussion on storage.
We are seeing a large uptick of interest from two groups of folks: vendors and startups that are showing up and want to make sure that they are container-ready and Kubernetes-ready for storage, and also a lot of folks who were previously involved in OpenStack who found that this was a good place to go look for the future of OpenStack — and, I think, the future of infrastructure overall — and I think storage plays a large part in that. The other thing was — it was a very good, you know...
I was very worried about the sheer number of folks we would get in that room, but we were able to have an orderly discussion, and I feel that not only were we able to go through the technical details — and we benefited from having people from many different walks of life in the room — but we were very much able to get the Kubernetes message across.
We definitely encountered questions — especially regarding StatefulSets and self-healing storage systems — that we know are still unanswered. So there's just a ton of work going on. You can go take a look at the list and you can see a lot of new faces that were in the room, new companies that were in the room, and it was great, because some of these new faces and new companies already knew about Kubernetes and had played with it in ways that maybe those of us developing it hadn't.
So, you know, I had high hopes and they were exceeded for this event. I don't know if anyone else has anything to say; it would be neat just to go to the YouTube session, see if there's something that interests you, and just click — and hopefully you'll find a good, healthy example of discussions going back and forth. Thank you.
F: Outreachy has officially launched. For those who don't know, Outreachy is a program that pays stipends to interns who may be underrepresented in the open-source community. It's a great program, and this is the first time that Kubernetes is a part of it; we do hope to continue. We did get in during the witching hour, though, so we only have one SIG participating — SIG CLI. Phil is actually going to be mentoring, and we're going to be picking one intern.
We already have several applications and it's only been live for a few hours, so it's definitely a popular and worthwhile endeavor for us. The CNCF is actually paying, and this is also their kind of beta test, if you will, to see if this would work with Kubernetes — so cheers to that. Also, contributor summit invites are definitely going out today, including the lottery, so look out for that one.
A: Right — Jorge, did you have anything before we go? Two minutes. Nope, he's good — office hours next week. So, all right, well, I wanted to thank the note takers, Ryan and Josh; please thank them for taking such great notes. There was a lot discussed. Sorry I had to cut the testing one short, but we wanted to get through everything. So I think that's it. Everybody have a good week, and we'll see you next Thursday. Bye.