From YouTube: Kubernetes Community Meeting 20160512
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Navops Demo; SIG Apps; SIG Scale, SIG OpenStack updates; merge queue challenges; 1.3 feature update
A
Good morning, all. This is the Kubernetes community meeting for Thursday, May 12. As I say every time, this will be recorded and posted on YouTube, so don't say anything you wouldn't want your mother to know you're saying. Today on our agenda, the first thing up is a demo from the team at Univa. So Rob, do you want to tell us a little bit about what you're going to demo, and about your partner in crime?
B
Hi, it's Rob, the lone one here with the camera; I'll just quickly flip on video. So, you know, we're going to demonstrate our Navops Launch product: show you how we build Kubernetes clusters on-premise or in the cloud, and we're going to show you a quick cloud-bursting demo.
B
Excellent, thank you. So just quickly, I won't go into a full-on advertisement, but just so you know who we are: we've been around for many years in kind of a different space, the HPC space. Univa has been building super large clusters for really big oil companies and life sciences organizations for many years, and we've been orchestrating workloads on them. We brought our technology over to the kind of Kubernetes container space over the last year, and we've been rebranding and repackaging these solutions for containers.
B
So what we wanted to talk about, and I won't get into the customers too much here, is the Navops Launch product. We came up with it last fall, and it's a very easy way to build a cluster. It's a UI-based solution that gives you a full Atomic, Kubernetes, Flannel, etcd, Docker cluster, all prepackaged: on premises on bare metal, or you can build in the cloud, on Google Cloud and Amazon, or you can even build a hybrid cloud.
B
[Unclear] did some testing on our product a little while back, and they saw that the benefit really comes down to this: versus a command-line approach, where you may have 200 steps to install a whole cluster, this is a simple UI. I'll just jump down and show you that really quickly; I'm not going to take a lot of time, and Sarah gave us a 10-minute limit, and I believe we can achieve that.
B
There we go, there we go. That was my fault. Okay, let me start really quickly back at the beginning. So here's Google Compute Engine, and you can see there's nothing running. I've just started up a master node, and I think you can see that now; if I go and refresh the screen on Google Compute, you'll see how easy it was to create a fully provisioned master node. That little delay we had probably allowed me enough time to actually get it started up and running.
F
Right, so we kind of skipped over the actual installation of Navops Launch, which is really just done from a script on the internet. It pulls down a set of container images and then launches some containers that run on a Docker engine, typically in your local data center, or on your laptop in Rob's case. Here we pulled down several containers; the first one that you saw there was the web UI, and also kind of a control container that holds the complete provisioning solution.
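A minimal sketch of the install pattern being described, a bootstrap script that pulls images and starts control containers on a local Docker engine; every URL and image name here is a hypothetical placeholder, not Univa's actual artifact:

    # Hypothetical bootstrap: fetch and run the vendor's install script
    curl -sSL https://example.com/navops-install.sh | sh
    # The script would then pull and start the control containers, roughly:
    docker pull example/navops-ui:latest            # web UI (placeholder image)
    docker pull example/navops-provisioner:latest   # provisioning engine (placeholder)
    docker run -d -p 8080:8080 example/navops-ui:latest
    docker run -d example/navops-provisioner:latest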
F
This has a lot of existing technology that we had developed over the last several years for provisioning cluster resources, both on bare metal, on local hypervisors, or in the various cloud providers. Alongside that container, we also have a container that hosts an operating system repository for actually doing local bare-metal installations. In that case it's really just an OSTree, since we're using Fedora Atomic.
F
It's
posting
a
HTTP
server
hook
thing
nos
tree
repository
that
is
then
I
served
up
through
through
you
know:
pixie
kick-starts,
that's
the
bare
metal
of
general
architecture
when
running
in
what
Rob
showing
here
get
some
on
the
cloud.
So
you
only
need
local
containers,
the
UI
and
the
kind
of
orchestration
solution
or
abridging
solution,
and
it
goes
and
it
talks
directly
to
GC
key
and
creates
the
resources
assess
them
out
through
the
cloud
in
it.
The
aku,
benetti's
installation
and
uses
the
Cuban
agents
that
are
provided
as
part
of
fedora
core.
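As a rough illustration of the OSTree-over-HTTP piece described here, using generic commands rather than Univa's actual tooling (paths and port are assumptions):

    # Initialize an archive-mode OSTree repo and serve it over HTTP (illustrative only)
    ostree --repo=/srv/ostree/repo init --mode=archive-z2
    ostree --repo=/srv/ostree/repo pull-local /path/to/build/repo   # placeholder source
    cd /srv/ostree && python -m SimpleHTTPServer 8000
    # A PXE kickstart would then point the installer at http://<host>:8000/repo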
B
This is the demo we showed at GCP Next; we're going to just narrate over the recording, just going through how this works. What we've done is commission the local cluster, which was running in a trade show booth, and we were spinning up instances in the cloud as the workload went up.
F
So, what's going on right here before we get started: we have the four windows. What has happened here is we've actually set up a Navops Launch installation with a single master and a single worker on a local Kubernetes cluster, and we've launched several workloads on top of that local infrastructure. You see there a little test application in the upper left, which is some demo clients generating some load against the demo server.
F
We
have
in
the
right
that
cash
showing
kind
of
the
status
of
the
replication,
colors
and
possible
system
I'm
in
the
lower
left.
We
have
the
the
graph
on
ax
from
from
the
google
reporting
system,
trilling
up
from
keith
start
showing
the
current
load
in
the
cluster
and
on
the
right.
We
have
the
kind
of
current
state
of
the
gcp
project
that
we're
going
to
be
bursting
into
that's
a
load
increases
against
our
application.
The
one
pot
of
the
one
instance
running
there
is
our
VPN
actually
really
bridging
a
local.
F
So
are
experiencing
our
local
final
network
to
the
gcp
cluster
network.
So
with
that
we'll
get
started
here.
So
the
first
thing
I'm
going
to
do
is
scale
up
in
this
movie.
I'm
scaling
up
the
number
of
demo
clients
which
this
is
going
to
increase
the
load
and
we're
going
to
have
to
scale
up
in
Coober
Nettie's
the
amount
of
replication
for
pot
instances
to
handle
this.
So
this
is
automatically
detected
by
our
you
know.
Through
our
rule
engine
I,
we
we
can
also
use
the
horizontal
pod
autoscaler
in
this
case
as
well.
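For reference, the manual equivalents of the scaling operations being described, 2016-era kubectl against replication controllers, with hypothetical resource names (the demo used Univa's rule engine to do this automatically):

    # Scale the demo clients up to drive load (names are placeholders)
    kubectl scale rc demo-client --replicas=20
    # Scale the serving pods to absorb it
    kubectl scale rc demo-server --replicas=8
    # Or let the Horizontal Pod Autoscaler manage the server replica count
    kubectl autoscale rc demo-server --min=2 --max=8 --cpu-percent=80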
F
So now you can see our pods: the engine automatically launched new nodes there in GCP, worker one and worker two, to handle the additional load required to meet the eight pending, the eight actual, demo application pods. After a little bit of time those start up, and once they're running they immediately join our cluster, and we'll see over there in just a second that the available capacity has increased. We can see that now we're handling, you know, more requests per second with this demo.
F
Actually, the first instance became available a little bit faster, so we'll see another stair-step here in a second, when the second GCP instance becomes available. There you go: the second one started up, and now you can see all 8 of our pods are up and running and handling the increased load on our web application. This is all possible, you know, because we support all these multiple provisioning technologies inside of our Navops product. So scaling it back down, you know, will reduce the load.
A
Okay, so we have been working to set up a new way to handle the Kubernetes wiki, in order to make it easier to update: you wouldn't need actual commit and merge privileges on the Kubernetes main repository in order to make updates. So we have started moving the content out to a new community repo under the Kubernetes organization. I put a couple of links in this agenda item; Matt Farina has made the first pull request against it and gotten us sort of stubbed out.
A
We want to set up a directory for every special interest group, so that you can have your own space to share content, write up any information, and share notes, and so we can start moving away from what makes for a less-than-transparent starting point for new users coming into the community. Right now there's a lot of Google Doc shenanigans required for somebody to see what the agendas look like, or what might be being discussed in a particular special interest group.
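The per-SIG layout being described would look something like the sketch below; directory and file names are illustrative, since the structure was still being stubbed out at the time:

    kubernetes/community/
      README.md
      sig-apps/
        README.md          # charter, leads, meeting time, links
        meeting-notes.md
      sig-scale/
        README.md
        meeting-notes.md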
A
So as we start bringing in new special interest groups, and as we have time, I want to start migrating more of the content around our different meetings and special interest groups, including this one, into the community repo. We will start putting agenda notes up into the community repo, as well as sending them out to the mailing list, but no longer posting them on the Kubernetes blog. So that work is continuing forward.
A
Aaron was very patient in waiting; he was the one that created the first pull request, or the first issue, about fixing it so you didn't have to have commit or merge permissions in order to make updates to the wiki, which is how we can hopefully get more people engaged with, and working on, content out on the web.
A
Not documentation, no. So if you go to github.com/kubernetes/kubernetes/wiki, there is content there that is really hard to update. Anything that is in, sorry, github.com/kubernetes/kubernetes.github.io stays where it is; that's the documentation site, so we're not looking to move that around. This is just the ad hoc cross-communication stuff that has been happening in github.com/kubernetes/kubernetes/wiki.
I
You can see this now? Great, okay. So we actually spent our meeting this morning going through some of these slides as a group, just to make sure that we were all kind of on the same page, so we'll just jump ahead here. And Sarah, I sent the link to you as well, so if you want to post it somewhere you can do that, but the link to this presentation is also in the SIG Scale meeting notes doc.
I
So, okay: discussion on performance goals. This has actually gotten the most energy, for a number of reasons. One is that I think everyone's eager to avoid what I'll call the marketing mistakes, you know, confusing kind of short-term testing efforts with the overall project goals.
I
So, what we refer to as the hundred-node milestones, say; we're trying to be very careful about how we talk about these. I think there's been a pretty healthy discussion around the need to support different goals and deployment patterns. The specific point that I would really want everyone to keep in mind is that there are some different deployment patterns here.
I
Certainly the Google team is very eloquent about explaining that smaller clusters arranged in availability zones are a great way to make sure you can run lots of cores with high availability, but I think there are also some other use cases where larger single clusters, which perhaps have different availability requirements, are still a requirement. So one of the really great discussions we've been having is about making sure we can get the kind of community goals arranged to support both of these. There's been a pretty good discussion around, you know:
I
Is there some natural size limit beyond which it's just kind of not really that useful to go? You start looking at some of the stats around the largest clusters in the world, and it may not be worth chasing, you know, the millions-of-cores kinds of setups. So there's an ongoing, healthy debate about that, and I just thought I would mention it. We have also had a fair bit of discussion around the wide variation in node sizes at some companies.
I
Some efforts are oriented towards really large nodes; that's great, and there's been work going on around this in the Node group, so shout out to the Node SIG. Federated testing also continues to be a hot topic in SIG Testing, and overall the importance of being able to characterize performance on a bunch of different environments continues to be important; I have one other comment on that a bit later here. I'd say we have a very interactive group, and I'm kind of walking through this a little bit fast, but there are an awful lot of people here who are part of the SIG Performance group and the SIG Scaling group, and I welcome you to make a comment as I go. Joe, are you on? Do you want to make a comment here?
F
I am on; you know, I think you're handling it great. No, no comments here. I think it's just not quite as linear as we originally expected in terms of the previous goals together, a lot of ins and outs, so I think the slides capture it great.
I
I guess one of our goals for the presentation today was to make sure that there was some context around this discussion. So as a follow-up, I'm sure some of you will want to go look at some other docs and dig into some of the numbers, but this is some of the context around how the numbers are being discussed. Okay, so, kind of two tracks here, or two tracks within the track. In terms of just the discussion of performance goals, one thing I think everyone should be aware of:
I
There is a bit of a shift, mid-stride, in terms of how we talk about and characterize cluster performance. Quinton has been kind of leading the charge on this, Quinton and Wojtek and some of the other Google folks, and I think there's some stuff that they are not quite ready to publish here. But the snapshot is to move to cores per cluster, pods per core, and then pods per second; that's kind of one of the ways to look at it. This is work that's been going on in the course of the 1.3 work, so for 1.3, the sort of previous-gen way of looking at cluster performance is still the way things are going to get talked about, which is nodes, pods per node, and maximum number of pods, all still in the context of the current latencies.
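To make the two framings concrete, using the well-known 1.2-era support envelope purely for illustration: the node-centric framing quotes 1,000 nodes at 30 pods per node, i.e. 30,000 pods per cluster; the resource-centric framing would instead describe that same cluster, if built from 4-core nodes, as 4,000 cores at about 7.5 pods per core, with pods per second measuring how fast the control plane can schedule and start them.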
I
Again, I'll leave this open for other comments, but in terms of the work on controllers, this is really largely around control-plane kinds of work, so we've been using, for example, the pause containers and scratch containers. The goal here, the discussion here, is really about control-plane performance, not about total cluster throughput, and of course you quickly get into the specifics of:
F
Honestly, in terms of containers per pod, I don't think those are numbers that people have been looking at from the control plane, and I'm not sure it makes that big a difference. I mean, if you had 100 containers per pod it would probably start changing things, because the control plane would have to track more data, but my assumption here is that we're talking, you know, two or three containers for most pods.
I
I think a lot of this work has really been going on in the Node SIG, and there are certainly issues that I'm probably not the most qualified to talk about in terms of maximum numbers of containers per node; that is, you know, around Docker daemon performance and other things.
I
All right, shall we move on? Great, okay. So I just thought I would throw this up here; this is on the track of interesting work that's being done to actually move the performance ball ahead in 1.3. The etcd3 work actually got done pretty early, and there's been a bunch of work around control-plane optimization and network overhead. I put some PR notes here, and put some names up here.
I
And, I have another comment on this in a minute: for some of these things there's kind of a natural point to go to, a single PR where you can follow the notes from there, but some of these are pretty large sets of PRs. I guess if you're specifically interested, a couple of the names here would be the folks to go poke at and ask some questions on Slack. Certainly worth mentioning that etcd 3.0 has been a pretty big effort.
I
The CoreOS guys we were talking to at the meeting this morning said that this part of the project feels really good: etcd tests are passing, they're working on some upgrade testing tools, and they were feeling pretty positive about how this was going. And again, I'll pause if someone wants to jump in with a comment.
I
Mostly, there, I think, the issue is around kind of pulling together a pretty dispersed set of PRs that all kind of encapsulate the work; that makes it hard, and I'm going to make another, you know, lobbying plea for trying to be a little bit more organized about that in the future. That work is continuing.
I
I know, I think everyone kind of knows the case, but it at least is a good point to show the impact of that. And then on the data-sharing track: certainly, I mean, it's critical for us to be able to exchange data about performance and about cluster configurations, and there's still work to be done there. I think Marek has been making some good progress on a performance-monitoring dashboard.
I
I think at the meeting this morning he was, I don't know, feeling like it was maybe a little bit early, perhaps, to be showing it off, but I think it's great progress in this direction, so I was going to essentially insist: hey, this is great to see this kind of progress. So, Sarah, I guess I would lobby for a quick demo of the dashboard at some future community meeting. Yep.
I
How and when do we release? Are we going to be in an LTS model? Kind of understanding the future of release management would really help organize the work, especially because some of these scalability-oriented topics are kind of long-running projects. It would really help us organize the work if we had some of that nailed down; and then, just again, it's kind of hard to follow some of these work efforts.
A
You're most welcome; thank you for joining. We are going to need to go a little bit quicker, because we also have another last-minute add to the agenda. So I want to introduce Michelle Noorali, who is leading up an effort to start a SIG that is focused on user experience and the application developer. So Michelle, do you want to talk a little bit about that, maybe two to five minutes? Hopefully we won't go into a long discussion, or we'll spin that out if we need to.
J
Sure. Yeah, so my name's Michelle, I'm on the Helm team at Deis, and I also want to introduce Matt Farina; he's in the Advanced Technology Group at HP. We just want to introduce a new SIG, or a possible new SIG, called SIG Apps; rather, it's more of a transformation of SIG Config. Some of us who have been following SIG Config and SIG Big Data got together recently to talk about some concerns we had. On the SIG Config side:
J
It seemed like there was a broad range of topics being discussed, and perhaps the name didn't match the scope of the SIG. There was also a lot of overlap between topics discussed in that SIG and others, and community meeting participation was dwindling as well. And on the SIG Big Data side, they were looking to perhaps transform into a more general SIG for running applications. So, basically, long story short:
J
So
there's
a
lot
of
attention
currently
on
focusing
and
focus
on
building
scaling
and
operating
to
béarnaise
and
that's
awesome,
but
we
also
wanted
a
place
where
we
could
focus
on
how
users
define
manage
and
run
applications
internet
ease,
especially
for
people
who
are
just
getting
started
so
we're
think
about
creating
this
app
or
the
sig
called
sick.
Apps
will
probably
going
to
do
weekly
meetings
wednesday
morning
at
9am.
The
first
one
will
be
this
wednesday
and
we're
going
to
have
someone
from
openshift
named
prashant.
A
I love the user focus; I think this is super awesome, and it is that time in our evolution. So, also on this list for the SIG updates, with SIG Cluster Ops: it looks like there's a remote architecture, or sorry, not a remote, a reference architecture conversation happening today at 1pm. Rob, did you have anything more than just that announcement?
C
So, the way things are going, if you created a new PR today it wouldn't get merged for about a week, which seems pretty unacceptable. What I've noticed in the last two days is an extremely large number of Google employees adding priority labels to their PRs, for whatever reason, to jump to the front of the queue, and this doesn't seem manageable when we have so many people backed up and arbitrary features are kind of leaping to the front.
C
I know that we have these 1.3 release features as well, but I'm not sure what the best way to attack this is. In case people wonder, the queue could run at about 50 PRs a day if it went full speed for 24 hours; maybe it's a little lower right now, since some tests are taking longer, maybe 44 PRs a day. So if we had no flakes, the queue would be able to run and drain by end of day tomorrow. I'd like to start a discussion on how to make this better.
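As a rough sanity check on those figures, assuming the ~50-merges-per-day ceiling quoted above: 50 PRs per 24 hours is one merge roughly every 29 minutes, and a week-long wait at that rate implies a backlog on the order of 300 to 350 PRs; every flake-induced retest subtracts directly from that ceiling.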
C
My gamification thought is that you're only allowed to bump a priority from P3 to P2 if you fix a flake; for a second flake, you can go from P2 to P1; and if you fix a third flake, you can go to P0. That way, if you want your PR to go first, you're incentivized to make the system work better for everybody else. I'd like to have this discussion off-list, but everybody should start thinking about it, and stop arbitrarily labeling your features so they get in front of everybody else's features.
A
Can I jump in a little bit? This is a really important thing, and I'm seeing little golf claps in the chat, so I think we should figure out a way to make this work. I love your gamification idea; I don't know how we can enforce it, but maybe there's a way. We certainly can suggest this, and engage with and encourage this as a community. I don't know.
M
Well, I don't know. In the end, I think if it fixes a flake that's frequently encountered, then that should be like a P0 or P1. But there's a ton of us working on features that we're trying to get into Kube 1.3, and I don't think any new feature should necessarily be immediately bumped higher than the normal P3 priority, because it's really annoying to everybody else. But I...
M
A lot of us get hit with rebase hell, and I think there are some, I know, like, ScheduledJob went in recently, and I can get that; that would be difficult. So, like, if your PR grows beyond a certain size, maybe I could reason about that, but I haven't seen that as the metric that people are using lately. Well...
F
You know, having a default priority for any work that people want to do; I think there's also room for another priority above that, for P0/P1 work that actually has gone through, essentially, as we start to develop the more formal process of getting stuff into a release, where you actually have wide agreement, where, you know, that stuff is sort of on the track and the community has sort of put effort as a whole into that thing.
M
You end up getting, like, SIGs competing with each other, right? So if the Node SIG sees something that's a high priority to get in, you know, then you're going to end up competing with, I don't know, pick another SIG, the API Machinery SIG, and everybody's going to say that their stuff is P0/P1. So then priority ends up being not a thing. Well...
F
What I would say is that this is stuff that cuts across the project, that is actually, you know, part of the release planning cycle, where essentially it's being tracked on a spreadsheet and there's a document and an owner to actually back that thing up, right? So in terms of release priorities, this has already been sort of longitudinally identified as being critical to the release, versus something that's just opportunistic, "I was hoping to sort of sneak this in," you know, type of thing.
A
I think the key point is cleaning up the submit queue and the flakes so that we don't have to make these trade-offs, and that's one of the reasons I like Erick's idea of incenting people to fix flakes and push PRs through. And I know that we get into the "more work to get to the more work" and suddenly we're yak shaving, but it is a reasonable approach to this.
A
Let's see if we can encourage better behavior, and certainly work on flakes. Is it possible for the broader community, who is doing feature work at this point, to actually meaningfully contribute against flakes and see how this is happening? We had a lot of discussion about this a couple of months ago, and we, Google, were trying to make it more transparent and easy for people to help on that, and I'm certain that we can help.
L
There's actually a little bit of work going on there. One of the test engineers at Google is looking at highlighting the flake of the day: basically, what was the single most flaky test in the last 24 hours, in hopes that if we can fix that one for two weeks running, it helps in a large way. He did some basic analysis, looking back a few weeks.
L
Sure, that works for me, yeah. So I'll start off.
L
People are on track for feature complete, which is now seven business days away, but they're tight on that, more or less, so I guess business as expected, although we've done a better job of scoping, moving resources around, and asking for help early, so that they're not going to blow past feature complete, which is great. So first up is Kubernetes:
L
Pet Sets: same story, more or less, probably a little bit further along. This is the stateful application support feature. The last few PRs are in review, and once those get in, the end-to-end tests will shortly follow, so hopefully we will get to writing the end-to-end tests late this week and next week. In terms of scaling, the 1.3 goal is 2000-node clusters working and passing the SLOs.
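For context, Pet Sets landed in 1.3 as the alpha apps/v1alpha1 PetSet resource (later renamed StatefulSet); a minimal sketch of such a manifest, with placeholder names not taken from the meeting:

    # Minimal alpha-era PetSet (illustrative; requires a pre-existing headless Service)
    kubectl create -f - <<EOF
    apiVersion: apps/v1alpha1
    kind: PetSet
    metadata:
      name: web
    spec:
      serviceName: web        # governing headless Service (placeholder)
      replicas: 2
      template:
        metadata:
          labels:
            app: web
          annotations:
            pod.alpha.kubernetes.io/initialized: "true"   # alpha debug gate
        spec:
          containers:
          - name: nginx
            image: nginx
    EOF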
L
And then the last item is about distributed testing. This one is a little behind; perhaps it's hard to hold it to the feature-complete date in some sense, since it's a test dashboard and not actual code within Kubernetes, but the dashboard for aggregating all individual test results, including from companies other than Google, is in flight, and the estimate for that is 5/27, which is two weeks away.
N
So if TJ can go through the list and triage them, that would be helpful. A general observation: between CoreOS and Red Hat, there are a number of items which are blocked on LGTM even though the implementation is already there; it seems like the proposal was LGTM'd. Can we go through the cycle a little bit, where somebody else asks for the comment and then we go back? Given that there are seven business days remaining, it would be nice to unblock them.
N
That would be awesome. I keep asking the community, the SIG leads, to review it and make it accurate. So my general observations: there are 28 items right now, 10 of them P2, and all of them are in progress or waiting for final LGTM, so, I don't know, it's cutting pretty tight in my world. And then, yeah, the doc has a quick update on the current status. So rktnetes is pretty close; if somebody can help with the cAdvisor piece, at least accelerate the LGTMs, it would be helpful.
O
I'm still here. So, an important update from SIG OpenStack: we have released our first OpenStack provider for the kube-up scripts. I will share the link in chat right now, and I will add it to the meeting agenda as well. We have released the first full set of scripts to deploy your Kubernetes cluster on OpenStack; it is based on Heat and some SaltStack stuff. So now everything you need to deploy a Kubernetes cluster is an OpenStack cloud and the kube-up scripts.
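For reference, the Heat-based provider in the kube-up scripts is driven the same way as the other cloud providers; a sketch of the invocation, where the credentials file name is a placeholder:

    # Deploy Kubernetes on OpenStack via the Heat-based kube-up provider (illustrative)
    export KUBERNETES_PROVIDER=openstack-heat
    source openrc.sh          # load OpenStack credentials (placeholder file)
    ./cluster/kube-up.sh      # creates the Heat stack and brings the cluster up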
A
You did great. So I have one further notice, which is that we will do an Asia-friendly timing again on June 2nd. So if you are working with people who live in the Asian time zones, or if you're traveling to those time zones, we will test this again on June 2nd; but we had exactly one person in the Asia time zones...
A
...on the last meeting. If we don't see a substantial uptick in that, then I think I'll double back in July and keep this at 10am until we see more requests for Asia-friendly times. If anyone who did present today wants to go take a peek at the notes and add a little bit more, that would be super helpful; I'll send these out tomorrow. Otherwise, we've got, like, two or three minutes if anyone wants to bring up anything specific.
A
We are having a Google booth that is showing off all of the open source projects within Google Cloud, or that Google Cloud is contributing to, and Kubernetes is going to have a presence there doing open office hours. We also have a Kubernetes hackathon happening on Thursday at OSCON; it is actually the OSCON Contribute hackathon, and one of the rooms, for one of the days, is sponsored by Kubernetes and the Cloud Native Computing Foundation.