From YouTube: Kubernetes Community Meeting 20170427
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Demo, Federation Policy; Releases; SIG ContribX, SIG Storage, SIG Autoscaling; 1.6 Retrospective part 2
A
All right, we are recording and live, so this is the community meeting for Thursday, April 27. We have Jason volunteering to take notes, so thank you, Jason. We have a demo as our first order of business. Torin, are you here?
B
Okay, you should be able to see a terminal right now. Let me know if you all can see it... yep, okay, cool. So, just briefly: what I'm demoing today is some work that we've been doing within SIG Federation. It's targeted for 1.7 as an alpha feature. So, at a high level...
B
What we have here is this nginx replica set that has this requires-EU-jurisdiction annotation, and this basically tells the system that this application must be deployed on clusters within an EU zone, say for regulatory reasons. So what we're going to do is go ahead and create this replica set, and the federated control plane is going to go ahead and deploy it. Then we can look at the federated replica set to see what's happened here.
B
And so if we go and look at the individual clusters, we'll see that indeed the replica set has only been deployed in Europe: there are two replicas in europe-west1, and then another two replicas will be in europe-west2, because we just specified a weight of one. Let's see... my internet's a little slow here... and then, if we look at the, sorry, the us-central cluster, indeed there are no resources there. Okay, so great, that works. How did this actually get accomplished?
B
So I popped up a little diagram, and what this shows is the flow through the system when the developer requests that this replica set be deployed. First of all, they specify the replica set and give it the requires-EU-jurisdiction annotation. That gets into the API server, which is running an admission controller that calls out to our policy engine, supplying the replica set as input.
B
The policy engine takes the input, takes the policies that it has loaded, and then computes the value for the replica-set-preferences annotation before returning it back to the API server's admission controller. At that point, the admission controller applies the replica-set-preferences annotation to the resource, and then the resource is ready to be created and deployed normally. Okay. What's not shown there, though, is the policy itself.
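To make that flow concrete, here is a minimal Go sketch of the admission-controller round trip just described: send the replica set to the policy engine, read back the computed preferences value, and apply it as an annotation. The endpoint path, request envelope, response shape, and annotation keys are illustrative assumptions, not the actual alpha implementation.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

const policyEngineURL = "http://policy-engine:8181/v1/data/annotations" // hypothetical service URL

// computePreferences asks the policy engine for the preferences value
// it computes for the given replica set.
func computePreferences(replicaSet map[string]interface{}) (string, error) {
	payload, err := json.Marshal(map[string]interface{}{"input": replicaSet})
	if err != nil {
		return "", err
	}
	resp, err := http.Post(policyEngineURL, "application/json", bytes.NewReader(payload))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out struct {
		Result string `json:"result"` // the computed preferences, JSON-encoded
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Result, nil
}

func main() {
	rs := map[string]interface{}{
		"kind": "ReplicaSet",
		"metadata": map[string]interface{}{
			"name": "nginx",
			"annotations": map[string]string{
				"requires-eu-jurisdiction": "true", // hypothetical annotation key
			},
		},
	}
	prefs, err := computePreferences(rs)
	if err != nil {
		panic(err)
	}
	// The admission controller would set this annotation on the object
	// before it is persisted and deployed normally.
	fmt.Println("federation.kubernetes.io/replica-set-preferences =", prefs)
}
```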
B
So I don't have too much time here, so I'm not going to go into detail, but I'll highlight the important bits. So this is the policy. Basically, we have this high-level declarative language that you use to author policies. You can think of the policies as just collections of rules, and rules basically define JSON values. So what we have here is this annotations rule that's defining the value for the replica-set-preferences key.
B
In this language, rules can basically reference other rules, and so here we have the annotations rule referencing this other rule, replica-set-clusters. So what happens is that when the admission controller queries the policy engine, it's basically querying this annotations rule to get the value for the annotation, and then the policy engine evaluates the rules and produces that annotation value. Okay.
B
So what happens is that this reference here, replica-set-clusters, gets replaced with the value defined by that other rule. And what replica-set-clusters is doing is basically filtering out clusters where, like I said, the input replica set can't be deployed. So we look at all the clusters that are known to the policy engine; we filter out ones that are excluded by the input, because perhaps the developer specified some intent there; and then we also filter out ones that are not allowed...
B
...based on the jurisdiction of the cluster. The invalid-jurisdiction rule is defining the set of clusters where this replica set can't run: we say a replica set can't be deployed on a cluster if, in this case, the replica set requires EU jurisdiction but that cluster is not within an EU zone. And so here we're looking at the region attribute on the federation cluster object.
B
Okay, so that's the first part of the policy. But what we also want to do is enforce the policy such that apps that require PCI compliance don't get accidentally deployed on clusters that aren't PCI certified. So what we're going to do is define a rule called insufficient-pci-compliance, and this is going to produce the set of clusters that don't meet PCI compliance for the input. It's going to look very similar to what we've done above for invalid-jurisdiction.
B
So we're going to check whether or not the input includes an annotation requiring PCI compliance, and then we're going to check whether the cluster has been annotated to indicate that it's PCI certified. We're just going to say that if the annotation is not present, then it's not PCI certified; we're not going to worry about PCI levels or anything like that for now. And so then we'll add a reference to that rule here, and that's all we have to do for now.
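For readers who don't know the policy language, the filtering these two rules express can be sketched in plain Go. The annotation keys and the EU-region check below are illustrative assumptions, not the exact rules from the demo.

```go
package main

import (
	"fmt"
	"strings"
)

// Cluster mirrors the bits of a federated cluster object the rules inspect.
type Cluster struct {
	Name        string
	Region      string            // e.g. "europe-west1"
	Annotations map[string]string // e.g. "pci-certified": "true" (hypothetical key)
}

// allowedClusters keeps only clusters where the input replica set may run,
// mirroring the invalid-jurisdiction and insufficient-pci-compliance rules.
func allowedClusters(rsAnnotations map[string]string, clusters []Cluster) []Cluster {
	requiresEU := rsAnnotations["requires-eu-jurisdiction"] == "true" // hypothetical key
	requiresPCI := rsAnnotations["requires-pci-compliance"] == "true" // hypothetical key

	var out []Cluster
	for _, c := range clusters {
		if requiresEU && !strings.HasPrefix(c.Region, "europe-") {
			continue // invalid jurisdiction: cluster is not in an EU zone
		}
		if requiresPCI && c.Annotations["pci-certified"] != "true" {
			continue // insufficient PCI compliance: annotation absent or false
		}
		out = append(out, c)
	}
	return out
}

func main() {
	clusters := []Cluster{
		{Name: "europe-west1", Region: "europe-west1",
			Annotations: map[string]string{"pci-certified": "true"}},
		{Name: "europe-west2", Region: "europe-west2", Annotations: map[string]string{}},
		{Name: "us-central1", Region: "us-central1", Annotations: map[string]string{}},
	}
	rs := map[string]string{
		"requires-eu-jurisdiction": "true",
		"requires-pci-compliance":  "true",
	}
	for _, c := range allowedClusters(rs, clusters) {
		fmt.Println(c.Name) // only europe-west1 passes both checks
	}
}
```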
B
Okay, so the policy engine is basically deployed alongside the rest of the federated control plane, and we have a service that's exposing the policy engine's API. So what we're going to do is push the policy into the policy engine directly through its API for now. And then, if we go ahead and annotate one of the clusters to indicate it's PCI certified, we'll see that when we try to deploy an application that requires PCI compliance, it'll only go to that europe-west1 cluster.
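Pushing a policy "directly through its API" amounts to a single HTTP PUT in the Open Policy Agent style. Below is a minimal sketch, assuming the engine exposes an OPA-style /v1/policies endpoint behind the in-cluster service; the service URL, policy ID, and file name are assumptions for illustration.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

// pushPolicy uploads a policy document to the engine over its REST API.
// OPA-style engines accept raw policy text via PUT /v1/policies/<id>.
func pushPolicy(engineURL, id string, policy []byte) error {
	req, err := http.NewRequest(http.MethodPut,
		fmt.Sprintf("%s/v1/policies/%s", engineURL, id), bytes.NewReader(policy))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "text/plain")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("policy engine returned %s", resp.Status)
	}
	return nil
}

func main() {
	policy, err := os.ReadFile("placement.rego") // placeholder for the demo's policy file
	if err != nil {
		panic(err)
	}
	if err := pushPolicy("http://policy-engine:8181", "placement", policy); err != nil {
		panic(err)
	}
	fmt.Println("policy loaded")
}
```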
B
And sure enough, the demo gods have not been kind to me, and the annotation is not showing up as I would like. So that's not good. One second. What I would have expected to happen here is that the clusters value would only contain the europe-west1 cluster, but something's gone wrong. So just give me one sec; it's probably a mistake in the policy. Let me just take a quick look and figure that out.
B
Thank you. Okay, so let's fix that. Okay, so that should be it. Yeah, live demos are always tricky. Okay, so I've modified the policy, so I just need to re-upload it. Give me a sec; I'm just going to run the same command from before.
B
Okay, sorry, I've got about a minute. Okay, so maybe that's not the most... okay. So what we would have seen was that replica-set-clusters would have been set to just the europe-west1 cluster, and what would have happened was, when we queried the europe-west1 cluster, we would see that the replica set was deployed only to that one cluster. What I wanted to show also was that we can handle basically conflicting input...
B
...where one of the requested clusters is not allowed. Okay, so I think that's just about my time. Thanks for watching. If you're interested in this federated placement stuff, check out SIG Federation; if you're interested in policy in general, you can check out the Open Policy Agent project. We have Slack, and you can check us out on GitHub. Okay, thanks, folks.
A
If not... okay, cool, we will roll right into releases. So, for the 1.7 release: Dawn, do you happen to be on the line? She's not? That's okay. For 1.7, we are still looking for someone to fill the testing lead role, which is responsible for looking at the CI signal coming across Testgrid and a bunch of other...
A
...test-status pages, and giving a go/no-go signal week over week on our status to release. I will link the thread asking for candidates; it's in a thread on kubernetes-dev. So if you are interested in filling that role, please sign up. An announcement from Ihor related to that: the feature repo will close on the 1st of May, so that's next week. There's a brand-new template that Ihor has linked in, so that's hopefully much simpler and easier to use than the previous template.
A
So hopefully we'll get better usage of that. Oh, and yes: if you've got a 1.6 feature, or a previous feature, that is now scheduled for 1.7, please update your template, the checklist there, and the description for your PR. If you don't get around to it, Ihor will be doing the reconciliation himself. Ihor, did you want to say anything else about the new template or features for 1.7?
D
So yeah, we have an updated feature template. We have extended the deadline for submitting features, based on community feedback and based on the current status of the features: I have noticed that so many features that I expect to be in 1.7 still haven't been updated to 1.7. So we have extended the deadline; please update your features before the end of day Monday.
A
Looking at the chat, there's a question about the testing role. The testing role is a little bit different from the actual test-infrastructure work. It's mostly about CI signal: looking at the results of the CI and integration runs, and, you know, keeping a sharp eye out for regressions on the various suites that run out there.
A
And there's a link in the notes to the GitHub release page. For 1.6, we're still looking for a patch manager from Google, and Marcin will be handling patch management while Anthony's on vacation. So, with that: does anyone have any questions about releases, either the upcoming 1.7 or the previous 1.6?
E
Some of the ways that we're hoping to do this are to make sure there's a standard subset of labels across all the repositories, and to make sure that the bot commands are all in the same format. We also want to improve the automation process to increase velocity, through the PR workflow, design review, and issue triage. One of the ways that we're going to do this is to ping inactive reviewers and reassign PRs, so that PRs are merged faster.
E
We want to evaluate the release-notes policy, establish goals for writing release notes, and define our approach. We also want to verify all of the templates and make sure that they're updated and optimized for the contributor experience, and make sure that all bot commands are consistent and easily discoverable. And then we're also focusing on metrics this release as well, to make sure that our policies are data-driven.
F
So yeah, just going back on that: we actually have a company that's helping us out; they did a demo at the ContribX meeting. This is Garrett, by the way. There's a company called Sappho that did a demo at the last contributor-experience meeting, and they've got a tool that actually allows us to easily build dashboards to look at things like time to merge and time to first comment, as well as number of merges per week, number of merges per company per week, or per contributor per week.
F
They have a SQL database backing it up, but you don't actually need to know how to write code in order to add dashboards, and they've got a kind of neat little feed option that allows you to select your top graphs to look at. So we're looking at working with them a little bit more; a couple of us from contributor experience are kicking the tires and playing with the tool a little bit more before making a final decision.
H
Hi. So mostly what I'm going to try to go over is the face-to-face that we had recently. It was very, very well attended, surprisingly so: I think we had the normal folks, plus people who just got the invite list again. I got Garrett to post the chat, and I've added to the community minutes the links to the agenda items and the minutes of the face-to-face. But the attendance was great.
H
It was the normal folks who are always there: Red Hat, Diamanti, you know, Google, Dell, NetApp, Portworx. And then we had a lot of other folks showing up: IBM, Salesforce, Pure Storage, OpenStack; a bunch of people who are, you know, very senior folks doing things on their own behalf; and then some other Linux companies that were showing up for their own work. The room was full with 40 people, maybe more. It was hosted by EMC, and it felt good.
H
This SIG is alive, thriving with new people, and everyone seems to be integrating themselves pretty well. On the topics and the agenda: one thing to note is that a bunch of things that we had thought we had to talk about, we didn't, because they were implemented in 1.6, things like mount options and other areas of complexity. So that was a pleasant surprise. But there were some big things that we're diving into in the storage SIG, and that's what I want to talk about today.
H
Snapshots inside of Kubernetes are like the Goldilocks problem: you have to make sure we don't have too many and we don't have too few, and there's a really sweet design here. I was pretty impressed with what the SIG has come up with this time, because the model is now a two-object model; it's already consistent with the way we handle volumes in general. I really urge folks to take a look at it.
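For readers unfamiliar with what a "two-object model, consistent with the way we handle volumes" would look like: the existing volume model splits a namespaced user request (PersistentVolumeClaim) from the cluster-level resource (PersistentVolume). A snapshot design along the same lines would split the two the same way. The type and field names below are a hypothetical sketch by analogy, not the actual proposal under discussion.

```go
package main

import "fmt"

// VolumeSnapshot is the user-facing, namespaced request for a snapshot,
// analogous to a PersistentVolumeClaim. (Hypothetical sketch of the
// two-object model, not the real API.)
type VolumeSnapshot struct {
	Namespace  string
	Name       string
	SourcePVC  string // the claim whose volume should be snapshotted
	ContentRef string // bound cluster-level snapshot content, set on bind
}

// VolumeSnapshotContent is the cluster-level record of the actual snapshot
// on the storage backend, analogous to a PersistentVolume.
type VolumeSnapshotContent struct {
	Name           string
	Driver         string // storage plugin that owns the snapshot
	SnapshotHandle string // backend identifier for the snapshot
}

func main() {
	content := VolumeSnapshotContent{
		Name:           "snapcontent-1234",
		Driver:         "example.io/driver", // placeholder driver name
		SnapshotHandle: "backend-snap-1234",
	}
	snap := VolumeSnapshot{
		Namespace:  "default",
		Name:       "db-backup",
		SourcePVC:  "db-data",
		ContentRef: content.Name, // binding mirrors PVC-to-PV binding
	}
	fmt.Printf("%s/%s bound to %s\n", snap.Namespace, snap.Name, snap.ContentRef)
}
```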
H
If you're curious... yeah, I think it's going to be a little bit more intuitive than some of the other ideas that we had, and when it came about, I don't think any one company could have arrived at this decision on their own; it really involved everyone working together. The basic functionality will be, you know, creating and removing snapshots, plus the trickier cases: how do we handle these between namespaces? How does this work with the lifecycle of the snapshot? And, you know, what are the permissions and ACLs of the snapshot?
H
And how does this also work with whether or not you want to clone it, or the birth of a volume: how do you create volumes, possibly from a snapshot? So please take a look. You know, this is still in the design phase, but I think the design is getting fairly solid.
H
The storage SIG does kind of an update every other week of all the other events going on and efforts in the storage SIG, and I think there's a biweekly or a weekly meeting on snapshots that occurs. So if you're interested just in snapshots, you can show up to that one. If you're interested in commenting on the design or areas like that, please simply go to the doc and read it, and ask questions on the storage SIG mailing list. Please realize that there's been a lot of context and earlier thought brought into it.
H
So, you know, maybe start with trying to ask questions about context on the list before going into the doc and being like, "this makes no sense," because a lot of things that initially may seem confusing make sense if you look at the full ecosystem of volumes in Kubernetes in general. But what's neat is that it looks like we are making good progress to possibly agree on where we're going to be with snapshots this cycle, and there's already prototype work making progress across continents for implementing this.
H
Maybe on to the next item, because the next item that came up is a really big one, sort of the heart and soul, and I think it has had a logjam over the last few releases for Kubernetes storage: plugins. One thing we kind of realized was that releases 1.3, 1.4, and 1.5 of storage for Kubernetes were really focused on taking a lot of the upheaval and re-architecture work that we did for mounting and unmounting volumes and really fleshing it out and stabilizing it.
H
So there has been a lot of work, especially big thanks to Diamanti and Chakri, on our plugin model and plugin API, but it was sort of on the back burner: while people talked about it a lot, a lot of the real work and focus was going into stabilizing the system overall. I think that it actually worked out really well, because there was so much pre-thinking and pre-worrying about storage plugins that by the time we finally were able to really focus on it full force...
H
...it's made a lot of progress. I'm going to explain the wording, because the wordings are really confusing, and it's going to take about three minutes. How much time do I have, by the way? I'd like five or eight minutes or something. ...You should be fine. Okay.
H
There are so many storage vendors in Kubernetes that we don't want to have a very high review burden for, you know, the core members of the storage SIG, and we also want vendors to be able to iterate on their plugins decoupled from the releases. But also, you know, we're already seeing way too many plugins in-tree, and it's just a large burden overall.
H
Initially... we now have an API, call it Flex 1.0, which has many of the abstractions that we need, but many plugins are still in-tree. A lot of this work was done, as I said earlier, by Chakri, and the Diamanti team has been offering quite a lot of resources; this design has been going through many iterations and was presented at the last face-to-face.
H
While this work was going on, other orchestration systems out there, like Cloud Foundry, Docker, and Mesosphere, noticed that they were having the same challenges with storage plugins that we were, and brought forth the idea: hey, if we could solve this problem with semantically similar approaches, this would benefit the whole community of container orchestration.
H
And so the idea came about of having a spec that would conform to the same overall ideas and methodology. It was initially called CVI, for container volume interface, and eventually became CSI, a name which is really bad for Kubernetes, because we have another CSI system, but really kind of cool for people who like the old TV show, because, you know, we get to talk about it a lot. And it's also better for storage overall, because it stands for container storage interface, which is more descriptive of what the problem is really trying to solve.
H
CSI is a spec: it is a specification for the sort of lifecycle of how plugins and volumes should interact in container orchestration systems overall, even larger than Kubernetes. One of the things that we made very clear to the people working on CSI, which was a loose affiliation of these different container-orchestration teams or companies or, you know, organizations, was that while we would pay a lot of attention and contribute to this as a SIG, we also did not want Kubernetes...
H
...to be beholden to it, and if things really did diverge, it was kind of going to be, like, sayonara. You know, it has taken so long to get out-of-tree plugins working in SIG Storage that we don't want to slow that velocity down. So far we've been very, very lucky; it's actually, I think, helped out quite a bit. Jie from Mesosphere, Chakri, another person from Docker, Tim, myself...
H
...and a few other people have spent a lot of time on the spec, and it turns out to have aligned very, very well with both Flex and what we want to do for out-of-tree plugins in SIG Storage. The implementation of CSI within Kubernetes, the next step beyond Flex 1.0, is going to be called Flex 2.0, as CSI is finally getting nailed down.
H
But in any case, it does look like good progress is being made here, and things are coordinating fairly well. The SIG had a deep presentation of the CSI proposal, which you can find here in the minutes, and it went like buttah, you know. One issue technically came up, which I do want to talk about here, but other than that there was a lot of agreement, and a lot of this feeling that this is good work and that this is work that was done transparently throughout the SIG.
H
The reason this has come about is that it's becoming more and more apparent that many of the plugins and technologies that want to integrate with Kubernetes are offering block interfaces, and not just file-system interfaces. And it looks like this is something we can integrate fairly well into the lifecycle of plugins and volumes, that would be advantageous to have in Kubernetes, and it also seems to be something that aligns well with cloud-native storage technologies overall. So, as a result, CSI incorporates the ability to have both file system and block.
H
Object storage is a whole other arena, yes, and so is application-mobility storage, right. And so my kind of personal belief is that those sorts of object-storage ideas belong more in a separate effort, to be tested out there, and are not really something the storage SIG should have within its boundaries. But it was also something that we didn't completely converge on at the storage SIG face-to-face.
G
Well, because if you look at what kind of black-box services applications need to access, it's things like storage systems, databases, message queues, etc. So object stores, you know, things like Minio and S3, block stores, object storage, etc., are definitely an important category, but they're not the only thing in this space, kinda, yeah.
H
Part of container orchestration, yes; object storage is sweet. That said, it's not something the whole SIG has agreed upon yet, so I don't want to represent the SIG's position on that yet, but I agree with it myself, personally. The Flex roadmap you can also see here inside the minutes: you know, like I said earlier, we're hoping to get Flex 2.0 nailed down and possibly mostly implemented by the end of 2017.
H
Another item that came up at the face-to-face was the idea of modifying volumes once they've been created: being able to resize them and possibly change other properties. I admit at this point I had to run, so I don't know this as intimately as the other parts of the SIG meeting, but security also came up. And then, for the next day, we had a large discussion about containerized mounting. For those of you who don't know it, I really urge you to check this out.
H
This is just cool technology, and again sort of leverages the neat things about containers. You know, how do you solve this problem: one of the things that's going on is that these storage plugins require the node to have kernel, user-space, and other forms of dependencies on the system. How do we make this simple and easy for our users, and then how do we also make sure that we don't fragment our implementation of Kubernetes across sets of distros?
H
And this is a problem, I think, that is much easier to run into in the storage area, because the plugins are so diverse and have so many different types of functionality. So containerized mounting is something that we started implementing already, and we have sort of a hacky implementation. Jan at Red Hat has taken that implementation and come up with a new, more generic design that I think will be more flexible, and everyone in the SIG was very excited about it, allowing us to...
H
...do the things I just described. That was sort of the beginning of the second day, but the lion's share of the discussion for the second day involved local storage, and there's a whole bunch of work here that the SIG Storage community is going to be focusing on, probably for at least a year. This is kind of like an 18-wheeler truck of functionality and complexity that is going to be going through the SIG Storage area; it affects resource management and will also be affecting scheduler problems that we want to solve.
H
So if you are interested in understanding this work: it is nuanced, it is deep, it is wide-reaching, and it's probably going to have some changes that affect things, including the way we schedule jobs to storage and how we even schedule jobs in non-storage-related areas; we're going to have to integrate those two systems. So please take a look at the minutes; there are some links there. There have been many docs that have been iterated over several times, from all over the community.
H
I don't know, for me, if you're a propellerhead and you like storage, or you just go for any low-level internals, it's fun reading. I don't want to take too much time; I think that's basically all I've got to cover today. There's also a long list of things coming into 1.7, so please feel free to go to the features repo and take a look at that; you can, you know, filter across storage. I feel like I've taken up a lot of time. Be ready: there will be a bunch of announcements at KubeCon.
H
Excuse me... ways to have cross-orchestration validation for CSI, and then possibly also to find other resources to help with testing, documentation, and other areas inside SIG Storage. And so I think it's fair to say that there is now a storage working group in the CNCF, and the Kubernetes storage SIG is beginning to figure out how best to engage with the CNCF working group, and they seem to be enthusiastic about doing that too, on their side. So hopefully that will work out really well this year.
H
Is that work involved with the resource-management working group? From Caleb: yes, the local storage work is being presented also in the resource-management working group; in fact, for the face-to-face that's going on in May, it is on the agenda there, and there are a lot of topics that will be covered in both. This work is being done by Vish, and Vish is kind of straddling the resource-management, node, and storage SIGs as he does it.
H
So the question here: it would be nice to have similar testing visibility for CSI and CNI. I agree. I'm kind of lucky in that I'm involved in both CNI and CSI, and I'm sort of shooting for similar goals as we're looking at both of them, and I'm able to talk to both teams, at least on the Google side. I'm not saying it's all on me at all; Caleb, thank you for pointing that out, but I think that, you know...
H
...I know that I myself am trying to figure out the best ways to engage with the CNCF in all of these areas, and I know that networking and storage... those are the only two working groups, it seems, that I know of yet, but they also neatly have CSI and CNI, technologies that cross orchestrators, and I do think that there's a really good opportunity here to engage the CNCF on being able to validate plugins across orchestrators.
H
And I think that, like I said, there are a few ways around the CNCF to do that, so anyone who's interested in working on that, please reach out to me, or reach out to Brian Grant, who is also very tightly tied in to the CNCF. You know, I'm still trying to figure out how to engage the CNCF myself, and so far it's been fun, but I wouldn't say I know how to do it that well just yet. Yeah, CNCF working groups are...
A
Okay, fantastic. So, moving on to item eight, we have our SIG Autoscaling update. Solly, you're on? Yeah.
I
Also, finally, I have begun some more work on getting a Prometheus implementation of the custom metrics API. So hopefully within this release we shall see kind of an end-to-end demo with autoscaling based on custom metrics pulled from Prometheus. So if you're interested in that, please let me know.
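For context on what such autoscaling does with a custom metric once it's pulled in: the usual horizontal-autoscaler arithmetic is a simple ratio rule. A minimal sketch follows; the metric values are stubbed stand-ins, since the Prometheus adapter mentioned here was still work in progress.

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the standard horizontal-autoscaler ratio rule:
// scale the current replica count by currentMetric / targetMetric.
func desiredReplicas(current int, currentMetric, targetMetric float64) int {
	if targetMetric <= 0 || current == 0 {
		return current // nothing sensible to compute; leave scale unchanged
	}
	return int(math.Ceil(float64(current) * currentMetric / targetMetric))
}

func main() {
	// Stub values standing in for a per-pod custom metric pulled from
	// Prometheus, e.g. queue depth or requests per second.
	current := 4
	currentMetric := 150.0 // observed average per pod
	targetMetric := 100.0  // target average per pod
	fmt.Println(desiredReplicas(current, currentMetric, targetMetric)) // prints 6
}
```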
G
Thanks, Caleb. So, last week on Friday we met and did the 1.6 retro, and we actually ran out of time. There were so many good commentaries and so much feedback about how things went that we decided to move it forward into this meeting. If you could hold on one second, I'm going to paste a link to the document for the release retro into chat.
G
So if you want to follow along here: if you scroll some way down the page, you will see, in bold, "paused until community meeting 4/27," and basically we're going to move forward from there in the document. Before we get started, because there are some people who may not have attended the retro before, I'm going to give my 30-second blurb on what this is for and how we want to conduct it. The intent is for us to improve every release we do, and a core way we do that...
G
...is to make observations about things that went well, things that maybe didn't go so well, and then really focus on things we can change in the 1.7 release lifecycle, which is currently underway. So, in terms of doing that effectively: if you do have commentary about something that didn't go right, or something you'd like to be different, please just make it constructive and try not to point fingers at people. Try to identify processes or things that can be changed; that's easier than people, because people are hard to change.
G
He's not on? Yeah, he's not on... all right, sometimes it takes a while. Okay, so: "feels like we rushed the release to line up with KubeCon; could have had more bake time for release candidates before the official release." Did anybody want to speak to that? Because I think that's a pretty important thing: lining up with major events like KubeCon, I mean, that's a tough, tough thing, right. Anybody want to speak to that?
I
Well, I guess, in the abstract, I certainly would prefer to see us have sort of quality-driven releases as opposed to marketing-driven releases, and I think, if we use major events as our cadence, then we need to get way more aggressive about kicking things out in anticipation of hitting that fixed date.
G
And in terms of kicking things out: a lot of features take more than one release, and certainly if we speed up the dev cycle, that's going to be even more true, so we are going to need some kind of mechanism for kicking things down the road. There are some issues in the community repo that I'll point people at; take a look. For example, there's some discussion of feature branches as one possible way to make that easier.
H
Also, one thing I'm hoping that we can do with SIG Release is find ways to have people really hammer on the release candidates more. Maybe I'm wrong, but I get the feeling that folks are waiting for the dot-zero before they really start kicking the tires, and I think that, you know, what I've seen in the past is that you want to shift that attention to the release candidates.
I
So, to that point: I think we just started using the phrase "release candidate" in the last release cycle; I don't know if that made a difference. I've heard Brian make this spiel about how folks need to be looking at beta releases in pretty much every release cycle that I've participated in, and we have attempted, some have tried, to do our part with sort of our small facet of testing of any release, and upgrade testing.
I
But I think that with the massive, high-signal upgrade testing that ramps up sort of towards the end of the release process, we might find we get fewer surprises if we do that continuously, rather than in the last few weeks. Because, really, I mean, I would love to encourage broad community participation; I'm just not sure what more we can do than what we've tried in the past.
H
Three thoughts. One: absolutely great idea; I don't know if it's worth talking about that in SIG Release or SIG Testing, about how to make that happen, and so... I can't believe I'm so excited about a SIG Release, but I am, and so I'll take a note and kind of pin that up there. The third is upgrade testing, and there's been an effort to...
G
...move more of the manual tests to automated tests. So I encourage anyone who's ever done any manual tests: please help us automate those tests, so that we can run them continuously, in addition to running them on the release branch, where I notice they should already be running and working. And as far as advertising release candidates: yes, as with other releases, we haven't done a good job of advertisement and getting them out there.
H
Jason, that can be an action item in the community meeting minutes then, right?
H
I'll follow up with Sarah, and I know that Phil is here, I know Caleb is, and between them we'll make sure this is part of SIG Release. Great.
G
Not to hijack a topic, but I'm totally hijacking a topic: one of the problems that we have with being able to test release candidates, and a lot of people would love to, is being able to support what old-timers used to call shadow pooling, or putting a cluster manager in a cluster manager, because we already have a cluster manager. That would be a very beneficial way to allow us to do this release-candidate testing; it's a known practice that many other people do in clustering solutions.
G
Large institutions do this type of thing all the time; with Kubernetes we can't do that yet, but with other solutions, okay... Exactly, yeah. That's great; thank you for that. And Dan, it looks like you might have another one here: "exception process took up too much release time and broke the contract with schedule predictability."
G
Dan... okay, I guess we can come back to it if he's passionate about it. Dan, if you're watching the video, please make a note for the next community meeting.
G
Let's move on for the sake of time, and Dan, if you get this, say something. "DC"... I'm not sure who DC is.
G
Oh, that's probably Derek. Derek, are you here? So let's go down to "flake resolution complicated by difficulty in attributing test and job ownership, and poor resolution of status tracking on GitHub issues." These are both really big deals, and they impact the release timing. So, comments?
A
Some of the problem was that, in order to glean the status of any flake issue that's open, you have to kind of parse through the entire comment thread, which is kind of a painful process. There's not a great way, or we have not been great about, exposing the status of all these flakes:
A
...whether someone is actively triaging it, whether a fix is in progress, or whether it's just been dropped on the floor, other than an absence of communication on the issue. So I think that's what Maru was talking about with poor resolution. And on the attribution and the test-and-job ownership: one thing we did recently was update the bots to know how to add labels for SIGs, so we did some work to add SIG ownership. There's still the remaining work of deprecating the individual assignment of test failures.
I
The exact phrasing I've heard within SIG Testing a month ago was that the test_owners.csv file "needs to die in a fire," so I think we're in favor of that. It sounded like triaging off to SIGs at least chunked up the massive list of 200-plus flakes into more manageable 10-to-20-plus flakes each, but I don't know how responsive the SIGs were or not. Again, it sounds like...
A
...it went well, at least in my experience. So we're keeping these status reports, week over week, of the flake counts and issue counts. I ought to publish a graph with that, but they're all available in raw form in the release-1.6 directory in the kubernetes features repo. You can see we started out with a very high number, normal for the previous couple of releases, and then a very precipitous drop once SIGs were assigned. Okay.
G
Okay, and we've got about five minutes left, and I want to reserve two of those minutes for the updates at the very end. So I just want to try and make it through these last items, and then what I'm going to ask is: everybody, do your homework and look at the next section of the retro, which is what we'll do differently, and make notes in there. This, I assume, will get handed off to SIG Release.
G
So if you're passionate about these, please get involved there; there cannot be enough participation in SIG Release, in my opinion. So take a look at that when you get a chance. Ihor, you mentioned that some features got merged directly to master without proper review; did you want to speak to that?
D
We actually discussed this at the retro over the last weeks... we can only follow the features, but probably, for a transparent global process, we should follow some steps before something gets merged. It's mostly covered in the document that enhances the features process; it's also linked into the retro document, in the action-items section, as a comment from the features process. So it's not really a major item, but I feel that it's important.
G
I think this is something that I've heard from multiple people: that code moving between repos is a challenge. Is there any plan to just sort of look at how this happens this release?
A
Not that I know of, and I would need a little bit of expansion of the point that's being raised there, yeah.
G
Let's take that offline and just find out what they were specifically referring to, but I know there was churn around that and some other stuff. So: "etcd upgrade path was not discussed enough and was unclear," I believe, for the release, yeah.
J
So, basically, up until the very end of the release we didn't really have a good plan about how to do this upgrade, and it was a destructive upgrade: the upgrade path that we were suggesting didn't give people a way to revert in any sort of sane way. So I think that was probably the big thing that we needed to address, or just sort of make more clear, because there was a lot of confusion going into the last days of the release.
G
Literally, when we hit this... go back release cycles, like back to the 1.4 release cycle, when we originally talked about it and sent out docs pertaining to what it would be. At the beginning of the 1.6 release, we put out a PSA notice with links to all the documentation of what would be affected. So perhaps it's that same mantra: we should keep repeating ourselves, and repeat ourselves again, which would probably help to get the word across. But it was definitely documented and articulated.
H
I think, maybe, as this community gets larger, it feels like there are hotspots of folks who, you know, know important things, and the broad communication may be the challenge. I know that there were a bunch of people who were talking exactly about what Dan had just discussed, and some folks were very aware of it for a while. As Timothy just said, broad communication seems to be the issue, and maybe also broad communication which leads to full delivery, making sure that all of the i's are dotted and the t's are crossed.
G
Well, I think maybe the meta issue here, too, is just that if there's anything that goes into a release that's hard to revert or needs extra attention, that's another one of these cross-cutting concerns that we keep hitting. We just need to make sure that there's visibility on these things; I think it's worth championing that at every community meeting. There are going to be other things that come up, just like this, that are going to bite us. I hate to do this, but we are out of time. Caleb?
A
I would love to run through the announcements. The first one is from Lucas, who's not here: the docker-multinode getting-started guide is removed from the kube-deploy repo, as it hasn't been updated in a while; the discussion of how that happened is linked in here. And, as mentioned before, we are launching SIG Release, which will be focused on improving the release process, release over release, and not making it a fire drill every time to try and staff a release team.
A
So there's a link to the proposal as it stands right now; there are still some final comments that need to be incorporated, and that will be worked through. There's also a mailing-list discussion where it was announced, and we'll work through the rest of the ceremony of actually making it an official SIG, with the mailing lists and labels and whatever else is coming. Expect more details to follow on meeting times.