From YouTube: Kubernetes Community Meeting 20161006
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Demo of Windows Server containers in Kubernetes; SIG Network; SIG Storage; 1.5 features timeline.
A: Okay, so good morning, everyone. This is, I'm told, the October 6th rendition of the Kubernetes community meeting. Relatively short agenda this week, but as we often do, I think we'll find that people want to talk about something, so we'll see where it goes. I know we have a demo this morning from Michael Michael of Apprenda, and I know we have a few SIG reports, one of which is time-bounded.

A: So after the demo I'm going to jump to the SIG Network report, to make sure we have time for Dan to make his next commitment. We'll do that, and then we'll go through a couple of other SIG reports, what's going on with 1.5 planning, notices, and asking and answering questions as we always do. So let me introduce Michael from Apprenda to talk about what's going on in the world of Kubernetes and Windows. Take it away, Michael.
B: Absolutely, hi everyone. I'm part of the SIG Windows team, which includes colleagues from Apprenda, including Alex, and a few other folks like Caesar from Red Hat and Alessandro from Cloudbase. SIG Windows has been working on integrating Windows Server containers with Kubernetes.
B: I'll show a couple of quick slides here and then we'll move directly into a demo. Our main goal in bringing Windows Server containers into Kubernetes was to expand the ecosystem and make Kubernetes the best cross-platform cluster manager out there. It allows us to leverage a lot of the investment in Windows Server containers and run a lot of Windows-based applications on Kubernetes.
B: In terms of the architecture, we haven't changed anything in kubectl and we haven't changed anything in the Kubernetes master components; most of our work has centered on the Windows Server node, in the kubelet as well as the kube-proxy components. We made a few changes in each of them, primarily to accommodate some of the functionality gaps between Linux and Windows Server.
B: I'll go into a couple of those in detail in a little bit, but the architecture looks essentially like this: you have the infra container within the pod, you have one or more containers running in it, you have Docker running on Windows, and the node itself is running Windows Server 2016. With Windows Server containers, our customers will now be able to run IIS, ASP.NET, .NET Core, and any other Windows-based application in that container. We have an existing work-in-progress pull request that's available, and we'll share the slides.
B: When you look at the networking design, which is where we spent most of our time: container-to-container networking, the ability for containers to talk to each other over localhost within a pod, does not exist on Windows. That's a feature we're pushing Microsoft as well as Docker on.
B: But essentially this forced us to use netsh with the port proxy, plus routing table manipulation, to achieve some of the networking abilities of Kubernetes, the ability for us to scale our containers and do the proper routing.
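As a rough illustration of the kind of netsh port-proxy and routing-table manipulation being described, the Windows-side plumbing looks roughly like the following; this is a sketch only, and the addresses, ports, and subnets are hypothetical rather than taken from the demo:

  # Forward a node port to a container/pod IP using the netsh port proxy
  netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=3000 connectaddress=10.244.1.5 connectport=3000

  # Add a static route so traffic for another node's pod subnet goes via that node
  route add 10.244.2.0 mask 255.255.255.0 10.0.1.12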
B: All right, let's dive into the demo. At a very high level we'll have two Windows Server 2016 nodes, one Linux node with the Kubernetes core components, and then we're going to use the guestbook application.
B: Excuse me for a second while I exit presentation mode here so I can maximize this. All right: the infrastructure in this case is running on Windows Azure. We have our set of nodes here, the three virtual machines I mentioned: one Linux virtual machine and two Windows virtual machines. We have our networking adapters attached to the VMs.
B: There are two Ethernet adapters on each Windows host and one on the Linux host. We have our virtual network that spans all of the virtual machines running on Azure, as well as the routing table that allows us to do a lot of the routing manipulation. If you use virtual machines within your own private network you can do a lot of the routing on the VMs themselves; on Azure we had to create the routing table that allows us to control the traffic flow.
B: So let's switch over to kubectl and go ahead and see what our infrastructure looks like here; so, kubectl get nodes. We have three nodes here: 10.0.1.10 is a Windows node, the .12 is also a Windows node, and the last one is a Linux node. We have set up the environment variables that allow all of these nodes to talk to each other, and we've started the kube-proxy as well as the kubelet on the Windows nodes.
B: Notice here that the operating system is Windows, and all of the rest of the settings are very familiar to you if you've been using kubectl. We were essentially able to inject into Kubernetes the ability to run with Windows as the node OS.
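For reference, the commands being run here are the standard node inspection commands; the node name below is a placeholder matching the demo's convention of registering nodes by IP:

  # List the registered nodes (two Windows nodes and one Linux node in this demo)
  kubectl get nodes

  # Inspect one of the Windows nodes; the node info reports the operating system,
  # and the labels and conditions look the same as they do for Linux nodes
  kubectl describe node 10.0.1.10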
B: So let's go ahead and do a get pods and see what kind of pods we have. I don't have any running pods right now, so let's go ahead and create our Redis master controller.
B: That will allow me to show you the guestbook application running on Windows. So, go ahead and create our controller, let me go ahead and create the service as well, and let's look at what this controller looks like. This is a standard replication controller, and the main thing I want to point out is that the node selector is targeting the windows tag, which matches what I showed you earlier when I described my node: it's a Windows node.
B: Go ahead and create our slave controller as well, create our slave service, and then run get pods. So I've created two instances of my slave and I have one instance of my master. The containers are being created right now, so we'll give them a couple of seconds to get created.
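A minimal sketch of a replication controller of the kind being described, with a nodeSelector that pins pods onto Windows nodes. The file name, image name, and exact node label are illustrative assumptions, not the manifest used in the demo:

  # redis-master-rc.yaml (illustrative sketch)
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: redis-master
  spec:
    replicas: 1
    selector:
      app: redis
      role: master
    template:
      metadata:
        labels:
          app: redis
          role: master
      spec:
        nodeSelector:
          os: windows                      # assumed label name; schedules only onto Windows nodes
        containers:
        - name: redis-master
          image: redis:windowsservercore   # hypothetical Windows container image
          ports:
          - containerPort: 6379

  # Create the controller (a matching service is defined the same way)
  kubectl create -f redis-master-rc.yaml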
B: Alright, so we haven't had a connection yet, but that's going to happen within the next few seconds. A couple of notes here: because of some of the limitations of Windows and how we implemented these apps, the way the apps communicate is by using labels to go and find each other across the different components of the multi-tiered application. The network in this case is a flat layer-3 network, and we're using netsh, like I said earlier, to do the port forwarding.
B: Service discovery in this case is happening through the environment variables. Let me check; all right, so there's a full resync requested by the slaves, Redis is ready to accept connections, and it seems like we should be good to go since the synchronization with the slave has happened. So let me go ahead now and create my actual guestbook application.
B: I'll go ahead and create the controller and create our service here. In this particular instance the service, the guestbook Go application, writes to the master but reads from the slaves; that's the implementation used here. Go ahead and look at all our details, and you can see that the nodes some of these pods are running on are the Windows nodes with the .10 and .12 IP addresses, and we already have our guestbook application running.
B: So for our guestbook, the IP address is 10.20.20.194 and the port is 3000. Let me go ahead and switch to my Windows node so I can show you a few things and also access this application. This is my Windows node: on the left side I have the kubelet running, on the right side I have the kube-proxy.
B: And here's our guestbook application. I'm going to put a couple of entries in here and go ahead and submit a second one. So we have a guestbook application running. If we look at the environment variables here, you see that these are Windows-specific environment variables, and that some of the Kubernetes-specific as well as the Redis-specific environment variables got created.
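The environment variables being shown follow Kubernetes' standard service-discovery convention: for every service that exists when a container starts, variables named <SERVICE>_SERVICE_HOST and <SERVICE>_SERVICE_PORT are injected. A sketch of what that looks like from inside one of the guestbook pods (the pod name and the values are placeholders):

  # Dump the environment of a Windows guestbook container
  kubectl exec guestbook-<pod-id> -- cmd /c set

  # Expected shape of the Redis-related entries (values made up):
  #   REDIS_MASTER_SERVICE_HOST=10.20.20.10
  #   REDIS_MASTER_SERVICE_PORT=6379
  #   REDIS_SLAVE_SERVICE_HOST=10.20.20.11
  #   REDIS_SLAVE_SERVICE_PORT=6379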
B: Now let's go ahead and scale our guestbook application to three replicas. In this case we're taking advantage of all of the capabilities of Kubernetes: just by incorporating Windows Server containers into the mix we're able to utilize the replication controller and get all the benefits, like high availability, scaling, and load balancing, that you would expect to get as part of Kubernetes. So, scale the app here and make sure that our new pods got created. Yes, we do have three guestbook application pods.
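The scaling step just shown is the ordinary replication-controller scale operation; assuming the controller is named guestbook, it amounts to roughly:

  # Scale the guestbook replication controller to three replicas
  kubectl scale rc guestbook --replicas=3

  # Confirm the new pods were created and see which node each one landed on
  kubectl get pods -o wide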
B: Most of them are running on the .12 node, so if I go to the second node here and do a docker ps, you'll notice that we have multiple guestbook images running, and you also see the pause image; that's the infrastructure container that ships with each one of my pods. I'll go ahead and run a script here that generates a bunch of connections to my application, to showcase the load balancing across the three pods that I created. So, going back here.
B: Going back to my slides, the next steps from a community perspective: we want more volunteers to come and try Windows Server containers, try our work, and file some bugs. We have published our sample applications and published instructions on how to get Windows Server working with Kubernetes. In the meantime, the SIG will continue to push for improvements in container networking, working with Microsoft; that includes everything from native overlay networking to container-mode networking for the ability to talk over localhost, as well as DNS server support.
A: Thank you, Michael. There's a slot a little later for an update about SIG-specific work around 1.5; if you've covered that now, we don't have to do it then.
B: The work that I didn't walk you through right now, including DNS, storage volumes, and cAdvisor, is probably going to be part of our beta. cAdvisor is probably a stretch goal for us, but the rest are things that we're going to have as part of the beta roadmap we've got.

A: Awesome.
C: So my question is, and if I missed it in your flow I apologize, but I think the natural thing to want to do is run a cluster that's a mix of Windows containers and Linux containers. I was curious whether that's the direction, what the thinking is around that for 1.5 or the initial time frame, or whether the idea is to have clusters be exclusively one or the other.
B: More importantly, we will allow that, and we actually just didn't demo it today, but you can have an application, or a service in this case, that has two pods on Windows and two pods on Linux. For example, the Redis replication controller can run on Linux and you can have the guestbook Go application run on Windows. Yes.
B: Actually, let me put a quick note on that. When we initially did our work here, we did it on on-premises infrastructure and then ported everything into Azure, and we did encounter some difficulties. Alex and other folks from SIG Windows have done a lot of the work to make it possible to run Kubernetes on either, and they've documented a lot of those steps. So if anybody's trying these things, come to SIG Windows, ask questions, and we'll help you get started.

A: Awesome.
A: If you could follow up on that thread that I mentioned, and if you need me to find it I will, but I just want to make sure that we're not creating new special interest groups whose charters are too overlapping, because we've already referred some of this to another SIG, and we want to make sure that we're not doing that without being thoughtful about it. Okay.
E: Excellent. So the top two things that I think we have been working on and are discussing right now, before I get to the 1.5-specific stuff, are around network policy. That's probably a term a lot of people have heard related to some of the work that we've been doing, but what it basically is is an alpha proposal based on pod ingress filtering right now, and that's been going on for the last eight or nine months.
E: When we talk about ingress here, we're talking, like I said, about pod ingress. There are lots of different uses of the term ingress, but we mean specifically pod ingress for network traffic, as opposed to cluster ingress or other things like that. We developed a proposal and a specification for how a Kubernetes resource or object should look to describe all of the ways that you can classify and restrict or allow ingress to a pod. And that, like I said, is currently an alpha proposal.
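For context, a hedged sketch of how the alpha-era pod-ingress policy was exercised in practice: isolation was switched on per namespace via an annotation, and a NetworkPolicy object (in the extensions/v1beta1 group at the time) whitelisted traffic by pod selector and port. The names and selectors below are made up, and the details are from memory of the alpha proposal rather than quoted from it:

  # Turn on default-deny ingress isolation for a namespace (alpha/beta mechanism)
  kubectl annotate namespace demo \
    net.beta.kubernetes.io/network-policy='{"ingress":{"isolation":"DefaultDeny"}}'

  # allow-frontend-policy.yaml: whitelist ingress to backend pods from frontend pods
  apiVersion: extensions/v1beta1
  kind: NetworkPolicy
  metadata:
    name: allow-frontend
    namespace: demo
  spec:
    podSelector:
      matchLabels:
        role: backend
    ingress:
    - from:
      - podSelector:
          matchLabels:
            role: frontend
      ports:
      - protocol: TCP
        port: 6379

  kubectl create -f allow-frontend-policy.yaml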
E: You can see that in the Kubernetes repo under docs/proposals, and it's currently being implemented by a couple of different networking plug-in vendors. We want a couple of good implementations before we actually propose it for beta or GA, just so we can make sure we shake out any missing pieces or things that don't work quite well.
E: That sort of thing. So that seems to be going pretty well right now; like I said, it's basically being implemented. We haven't had a ton of discussion on network policy in the last couple of SIG meetings, because people are basically just heads-down implementing it. We'll get more feedback in the next couple of weeks and we'll attempt to make a decision on where we want to go with it. The other thing we've been working on a lot, or discussing a lot, is multi-tenancy.
E: There's been some work with SIG Auth around multi-tenancy issues, including a meeting last week, and we hope to have another meeting soon. Around that, we decided that everybody who's interested in multi-tenancy should come up with their own definition of multi-tenancy, since it seems that everyone has a slightly different one; once people actually know what they think multi-tenancy means, then we can try to arrive at a common agreement as to what multi-tenancy means across SIGs, or even inside SIGs. So we hope to figure that out.
E: I'll try to do that, thanks. Jumping on to the 1.5 work, some of the things we've identified, or that are starting to heat up a little bit: there's been a lot of discussion in the last week about cloud providers and their interaction with network plugins. There have been some issues around cloud provider routes and how they interact with network plugins, since some plugins don't necessarily need routes, but currently routes kind of gate node availability when you're using a cloud provider. So we're trying to shake those out.
E: Those probably should get done by 1.5, since it's fun to fix that stuff up. We also want to firm up how Kubernetes interacts with CNI plugins and how information gets passed from Kubernetes to those plugins; host ports for containers is the main driver of that use case right now.
E: Also, there's the kubenet plugin, which is supposed to be a replacement for the configure-cbr0 and flannel experimental overlay options that Kubernetes has. The kubenet plugin is based on CNI but is currently built into the kubelet; we want to move it out into a real CNI plugin, and so some of the 1.5 work will be continuing to move that out into a real CNI plugin.
E
Cni
is
able
to
do
what
we
want
out
of
network
plugins,
and
it
will
also
help
us
shake
out
some
of
the
sum
of
the
parts
of
the
community's
interaction
with
CNI,
as
well
as
some
of
the
problems
with
the
CNI
spec
and
that's
been
going
on
for
a
little
bit.
But
we
hope
to
continue
moving
towards
that.
And
after
that's
done,
we
hope
to
deprecate
the
configure
CBR
0,
&,
flail,
experimental.
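For a sense of what "based on CNI" means here: kubenet drives the standard CNI bridge and host-local IPAM plugins under the hood, with a configuration roughly of the following shape. This is a generic bridge-plugin config for illustration, not the exact one the kubelet generates:

  # 10-bridge.conf: a CNI bridge + host-local configuration (illustrative values)
  {
    "name": "podnet",
    "type": "bridge",
    "bridge": "cbr0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
      "type": "host-local",
      "subnet": "10.244.1.0/24",
      "routes": [ { "dst": "0.0.0.0/0" } ]
    }
  }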
E: It turns out the flannel experimental overlay was actually removed this week, I believe, so yay, that's gone. I don't know if configure-cbr0 will actually get removed by 1.5, but we'll see. Also, the new container runtime interface that has been in the works for a while is a pretty big change, and we're still trying to figure out how that affects network plugins and the pod network setup flow.
E: Tim Hockin had specifically mentioned the service proxy local preference, which I believe involves an annotation that you can put on the service so that, if there is an endpoint for that service running on the local node, the service proxy will direct traffic to that local endpoint as opposed to routing it to some other node in the cluster. That works for some specific cases, but Tim is working to fix it for the NodePort case for 1.5. And then there was also a mention of egress policy.
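The local-preference mechanism being referred to is, to the best of my knowledge, the beta "external traffic" annotation; the exact key and accepted values went through several iterations, so treat this as an illustrative sketch rather than the definitive spelling:

  # Ask the proxy to send external/NodePort traffic only to endpoints on the
  # node that received it, instead of proxying it to another node
  kubectl annotate service guestbook \
    service.beta.kubernetes.io/external-traffic=OnlyLocal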
E: It could be a complement to the pod ingress filtering: we've got ingress filtering, and this would be egress filtering, but there are some questions there around whether the use cases are more cluster-administrator focused or whether they're actually app-developer focused. So that is still something under heavy discussion. With all of these, before I jump to any questions people have: if any of these topics sound interesting to you, definitely try to join our meetings and talk with us, we'd love more ideas, and also a plea for reviewers for some of the networking patches and PRs as well.
E: Most of the stuff that I'm talking about here is, I believe, core API. The parts that are not core API, or not part of the core, would be the actual network plugins themselves from third-party vendors, implementing network policy and things like that; but the network policy API itself and the objects are part of core.
F: I know it's a big thing, but I would love to entertain the thought exercise of what it would take to move these things out of core, because Kubernetes as an SDN control plane is good, but maybe that's not something that should be built into the core APIs. But I'll take my answer offline; we can talk about that later. I just wanted to bring that up.
E: I share that, yeah. One of my goals personally is to try to move as much networking-specific code out of the components as possible and move that into APIs, or at least make APIs available that could then be used to implement these things outside of Kubernetes, whether that's third-party plugins or plugins that are part of the Kubernetes project that use those APIs but are not necessarily built into the kubelet or any of the other places.
E: We had some discussions around network policy, where it did start off as a third-party resource, just kind of to prove it out, but we decided that having some consistency around the network operations in network policy was desirable, at least to the point that when you set up a cluster you know what capabilities will be available and you can access those capabilities in a consistent manner. But I think in general we do want to continue moving as much network-specific code and such out of the components as possible.
F: I think it's a different discussion: driving consistency in terms of the set of Kubernetes services that users should expect in a general-purpose cluster is a different question from what should be core and what should not be core with respect to third-party resources. But it's probably a longer discussion than we can fit in this meeting. Yep.
E: Fair enough, let's keep bringing it up; I'm not sure what the appropriate venue for that is, but yeah, definitely. No problem, and if you do have more questions, feel free to mail the SIG list as well. Yes.
D: So 1.5 coding has started. To wrap up what we did for 1.4: in 1.4 we focused mostly on bug fixes from the big rewrite in 1.3, and we shipped a new version of dynamic provisioning, which moved into beta; there was a blog post recently about what's new in that feature. For 1.5 it's a really short milestone, we've got four weeks of coding, and we're focusing primarily on adding a bunch of testing to storage, to avoid any major regressions in the future.
D: Beyond that, we're focusing on designs; we're not actually going to be doing a lot of feature coding this milestone. We have snapshots in the works, basically a feature for creating backups, and an abstraction API. The flex volume plugin is something that's kind of been on the back burner for the most part: it has been a way for people to do out-of-tree volume plugins, but it has stagnated since 1.3; it doesn't mirror the internal API anymore, and it doesn't do dynamic provisioning or the attach/detach interface.
D: We want to bring it more in line with what the internal volume plugins are capable of, but that requires careful design, so we're going to go over that during the 1.5 milestone. Other than that, we keep driving forward on dynamic volume provisioning; we're thinking about making dynamic volume provisioning the default behavior for clusters starting in 1.5. At a minimum we're going to have default provisioners for AWS and GCE, but the question is whether we should make them the default behavior.
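For reference, the dynamic-provisioning path being discussed is driven by StorageClass objects (a beta API at this point) plus a claim that references a class; a sketch with an assumed class name and GCE parameters:

  # standard-class.yaml: a provisioner-backed storage class (beta API)
  apiVersion: storage.k8s.io/v1beta1
  kind: StorageClass
  metadata:
    name: standard
  provisioner: kubernetes.io/gce-pd
  parameters:
    type: pd-standard

  # data-claim.yaml: a claim that asks for dynamic provisioning from that class
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: data
    annotations:
      volume.beta.kubernetes.io/storage-class: standard
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 10Gi

  kubectl create -f standard-class.yaml -f data-claim.yaml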
H: I have a brief update regarding the actual feature status for 1.5. As we discussed in the Kubernetes PM group and the community meeting, the feature submission phase is open until next Monday, so the 10th is the last date. Anyone who would like to see their feature in the 1.5 release has until then to submit it to us so we can review it and continue working on it. So if you'd like to see a feature in this release, please do it by this Monday.
H: So what should you do? You simply have to submit your issue to the features repo with a brief description of your feature: what you're going to do and what the feature is about. I'd also like to ask you to pay attention to the spreadsheet I prepared and presented in last week's meeting, where you can see everything about our existing features that are going to be added. And, starting from Monday, it's a good time to actively work on the code for your features.
A: Well, we are going to get through our agenda quickly. Okay, cool. Does anyone have any other specific topics? Actually, I know two more topics: cherry picks. So Jessie Frazelle is our 1.4 point-release czarina, so she is going to be our interface to the point releases on 1.4. So tell us more about 1.4.1.
I: Yeah, so we're going to build the release tomorrow. So if you can, get in anything that you desperately need cherry-picked today, and ping me on it. If I haven't commented on your cherry pick yet, that might be a problem, or I might not have seen it, so maybe re-ping me. I think I've gone through all of them, but I want to make sure that nothing that's super important gets left out.
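For anyone who has not sent one before, cherry picks against the release branch normally go through the repo's helper script, roughly like this (the PR number is a placeholder, and the script expects your upstream remote to be configured):

  # From a kubernetes/kubernetes checkout: open a cherry-pick PR of already-merged
  # PR #12345 (placeholder) against the release-1.4 branch
  hack/cherry_pick_pull.sh upstream/release-1.4 12345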
A: Okay, so today then, to make sure they're in for tomorrow; I should change my note, which says cherry picks in by Friday, to today. Okay, excellent. Anyone have questions about this, or about what happens for the next point releases, or the handoff, anything? Okay, sounds like there's not. Caleb, are you on the Zoom chat today? You had said something about wanting to touch on OWNERS.

J: I am.
J: As presented at the contributor experience bi-weekly yesterday, we have been waiting for people to reply to their opt-in emails. Sorry about having to send out two rounds of those, but the response has been overwhelmingly positive. We'll be updating the PR against the main Kubernetes repo, hopefully today, with everyone's opt-in preferences, and we are also testing the bot integration on the community repo.
A: Awesome, thank you. That has been work that has been a long time coming and is going to help us a ton. It does not, however, as I mentioned earlier, limit people's ability to review things: even if you are not listed as a reviewer, a maintainer, an owner, or an approver, having reviews from you is always helpful, because we can establish and build and grow trust through seeing reviews from people who are not as well known in the community. So please do make reviews if you have the opportunity and interest in particular topics.
A: Okay, the next big topic is CLA changes. As you've probably been seeing in pull requests, we are now running both the Google CLA bot and the Linux Foundation CLA bot in parallel. If you have issues, you can reach out to me or to the others helping with this, and we will help you.
A: We will either connect you to the Cloud Native Computing Foundation if we need to do background fixing on that, or we will try to figure out why it's not working properly. So both of them are running in parallel for a bit, and we're pretty close to having all the corporate CLAs signed, which is awesome. But each of you who is not under a corporate CLA needs to sign an individual CLA, and any of you who is under a corporate CLA will still need to make a Linux Foundation ID and have that connected to the corporate CLA.
A: That covers the CLA changes and the OWNERS updates. There is a 1.4 retrospective tomorrow; as we did with 1.2 and 1.3, we made a specific retrospective time, and Jaice Singer DuMars is going to lead that tomorrow at 10 a.m. It will be on this same Zoom channel (I don't know what to call this Zoom account), and most or all of you who are on this meeting should have an invite for it.
A: Last time, after 1.3, we opted to do two one-hour retrospectives, in part because we had so much to discuss, but I think also in part because we didn't spend the time up front to get our comments in. So if you can spend a little bit of time up front to get comments in, then we'll be able to get through this in one hour and all of us get an hour of our lives back.
A: Also, in the notices section, note that tomorrow there is a webinar called DevOps in the Age of Docker and Kubernetes, and I have to say, Apprenda is a Twitter marketing machine for Kubernetes, so thank you, Apprenda. It is astonishing the amount of traffic they have generated around this webinar; it is going to be very well attended, and it is with Gene Kim, myself, and Sinclair from Apprenda. So it's going to be exciting either way, I'm sure. It's a little bit early for the West Coast, but we'll figure it out. The URL...