From YouTube: Kubernetes Community Meeting 20161208
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Demo - Kubeless; Release updates 1.4, 1.5; 2017 roadmap discussion; SIG 1.6 Stabilization discussion; SIG OnPrem proposal.
A
Alright, so we are recording now and, happy, let's see, Thursday, December 8, and welcome to the Kubernetes community meeting. Good local time to you this morning. After our moments of technical difficulties, we are going to get started with Kubeless, a serverless framework that Sebastien Goasguen is going to demo for us. So Sebastien, let's get you started.
C
Cool, thanks. Good morning everybody, and good evening if you're in Europe. I'm going to give you a quick demo of Kubeless. I'm Sebastien from Skippbox, and I did this work with Tuna; I mean, he actually did most of the work, I just acted as a product owner, gave him all the requirements, and tested a bunch of things. Kubeless is serverless for Kubernetes, and it came out a little bit as a joke.
C
I was at KubeCon in Seattle and I said: hey, I'm kubeless in Seattle. If you're a moviegoer, you get the joke. And then at the meetup Craig mentioned that he would like to see a serverless framework on Kubernetes, and that sparked my interest. Brandon also gave a talk on compiling to Kubernetes, which was, you know, very interesting. It's a little bit of a mix of those ideas. Definitely there are alternatives out there: Fission from Platform9, Funktion from Fabric8, and OpenWhisk, a project from IBM which entered the Apache Incubator.
C
So there are alternatives that we could run on Kubernetes, okay, but what we've done arrives at a different design. We use a third-party resource, we then add a controller running in the cluster that looks at the custom resources that are being created, and we also started using Kafka as the event stream inside the cluster.
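(The third-party-resource-plus-controller design described here can be sketched as a toy reconciliation step. The event shape below mirrors what a Kubernetes watch returns, but the function name and the returned actions are purely illustrative, not the actual kubeless controller code.)

```python
# Illustrative sketch of the controller loop being described: watch
# "lambda" third-party resources and react to lifecycle events.
# This is a toy model, not the real kubeless controller.

def reconcile(event):
    """Map a watch event on a custom 'lambda' object to an action."""
    name = event["object"]["metadata"]["name"]
    if event["type"] == "ADDED":
        # A new function object appeared: deploy a pod for it.
        return ("deploy", name)
    if event["type"] == "DELETED":
        # The function object was removed: tear its pod down.
        return ("teardown", name)
    # Anything else (MODIFIED) is an update to an existing function.
    return ("update", name)

print(reconcile({"type": "ADDED",
                 "object": {"metadata": {"name": "demo"}}}))
# prints ('deploy', 'demo')
```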
C
Why did we do this? Primarily, it's exploration. I'm not taking it for granted, just curious to see what serverless on Kubernetes looks like. And also because Google Cloud Functions very likely runs on Borg, so they most likely have an app like this that provides Cloud Functions, and there was a ton of craziness around Lambda during re:Invent.
C
So we got excited. We had done some work on something called kubewatch, which was a controller watching all the resources inside your cluster and talking to Slack, so we had the controller in place that we could extend to the serverless work. Okay, so let's demo. It's on GitHub: github.com/skippbox/kubeless. You get a nice CLI; we're still working on it. You can create functions, you can install everything, you can create topics for messages, and so on.
C
So let's switch to the terminal. I'm on minikube, so I have no pods running. If I look at my pods in all my namespaces, I see that I have a kubeless controller, and I have a Kafka controller which runs one Kafka container and ZooKeeper. I also have a third-party resource that has been defined, which we call right now "lambda", and that's it. So now, if I use kubeless, I can do a kubeless function list, and I have no functions. So let's look at a basic function.
C
So you write your function, and then you do a kubeless function create, you know, and then you specify what kind of trigger this is; here it's an HTTP trigger. You specify a runtime; right now we only have one runtime. You specify a handler, which is going to be the base name of the function file and then the actual name of the function. And then you specify the file; it's in the examples.
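(As a rough illustration of the handler convention just described: if the function below lived in a file called demo.py, the handler would be written as demo.foobar, the file's base name plus the function name. The file name, the function name, and the idea that the function receives a request context are assumptions for the example, not the exact kubeless contract.)

```python
# demo.py -- a minimal function of the kind being deployed in the demo.
# With this file, the handler passed at creation time would be
# "demo.foobar": base name of the file, then the name of the function.

def foobar(context):
    # The runtime invokes the function per request; whatever it returns
    # becomes the response body. "context" is an assumed parameter name.
    return "hello world"

print(foobar(None))  # prints hello world
```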
C
Okay, so if I did this properly, this worked, and now, with a kubeless function ls, you get a function that's being created. Underneath, what happened is that it created a custom resource following our lambda spec. So now we have one of our lambdas, which is "demo", and you can get the YAML file of the demo. I see that I have the spec, where I specify the handler, the lambda itself (which is just the function), and the runtime, which is Python.
C
The method and the endpoint. If I look at my pods now, I have one pod running, the demo pod, using our runtime, and we can call this function. For this, right now, we actually have a proxy running; we're going to get rid of this and use services. At the moment I call the function demo, so I'm hitting the HTTP endpoint.
C
So let's look at another one. Something went wrong there; I'm going to figure it out and try to recover from this. We can also parse JSON: if you send a basic JSON, it will return the JSON. So let's try this one: kubeless function create demo2. It's also an HTTP trigger, runtime Python 2.7, and a handler, the base name of the file.
C
There are a few things that are a little bit hackish because we're using a proxy, so here, when we call the function, we actually have to do a port-forward, but we'll get rid of this. You'll see that it happens on the fly: I have another pod that's been created for my demo2 function, the code was injected dynamically into the runtime, and then we were able to invoke it.
C
So I probably made a typo in the first function that we created. But people are going to say, well, all I see is HTTP triggers, so we added Kafka to it. We manage Kafka a little bit, so we can actually create topics on the fly; again, that's a little bit hackish, but let's create a "kubernetes" topic.
C
Let's try this, okay. So here the runtime, when we do the pub/sub, is a little bit different in the way we inject it. So now we have another pod running, demo3, and to see this work, we can actually emit a message. We made a little Kafka wrapper, so we can do a kubeless function publish, and we can publish on the topic that we use, which is the "kubernetes" topic.
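(The topic flow in this part of the demo, create a topic, publish a message, and a subscribed function sees it in the consumer logs, can be modeled in a few lines. This is an in-memory stand-in for the Kafka plumbing, purely to illustrate the flow; none of these names come from kubeless.)

```python
# Toy in-memory model of the publish/subscribe flow shown in the demo.
# Kafka, the topics, and the consumer logs are replaced by plain
# Python structures; the names are illustrative only.

topics = {}          # topic name -> list of subscriber functions
consumer_log = []    # stands in for the consumer pod's logs

def create_topic(name):
    topics.setdefault(name, [])

def subscribe(name, fn):
    topics[name].append(fn)

def publish(name, message):
    # Deliver the message to every function bound to the topic.
    for fn in topics[name]:
        fn(message)

create_topic("kubernetes")
subscribe("kubernetes", lambda msg: consumer_log.append(msg))
publish("kubernetes", "hello Sarah")
print(consumer_log)  # prints ['hello Sarah']
```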
C
So you publish the message on the "kubernetes" topic and you say "hello Sarah", and let's see if this works. So it published, and now, to actually see that this works, you need to check the logs of the consumer, and it says "hello Sarah". Okay, so that's it. I messed up the first one, which is the basic HTTP GET, but you saw that overall it works. Again, it leverages third-party resources, you can create basic functions, and there is only a Python runtime; we would love to get some help to add Java and Node.
C
Something like this. We have the basics of an event framework, and from just looking at this, you know, the big issue is actually going to be having other services running that are going to emit events. You don't want to emit events by hand, so you need other things in your cluster to emit those events so that they're going to call the functions. Cool.
A
Thank you. Yeah, you're getting lots of chats, "very cool" and "nice work", so you'll be able to scroll back through that if you have a chance. Can you add some notes to our meeting agenda? Because we didn't get a robust set of notes from that. So if there are specific things, and a link to your slides, that would be super helpful. Cool.
A
Thanks, and everybody knows how to get a hold of Sebastien Goasguen if you need to, so have at it. All right, so up next, because we are in the run-up to the 1.5 release, we're going to go through releases, and I think I saw Jess. Jess, one of you will do 1.4 first, if you are... maybe no, Jess, I didn't see her, okay. Well, I did see Saad, so Saad, do you want to talk about 1.5, Saad and/or Caleb? Yeah.
E
If you have any changes that need to go into 1.4.7, please reach out to her. For 1.5.0, today was the official release date. We are not going to make it. There is one outstanding issue that is still a blocker: it's the soft lockup issue, CPU being saturated; there's a link in the notes. We believe that we've identified the PR that's causing it, and that PR has been reverted, but we need a couple of extra days to verify that it actually fixes the issue.
E
Branch status: I'm planning on cutting a beta 3 today. Beta 3, if all goes according to plan, should be identical to what we release on Monday. If we find any more issues between now and Monday, then we'll go ahead and add those in and it may be different. Cherry-picks have been going into 1.5; I've been doing nightly batch cherry-picks, unless your PR is very complicated and has merge issues, in which case I'll ping you on your PR to do it yourself. At this point there should be no more cherry-picks to 1.5 for 1.5.0.
E
If there are, please ping me on Slack and I'll take a look. The docs: all the docs are completed, except for one outstanding PR which is still in review. Devin and Jared are working on it, and they're actually just going ahead and committing grammar fixes and things themselves. Hopefully that gets resolved soon, but since we've got a couple extra days, it's not a blocker. Release notes have been merged to the features repo; there's a link in the notes.
E
Please go take a look at that, especially if you have a feature that's going out with 1.5. If something doesn't look correct, please create a PR to fix it. Aaron is taking a look at those PRs, and when we do the release on Monday we're automatically going to pick up that file. So please update it if there's something that's incorrect, before Monday. And finally, we're holding burndown meetings every day at 1pm; there's a link to the notes.
A
Questions from anyone? All right, well, thank you, Saad and Caleb and Dims for all of the work that you've done on this, plus everyone who did triage on the test failures and worked on this last blocking issue, because I know that's been a bit of a scramble in the last couple of days. So thank you; thank you to everyone who has helped on that. All right, I think I saw Ihor as well; up next is the 2017 roadmap with you and Aparna. Yeah.
F
So let me share my screen. Yeah, so this is a brief session; this is a recap of what was done during the developer summit at Seattle. Aparna and I have been holding two 2017 planning roadmap sessions. We made sure there were two sessions because of the huge interest of the participants in joining them, to define the roadmap for the product for the next year. So yeah.
F
So during the roadmap sessions we divided the audience up into stations. There were a few tables where people could join a session, participate, and discuss some specific stuff, with regard to some specific SIGs; I'll show that on the next slide. Also, we shared a template beforehand where all the people were able to share their thoughts in structured form in a spreadsheet, so you can find a link to the template here in these slides.
F
Yeah, so in this slide you may notice how the SIGs were subdivided. We aggregated the specific topics, like some areas, and they have been discussing all this stuff on these slides; you may find them in the meeting notes. We have aggregated all the notes that have been collected during the meeting, so feel free to suggest additions if you have any. Aparna, do you have any other questions or suggestions on this?
G
We can go through each of the slides in a little bit more depth, so let's actually just start with this. I think this was maybe one of the highest priorities of the group overall for 2017: we have to fix the known gaps in the project, particularly scaling the project. There was a whole session that Brian Grant and others led about scaling both the code base and the community.
G
I think that there's a full document on some of the ideas and what we're going to do there, in terms of splitting the repo, making it easier for contributors to contribute, and also having a path in the community. There was also a discussion on improvement of the documentation. This is a big outstanding item, and so that's something that we plan to tackle heavily in 2017; to be fair, it has been improving over the last couple of releases. And then the third piece is the need for reference documentation.
G
Yeah, I think for applications and workloads, there were different SIGs that kind of got together to discuss our overall roadmap for applications and workloads, and I think what came out of that is that we want to have some well-documented example patterns and recommended workflows. That's kind of a major goal for 2017, and I think that is, you know, in combination with the work that has been happening with the Helm package manager and the charts.
G
There was also a discussion around whether we need to be more opinionated, or to provide more opinionation, on the CI/CD part of the developer workflow, and in general the discussion resulted in: no, we don't need to add features into Kubernetes for this, and this is a non-goal for the project. It's probably something that should emerge through the incubator and other parts.
G
Yes, so scalability and federation. Federation was missing here; we will incorporate that. I think they have a pretty robust roadmap already, which includes taking their current features to GA and improving usability. On scalability, the discussion was that we need a more careful definition of scalability, something that is not just at the cluster level but incorporates some production-like applications. So we sort of had a definition of scalability which is very specific.
G
It's based on API latency, and there are just two kinds of cluster-level SLOs, but we found that that was insufficient when we had a customer or user, like, you know, some of our larger users. And so we want to create in 2017 more natural benchmarks, and also more extensive tests that exercise a more production-like environment. We also want to move to at least an HA config for scalability testing, and scalability will not just be on a per-cluster basis.
G
Right, so this was a big group, the node and networking teams; we have two slides on this, and I'm not sure that this is complete. But some of the themes that came out for 2017 were to ensure network policy, hierarchical network policy, and a move towards quality of service in the context of hybrid applications. On the networking side, I think this is minimal, but ensuring internal L7 load balancing. There was a lot of discussion as far as what needs to happen on the node; you can read some of that.
G
Obviously the P0 here is the runtime strategy: we're going to have an alpha of CRI in this release, in 1.5, and that needs to extend and evolve and eventually go GA in 2017. Some of the other things here are with regard to GPU support and so forth, so that's also a plan for 2017. And the next slide, yeah, the next slide has more on auth.
G
There was a lot of discussion on auth. Some P0s: some gaps that we know we have with regard to management of secrets and encryption, which is something that we have to work on, and then many other things to essentially enable multi-tenancy; I think that was the main discussion here. So: moving RBAC to beta, improving IAM for pods, improving the login experience, enhancing network policy, app-to-app authentication. This was all in service of things like multi-tenancy, and also in service of the work that we're doing in the Service Catalog SIG.
G
Right, and so AWS, OpenStack, vSphere, bare metal: this was a combination of SIGs that have been talking about what we need to do in 2017 to enable other types of infrastructure, and to make sure that we have a robust offering for all types of clouds: private, public, bare metal, etc. So this was really about creating reference architectures. I think the work that we're doing on install is also assumed in here, and making sure that we keep adding to it.
G
And I think the other thing that happened with this discussion is that there was a little bit of prioritization, and so the group is proposing that we prioritize AWS, followed by Azure, followed by OpenStack, and then vSphere. And there's some thinking that we may or may not be able to get to bare metal; I think that would be best effort. But this is a proposed priority for 2017.
G
So yeah, the cluster lifecycle SIG and cluster operations: that whole group really did a lot of work in 2016, and so they want to move that to production. They want to incorporate upgrades with user-defined downtime and maintenance windows, and also the ability to roll back, and to do all that in an easy way, potentially through the kubeadm tool, and also to be able to address that in an HA cluster. So those are the two P0s. There's some rationalization that needs to happen in terms of cluster lifecycle tools.
G
Not done yet. And on monitoring: simplify the deployment of third-party monitoring solutions. There are a number of good monitoring solutions that have come up in the ecosystem, and we want to make them easy to use. There's also, I think, completion of the metrics API, and, this might be maybe a P1 or P2, introducing an infrastore.
F
Actually, all the groups, all the SIGs, that have been present during the roadmap meeting: if you have any suggestions for this stuff that you'd like to push from your SIG for the roadmap in 2017, feel free to add your slides in the template. And so, Andy.
A
Thank you. So we have a couple of comments or questions that came through while this was happening. SIG Scale: Bob said most of the SIG Scale meeting today was on the scalability definition, and it's the main work effort in upcoming weeks. And then Justin Garrison asked about what is needed to get better bare metal support in 2017: are we just limited on people, resources, and prioritization, or is there something else? Yeah.
G
So I think that, you know, these questions will be best taken up by the groups that met to prepare that roadmap. I have some sense of why, you know, these priorities are the way they are and so forth, but I think it's best addressed with the group. Each of those slides actually has an owner, and so we can make the connection, I think, for the different types of cloud providers.
A
Nope? Awesome, all right. Well then, we will go on to the SIG discussions. So we sent out a bunch of questions; Eric Tune wrote up questions about self-assessment for each of the special interest groups, trying to determine where they feel their infrastructure is, where they feel their development cycle is, and whether 1.6 is a time for a stabilization release.
A
Because we have a lot of features in alpha, and a lot of features that we know have work to be done. So I've got reports back from SIG Apps, SIG Network, SIG OpenStack, SIG Auth, and SIG Cluster Lifecycle, and we're going to go through these pretty quickly to talk about what this self-assessment did. SIG Apps was up first. Eric, since you wrote the questions, do you want to give any more framing before you talk about SIG Apps?
H
Looking at all the cool stuff people are building and trying to integrate that, rather than imposing my opinions about that, I thought we should ask the community how they think things are going, and so that's the genesis of this survey. And I can tell you, I was involved in SIG Apps and I listened to SIG Auth, and the answers weren't exactly what I was expecting, but that's a good reason to do the survey. So I'll quickly share what we concluded in SIG Apps.
H
We said we have no idea how much test coverage we have, but we felt like it might be 50 to 75 percent coverage. Of course, one thing we do know is that we have code in a couple of different companies' repositories, so it'd be great to have automated coverage measurement; it would need to be done in some way that can somehow cover multiple projects. We spent about a quarter of our time, approximately, responding to user issues, and we felt like we were good user-issue responders.
H
We felt like we have a good balance, fifty-fifty, on features and stability, and we have about 600, call it up to a thousand, issues related to our SIG; our SIG is popular. And Marek was kind enough to measure how many flakes we had, and we estimated about 15. So that's where we're at. Thanks.
H
We decided that we had the right balance. We haven't actually done our Q1 planning yet, so I would say that I will be able to speak to that more once we plan Q1, even though Q1 is almost upon us.
H
Totally. I mean, a lot of us don't think we're going to be seeing new API types next quarter, so that would be sort of an odd, big feature. I think there will definitely be improvements to existing API types, and you can argue about whether that's features or stabilization. Thanks, okay.
J
So I'll start by saying we don't think we need a strict stabilization cycle. The output of our discussions was mostly that we are primarily feature-based; most of our meetings are sort of idea foundries. But we don't think we're currently in a state where there are a lot of things that are unstable. There is, we think, a lot of tech debt that we should be addressing in the next cycle, things like moving forward alpha APIs that have been in alpha for too long, and similar sorts of issues.
J
At the same time, we also realized we don't have a clear understanding of what our test coverage is; a lot of different people thought that we had varying levels of test coverage. And similar to that, there are a lot of things that we probably should be considering in networking that usually we don't talk about in this SIG. So in the next cycle we're going to put a focus on really defining what all those things are, and coming up with...
J
...you know, clear ideas of what that test coverage is and what needs to be done for each. And similar to test coverage, another bit of tech debt is the documentation, where a lot of it is highly technical at the moment and not very user-focused. So we're going to be doing a review of that documentation as well in 1.6, trying to see where our holes are and where we can be improving that user experience. Did you have more to add on to that? Nope.
M
The other thing that was discussed on the list recently was around the e2e testing: getting at least one OpenStack provider reporting that on a regular basis. Ideally, and I think this is a stretch, we'd like to get three, but in the 1.6 timeframe just one would be better than nothing. And the other thing is documentation: there is not actually a vanilla getting-started guide for the OpenStack provider at the moment, and there are also a number of parameters that have been added to the provider in the last six to 12 months which are pretty much undocumented.
D
In general, it could be considered both stabilization and a little bit of new features. We want to address upgrades in particular, and provide self-hosting as an alternative for kubeadm. Also, we're going to try to coordinate releases of kubeadm to match the 1.6 release, to sync up that way, which isn't the case now. Also a lot of refactoring and, yeah, simply moving...
D
Towards
general
availability,
yeah
try
trying
to
get
things
narrow
beer
and
also
we're
not
going
to
try
to
implement
AJ
as
such.
Instead
try
to
address
some
blockers
that
we
are
kind
of
current
facing
now
so
yeah
stabilization.
A
Excellent
one
of
the
few
groups
that
I
said:
yes,
it's
really
going
to
be
more
stabilization,
that's
good
to
hear
as
well.
It's
I
know,
sick
storage
did
this
last
release
in
1.5
after
the
1.4
work
that
brought
in
a
bunch
of
new
york.
In
any
case,
thank
you.
Anybody
have
questions
for
sig
a
cluster
life
cycle.
A
What
work
they're
going
to
do
I
expect
we'll
have
more
of
these
reports
next
week.
We'll
get
another
chunk
of
cigs
having
gone
through
this
and
talking
about
what
they've,
what
they've
been
working
on
and
what
they've
come
to
you
for
the
1.6
work,
any
other
questions
all
right.
Well,
then,
we
will
continue
to
move
on
and
we're
getting
through
this
quicker
than
I
thought.
So
that's
kind
of
awesome,
so
tomasz
was
going
to
talk
about
sig
on
prem.
So
this
is
the
hardware
and
other
option
special
interest
group
and
oh
wait.
L
I mean, on the SIG Auth side of things, because we actually covered this, and we have this written down in our meeting notes: the basic highlights are that David, who contributes a lot, is going to be working to get RBAC to beta; I think that is the biggest thing, as part of the stability of that particular system. And the notes, sorry, just to reiterate, the notes were from the thread about stability and all the things that you can address. Correct.
L
So
we
are
going
to
review
the
owners
files.
We
do
sort
of
an
okay
job
with
the
few
people
that
we
have
for
sig
off
in
terms
of
the
code
WN,
but
we'll
be
reviewing
those.
We
are
also
going
to
be
trying
to
figure
out
a
better
way
to
archive
a
lot
of
our
conversations.
Slack.
Our
select
channel
is
awesome,
come
join
us
level,
but
it's
not
particularly
searchable.
So
hopefully
we
can
try
to
figure
out
how
to
best
use
like
our
Google
Form
channels.
So
we
don't
get
a
lots
of
repeated
questions.
L
The
I
we
don't
have
a
whole
ton
of
new
features.
A
lot
of
the
efforts
are
going
to
be
around
getting
our
back
to
beta,
trying
to
document
a
lot
of
the
components
that
already
exists
and
how
that
are
off
or
security-related
and
then
continuing
to
sort
of
try
to
evaluate
where
we
go
from
there
so
yeah.
L
It's
largely
testing
some
federated
API
service
stuff,
just
to
sort
of
bring
it
in
life
back
in
line,
possibly
talking
about
credential
rotation,
but
nothing
particularly
new,
featuring
nothing,
nothing
too
major
on
that
front
and
yeah
just
trying
to
continue
to
improve
the
docks
which
we
know
are
pretty
sparse
for
a
lot
of
the
author.
Elated
features.
A
We have an archiver for Slack, but that's still not very searchable, and still not very discoverable, so we're trying to create more formalization around how SIGs communicate with each other and with the broader community. We will have more on that, but there is a discussion happening on the contributor experience working group as well as the SIG leads mailing list; reach out if you want to contribute. All right, so now on to the proposed new SIG, SIG On-Prem.
N
Problem,
hello,
everyone
greetings
from
Prague,
which
I'm
visiting
today
so
yeah,
so
just
to
recap
little
bit
so
there
was
the
there
was
this
proposal
from
Joseph
Joseph
Jackson
aprenda
to
start
seek
a
bare-metal
original,
the
original
name,
which
kind
of
evolved
into
seek
on-prem,
which
is
probably
better
a
name
for
that.
N
So
a
couple
of
companies
were
interested,
including
Korres,
I,
think
also
canonical
new
entity
from
our
side
aprenda
as
well.
Also
lots
of
our
partners
were
asking
for
this.
For
this
thing
to
be
created
to
kind
of
improve
the
the
the
bare
metal
on
on-premise
situation
to
make
the
user
experience
similar
to
other
platforms.
I
think
we
defined
a
draft
of
of
the
goals
and
focus
for
the
seat.
It's
on
the
mailing,
these
in
email
threat.
N
Right,
yeah,
cool
yeah,
thanks
Joseph
and
yeah.
I
would,
I
would
kind
of
summarize
the
goal,
as
you
know,
make
bare
metal
or
on-premise
first
class
citizen
in
Coober
native
world,
so
there
was
a
very
positive
feedback
and
also
some
world
some
concerns
managed.
So
we
were
invited
to
discuss
that
on
c-class
drops,
which
seemed
to
be
the
you
know,
most
overlapping,
sick,
so
yeah
we
joined
the
meeting
there
we
discuss
it.
I
think
the
word
the
discussion
was
very
good.
N
It
was
more
understanding
why
we
need
that
and
horga
or
her
has
castra
and
Margo
chappy
volunteered
to
represent
c-class
drops
on
this
meeting
today
and
maybe
present
their
position
and
answer
some
questions.
Also
joseph
is
here
in
there
are
some
questions
about
the
the
future
etc.
So,
yeah,
that's
that's
pretty
much.
That's
pretty
much.
It
I,
don't
know
how
much
time
we
have
want
to
go
through
some
more
details.
We
worked
with
Joseph
on
some
plan
for
a
couple
of
next
meetings.
You
know
defining
goals.
What
we
should
do
where
we
should
engage.
A
It's just a guess, but are there any questions for the team? Sounds like no, or you're muted. All right, well then, we will get this moving along. As I mentioned, there is the thread on the SIG leads mailing list and the contributor experience working group mailing lists, defining what our requirements are around being a SIG in good standing, so we will be doing a lot more work around what that is and making sure that we have the right...
A
The
right
number
of
cigs
and
cigs
are
active
and
engaged
in
a
way
that
makes
makes
transparency
and
upward
communication
of
what's
happening
inside
the
cigs
up
to
the
broader
community.
Better
I
think
one
of
the
suggestions
was
also
a
cig
summary
mailing
list,
where
the
cigs
send
out
summary
notes.
That
kind
of
thing
so
we're
trying
to
make
this
a
better
experience,
especially
as
we
broaden
the
number
of
special
interest
groups
working
on
this
all
right.
So
the
answers
to
some
of
the
questions
are
we're
working
on
it.
A
So once we cut 1.5 at the beginning of next week, then we will define a patch release team for the 1.5 patches going forward, with new wording, so we're going to try and be consistent about it going forward; it was starting to be confusing talking about them as we have multiple versions running at the same time. I've talked already about the contributor experience working group and the SIG leads mailing lists, about what being a good SIG, a SIG in good standing, looks like.
A
Please
take
a
look
at
that
discussion.
If
you
haven't
seen
it
I'll
link
it
in
the
notices
there
is.
The
December
22nd
community
meeting
is
going
to
be
our
12.5
retrospective,
we're
not
going
to
do
that
on
a
Friday,
because
that
would
be
the
23rd
and
most
people
are
I,
hope
heading
off
to
holiday
things
by
then
so
we're
going
to
go
up
to
the
Thursday
twenty-some
december.
O
I have a quick notice, Sarah. If anybody cares about how we manage issues and PRs, there is a thread that you should read, at least the summary at the top, about issue and PR triage, something like that.
A
And I have started changing all the labels. So last night I renamed a bunch of labels to make them more consistent, and added a bunch of missing labels for all the SIGs. So all the SIGs should have labels now, and we can route issues and PRs to the appropriate SIGs using labels.
A
Anyone else? Discussion? Okay, on to the notes; thanks, Phillip. Anybody else? All right, well then, we will see you all next week with more SIG introspection, because we want to make sure that the SIGs are communicating upward with the broader community regularly. So have a great week, and I'll see you on the internets.