From YouTube: Kubernetes Community Meeting 20180531
Description
See this page for more information! https://github.com/kubernetes/community/blob/master/events/community-meeting.md
A: And let's go ahead and get started: welcome to the Kubernetes community meeting. It is May 31st, 2018. This is our weekly community meeting that's open to the public, and we are live streamed and recorded onto YouTube, so please be cognizant of what you say during this meeting. Today we're going to have a demo from Aptomi, some release updates for 1.11, and Caleb's going to give us a quick section on an introduction to KEPs.
B: So I'm Roman Alekseenkov, one of the core contributors to Aptomi, and today what we wanted to do is talk a little bit about application delivery on top of Kubernetes, the problems that we see in that space, and maybe explain why we started Aptomi in open source and how it actually helps with app delivery on top of Kubernetes. We already talked about it at SIG Apps a few weeks ago, and at the Kubernetes meetup as well.
B: But we also wanted to present a quick overview and demo here, just to make everyone aware of what we're working on. That said, let's go ahead and look at some of the slides and then do the quick demo. First of all, let's look at the stack of containerized applications on top of Kubernetes. First, there's Kubernetes itself. After that, there are basically all the individual components, packaged as Helm charts, that everyone is running on top, or maybe just plain Kubernetes manifests.
B: And then it comes down to applications, consisting of multiple Helm charts basically wired together, and then the question is: what tooling do we have today to define, manage, and run those applications? One of the solutions would be Helm itself, which has the concept of the umbrella chart, allowing you to bundle all dependencies inside that umbrella chart. The umbrella chart kind of naturally becomes a definition of your larger service, or the larger application that you're trying to run.
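As a hedged aside for readers: in Helm 2, an umbrella chart declares its bundled components as dependencies in a `requirements.yaml`. The chart names, versions, and repository URL below are made up for illustration:

```yaml
# requirements.yaml of a hypothetical "analytics-backend" umbrella chart
# (chart names, versions, and the repository URL are illustrative)
dependencies:
  - name: zookeeper
    version: "1.0.0"
    repository: "https://charts.example.com"
  - name: kafka
    version: "0.8.5"
    repository: "https://charts.example.com"
  - name: hdfs
    version: "0.1.0"
    repository: "https://charts.example.com"
```

Running `helm dependency update` then vendors those charts into the umbrella chart, which is exactly the all-in-one bundling being discussed.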
B: That's kind of where this distributed nature and the all-in-one bundle sometimes don't go together very well. All those people trying to implement a service-based architecture, where they have teams delivering individual code components and individual Helm charts, want the teams to be able to work independently from each other, without having a need to rebuild the umbrella chart every time one of the dependencies changes.
B: They also want to manage the overall system as a collection of services wired together, so they can have a certain way of expressing where services should be deployed, how they should be composed, and which services should be shared in which environments. They can basically express service composition, service sharing, service placement, and things like that, basically a service-based architecture, so they can reason about it at a slightly higher level than just containers and charts.
B: Basically, what we're trying to do with Aptomi is build this framework on top of Helm and Kubernetes manifests that allows people to reason about services, and rules about services. So you take existing Helm charts, you create services around them, then you define relationships between services, and then you define things like sharing, rules, placement, and so on and so forth.
B: So you can express, for example: I want to have a database which is shared in dev and stage by all the services. Or you can express things like: production always gets deployed to a certain cluster, or the development environment cannot talk to the production environment, meaning services from dev cannot talk to services from prod. Once we have those services and rules defined, we can take this model and enforce that state using Kubernetes and Helm, and possibly some of the other tools around them.
B: Like maybe a service mesh. So it's a very simple idea for how you can build, on top of Kubernetes and Helm, a basic service model to enforce all of that. And the model is pretty simple: as I already said, you basically take existing Helm charts, create service definitions around them, put everything into services, then describe relationships, sharing, and rules.
B: Then Aptomi is going to go and enforce that state. In reality the model is a little bit more complicated; if you go to the GitHub page you'll find more details. But because we only have ten minutes for the overview as well as the demo, I think we'll leave it at that. At the high level, that should make sense.
B: So I think this was a five or six minute overview, and at this point we can actually go ahead and do the demo. For this particular demo, what we've done is we basically took charts: we took the HDFS, Kafka, Spark, and ZooKeeper charts, and we used those to build something that we called an analytics back-end. Then we built an application on top which uses that analytics back-end: it takes real-time tweets and displays the top hashtags on a webpage.
B: What we can take a look at right now is the service graph. So, for example, you can see that you have Kafka and Spark sharing the same ZooKeeper; at the same time, you also have HDFS, and all of those kind of get folded into a larger service called analytics back-end. I don't think we have a lot of time to go into service definitions, but I can basically show you that the service definitions are really simple.
B: You just give us a small YAML file saying: hey, please run me the Twitter stats with those parameters, and we're going to go and instantiate the whole service graph, fulfill dependencies, and configure the rest. This is something that I've just created right before the demo; you can see it was created 22 to 23 minutes ago. It was just one command: I basically took that file that I just showed, and I called policy apply on it using aptomictl.
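For readers, a hypothetical sketch of such a consumption manifest; the kind, field names, and parameter values below are made up for illustration and are not necessarily Aptomi's actual schema:

```yaml
# Hypothetical sketch of the small YAML file described in the demo
# (field names and values are illustrative, not Aptomi's real schema)
- kind: claim
  metadata:
    namespace: demo
    name: twitter-stats
  service: twitter-stats
  params:
    backend: real            # use the real analytics back-end, not the fake one
    cluster: cluster-us-west # where this instance should be placed
```

The point of the demo is that a consumer only states "run me this service with these parameters"; the engine resolves the rest of the service graph (Kafka, Spark, HDFS, ZooKeeper) behind the scenes.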
B: Given that we have, I think, two more minutes left for the demo, I want to show probably two things. One is how you can run a new service instance, with Aptomi fulfilling all the dependencies, and then how you can change the rules. So for the first thing, let me run a couple more instances of the services.
B: Those actually look exactly the same way this one was looking; it's basically someone else who wants to get Twitter stats, and that file looks exactly the same. So I can go and look at the endpoints of all those services that we just launched, and you can see: this is one with the fake back-end, this is the other one with the fake back-end once it initializes, and this is the real-time version that we launched 25 minutes ago.
B: We can change a rule real quick. I have a rule that certain services go to certain clusters. So now I can change the rule, saying that all the services should go into one cluster, and I can just call Aptomi to enforce that rule. You will see that Aptomi will say: hey, I need to move some instances from one cluster to another cluster, and then Aptomi will start doing that.
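As a hedged aside, a hypothetical sketch of what such a placement rule could look like; the kind, criteria, and cluster names here are illustrative rather than Aptomi's actual schema:

```yaml
# Hypothetical placement rule: send every service instance to one cluster
# (field names and values are illustrative, not Aptomi's real schema)
- kind: rule
  metadata:
    namespace: demo
    name: all-services-to-one-cluster
  weight: 10
  criteria:
    require-all:
      - "true"               # match all service instances
  actions:
    change-labels:
      set:
        cluster: cluster-us-east
```

Re-applying the policy with a changed rule like this is what triggers the engine to reconcile, creating instances in the target cluster and terminating them elsewhere.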
B: So it's going to start creating instances in the cluster east, and it's going to start terminating instances in the cluster west. We basically looked at some of the service composition, some of the service sharing, and some of the rules that we can enforce around services. I know it's a little bit rushed for such a big project, but I think it shows the point. And just to close on the demo: the ask that we have to all of you is maybe an even more thorough look.
B: Maybe give it a spin. Look at the QuickStart that we have, look at the model and the service definitions, and give feedback on how the UI works and how the CLI works. We would all appreciate it if you have time, or point your teams to that, and then join the Slack and share some of the feedback with us. We posted also some of the links for people to look at, including the forty-minute demo that we just presented last week.
A: All right, thanks Roman. For those of you looking for the links and stuff, the slides are already linked on the notes and things like that. With that, we're moving to release updates with Josh Berkus.
C: Howdy. So we are in week nine of this release cycle. Our next deadline is draft documentation: PRs are due June 4th, and I want to emphasize that because we have a lot of features in the feature list for which we do not currently have any knowledge that docs are in progress. If you have a feature in 1.11, you have been contacted in the last couple of days by members of the release team about the required docs for it.
C: So please do respond to that. Ultimately, features that need documentation and lack it will not be part of 1.11. We currently are in code slush. I want to apologize to all contributors for some confusion about required labeling, etc. Our original plan for the 1.11 release cycle was that we were going to be using Tide for milestone merging for 1.11, as well as getting rid of the milestone munger and replacing it with a milestone maintainer job under Prow.
C: Neither one of those things ended up being ready in time for code slush, and as a result we reverted to using the old milestone munger, whose rules nobody really understands all that well. As a result, you do need to actually label stuff with milestones during code slush, and with approved-for-milestone as well; sorry for not giving you advance notice on that, plans changed. Code freeze will then start the next day after docs are due, on Tuesday, June 5th. At that point, we're going to start looking at pruning stuff from the release.
C: If you are working on a feature, or your SIG is working on a feature for 1.11, and you have come to realize now, with code freeze only three working days away, that that feature is not going to be ready for 1.11, please update your issue in the features repo so that we know what is not going in 1.11, and we can stop tracking it and stop bugging you about it. For CI signal, we are actually looking really, really good.
C: So once again, I want to thank our CI signal and testing teams, and all of the many contributors, for taking test failures seriously, jumping on them when they happen, and getting them fixed as rapidly as possible. It's been really great. We only actually have one open CI test issue right now, which is that the scale density test is still being flaky. As far as we know, it's still unknown whether that represents a real performance problem or a testing problem.
C: On the general doc slush, and I don't know that they really call it slush: I don't actually know what the standard is for accepting doc changes during freeze that are not directly related to features. Do we have one of the SIG Docs people here?
D: This is Zach Arnold, a somewhat new SIG Docs contributor. Yes: things that are not directly related to 1.11 features can be issued against the kubernetes/website repo at any time, with no need for the release cycle; we're pushing updates like that all the time. But if your feature is 1.11-impacting, then we have a sort-of-blocking PR open against our own repo that we'll put everyone's features into, so that they don't go to the website before the actual release. So hopefully that answers your question.
E: This is Misty, I'm here. I just think there's a little bit of fuzzy terminology there. We have a branch called release-1.11, and that branch is where 1.11-specific features should go, because that branch will not be merged and published until 1.11 goes out. Everything that gets a PR against master gets published effectively as soon as it gets merged.
E: So master is for things that are not version-specific, stuff like typos and improvements and things like that; that's just to amplify what Zach was saying. I am managing the docs for the 1.11 release, and I'm managing that branch. So if you have questions, feel free to reach out to me on Slack; my Slack handle is misty, which should be fairly easy to remember. Thank you for your help and your prompt consideration; we're trying to keep the trains running on time.
C: One other thing: I've linked the feature tracking spreadsheet from the notes. You'll notice some changes there; in particular, there's a new tab called milestone risks that the features team has put in there. If your feature is listed under milestone risks with something like "docs overdue," that means it needs your attention.
A: Okay. Next, Caleb Miles is going to talk about KEPs, if you're not familiar with KEPs. We're going to be starting to add a KEP section to this community meeting that will do things like highlight new KEPs, KEPs that are open, KEPs that are being closed, and KEPs that are being implemented, to kind of give some visibility to KEPs. But before we start that, we wanted to have Caleb on to explain what KEPs are, and get everyone to a baseline of understanding of what exactly a KEP is.
G: So yep, as mentioned, my name is Caleb Miles, and I'll get into that on the intro slide. I'm not a huge fan of presentations, so there are only seven slides here; I want to leave the bulk of the time for any questions, because right now I am working to put together an FAQ for KEPs. So let's just get into it. About me: I'm a technical program manager at Google, calebamiles on Slack and GitHub, and calebmiles at google.com for email. I don't have the social networks.
G: So that is how you reach me. So, what is a KEP? We started this process last year, while I was working at CoreOS, and with the support of Brian Grant and Joe Beda we put our heads together and started thinking about what the next evolution of the design proposal process is, one that takes into account some of the things we've learned over time working on Kubernetes, and what, really fundamentally, a design proposal is.
G: It is a tool to motivate problems, and so I keep going back to Russ Cox's excellent blog post about working toward Go 2, where he talks about the importance of properly motivating a problem in an open source project, especially one with as high traffic and visibility as either Go or Kubernetes. As a maintainer, you may not be working on that particular use case, or may not have insight as to why a particular enhancement needs to be proposed, so being really crisp about motivation matters.
G: This is a constant complaint among contributors in the project: a lot of things happen in back-room or hallway discussions and just kind of appear in the community as a fully formed idea, which can be frustrating if you are a member of a SIG and you want to have a forum for commenting and providing your own insight. So, a little bit into the why.
G: Yeah, like I said earlier, we're a very massive project and we really need to do a better job of recording our history. We're still dealing with the fallout of the CoreOS acquisition by Red Hat, where we've got G Suite accounts disappearing which hold some of the important history of the project. We also keep breaking our hosted tooling; if you have run into the unicorn on your issue or pull request, you will certainly understand the need to store things more durably.
G: Things can really languish, and one of the challenges we had with design proposals, and one of the things I'm seeing now which is concerning with KEPs, is that we have pull requests that just grow and grow while they're open, and by the time you actually get around to merging, it doesn't match at all what the original approach was, and there's little documentation about why: what you discovered that made you change your approach.
G: We want to help everyone understand the history and the provenance of an enhancement, and we need tooling to help us merge more quickly on individual enhancements. And so, like every great process, this is just three steps to profit. One is identifying the problem; that's something you generally do as an individual contributor or as a member of a larger team, taking into account the feedback you get from your users, or your own feedback. Since SIGs are the decision makers in the project...
G: ...you need to find a SIG to agree that the problem you've identified in your motivation is something that the SIG is willing to sponsor addressing. We're still trying to decide what belongs in the repo, what the problems are that Kubernetes should address as a project, so you need a SIG to help you get agreement and buy-in that your problem is a problem that needs to be solved soon. And then you can start documenting consensus in your pull request, in the KEP.
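As a rough sketch of what that consensus document looks like in practice: a KEP opens with YAML metadata naming the authors and the sponsoring SIG. The values below are hypothetical, and the authoritative template lives in the kubernetes/community repo:

```yaml
# Hypothetical KEP front-matter, loosely following the template in
# kubernetes/community at the time (all values are made up for illustration)
title: Graduate Widget Scheduling to Beta
authors:
  - "@example-contributor"
owning-sig: sig-scheduling
participating-sigs:
  - sig-node
status: provisional        # provisional -> implementable -> implemented
creation-date: 2018-05-31
reviewers:
  - "@another-contributor"
approvers:
  - "@sig-chair"
```

The `owning-sig` field is where the sponsorship step described above becomes explicit and durable.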
G: And so, what do we want to do with KEPs in the future? One is obviously an actual FAQ; I've just started putting together some of that now, and I would really love to get more feedback, either now or offline, that we can compile and put into the first KEP describing the process, or into another KEP. Jorge from Heptio, who's also running this meeting, has graciously volunteered to help get a contributor site up so that these KEPs are discoverable.
G: It would help to know when a KEP is entering its final comment period, and so you could figure out KEPs to work on that don't have someone who can implement them just yet. We also want to get some tooling to help you work on KEPs more incrementally, which really reinforces that you don't have to have all of the answers up front. And with that, I can take questions; you can email me at calebmiles plus keps at google.com.
G: For now, send me an email, or file an issue against the community repo and assign it to me. If we become a more formalized subproject under SIG PM or SIG Contributor Experience, there might be a repository outside of the community repository for filing issues, but in the meantime I will do my best to be quick with a response there, or try to forward you on to someone who would have a better idea, since it's not just me who has to make this process a success.
A: [inaudible question]

G: Yeah, I mean, this is also one of the things we'd like to address with tooling. The number was just a way to generate indexes in the future. Like RFCs (the internet kind, and Rust's similar design proposal documents), people do often refer to them by their number, and if you are Brian, or know Brian, he can recite dozens, hundreds of GitHub issues by number. So it gives us a way of indexing them.
G: Overall, given the problems with merge conflicts, we may revisit that and provide a more human-readable shorthand for the numbering system. But all this stuff is in flux, and again, we're not trying to build process for the sake of process, so things that don't work we'll get rid of, and things that are a cause of friction we will try to build tooling to help address.
H: So Caleb, I think two super useful directions would be: first, to be very clear about the granularity expectations of a KEP. It seems that larger, broader issues tend to run into more trouble and back-and-forth. And the second issue, which seems to come up in a couple of KEPs, is confusion around the lifecycle of a KEP: what does approved mean, when does it get merged, what are the remaining questions? Those things seem to always come up, so it would be really useful to give clear guidance on those to keep things moving forward. Thanks.
G: Jake, that's an excellent point. My own personal feeling is that the best time to merge a section of a KEP is when you've got agreement there, to help try and keep these pull requests small. But that's a great thing for us to address in the FAQ, and to keep revisiting as a community.
A: Okay, before we move on: for indexing the KEPs on that site that Caleb was talking about, there is a call for help sitting in my draft email box, which you'll probably see on kubernetes-dev later on today; we are looking for front-end dev help on that site. So if you're good with Hugo, that sort of thing, please ping me or someone in Contributor Experience. With that, we're moving on to SIG updates, starting with SIG OpenStack with Chris.
I: Hi. We've been added to the CNCF CI dashboard, but kind of a bigger leap for us was the external provider test reporting in TestGrid, and the conformance testing. As was noted in the release update, being integrated into that testing means we have conformance testing, which has an impact on the release.
I: This testing was developed by dims, based on some Minikube test reporting, and our hope is that it could be used as an exemplar for other providers. So if you are a cloud provider and you're interested in talking with us about how we built out this testing, how we're reporting back to TestGrid, and how we're working with SIG Testing, please don't hesitate to reach out to us; we'd be happy to talk about that.
I: We also have a bunch of drivers in our external repository, and we've expanded testing across those drivers quite a bit: both our Cinder CSI and Cinder Flex drivers are being gated against, as well as testing for the Octavia load balancer and for the webhook-based Keystone authentication. This work really wouldn't have been possible without the OpenLab members, who were led by Melvin Hillsman.
I: It's a pretty big community effort, and thanks go to his whole team for their support and resources. If you want to see more about what's happening over there, there's a link to OpenLab in the slides. Other completed work that we've done: we have started an initial release tag, and our plan is to tag new releases coincident with the Kubernetes releases. One of the reasons why it's important for us to have a release tag is so that we can actually start distributing it.
I: That gives an avenue for people to download the provider into their running systems, and as we shift from the in-tree OpenStack provider into the Kubernetes-hosted provider, this is going to be more and more important. Along those lines, we've also deprecated the in-tree driver for the 1.11 release.
I: This was introduced with a generic set of code that will allow all in-tree providers to deprecate their cloud provider code when ready. It includes code paths that will point users to external providers where they exist, and also indicate when some in-tree providers that are no longer supported will be removed; there's no great path for that yet. So again, if you're curious how that works, we'd be happy to show you that code; it's pretty easy.
I: As I mentioned, we had a pretty large community SIG OpenStack meetup at the OpenStack Summit in Vancouver last week. I would say we had somewhere between 40 and 50 people from across the OpenStack and Kubernetes communities show up, with a tremendous amount of interest across where many of the projects intersect. We have an etherpad link in there, for both of the sessions actually; we had a deeper-dive session and a more generic planning session.
I: Feel free to take a look at that to see some of the things we talked about and get a deeper view of the work that we've been doing. And finally, at the summit we released an OpenStack-and-containers white paper, written by the SIG OpenStack community, titled "Leveraging Containers and OpenStack: A Comprehensive Review"; that's on the OpenStack website if you want to take a look. So, we have a bunch of future work that's been lined up.
I: The first is, we're developing some new drivers and also consolidating some of the existing drivers. There's been a tremendous amount of interest in an OpenStack Manila driver; if you're not familiar with the Manila project, it's a shared file system, similar to NFS, and we're looking at providing a CSI driver for it within the external provider. We are also looking at making a Cinder-light driver for truly standalone volume provisioning, because right now our standalone driver is limited.
I: There are some pretty serious limitations: it's more or less that we support RBD and iSCSI attachments, and if the back end doesn't support those, it's not possible to do standalone volume provisioning. The plan is to build a small service that would handle the volume attachments in a saner way and bring support for the standalone driver to the entire suite of Cinder drivers.
We're
also
looking
at
you
know.
I
Right
now
we
have
essentially,
we
have
a
number
of
different
cinder
drivers
and
we'd
like
to
consolidate
all
those
into
one
codebase
into
a
single
CSI
driver
likely
dropping
support
for
the
Flex
driver,
we're
also
looking
at
adding
auto
scaling
in
the
next
few
months.
We
have
two
avenues
for
that:
one
is
heat-based
and
he
does
the
OpenStack
orchestration
project
and
another
assembling
based.
I: Senlin is part of the OpenStack clustering project. Again, there's a pretty large diversity of OpenStack clouds and how they're deployed, so we're looking at building out these multiple drivers to account for the multiple deployment scenarios that people might have. There's work being actively done on both of these, and we're hoping to release working drivers for both of those.
I: But one thing that's important for our community members to know is that we're not actively adding new features to the in-tree provider. We're doing bug fixes and trying to keep it maintained, but if you're looking for new features and active development, the out-of-tree provider is the place to look. Another major thing that's going to happen in the community is the merging of SIG OpenStack into SIG Cloud Provider, which came out of KubeCon in Copenhagen.
I: As we created repositories to host these kinds of externally developed providers within the Kubernetes organization, it became clear that there was going to be an explosion of SIGs that was going to be unmanageable for the cloud providers. So Andrew Sy Kim has been working to run a proposal to create a SIG Cloud Provider, which would essentially take all of the individual SIG cloud providers and merge them into a single group.
I: The individual providers would then be working groups under that parent SIG. There are a number of forums and documents and pull requests that talk about this, including the pull request to create the SIG, which is the fourth link there in the slides. If you're a cloud provider, it's probably very important that you follow this discussion, because that means major changes are coming ahead.
I: SIG OpenStack is going to be disappearing, as well as kind of all of the other provider SIGs, and I think for the health of the community this is going to be very important, because it's going to make the development of new providers much easier, and it's also going to help us create standards for documentation and testing.
I: That, I think, is going to benefit the user community at large and create a level playing field for all of the cloud providers. The members of SIG OpenStack are not interested in just developing for ourselves; we're very interested in supporting our community, but we also feel that there's a larger community of cloud providers that we want to make sure we support, and we want to create a positive experience for everybody, to those ends.
I: We'll be maintaining that code, and we want to make sure that we're part of the Kubernetes community. The most efficient way to do that is to have the individual SIGs own that code in a way where they have stronger velocity and a better support framework. So yeah, work is moving ahead really fast on the external provider now.
H: This is awesome. I was holding up OpenStack as an example of executing on the vision: the ability to iterate independently of the Kubernetes community, and not wait for Kubernetes to cut a new patch release, is the motivation behind the extraction of the in-tree cloud providers. It also presents the opportunity to remove, I think, about a million lines of source code from the core repo, which I think is the guiding end goal.
J: Sorry; hello everyone, and good morning. Sorry I didn't prepare slides, but I have the SIG Node update to share with you: the SIG Node progress, especially in Q2. As usual, we have made steady progress across five areas, including node management (with Windows support), application workload management, resource management on the node, and, last, monitoring, logging, and debugging.
J: In those five areas we have all made steady progress. So first, on node management: we have a project that has been going for one and a half years called dynamic Kubelet configuration, and finally, in this quarter, we promoted it to beta. There's documentation written, even from the alpha, on how to define a dynamic Kubelet configuration for a given cluster, so you don't need to always hard-code it.
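As a hedged sketch of how the beta feature is consumed (the names below are illustrative; the exact procedure is in the Kubernetes docs for dynamic Kubelet configuration): you store a serialized KubeletConfiguration in a ConfigMap and point the Node's spec.configSource at it:

```yaml
# Node spec pointing the Kubelet at a ConfigMap holding its configuration
# (node and ConfigMap names are illustrative)
apiVersion: v1
kind: Node
metadata:
  name: my-node
spec:
  configSource:
    configMap:
      name: my-kubelet-config   # ConfigMap containing a KubeletConfiguration
      namespace: kube-system
      kubeletConfigKey: kubelet # key within the ConfigMap holding the config
```

The Kubelet watches this reference, so configuration can change without rebuilding the node or waiting for a Kubernetes release.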
J
Those
configurations
and
also
wait
for
kubernetes
release
to
change
you
when
configurations
for
your
given,
faster
or
even
sometimes
in
some
of
the
emerging
site.
Oh
awesome.
It
is
the
particular
mode.
So
we
also
have
nectar,
there's
the
mainly
socotra
place
for
the
different
out
of
checkpoint
I'd
note
and
there,
for
example,
for
the
GPU
support
and
also
at
the
same
I
support
all
those
kind
of
things.
So
in
this
could
how
we
have
the
refectory
work
down.
J
We
provide
the
are
no
the
language
component
manager,
so
it
is
the
kind
is
the
library
so
people
want
to
do
the
checkpoint.
We
are
going
to
really
find
out
what
it
is
relation.
What
it
is
checks
are
all
those
kind
of
things,
so
if
the
people
come
around
and
want
to
use
each
checkpoint
and
you
can
using
a
condom
checkpoint
manager,
so
we
also
there's
the
proposal
work
in
progress.
J
We are trying to solve the problem of the plugin mechanisms. The kubelet already has third-party plugins, for example the device plugin, and there is also talk about the network plugin and a storage plugin. Each plugin has a different way to talk to the kubelet, so we are trying to consolidate all those mechanisms into a more common design. That is still in progress. So, on the Windows support side:
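The consolidation described here is about giving every node-level plugin one registration path instead of a per-plugin-type mechanism. The sketch below is purely illustrative (the types and names are invented, not the real kubelet API); it only shows the shape of a single registration point:

```go
package main

import "fmt"

// PluginInfo is what any node-level plugin (device plugin, storage plugin,
// and so on) would report when registering with the kubelet in this sketch.
type PluginInfo struct {
	Type     string // e.g. "DevicePlugin" or "StoragePlugin"
	Name     string // e.g. "example.com/gpu"
	Endpoint string // unix socket the kubelet calls back on
}

// Registry is a single registration point replacing the several
// per-plugin-type mechanisms described above.
type Registry struct {
	plugins map[string]PluginInfo
}

func NewRegistry() *Registry { return &Registry{plugins: map[string]PluginInfo{}} }

// Register records a plugin, rejecting duplicate names.
func (r *Registry) Register(p PluginInfo) error {
	if _, exists := r.plugins[p.Name]; exists {
		return fmt.Errorf("plugin %q already registered", p.Name)
	}
	r.plugins[p.Name] = p
	return nil
}

func main() {
	r := NewRegistry()
	_ = r.Register(PluginInfo{
		Type:     "DevicePlugin",
		Name:     "example.com/gpu",
		Endpoint: "/var/lib/kubelet/plugins/gpu.sock",
	})
	fmt.Println(len(r.plugins)) // prints 1
}
```

In the real design the kubelet discovers plugin sockets in a well-known directory and speaks gRPC to them; the common piece is that every plugin type registers the same way.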
J
We have also made a lot of progress there. Mostly it is that we have the Kubernetes stats for the Windows nodes and also Windows containers, and there are also changes around the Container Runtime Interface to support Windows containers. And then we have also made progress on the node side for the Windows container images, but that's not done yet.
J
The plan is to have that runtime support enabled by default; hopefully that will be done in the next release. So then, on application and workload management: this quarter the focus was on making all the CRI-compliant container runtimes production-ready. So CRI-O is production-ready, and also containerd.
J
Those are the choices we provide to replace the Docker engine: Docker, containerd, CRI-O. Because we have the CRI as pluggable, we don't want to force the community or Kubernetes providers to use a single container runtime, but at the same time we want to provide a seamless and also consistent user experience. So that's why we have those options. Hopefully next quarter, in the next release, we can promote that to GA.
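As a concrete illustration of the pluggable CRI being described, these are the kubelet flags of that era for pointing at a remote runtime instead of the built-in Docker integration. This is a fragment, not a complete kubelet invocation, and the socket paths are the conventional defaults rather than guarantees for every install:

```shell
# Use a remote CRI runtime (containerd) instead of the built-in dockershim:
kubelet \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock

# For CRI-O the endpoint would conventionally be:
#   --container-runtime-endpoint=unix:///var/run/crio/crio.sock
```

Swapping runtimes is then a matter of changing the endpoint, which is exactly the "consistent user experience over pluggable runtimes" goal in the update.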
J
So please take a look at the KubeCon Copenhagen talk, which actually demonstrated a lot of the use cases I mentioned.
At the same time, we also enhanced the CRI. One major enhancement is adding the Windows container support, and another one is container logging: we made progress on the log format, encapsulation, and rotation. So next quarter we are going to work on finishing that up.
J
We are also going to finalize the streaming connection to the CRI, similar to the notes created here. In 1.11 we made progress toward making sysctls come to beta and be enabled by default. For the other security profiles, seccomp and AppArmor, we enabled a default seccomp profile, and in the next release we want to make that the default for Kubernetes overall and promote it to beta.
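The sysctl support mentioned here surfaces in the pod API via the pod-level security context. A minimal sketch, assuming a pod that only tunes a safe sysctl (the pod name, image, and value are illustrative):

```yaml
# Safe sysctls can be set per pod; unsafe ones must be explicitly
# allowed on the node by the cluster admin.
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  securityContext:
    sysctls:
    - name: net.ipv4.ip_local_port_range   # a namespaced, safe sysctl
      value: "1024 65535"
  containers:
  - name: app
    image: nginx
```

The split between safe and unsafe sysctls is the reason the feature needed the beta graduation work described above.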
At the same time, we worked very closely with
J
the Kata community, and also Google open-sourced gVisor, and we worked with both of those communities and proposed the sandbox API and a design proposal for how to integrate with those two sandbox mechanisms. So we made steady progress, and there were also several face-to-face discussions on how to integrate containerd and CRI-O with those sandboxes at the OCI layer.
J
So those proposals have been discussed heavily in the community, and we made progress; hopefully next quarter we are going to address the technical problems, for example how to integrate with the storage and with the network components. That's the main goal for next quarter. We also proposed node TLS bootstrapping with the TPM to the community, and again they were receptive, and we made progress.
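For context on what node TLS bootstrapping configures, the kubelet side is driven by flags like the following. This is an illustrative fragment, not a complete kubelet invocation, and the file paths are conventional examples:

```shell
# Kubelet TLS bootstrapping: start with a low-privilege bootstrap
# kubeconfig, request a client certificate from the API server, then
# write the resulting kubeconfig and keep rotating the certificate.
kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --rotate-certificates
```

The TPM work discussed in the update is about strengthening the identity the node presents during that initial bootstrap request.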
And there's another effort:
J
we proposed the alpha version of the feature to enable user namespaces support, and hopefully it will get merged into 1.11, so people can play with it, experience it, and give us feedback so we can enhance it.
On the resource management side, we are working on promoting that to beta; there is progress on that one, and hopefully we can get it merged some time.
J
So hopefully in 1.11 we will have that feature defined for the system. There's also the resource management working group, and there has been a lot of discussion under that working group. We made some progress; the status right now is that we have gathered many of the use cases for resource classes, but we haven't reached a conclusion yet, and there are several alternative approaches on the table.
J
So the last area is monitoring and logging. We work with sig-instrumentation and try to make progress on making the monitoring mechanism more extensible. There is high demand, for example new resource types being added to Kubernetes and to the node, and for how we handle different monitoring pipelines. That is an ongoing discussion; if people are interested, please give us feedback. An engineer also made steady progress on the debug container, but unfortunately there was a lot of back and forth on the API review.
J
So we couldn't make it alpha in this release. I feel sorry for the engineer, because we kept changing things: he made many rounds on the API based on feedback from different reviewers, and the back and forth never reached completion. So we suggested bringing the problem to sig-architecture to review, so we can have a decision, stick with that decision, and then make progress. Well, that's all the updates from SIG Node.
J
On the logistics side, as Eric and I mentioned, we have also proposed the SIG Node charter, and it is still being discussed and reviewed by the community. We have the SIG Node meeting; we hold it every week for one hour, and if you are interested in any of the topics I talked about, please join the SIG Node Google Group, so you can access all the documents, all the planning, and the meeting notes. So yeah, I think that's all. Any questions?
A
C
J
We do; node management is one core area. We also have the Windows area, and the container runtime area, which we connect with application and workload management. We also have resource management: we sponsored and initiated the resource management working group, working with engineers from many companies. And we also have security; on the security part we partner mainly with sig-auth and share our joint roadmap with them. And also, for monitoring,
J
we actually partner with sig-instrumentation. There are also many projects across areas, say the scheduler and many other things. So yes, we have many sub-projects; if you look at our proposal, the charter, which is not finalized yet, we do have the sub-projects listed, and we are in the process of trying to nominate sub-project owners for each area. Okay.
A
Sorry, Dawn, I'm gonna have to cut you off there, because we are literally almost out of time; sorry, we've got one minute left. Real quick, some announcements: Dims posted a new deprecation policy update, so make sure you check that out and click through the link. Sig leads: we've made the schedule for the sig updates that you have during this meeting; please check the link at the top of the document.
A
Quick shout-outs, real quick: a shout-out to Dims and the OpenStack team for quickly getting their 1.11 performance results onto the dashboard, and also a shout-out to Benjamin for adding conformance test results to all sig-release dashboards. Josh Berkus and Stephen Augustus would like to thank Misty Stanley-Jones for aggressively and doggedly pursuing 1.11 documentation deadlines.
A few help-wanted issues: we're looking for Mandarin speakers to help with the new contributor workshop at KubeCon Shanghai; please see Josh for that. We're also
A
looking for help for KEP-005; please see the link there. Meet Our Contributors, the first Wednesday of every month, is going to be June 6, that's next week; please ping Paris if you want to help contribute to that.
Our top 5 Stack Overflow users for the month are Constable, Examiner, Louie, James Strachan, and Jordan Liggitt, and the thread of the week is Justin Garrison's "How has Kubernetes failed for you?", which I thought was interesting; everyone check it out. Any last-minute announcements?