From YouTube: Kubernetes SIG Cloud Provider 2018-06-20
A: We have a lot of work ahead of us. I want to say thanks to Andrew for putting in a lot of the work to get this thing up and running, and thanks to everybody for coming. I think we're going to be doing some important work here, and Andrew just dropped the agenda document into the chat.
B: He helped us get set up with end-to-end tests: how to run them, and how to upload the results to a central place where we can start reporting them, mostly for SIG Release to use as a signal for whether the release is going well and whether there are any critical bugs. So the KEP outlines, at the most basic level, what we're trying to do with end-to-end tests, and it also has concrete instructions for how to actually do it.
B: Some of the steps are a bit manual, and we also assume that you have your own cluster running. It doesn't really care how you provision your cluster; it's purely steps on how to test a working cluster and then how to upload those results to Testgrid. Testgrid is a centralized piece of testing infrastructure, a test result collector.
B: It's managed by Google right now, but it will be storing test results from every cloud provider, so that we have a central place to keep results. So the next step for that KEP: if you are a current cloud provider, or you're going to be a new cloud provider coming into the community ecosystem, please review that PR. Outline any issues you see with it, or just give it a thumbs up.
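The flow B describes, running the end-to-end suite against an already-provisioned cluster and then uploading the results, might look roughly like the following sketch. The flags and the results bucket here are illustrative, not the KEP's authoritative instructions:

```
# Point the test runner at an existing cluster.
export KUBECONFIG=$HOME/.kube/config

# Run the conformance subset of the e2e suite without provisioning anything
# ("skeleton" means: don't manage cluster lifecycle, just test what's there).
kubetest --provider=skeleton --test \
  --test_args="--ginkgo.focus=\[Conformance\]" \
  --check-version-skew=false

# Upload the resulting artifacts to a bucket that Testgrid is configured to
# scrape (bucket name and path layout are hypothetical).
gsutil cp -r _artifacts gs://example-provider-e2e-results/logs/conformance/1/
```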
A: Since we're at the end of the release cycle for 1.11, we should really be looking at what we can do to build these standards and publish the docs for 1.12. I think we should be targeting, as the final deadline for all of the cloud provider documentation to be completed, the regular docs deadline, which will be established once the 1.12 timeline is set up.
A: In the meantime, we have a KEP in place that talks a little bit about the documentation requirements. From some of the things we publish there, I think we can actually build a more devoted documentation KEP. I will put a link to that in the chat here, and also in the meeting notes.
A: And that's exactly the idea. The way I envisioned this was that there would be a minimum of two documents. The first would be a getting-started document, essentially an introduction to using the cloud provider code for your particular cloud. The second would be a settings and configuration document that would be kept up to date with all the current settings that are available.
A: That way you have a complete reference, and users of the provider code don't have to go digging around in code to figure out exactly how they can configure it. I think with those two minimum things, in a consistent format, we'd cover probably eighty percent of the interest out there. The other thing that we might want to require is documentation on how people can become involved with the development efforts.
K: So, Andrew and Chris, this is Nishi from AWS. I've actually been working on this, at least from the AWS side, and, to your initial point, we found two specific areas of improvement. One is what needs to happen if kubeadm, for example, is used to set up the cloud provider, and what the files are that need to be automatically changed for the API server, the controller manager, and the kubelet.
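For context on the point above: each of the three components takes a `--cloud-provider` flag, and a kubeadm-style setup typically wires it in roughly like this. A minimal sketch for AWS; the file paths reflect common kubeadm conventions and are illustrative, not authoritative:

```
# Static pod manifests written by kubeadm on the control plane:
#   /etc/kubernetes/manifests/kube-apiserver.yaml
#       --cloud-provider=aws
#   /etc/kubernetes/manifests/kube-controller-manager.yaml
#       --cloud-provider=aws
#
# Kubelet on every node, via a systemd drop-in (path is a common convention):
#   /etc/systemd/system/kubelet.service.d/20-cloud-provider.conf
#       Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws"
```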
K: That's the cloud provider setup, Kubernetes-specific to AWS. I was going to raise an issue today. Andrew had a PR, I guess, which talks about updating documentation, but I want to raise an issue specific to AWS and start working on that. I think you made a comment about how people can contribute to a cloud provider; that's a category that's not covered in that issue, but I'm happy to work on it based on the KEP proposal.
K: I'm happy to share it. As soon as I raise the issue, I'll link up all the Google Docs and share them with you all.
E: I think this is valuable, because I've found, even though I've been working on the Kubernetes project for years now, that this documentation has sort of been sprayed all over the place when it comes to cloud providers, and it isn't very consistent. We're probably not going to miraculously make it consistent in the next three months, but just inventorying where all of these things potentially live would be valuable.
H: What's the scope of what we're documenting? Are we documenting just the cloud provider configuration and options? Because, in my experience, that's been fairly bring-up agnostic, and I think we're going to get into a real rat's nest if we try to document every single deployment tool plus every cloud provider configuration option.
E: I think some of these deployment tools are perhaps not super appropriate for some of the providers, so it's sort of a grid. I have a feeling that some providers might make a declaration that, look, we don't really recommend this tool, it doesn't make sense for our cloud, but these other ones do. Still, a place where you at least know what all the options are would be good. Okay.
A: For the people who are working on the cloud provider code, the expectation would be that, as a member of, say, the AWS authors or the Azure authors or the OpenStack authors, those teams would be responsible for producing and maintaining the documentation. We wouldn't want to make...
B: Adam has raised the issue a couple of times that they want to link out to cloud provider docs from the kubeadm docs, but because those docs are lacking, they wanted to raise some red flags to us so that we work on it. So I think those are two separate issues, and when we develop the KEP we'll have an accompanying checklist to go through.
A: There should also be some documentation on the larger cloud provider code in general: building out the cloud-controller-manager and then being able to plug things into it. I think it might make sense, as we work on these individual issues going forward, to devote a single KEP to each of them, so that we can keep each KEP focused, so that we have reachable, deliverable goals, and also to simplify.
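For readers following the cloud-controller-manager discussion: when the provider logic is split out as described, the usual wiring is to mark the kubelet as externally managed and run the controller as its own binary. A minimal sketch, since exact flags and paths vary by provider and deployment:

```
# Kubelet on every node: defer cloud-specific work to an external controller.
#   --cloud-provider=external
#
# cloud-controller-manager, run as a deployment or static pod (OpenStack as
# an example provider; paths are illustrative):
#   cloud-controller-manager \
#     --cloud-provider=openstack \
#     --cloud-config=/etc/kubernetes/cloud-config \
#     --kubeconfig=/etc/kubernetes/controller-manager.conf
```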
L: Yes. I already have some specific ideas for what good standards could look like for documentation, specifically about adding reviewers in the title block for a file, but we can get into specifics when it's time for that. I will say that I'm really happy to be having this discussion, because I think the getting-started documentation is one of the weakest spots in the Kubernetes documentation.
L: Right now it's not even called getting started, it's called setup, and what we're doing right now with the documentation we have is sourcing it poorly: we're hosting basically outdated, secondary versions of cloud provider documentation. I'm really looking forward to having cloud providers take more direct ownership over the content of their files. My preference for documentation is to link out as much as possible to providers' own documentation, because I just don't think that it makes sense for us to...
A: I think there might be space for that. There are going to be shared steps for setting these things up, and shared interfaces, so perhaps part of this process should also be identifying what the shared pieces are. Another concern I would have is that every cloud provider would write their own documentation and they would all be describing the same thing in different ways, and to me that feels inefficient.
F: There's also the problem of providing a consistent experience to a user trying to stand up one of those cloud providers, because right now people go to the website, see a nice webpage, and follow the tutorial or the how-to or whatever it is, but then they have to dig into, like, a markdown document stored somewhere to finish up.
L: Yes. Having a consistent, high standard for shared tasks among cloud providers also reduces the burden on localization teams. Right now we have three localization projects in flight, Chinese, Korean, and Japanese, and asking localization teams to translate the same set of instructions a number of times seems like an unreasonable burden. That kind of streamlining, I think, represents an improvement for everyone in the contribution pipeline, from authors to maintainers to users.
E: Just brainstorming here, but I'm wondering if we shouldn't come up with a recommendation, or a demand, that these different versions of the documentation get tagged with the cloud provider's name appearing frequently in them, just to allow filtering of search results. Putting myself in the shoes of an end user: I don't want to search for the docs on a volume mount and get a hundred different versions for different cloud providers that I have to sort through when I'm really only interested in one. Right, right.
C: I just wanted to go back to Zach's question about where to give feedback on the guidelines. I agree it should be a separate KEP, but there are likely high-level ideas around naming conventions and structure that should go into this KEP, and making those changes and suggestions now will be super helpful in guiding this in the direction that is most understandable and that fits best into the context of the larger documentation efforts in Kubernetes as a whole.
M: So this is my attempt to detail all of the problems that need to be solved and to suggest a way we can do that. It starts out being kind of in-tree cloud provider specific, but the end result, I think, needs to be something that even the out-of-tree providers are going to want to deal with. So it deals with things like having a kubernetes/kubernetes build target that just builds you a core that you can then package for your cloud provider.
M: It covers how you'd go about those builds, and where certain things should live. For example, maybe we would like to keep the cloud provider implementations as SIG Cloud Provider repos and then have a separate thing, a cloud provider implementer repo, for a lot of the building, packaging, and deployment. So I really encourage everyone to go through this if we're going to make progress. The repo is now open, and I would really like to see us start making some of these changes very soon.
M: So you can expect me to start sending some PRs out for that. Please take a look at this KEP; every cloud provider should probably make sure that I haven't done something that's going to impede you from succeeding. So please make sure you take a look at it, and, yeah, if anyone has any questions... And if Andrew is about to tell me, "Hey, I suggested you break it out into a separate KEP": yes, yes, you did.
B: So the bar right now is that they have a repository with a working implementation of the cloud-controller-manager and then, somewhat arbitrarily, that they have a reasonable amount of user experience reports. What that means we kept vague, just because we actually don't know exactly how we want to determine it going forward. But as we get more experience with this, we can set more concrete requirements there.
A: That's pretty important, and it's going to take a while, so one of the things that should be on our agenda, and we don't have to solve it today, is coming up with a timeline: what's the target for when we want code to start being removed, and what policies do we want to have in place to encourage development to stop on the in-tree cloud providers and to focus primarily on the out-of-tree providers? And what can we do to support that transition?
M: We can't move too fast, so I have two comments on that. I just recently talked with the storage team, and my general feeling is that, again, I would love to work out how to do it faster, but we have dependencies on other teams, and the storage team seemed to think we're probably looking at about a year before they felt they would be in a good place for this.
M: Getting the CSI sidecar working, getting all of the wrappers around the CSI sidecar for all the old PDs, putting the switches in, migrating all the customers, then forcing everyone to default to the CSI sidecar, and then removing the old code: they seemed to think that was probably about a year. We may want to sit down and talk with them about whether that's really how quickly they can get it done, or whether there are things we can do to make it work a little better. Related to this...
M: I don't want to be a buzzkill, but one of the things I noticed in the last week is that there are things working against us that we need to be aware of and maybe work out. As an example, in the last month someone added a new cloud provider method, "node has been deleted", to the cloud provider interface, which by itself I don't necessarily have a problem with. But that has since become a call that is made from code that we are not planning on moving out of kubernetes/kubernetes.
M: So the total workload for migration has gone up for something that hasn't even been implemented yet. I think there's a certain amount of diligence that we need to start exercising: keeping our eyes open and trying to head off things like new cloud provider calls being added into modules that we did not intend to move.
E: Let me make a comment. I think it may be true that we're going to have to envision a transition period where the in-tree providers are still in-tree but are subject to being disabled, so that you could switch. You would have this period of six months to a year where you could opt into the new out-of-tree version, or not, as you choose. Maybe we start the transition period with the in-tree version as the default, and then towards the end...
A: One of the things that we're trying to do in OpenStack, as the carrot to get people to move away, is that we're not really accepting new features upstream. We'll try to do bug fixes, we'll try to make sure that the code continues to work, and we keep the two forks synced up, but new features are not going upstream. For example, we have a feature that will synchronize Keystone authentication with RBAC authentication, and that's not going upstream.
M: And the reason I bring mine up is that, like, I agree; I refer to it as the phased approach. But to get into that first phase, where it's really working, we've got to be able to get all of the cloud provider calls out of code that we're not planning on moving, and it's much easier to prevent a call from going in than it is to rewrite the code once someone's already implemented it somewhere.
C: Maybe there's a way we can automate the inspection for the cloud provider references, so we can compare from one release to another and at least be aware of them, because I expect communicating broadly has the challenge that people ignore broad communications at this point in the project. That would be a good starting point. On the incentive aspect, I expect that the incentive of being able to iterate externally from the kubernetes repository as the carrot, and having to maintain dual implementations as the stick, are going to be sufficient, and we won't need to come up with any formal end time at which code will be deleted. But if that turns out not to be true, we can revisit it. I think between the pain and the velocity improvements it will be enough. Chris, maybe you can say whether my hunch is right on that?
A: Yeah, I mean, we have significantly increased our participation within the OpenStack code over the last three months because of working on the external provider. People have a much easier time just showing up and getting their pull requests in; they're not rebasing all the time. There are so many things that work for us that I think, in the end, it's been working. I think that timelines are good too; having a goal is always nice and motivates people.
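The automated inspection C suggests could be sketched with a small script. This is a hypothetical illustration, not an existing SIG tool: it lists the Go files under a source tree that import the in-tree cloud provider package (the `k8s.io/kubernetes/pkg/cloudprovider` path as it stood at the time), so the lists from two release branches can be diffed to catch new call sites like the one M described.

```python
import os
import re

# Matches an import of the in-tree cloud provider package.
CLOUD_PROVIDER_IMPORT = re.compile(r'"k8s\.io/kubernetes/pkg/cloudprovider')

def find_cloud_provider_imports(root):
    """Return sorted paths of .go files under root that import the
    in-tree cloud provider package."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".go"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as f:
                if CLOUD_PROVIDER_IMPORT.search(f.read()):
                    hits.append(path)
    return sorted(hits)
```

Running this against two checkouts and diffing the output would surface exactly the kind of newly-added cloud provider call the group wants to head off.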
A: So, these meetings, if I'm correct, are going to be bi-weekly; we'll meet at this time every other week, and we are going to have a quarterly review of the meeting time. If we start adding more participants from the Asia-Pacific region or from Europe, rebalancing the meetings to make sure that we accommodate our worldwide community is something we'll be doing on a quarterly basis. And I want to thank...