From YouTube: Kubernetes SIG Cloud Provider 2019-04-03
A
All right, so today is April 3rd, 2019. This is the Kubernetes SIG Cloud Provider bi-weekly meeting. I'm going to put a link to the agenda in the chat in case anyone needs it. I did change the permissions for the agenda, so you have to be on the kubernetes-dev mailing list to actually edit it going forward, just an FYI. And actually I'm going to start by sharing my screen, because I want to do a run-through of our backlog in a bit.
B
A
Great, thank you. All right, so before going through our agenda, I wanted to run through two announcements. The first one is that we are escalating our SIG folding proposal to the steering committee on April 10th. So at the April 10th steering committee meeting, Tim St. Clair will kind of be our point of contact to walk through the proposal. So Nisha and I will be there for sure.
A
We are removing some of the cloud-specific bootstrap RBAC roles, so if you are running on a cloud provider that depends on these RBAC roles, just be warned that they will be removed in 1.15; they have been deprecated since 1.13. I think it's just the AWS provider as far as I can tell, but this is just a heads up.
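As a concrete illustration of the kind of check an affected operator might run, here is a minimal Go sketch using client-go with the two-argument Get signature of that era. The role name system:aws-cloud-provider is an assumption for illustration, not something stated in the meeting; check the 1.13 deprecation notes for the authoritative list.

package rbaccheck

import (
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasDeprecatedAWSRole reports whether the deprecated cloud-specific bootstrap
// ClusterRole (assumed name) is still present in the cluster, which would mean
// the cluster needs an alternative before upgrading to 1.15.
func hasDeprecatedAWSRole(cs kubernetes.Interface) (bool, error) {
	_, err := cs.RbacV1().ClusterRoles().Get("system:aws-cloud-provider", metav1.GetOptions{})
	if errors.IsNotFound(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}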
C
A
A
So what I wanted to do was run through our P0s and P1s, and then see if we want any of the P2s and P3s added or bumped in priority into P0 and P1 for 1.15. I'm hoping this takes about 10 to 15 minutes, but I wanted to walk through each issue and make sure that everyone has a say in what we're going to be working on for the upcoming release. Sound good? All right, cool.
A
D
A
D
Yeah, I mean, in fact take a look: in the KEP there's a related CVE. It discusses how you can use this new feature to solve that particular CVE, and there are others. And it makes a point of extensibility in API machinery, which can be useful for tunneling traffic between a master that's remote to the cluster. So there are some interesting things you can do with it there.
A
A
All right, kube-controller-manager to cloud-controller-manager migration. This is defining in a KEP what the migration mechanism is going to look like, and providing an alpha implementation for early testers to use. So Mike and I are going to take this one. I think it belongs in P0 for 1.15. Anyone disagree?
A
D
A
D
A
B
Hopefully I will get a bit more bandwidth after this week to work on what I was working on, which was the reorganization of the landing page of the docs, and then the half-done attempt to reorganize the sections under the cloud provider section. So I will hopefully be helping to ping people from different providers, also to classify their own authors, and all of that, what we were doing like one month ago.
A
A
All right, next issue is staging all the in-tree cloud providers. I'm realizing this is a bit more work with the Go modules change that is happening and all that, but I still think this is kind of P0-worthy for 1.15. Sound good? And yeah, this is almost done: this is removing all the internal dependencies that is going to be needed for the staging, so that also must be P0 for 1.15.
A
A
E
A
Travis, is Travis from VMware on the call? I don't know if he's on the call, but he has been at it. Go for it, Travis.

Yeah, it hasn't happened on that particular PR. So the approach that's currently in place: I've gotten four smaller PRs merged, so I'm kind of chunking it off into smaller pieces. So it has my full attention, and it is going to happen, truly, in 1.15.
A
This morning I've got two others, I keep linking them in, I've got two others that are open and passing tests right now. But it is a big task, and they keep changing the way they want it to look, which is totally fine, it always makes sense, but I apologize it has taken so long. Yeah, it's one of the hard dependencies to remove, but yeah, if you go to the PR there has been a bunch of activity in the past few months; I'm breaking it out into smaller PRs, as Travis mentioned.
E
A
All right, okay, P0s, P1s, okay, I think. Let me put the milestone on here. Okay, so we have three P1s for removing the Photon, CloudStack, and oVirt cloud providers. I know Tim has a PR open for this that should merge soon, so just throwing this in as P1 for 1.15. Finalizer protection for service load balancers: this is sort of like a cross SIG Network and SIG Cloud Provider issue.
A
I get quite a few people pinging me on Slack about the fact that they have cloud load balancers kind of left dangling, because the kube-apiserver, or not the API server but the controller manager, exited or errored out in the middle of a delete operation against the cloud API. So this seems pretty high priority. We have two PRs open already, but we need to kind of finalize this a bit more, maybe write a KEP for this. Someone from my team at VMware is already assigned to this issue.
A
Yes, okay, yeah, so this is kind of tricky because we have to do this over multiple releases. Like in one release we have to support handling the finalizer but not actually adding the finalizer, so that in a future release, when a user rolls back, we don't have services stuck with finalizers without the controllers handling them. Yeah, it's a hard problem, but I think we can at least get that first part done in 1.15.
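To make the finalizer protection pattern being described concrete, here is a minimal Go sketch. The finalizer key and the pre-1.18 client-go Update signature are assumptions for illustration, not the exact choices made in the KEP.

package lbfinalizer

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// Illustrative finalizer key; the real key chosen by the KEP may differ.
const lbFinalizer = "service.kubernetes.io/load-balancer-cleanup"

// ensureFinalizer is called before the cloud load balancer is created, so the
// Service object cannot be fully deleted until cleanup has run.
func ensureFinalizer(cs kubernetes.Interface, svc *corev1.Service) error {
	for _, f := range svc.Finalizers {
		if f == lbFinalizer {
			return nil // already protected
		}
	}
	svc.Finalizers = append(svc.Finalizers, lbFinalizer)
	_, err := cs.CoreV1().Services(svc.Namespace).Update(svc)
	return err
}

// removeFinalizer is called only after the cloud load balancer is confirmed
// deleted; removing it is what finally lets the API server delete the Service.
func removeFinalizer(cs kubernetes.Interface, svc *corev1.Service) error {
	kept := svc.Finalizers[:0]
	for _, f := range svc.Finalizers {
		if f != lbFinalizer {
			kept = append(kept, f)
		}
	}
	svc.Finalizers = kept
	_, err := cs.CoreV1().Services(svc.Namespace).Update(svc)
	return err
}

The rollback concern in the discussion maps to the first half of this sketch: a release that only handles the finalizer (removeFinalizer) without adding it (ensureFinalizer) can be rolled back safely.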
A
A
So we deleted it, backported Cinder and vSphere support in the in-tree admission controller, and now we need to figure out whether we want a mutating webhook for this or some sort of out-of-tree solution. I'm throwing this in P1. Even if it means that we're just saying we don't want to support this functionality at all, I think we should at least come to a consensus on that by the end of the release.
A
A
All right, next one is the GA cloud provider node labels KEP. So this is the KEP just saying to promote all the beta.kubernetes.io labels for the node instance type and the node zone and region. This is mostly just cleanup work. I don't think it's that high priority in terms of users demanding that these labels be updated; this is more really outdated cleanup work that we should be doing, because those labels have been marked beta since like 1.2, or a long time ago. So I personally would like to see this get done. Because we're going to have to rename the labels, this is going to take multiple releases, since we can't break compatibility, so I want to get this started sooner rather than later.
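For reference, these are the label keys in question; during the multi-release transition a cloud provider would be expected to set both the deprecated beta key and its replacement on every Node. The GA key names below are assumptions based on where the labels eventually landed, not names stated in the meeting.

package nodelabels

// Deprecated beta label keys that in-tree cloud providers have set since ~1.2.
const (
	betaInstanceType = "beta.kubernetes.io/instance-type"
	betaZone         = "failure-domain.beta.kubernetes.io/zone"
	betaRegion       = "failure-domain.beta.kubernetes.io/region"
)

// Assumed GA replacements; the exact names were still being settled at the time.
const (
	gaInstanceType = "node.kubernetes.io/instance-type"
	gaZone         = "topology.kubernetes.io/zone"
	gaRegion       = "topology.kubernetes.io/region"
)

// dualLabels returns the labels a provider would apply during the transition,
// writing the same value under both the old and new keys so existing selectors
// keep working while users migrate.
func dualLabels(instanceType, zone, region string) map[string]string {
	return map[string]string{
		betaInstanceType: instanceType,
		gaInstanceType:   instanceType,
		betaZone:         zone,
		gaZone:           zone,
		betaRegion:       region,
		gaRegion:         region,
	}
}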
A
D
A
Yeah, so essentially what's happening here is, if you get a second update event on a node that is currently being registered, then essentially you add it into the initialize process again. This is typically okay, because in the end everything reconciles to the same state, but then you end up doing multiple calls to the cloud provider, which can be undesirable.
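A rough sketch of the kind of guard being discussed: only run the cloud initialization path if the node still carries the cloud provider's uninitialized taint, so a second update event for an already-initialized node does not trigger extra cloud API calls. The helper name is hypothetical; the taint key is the one used when the kubelet runs with an external cloud provider.

package cloudnode

import corev1 "k8s.io/api/core/v1"

// Taint placed on new nodes when the kubelet runs with --cloud-provider=external.
const uninitializedTaint = "node.cloudprovider.kubernetes.io/uninitialized"

// needsCloudInit reports whether the node is still waiting for cloud
// initialization; if the taint is already gone, a second update event can be
// ignored instead of calling the cloud provider again.
func needsCloudInit(node *corev1.Node) bool {
	for _, t := range node.Spec.Taints {
		if t.Key == uninitializedTaint {
			return true
		}
	}
	return false
}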
A
A
G
A
D
A
A
There should be an enhancement issue for every enhancement that you're adding to Kubernetes anyways, and then we can always update the description of the backlog issue as we progress it from alpha to beta and GA. Okay, so there was one backlog item that I skipped, which is defining the requirements for cloud config. This came up last week in one of the PR reviews, and Clayton Coleman noticed that there are no API guarantees or compatibility guarantees with cloud config, and so...
A
A
I guess what we need to decide for 1.15 is, we need to have a discussion about this: do we want to keep cloud config the way it is? Do we want to document backwards compatibility guarantees, rules around how to update or add new fields to it, and whatnot? Do folks have opinions?
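For readers unfamiliar with the file under discussion: --cloud-config points the controller manager at a provider-specific configuration file, and several in-tree providers parse it as an INI-style file with gcfg. The struct below is a made-up minimal example of that pattern, not any provider's actual schema, which is exactly why compatibility guarantees are hard to state generically.

package cloudconfig

import (
	"gopkg.in/gcfg.v1"
)

// Config is a hypothetical provider config; real providers each define their
// own struct, and unannounced changes to fields like these are what the
// compatibility discussion is about.
type Config struct {
	Global struct {
		Region          string `gcfg:"region"`
		VPC             string `gcfg:"vpc"`
		DisableSecurity bool   `gcfg:"disable-security-group-ingress"`
	}
}

// Load parses the file passed via --cloud-config.
func Load(path string) (*Config, error) {
	var cfg Config
	if err := gcfg.ReadFileInto(&cfg, path); err != nil {
		return nil, err
	}
	return &cfg, nil
}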
H
On this, I would be very much in favor of formalizing the idea that we don't break it just for fun, but I think it's a lot of work. It's sort of the back door where we stuff a lot of functionality, at least on AWS, that is not really stuff that we want to support everywhere, but is important to certain subgroups, or like smaller groups of people. So I would be wary of trying to make that process very heavy.
K
To me it's not too much, but more like not on the heavy side of what I was proposing. You know, it's basically just a guarantee that we're not going to break backward compatibility, up to a certain reasonable degree, which I think looks like a year or something. I think that's reasonable.
H
Any SIG Cluster Lifecycle folks on the call? I mean, I think there are certainly groups doing that, and we could look into what they're doing. My interpretation is that they are actively formalizing it as an API. I think there's a clear use case for doing that for other pieces; I don't know that these are widely used options that have the same use case.
C
D
K
D
That's my point. To me, today it's unofficially part of the Kubernetes project. A year from now, if all goes well, the Google cloud config lives in cloud-provider-gcp and is incapable of infecting anyone who is not building the cloud-provider-gcp tree, whereas today, if I change the Google cloud config, you may not be consuming it, but it is still part of your source code if you're Amazon or Azure or VMware, right?
K
I agree with Fabio. It's about changing the behavior of the system for automation that's calling and creating clusters, for example. It's not about us breaking each other, it's about us breaking our customers, and there's value in being consistent across the way we do this, so that if, I don't know, someone working on the Amazon side finds a better pattern for interacting with those resources, we all kind of have the same consistent behavior, especially for customers who are moving.
K
There's another angle to it, which is that there's a lot of arbitrary definition of beta and alpha and experimental features in cloud providers. So you know, you might have a service that comes out today and it's marked as beta, and you don't want to make the implicit assumption that that API is going to be around for a year.
K
So you want to mark that as beta, so you don't provide support for it in the same way that Kubernetes does when it defines support for an API. So it should be left to the cloud provider to define what is considered beta, what is considered alpha, what is considered GA or stable, and then apply, you know, the same rules for supportability to those APIs, sort of, for that cloud config, yeah.
H
There is the component config working group that is going to define how components should move their configuration to a component config approach. As well, to your point, all our cloud providers will be external components, so we can say, when you break them out, start moving or copying the ability to set those options that were in cloud config into component config, and work towards deprecating cloud config, because there's no reason to have a separate file other than backwards compatibility.
D
D
We may actually need to be more involved in the short term, we the SIG, saying look, we need to be stable, we need to be dot dot dot, at some point. You know, I think we should be giving guidance, but once the Google config is in cloud-provider-gcp, I think it becomes less a SIG Cloud Provider issue and more of a, you know, Google problem, I mean a Google and Google customers problem; or if it's cloud-provider-aws, it is an Amazon and Amazon customers problem, yeah.
E
And looking at this, I definitely agree with the idea that we should have some stability guarantees, but it doesn't look like there's a lot of meaningful overlap between the cloud configs. So I'd worry a little bit about SIG Cloud Provider trying to standardize that, but I definitely agree on the stability guarantees. I don't...
K
I'm definitely against standardizing anything at that level. I'm okay with, you know, having a shared supportability statement, which could be, you know, using the same one Kubernetes has, and I'm in favor in general of having the same place to store a configuration, which can be component config, but not standardizing the actual API or the config, you know, parameters or anything like that. That should be just left up to the cloud providers.
A
Well, yeah, I think we're all in agreement there. Anyone want to take the action item on this, to document somewhere that we shouldn't break the config compatibility without significant warning and yada yada yada, same thing with our three Ps, and then maybe also document, longer term, kind of offloading that ownership guarantee, offloading the compatibility guarantees to each provider when we externalize them all?
A
A
L
So right now we are rebasing the base Kubernetes images from Alpine or Debian, or from scratch, to distroless, and this may affect every cloud provider to some degree, depending on which containers you use. So to give a brief bit of background: distroless is much smaller in size and it is more secure, so we want to use distroless, but distroless doesn't have certain packages or dependencies. For example, it doesn't have a shell, and looking at that, there are some containers, like kube-apiserver, which use a shell to start up.
L
That container, I think that's for the log redirection, basically a type of log you can attach to. And this is just one example, but it's an example of that kind of shell use case. We can now remove that, because right now we have already migrated to klog, and we can provide a flag to redirect the log.
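As a rough illustration of the flag being referred to, here is a minimal Go sketch using klog's own flags (logtostderr, log_file, alsologtostderr) as they existed around that time; the file path is just an example replacing an old shell redirection, not a recommended default.

package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	// Register klog's flags on the default flag set, then point logging at a
	// file so the container image no longer needs a shell just to redirect
	// output.
	klog.InitFlags(nil)
	flag.Set("logtostderr", "false")
	flag.Set("log_file", "/var/log/kube-apiserver.log")
	flag.Set("alsologtostderr", "true") // keep a copy on stderr for container logs
	flag.Parse()

	klog.Info("logging without a shell in the container image")
	klog.Flush()
}

The alsologtostderr option is also roughly what the kops double-pipe request later in this discussion is asking for: the same log stream written to both a file and standard error.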
L
L
What's a good approach to contact each cloud provider so that they can be aware of the image change? Because I think each cloud provider, we manage our own way to bring up a container, and some containers are brought up with a shell and some may not be. Yeah, so this is my question. I am raising it because I want to hear some more feedback, and also I want to know what the point of contact from each is, so...
A
L
A
L
For the first question, actually I already provided some examples. I updated the GCP manifests, and that can be used as a good example. I can send out emails to each cloud provider for the contacts, but I don't know who to send the email to, and I'm going to rebase the images at the end of April. So there are all kinds of soft deadlines, so I want to have better coordination with all those people. For the second question...
L
I
I
L
A
H
And I think, I mean, I know kops has a more obnoxious shell usage than others, because we try to pipe to both, for example, our log at /var/log/kube-apiserver and to standard out, and preserve signals. So yeah, if we could just have klog do that, that would be great, but we do have to deal with a double pipe, or like a pipe to those places, send it to both places.
H
L
L
H
L
H
L
These are the kinds of things we have found while trying to update those container images, and there are some exceptions, and we will keep those containers in the exception list: like the iptables case, the images that require iptables in one use case, and images that have a very strong dependency on shell scripts, for example the etcd server. That can be one example for the exception list. So, like, we have a plan. Did that also answer your question or not?
I
Not really, but that answer did give me some sense of how deep the rabbit hole goes and how far down it you seem to be, which is maybe the best way to put it. I expect this will be a long and winding path; as Justin points out, we'll just break some things and see what happens, so I'm all for it, sounds great. What could possibly go wrong? I also like...
I
G
And by the way, it's merged at this point, and sorry, sorry for taking so long to review that. But yeah, in terms of messaging it widely, the places that come to mind are SIG Architecture, kubernetes-dev itself, this meeting, and the cloud provider chairs and so on.
L
I
D
G
L
G
Yeah, so in kubernetes/community there is an OWNERS alias, right, that will list the chairs and technical leads, and then in each of the SIG-specific directories it'll list the chairs versus the technical leads. But honestly, either one that you reach out to should be able to carry the ball forward for you.
L
A
A quick GitHub survey of the community: when I look around, not many people are actually following this. When I look at the cloud provider repos that are out there, the only implementations that have CSI plugins in the same place as the cloud provider that I could find are vSphere and OpenStack. All of the other CSI plugins that are under a Kubernetes org are under kubernetes-sigs. There's a couple out there that are not inside of a Kubernetes org anywhere; the main one I saw for that is Ceph, but...
A
I just kind of want to bring this topic up and see whether or not this KEP should be amended in some way, if that's a thing, or what the general thought around that is. Another good prompting issue for that topic is that AWS announced two new CSI plugins today, and I saw some press around that; they're both in kubernetes-sigs. So I wanted to know if this was really something that was desired by the community, because I definitely see it as a separate plugin, separate from the...
A
And speaking for vSphere, it's actually the same. I mean, if you're going to move to an out-of-tree cloud provider for vSphere, you have to use a CSI plugin; they go hand in hand. But when it comes to release engineering for the CSI plugin, you run into oddities, like if I want to tag a version and do a release.
F
I think it's coming up on a year and a half or two years old now, and I think that the original intention of it was to have a work in progress, and we could adjust it if it wasn't working the way we wanted. Yeah, because I know that, you know, we have an in-tree volume provider and we've been working to make that interface to the external volume provider, but we still have to keep the API around. So I can see that the versioning issues are real.
F
D
D
Please excuse me, but there needs to be a top-level GCP repo that assembles a deployable out of the K8s CCM and whatever else, and where we can separately version each of these binaries and make something deployable from that. And to my mind, I guess I'm just going to strengthen it: it seems perfectly reasonable to me that the CSI driver is its own binary, and that, you know, the owners of the Google CSI driver go in when they're ready and say hey, I've got a new version.
D
G
You're giving me looks; yeah, I was going to say that that sounds like, you know, part of the goal for the hack scripts that exist, right, like the cluster-up scripts and stuff like that, which is to move those out and turn those into conformance profiles, right. So it sounds like this is not quite Kubernetes conformance itself, but it's cloud provider conformance. So maybe we can co-opt some of the ideas they're having over there and develop something similar.
K
K
On the repo situation: so right now, we know SIG Cloud Provider, you know, oversees everything that has to do with cloud providers, but, just trying to understand, SIG Storage does not want to have the same overseeing, you know, responsibility over the CSI plugins. So the CSI plugins don't have a home, so to speak; they don't have a pre-assigned SIG to live under. So it's kind of like a gray area, you know.
I
F
A
What I've seen so far today has been pretty consistent. There are really just, I mean, there are kind of three categories: one being not under any Kubernetes org, and I think that's separate; but otherwise either the CSI plugin lives with the CCM, which there are only two examples of today that I could find, or it is under the kubernetes-sigs organization. And that means that, in order to get that repo, you've got to have your sponsoring SIG, and your CSI plugin is a defined sub-project of that SIG.
A
So when I look at AWS, Azure, Google, all of that's in place for all of them, where, you know, SIG AWS has each one of the CSI plugins as a sub-project and then created that kubernetes-sigs repo. So that seems like the de facto standard right now, but I wasn't sure, which I think is kind of what Fabio's getting at, whether that should be codified somehow, and yeah, I wanted the feedback.
F
But providers in general, because it seems like we've got pretty good machinery in place, and we're going in front of the steering committee to handle a lot of this machinery that we've built. You know, if there needs to be a place where people can do this work, then instead of having to find SIGs, finding working groups underneath, you know, a provider may be a solution that addresses this, where you have lighter-weight organizations that are able to own that code under a parent SIG.
D
Yeah, I mean, the other thing I want to point out is that it's not clear to me that all or even most CSI drivers are cloud provider specific. Unless the persistence team has changed the original proposal, you know, all of the file system drivers were going to become CSI across the board, even if they were not cloud provider specific, and in fact I believe the majority of them are not cloud provider specific. So we have an NFS CSI driver, and it would be interesting to see where we would put that.
A
That already exists, and it's under the kubernetes-csi org. And in discussions with SIG Storage, they were drawing kind of a clear line in the sand about, okay, if it's something that is core, like a core file system thing, and we develop it, it can go in kubernetes-csi, but no vendor ones could; so that's why AWS or VMware or whoever can't get a repo there, I don't...
I
So I think that the document you have open now has two sections: the repository should have the following subdirectories, and the repository may have these additional directories. Answering this specific question may be as easy as moving the CSI directory from the "should" to the "may", but I think the bigger question is about consistency of structure and organization, of how to assemble the bits you want into a workable installation of Kubernetes, and that ends up being a much more challenging question to answer in a consistent way today. Fabio?
K
I think the question we should answer today is: should SIG Cloud Provider be the SIG that is hosting anything that has to do with cloud providers, where "cloud providers" is used loosely, as in, you know, any vendor that has a cloud platform which Kubernetes runs on, which means covering any CSI, CCM, and authentication plugins, whatever else. Sorry.
I
K
The distinction is clear there, because if it's core CSI then it lives under kubernetes-csi and it's owned by SIG Storage. If it's a core CSI component, even like the NFS CSI plugin, that lives under kubernetes-csi and it's owned by SIG Storage. Anything that has to do with a provider-specific implementation of CSI has no owner currently, so there's no SIG owning it there. So I'm wondering if SIG Cloud Provider should own that as well, and should own any other thing that may come up.
F
I was just going to say, I think where it gets tricky is that there are vendors developing CSI drivers that work on some, but not all, cloud providers, and so the user would reasonably expect to find, maybe, one of these that will work for them. I'm just having a hard time, on the spot, understanding how we would categorize those.
A
So, we're running out of time. I think we should move this to the mailing list, or we can just decide that, at least for the first action item, we can move the CSI directory into an optional thing for now, and then we can fold kind of all the CSI drivers under either this SIG or another SIG. But I think SIG Cloud Provider would happily kind of sponsor that repository for the provider-specific CSI drivers, and then we can move to the mailing list for further discussion on this.