From YouTube: Kubernetes Cloud Provider Refactoring WG 20170913
B
So, on the administrivia side, the mailing list and the Slack channel have been set up. I think the next order of business is to set up a recurring, stationary Slack or Zoom account so that we have a consistent location for this meeting each week. We're hijacking the SIG Cluster Lifecycle account this week, but that will change by next week. Once that exists, we just need a pull request to update the list of places so it's discoverable where this is.
A
What has been a bit unclear or frustrating to me so far is that I don't have a clear idea of, if we change the cloud provider interface, who's responsible for, say, vSphere, or who's responsible for this or that cloud provider, you name it. If we had a single GitHub team, I could just ping that one and hope that they all see it.
A
So this means that people not in the Kubernetes organization will be able to receive these notifications anyway, which I think is great, and this is the case for most SIGs. But again, I'm not sure if it's actually working; this is just what I've read should work. But yeah, we can talk to SIG PM.
A
Some of those have been working. I think Robert may know more later. And also, possibly an area/cloud-provider-refactoring label or something could make sense, if we want to distinguish between saying that we own this issue or PR, versus it just being cloud-provider related.
B
It's going to be useful just to step back and discuss what the objectives of this project and working group are, and then talk about status as a recurring agenda item, just so we can do a quick round table and figure out what the big obstacles are that different teams are facing. I expect some of the more detailed work to be done in other avenues, other SIGs, like say Azure or AWS or GCP, or other SIGs related to specific clouds. But for this meeting and working group, I think what's useful is to talk about the project overall and what obstacles we're hitting at a more architectural level. So, Sid, maybe you want to give a quick overview of the proposal, and then we can move on to status from the folks who are here. We obviously don't have representatives from everywhere, but I think if we get in the habit, it will become more useful for others to join.
C
Absolutely, so yeah, I'll start off by talking about the objectives of this whole change and why this working group was formed, and we can talk about the status after that. This whole project started when I tried to add the Rancher cloud provider into Kubernetes, and it was apparent that it wasn't possible to add every single cloud into Kubernetes, so we wanted to come up with a plugin mechanism to add new clouds and manage cloud providers as code external to Kubernetes.
C
So we set this goal to make Kubernetes clouds plugin-based, where you plug in a cloud rather than the cloud code residing in the core repository itself. This project started about a year ago, and now we're at a stage where it's taking shape and has solidified a lot more, and as this feature becomes more serious it's affecting a lot of people. We formed this working group to have a formal channel of communication about what is happening, why we're doing this, what is left, and how users of these features are going to be affected. That's basically how this group and this project started. In terms of status, with the 1.8 release we originally wanted to go to beta, but we don't have end-to-end tests and there are a few bugs remaining.
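To make the plugin model described above a bit more concrete, here is a minimal, self-contained Go sketch of the idea: each cloud registers itself through a factory function, and the controller looks it up by name, instead of the cloud code living in the core repository. The names here (Interface, RegisterCloudProvider, fakeCloud) are simplified illustrations for this sketch, not the exact signatures of the real k8s.io/kubernetes/pkg/cloudprovider package.

```go
// Minimal sketch of the "cloud as a plugin" idea discussed above.
// Names and signatures are simplified illustrations, not the real
// k8s.io/kubernetes/pkg/cloudprovider API.
package main

import (
	"fmt"
	"io"
)

// Interface is the hook a cloud must implement; core code only talks
// to this abstraction, never to a specific cloud.
type Interface interface {
	ProviderName() string
	// The real interface also exposes LoadBalancer(), Instances(),
	// Zones(), Routes(), etc.; elided here.
}

// Factory builds a provider from an optional config stream.
type Factory func(config io.Reader) (Interface, error)

var providers = map[string]Factory{}

// RegisterCloudProvider is what each out-of-tree provider (e.g. from
// its own repository) would call to make itself available by name.
func RegisterCloudProvider(name string, f Factory) { providers[name] = f }

// GetCloudProvider is what a controller manager would call at startup,
// based on something like a --cloud-provider flag.
func GetCloudProvider(name string, config io.Reader) (Interface, error) {
	f, ok := providers[name]
	if !ok {
		return nil, fmt.Errorf("cloud provider %q is not registered", name)
	}
	return f(config)
}

// fakeCloud stands in for a real provider such as Rancher or DigitalOcean.
type fakeCloud struct{}

func (fakeCloud) ProviderName() string { return "fake" }

func main() {
	RegisterCloudProvider("fake", func(io.Reader) (Interface, error) { return fakeCloud{}, nil })
	cloud, err := GetCloudProvider("fake", nil)
	if err != nil {
		panic(err)
	}
	fmt.Println("loaded cloud provider:", cloud.ProviderName())
}
```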
A
I can also say a few words. Right after 1.7 was out, I started collecting beta graduation requirements, and to reiterate what Sid said, we basically started talking about this a year ago with Tim Hockin, when the Rancher provider was rejected, and it has been kind of an occasional, on-and-off thing since. Actually, there were quite a few people who worked on the proposal when Sid brought it up; a lot of other things were looped in, and we made a pretty good plan at the time, but it's clear that it hasn't progressed at all as fast as we laid out there. I think 1.9 was the release when in-tree cloud providers would be removed, or something like that; I think that was the case anyway.
A
In terms of what works today: we got the persistent volume labeler from Rob, which was great, converted to a controller instead of an admission controller, so I think we're far better off now in 1.8 than where we were in 1.7. kubeadm is also relevant to this, as kubeadm sets up a bare, minimum viable cluster; ideally, it should be just a kubectl apply away with your new controller. That's the goal: you do kubeadm init, then you apply this new cloud provider controller, which will start tainting the kubelets in the cluster, et cetera. What's needed: we still have to reverse the data flow, like Jordan Liggitt said last time, which right now doesn't work from this perspective. This will require some cross-cutting efforts with SIG Node as well, but I think e2e tests are the really big issue, I'd say, here.
A
I've looked into this quite a bit, and it's possible that now, in SIG Cluster Lifecycle, we're working with SIG Federation to move the SIG Federation tests to using kubeadm, so this can run anywhere. As part of this effort, since Federation depends on cloud provider things like GCE persistent disks, we're thinking about whether we could use out-of-tree cloud providers automatically, the GCE one, so that it automatically gets coverage from there. More about this later, but then we'd get more things covered in the e2e tests.
C
So I wanted to add a quick thing about the kubelet data flow model. I spoke to him about this when I met him the other day, and with Jago as well, and he said it can be solved using initializers, so we don't need a cross-cutting, complicated effort. Well, it's still going to be difficult, but I think initializers are one of the simpler ways to solve that issue, where an initializer for the IP addresses, or the CCM actually, initializes the right data first, and only after that is done.
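As a rough illustration of the data flow being discussed, here is a small, self-contained Go sketch of the two-phase idea: the kubelet registers a node that is marked uninitialized, and a cloud controller later fills in the cloud-derived fields (addresses, zone) and only then marks the node usable. The types and the cloudLookup function are invented for this sketch; the real mechanism would go through the API server, held back by an initializer or an "uninitialized" taint rather than an in-memory struct.

```go
// Toy sketch of the "CCM initializes the node before it is usable" flow.
// Everything here (Node, cloudLookup, initializeNode) is invented for
// illustration; the real flow goes through the API server.
package main

import "fmt"

// Node is a stand-in for the API object the kubelet registers.
type Node struct {
	Name        string
	Addresses   []string // filled in by the cloud controller
	Zone        string   // filled in by the cloud controller
	Initialized bool     // false until the cloud controller has run
}

// registerNode mimics the kubelet creating its Node object without any
// cloud knowledge: it only knows its own name.
func registerNode(name string) *Node {
	return &Node{Name: name, Initialized: false}
}

// cloudLookup is a stand-in for asking the cloud API about an instance.
func cloudLookup(name string) (addrs []string, zone string) {
	return []string{"10.0.0.12", "203.0.113.7"}, "example-zone-a"
}

// initializeNode is the cloud-controller-manager side: it fills in the
// cloud-derived fields and only then marks the node as ready for use.
func initializeNode(n *Node) {
	n.Addresses, n.Zone = cloudLookup(n.Name)
	n.Initialized = true
}

func main() {
	n := registerNode("worker-1")
	fmt.Printf("before: %+v\n", *n) // scheduler would ignore this node

	initializeNode(n)
	fmt.Printf("after:  %+v\n", *n) // node is now usable
}
```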
D
So I guess I'm Rob; I'm working at Red Hat. I did the persistent volume label controller, and I've been working on a number of different clouds. I did actually have kind of a question. There was a call for a while, which kind of died down, to create a working group for that, you know, cross-cutting across the clouds, to try to provide some consistency.
D
It almost seems like something that we would need to have. I don't know if it's a part of this group, or if it would be something that this other working group would do if that's being created, or whether this group is going to do the work of that group or not. I think there's a lot of overlap there to be addressed.
B
I know, but there are canonical examples that are prickly situations, and my suggestion would be that we identify them, clearly articulate the options, and then escalate to SIG Architecture, because it will eventually end up with those people involved in the conversation anyway. So I think we can short-circuit a lot of ongoing conversations and shortcut to saying: here are the issues, here are the options. That needs to be done by the people who understand the options in the cloud provider solutions, but the direction, and how consistent they need to be, that aspect I think is answered by the folks in SIG Architecture. Does that sound like a good... yeah.
D
That's fine, that's fine with me! As long as there is, you know, some plug in there to say there's some guidance and it's being given somewhere. This would provide some level of guidance, and maybe another working group or SIG or something provides the absolute "this is how everything should work" once they firm things up. I'm just kind of feeling that that guidance needs to live somewhere.
E
Hi, you guys can hear me, right? I'm on my phone, so okay. So yeah, I'm Andrew, I work at DigitalOcean. We ran into a similar problem as Rancher: we do support Kubernetes, but it was hard to get it in. So we ended up just adopting the cloud controller manager early, and I mostly maintain the cloud controller manager for DigitalOcean.
A
Cool, one immediate question that doesn't take too much time: should we deprecate, now that Rob did this work with the persistent volume labeler, should we deprecate the admission controller right now in 1.8 and say, like, remove in 1.10 or something? It needs to be at least six months, but yeah.
E
Yeah, I mean, it seems like in Kubernetes land people really want to give a big heads-up when you completely remove anything, and so to me, if we know it's not going to work, we should just give that notice now. That way, when we go to 1.9 later, we at least have the luxury of knowing that we gave at least three months' notice, I think.
A
Right now we, for example, enable this PersistentVolumeLabel admission controller in all deployments, at least as far as I know of deployment setups. We could, like, start early and say this is deprecated, and for the cases where we know the cloud provider isn't used, we can just keep it out of the admission chain, like in 1.10 or whatever. But just signal early, I don't know, yeah.
D
I mean, deprecated doesn't mean that it doesn't work anymore; it's just that it's going to be going away, right, yeah. So, as you're kind of saying, I kind of think the more leeway you give, the more headroom, is probably a good idea, since it's going to be a breaking change once it goes away. And I guess, as you said, you can always mark it un-deprecated later and it wouldn't change anything for anybody.
E
Writing the docs, I realized that when we advertise this new feature, there are a lot of operational things; I kind of think there's a lot of operational overhead to make this work, and I'm wondering how much detail should go into that, and whether maybe going too much into the limitations or whatever might shy people away from using this feature. So I don't know, do you guys have any thoughts or opinions? So, it seems like, from the PR.
B
Maybe post a link to that PR in the Slack channel, just to give visibility to folks who didn't make it. Also, it is 10:30; I was hoping we could keep this meeting to 30 minutes so that people don't avoid it because it's yet another meeting that takes too long. Do you guys believe that we can pull this off in 30 minutes a week, or do you think we actually need an hour?
A
So yeah, for me at least, we can answer your question and then try to find a time next week. So yeah, I think we should provide some examples, and I think those examples should be organized from a baseline of a kubeadm cluster. I'm indeed biased, but I mean, kubeadm is the only thing that doesn't do infra, unlike most every other installer, and also it's just the bare minimum; I mean, it doesn't install any add-ons.
E
Yeah, yeah, I think the difficult part is that you're kind of assuming there are two types of readers: the person who just wants to run the cloud controller manager on their cluster, and the other person, who is someone who wants to adopt it. And so maybe we should have two separate docs for those two things, I think.
A
Two separate docs makes a lot of sense. That's what we did for kubeadm: we have this "getting started" or "creating a cluster with kubeadm" guide, and that basically just says run kubeadm init and kubeadm join. Then we also have a reference guide with a lot of detail, like how it actually does things. So following that pattern sounds good to me.
A
Yeah, makes sense. And also, I mean, as we know, the in-tree clouds right now are kind of privileged compared to the other ones, and this effort is making that not the case eventually. But until then, as a stopgap, I would be fine with having separate docs for how to run the cloud controller manager on GCE, how to run the cloud controller manager on AWS.
A
So, from the "how to run this thing" doc, you'd just have a list of, like, GCE, AWS, all these, seven I think at the moment, and then, just to be fair, of course you'd list the others, like Oracle or DigitalOcean, as well. But then there would be other repositories, right? Eventually these will move out; like, eventually there will be something like github.com/GoogleCloudPlatform or something like that, slash kubernetes-controller-manager, and, I don't know.