From YouTube: Kubernetes SIG Cloud Provider 2018-08-22
A: All right, welcome everybody to the SIG Cloud Provider meeting. I'm Chris Hoge, one of the co-leads, from the OpenStack Foundation, and it looks like Andrew and Jago are also here. I guess we'll get the meeting started with the usual introductions. Is there anybody attending this meeting for the first time who would like to introduce themselves?
B: Yeah, I guess I should state a cloud or API provider for this — crazy, yeah. I guess I'm here bringing my testing experience to the table as well.

A: Thank you. We love testing.
A: Okay, so I just dropped the agenda into the chat in case people didn't have it, so you can take a look at it. Our first topic is going to be led by Jago, which is about provider binary discovery and validation. We've actually allocated a pretty good chunk of time for this — a good 20 minutes that we can talk about it — because it's a pretty important topic now that we have these external providers.
D: So this is a pretty broad topic, and it's relevant to, but not completely owned by, SIG Cloud Provider. Mostly I just wanted to raise awareness and start a discussion here as to whether we have requirements, and whether there's a proposal or direction that we would like to see this go in. Maybe a good place to start is KubeCon of last year in December, where Tim Hockin and Michael Rubin gave a talk about Kubernetes as a kernel versus as a distribution — the open source project itself.

D: I think the core of that question is really what this conversation is about: what does the Kubernetes build and release process produce? Does it produce an artifact which can be combined with other things, by vendors or independent entities, to create a useful collection of things, or does the Kubernetes build and release process itself produce executable code that's useful on its own? And when I say it's not just about cloud provider, what I mean is this: the Kubernetes project is being broken down into more and smaller components that are often independently versioned and maintained. Examples of that are ingress or CoreDNS, and sometimes these things are required — DNS, you must have DNS, I expect. If we don't already, we will have required but mutually exclusive components that may be part of your Kubernetes installation, and so, while I think kube-dns and CoreDNS could actually coexist, that might not be the case in some future version. So I think this is a good forcing function for the discussion.
D: We currently bundle together the initial seven cloud providers just because they happen to be in tree, but this doesn't provide a consistent experience for users of external cloud providers. We did some work to articulate requirements for having a repository in the Kubernetes organization, so I hope we can reuse that for some of the discoverability of cloud provider code that's necessary. But I am hopeful that there can be a consistent directory structure or discovery mechanism that will hold not just for the cloud provider code but also for DNS and other required extensions.
E: Go ahead — I was just gonna say, in Kubernetes the way we've done discovery has been: we have tools that install Kubernetes — we have kube-up, and we have kops, and we have kubernetes-anywhere, and lots of others I'm sure I'm forgetting — and I've definitely seen this problem pretty heavily from the maintainer-of-kops type of hat. And I think, in addition to discovery, we currently don't even have a way for a project that is not in the main kubernetes/kubernetes repo to be built and distributed in what I would consider a trustworthy, repeatable, reliable way. And Jago, what you mentioned about the process by which we approve repos — I would love for us to build a whole process around that.
E: When you get a repo, you also get a GCS bucket or an S3 bucket, or both; you get a CI build and a signer, and maybe they push to a Docker repo, so they're able to distribute these things. Right now we don't have a great distribution mechanism. I think there is a great discovery mechanism — I think Walter's working on that — but I would like for it not to be part of Helm.
F: Yeah, so I was just gonna mention, on my end, from building DigitalOcean's cloud controller manager: one of the issues that we ran into — well, it's not really an issue, but a future concern of mine — is that we're hosting our CCM Docker image on Docker Hub. That was just kind of the default: it's free and public, it just works. But we have concerns around Docker Hub in the future; it could be deprecated. And, for example — not to say that Google's registry is better or worse than Docker Hub — having it consolidated, knowing that all the cloud provider distributions and everything are hosted on the same registry, that's something where it would be nice to have a standard. Right now everything is just wherever we happened to come from; it's a free-for-all, right?
B: So, speaking from the SIG Testing perspective: holy crap, I would love to live in a world where we have a pre-baked release process that you can just stamp out for every single repo. I'm trying to do this today for having a pre-baked pull request review-and-merge process for every repo. Well, we're not there today; further, I've seen a lot of friction from individual projects who are like, no, we want to make our sausage our own way.
B: The other thing I wanted to add is — I'm probably not super familiar with where Justin's work is today with kops, but it sounds to me like we're talking in the context of add-on managers, and I'm trying to understand if we're talking about things that go into the cluster, like DNS, that are required, or things that run alongside your cluster, like cloud controller manager, that provide additional functionality. And then I would also think that that layer might include things like CNI, CSI, and CRI implementations.
D: All of that — and when I say that cloud provider is part of this whole domain, that's what I mean: it is both the dependencies of and the extensions of Kubernetes that make up what a user needs to install. And maybe it's a spectrum. Maybe at the beginning there's a structure or spec, and an interaction with kubeadm that can inspect and consume that spec, and that is responsible for assembling the bits.
D: So hopefully we don't end up reinventing everything, but really I think the starting point is: is it the Kubernetes community's goal to provide as output a useful artifact on its own, or is it the kernel, which is consumed and extended and made useful by other entities? I don't expect everyone to agree on that, but I think any effort — or some contact with SIG Release that takes this work on — would start from that point.
B: Sorry — from my thinking as a member of SIG Release, I would say that my desire is something even a bit stronger than the position you've laid out, Jago. I think there's a community responsibility to provide — whether it's at the Kubernetes level or at the CNCF level — a vendor-neutral place to expose working combinations of these, like distributions.
B: Now as well — yeah, like you were suggesting, I think we do need a place where you have a bill of materials for what is inside of your distribution. I think the community has a responsibility to provide common tooling, and this is a process thing: like, I was part of onboarding non-Googlers onto the release process. I think, certainly, we need multiple...
B: ...builds from a handful of trusted locations — so not just Google but also Amazon, and I can imagine also the OpenStack Foundation providing some tooling and their shared CI resources — so that you have a reproducible build for the core part, for the kernel, and that also validates the combinations provided by the other distributions, and that's checked in some place, like an actual receipt file for every release.
B: So I think, yes, we have the responsibility of producing a minimally working thing, but also, for the distributions, there are these further conformant ones — and possibly, as part of the conformance effort, specifying everything that has been validated in those results: what CNI plugin did you use, what CRI runtime, yeah.
B: Yeah, things like the cross-cloud CI effort — right, okay — but I don't think it's necessarily the full-on manifests; I think it's more verifying that Kubernetes can stand up on clouds. So, strawman: let's say there's a null cloud provider, and the release team is in charge of herding — you know, shepherding — something out the door, and they do it with this null cloud provider.
B: Sorry, just to follow on there: the process today involves a lot of tests that are substantially more than conformance tests. So how would we get to the future where we depend solely upon conformance tests while not losing fidelity, or confidence that the cloud providers that block the release today would continue to function well in the future?
D: Well, I think there are two answers to that. One is, I don't believe the conformance tests are sufficient today to describe what a user expects from Kubernetes, so there's a gap to close there — that's understood, and it's being worked on. The other part is, I think there is a distinction between the base functionality of the API surface — of the base, that mini distro, the thing that runs on Minikube — and the autoscaling, nodes, and things you would only expect to work in a public cloud provider, for example.
D: Just as a sanity check: we should strive towards some standard structure and format for pulling in the rest of the bits, such that someone who comes along and makes a new bit knows where to put it, how to get started, whether it's required to sign that bit or not, what the license requirements are, and where it will live — to ease the on-ramp...
D: ...for that new bit to be included in multiple distributions of Kubernetes. So that's kind of a starting point, and I think it can be stronger than that, to get to Caleb's point. But I want to make sure we start with these sorts of first principles and move out from there, and figure out what's in scope and who else needs to be involved.
H: Yes — Tim has the floor. I agree very strongly with that, and anybody who saw my talk knows that I called this out as a problem almost a year ago. We haven't really made progress on it; we've sort of taken a middle-ground position on it, and I'm not sure that that is going to be viable in the longer term.
H: So, having the release produce a null-cloud-provider result, I think, is an interesting result. It does bring up the issue of conformance tiers. We have facets of the Kubernetes API that are strictly optional, but, if engaged, they do have some amount of conformance that you would want to apply to them — I'm thinking about things like load balancers and ingresses, and those sorts of things, which would not be satisfied on a local cluster, whether that was DinD or Minikube or something else.
C: Exactly what we expect? Well — I think, more to the point, the working group has had that as a goal: sort of, one of the ways in which we will know we were successful is when we can delete all cloud provider code from what I would call kubernetes/kubernetes. — Yep. — Well, that's one part, that's the source code; the other, you know, is the build and release process, right? — No, no, but if I can delete it from kubernetes/kubernetes and can run all of this, then presumably the kernel is now cloud-provider agnostic. — Yes, but you can...
H: ...imagine a world where we say: look, the Kubernetes release tarball has always had support for these — whatever it is, eight or nine — cloud providers, so we will take those cloud provider modules and bundle them into the upstream release, so that the upstream release continues to be useful on those eight cloud providers. And that doesn't feel right to me as an open source person. As a Google person, of course, I want the upstream thing to work with my cloud provider.
H: Could we extend that to any cloud provider that's willing to provide conformance tests? Maybe — I sort of feel like we want to use that and say: at some point we move to a model where we have no cloud providers in it, and here's the criteria for getting your cloud provider added, and then everybody goes back to equal footing.
A: I mean, I've got a lot of mixed feedback from our community about this. There are leaders in our community who are very upset about me wanting to get OpenStack code out of upstream Kubernetes, because they feel like we diminish our position if we're not bundled in by default, or not part of the default distribution, especially for...
H: ...you know, a three-day warning: they have three days to produce verifiable test results against a particular snapshot or release candidate, and if they can show those things, then they're included in the release-candidate index, and if they're not, then they're not, and they can come along and catch the next one. Something like that, I mean.
H: The distinction I'm making is sort of: if you can do it in the window that the process gives you, then you're in the default bundle, in the sense that you're linked from it. And if you miss the train, then you missed the train. Think of it like — if the Linux kernel model were totally different, and all the drivers were managed by the vendors, and Linus sent out an email to Nvidia and said... that would be a bad example.
E: I wonder whether the index — this notion — has to be in the release in any way, or whether we actually make the index the thing, right? Because I think we have goals that all these separate projects can release more or less on their own schedule; we have goals that we release the core less frequently.
E: We have another goal that we would like there to be a useful artifact, both for sort of community-health reasons but also for laziness reasons: if we don't give a working combination, then people will try to combine things themselves, find it doesn't work, and ask for help, and we spend our lives trying to say "this version works with this one, and this one doesn't work with that one." It's much easier just to publish that working set. But if we publish that index, maybe that index is the release — the actual release — and maybe the core releases a month early and there's plenty of time to stabilize, and...
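(To make the "publish the index" idea concrete: a minimal sketch, in Go, of what one record in such an index of verified working combinations might look like. Every field name here is illustrative — no such format existed or was agreed on in this discussion.)

```go
// Package index sketches a hypothetical record in a published "index of
// working combinations" as discussed above. Illustrative only.
package index

// Entry describes one component build that has been verified against
// specific core Kubernetes versions.
type Entry struct {
	Component  string   `json:"component"`  // e.g. a cloud provider module or DNS add-on
	Version    string   `json:"version"`    // the component's own version
	Kubernetes []string `json:"kubernetes"` // core versions it was verified against
	Artifact   string   `json:"artifact"`   // URL of the built artifact
	SHA256     string   `json:"sha256"`     // digest, so consumers can verify what they fetch
	Results    string   `json:"results"`    // URL of the verifiable test results
}
```

A vendor could then append or update an entry at any point after the core release, which is what would make the index, rather than the bundle, the thing being released.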
H: That goes to what my talk last year was actually about, right? Like, maybe the core and the distribution actually should exist on different cycles. I don't want to get into that here, because there are really pros and cons to both. But actually, Justin, I sort of like what you said — or at least what I took away from what you said — which is: the index isn't built into the bundle, but if the index is, say, online, then anybody can come and update the index with their patch...
H: ...at any point, and, by having the verifiable test results, they can say: I've upgraded my cloud provider module and it still works with 1.16.3 — Kubernetes doesn't have to cut 1.16.4 in order to upgrade the cloud provider. People can come in late and say: I'm now certified against 1.16.3. And we could maybe work out the process where you cut RCs and you give the RC — here's the three-day thing again — the RC has three days of soak. If we cut RC6, and Google and DigitalOcean and whoever come along and say "I verified against RC6," RC6 is approved, and therefore it just gets renamed to the final version. The hash is the same, and my test results carry forward — although we embed version numbers, so that breaks the gold-master...
C: ...naming scheme. I will say, I think that better aligns the incentives with the results that we want, right? So, for instance, it gets us away from "oh, I missed the boat, so I'm not even gonna worry about it" and towards "hey, it's in everyone's interest to get all the tests working for all releases" — there is no boat; I can get on at any point I want, which leads me into position, and I'm better off getting all the tests working on all the relevant releases.
A: You know, start working towards some goals as to how we manage artifacts, produce artifacts, and vend them, for the providers as a whole.
H: If we had a minute, I think the most useful artifact would be for somebody to write down — and maybe think through a little bit more of the corners of — this idea of the null cloud provider as being the only cloud provider in the release, and that being what conformance is tested against. Is anybody here also on SIG Release — is Caleb over here? Oh, there is — cool. So it would be interesting to maybe cross-pollinate that idea.
D: Yes — so I think the first step, maybe, is just a positional statement: that there is something executable that comes out of the other end of the Kubernetes release process; that there is a goal of consistency of structure, such that tools can discover and include artifacts in a consistent way; and some treatment of the coordinated-versioning problem space. And then flip that around with SIG Release, and probably Cluster Lifecycle and a few other spots, just to make sure that it's directionally right, yeah.
D: ...away from this a decision: we want the former — we want to produce something that is executable and complete at the output of the current build and release process. I think what Walter was starting to get at is what you didn't want to bring up, which is the kernel distinction: having a release every six weeks, and never, ever cherry-picking things — you only fix forward, and if you want that nice new fix, you get the new version; no one will ever spend time fixing the thing before. That's right — that's a very different...
D: It's a very different conversation, but I think we should start with this: at least articulate a position from this group that comes through in the consensus of this discussion, and see what it looks like — validate it. If it still looks good, then we start to shop it around and see if there are drastic holes in it, or if it's worth bringing up with SIG Architecture and maybe making — staffing — it a sub-project of SIG Release. That would be my initial spot to start, but maybe there's a different way to do it.
H: Aaron is asking, I think, a really interesting question, which is: do we end up at KubeCon again with nothing changed since this talk last year? I think, to the topic of the talk, we will very likely end up with either nothing having changed, or us having explicitly decided not to change it — which is the thing that Walter really wants to talk about — but I think we can end up there with a strong position statement from SIG Cloud Provider. Well, rewind a year: there was no SIG Cloud Provider.
H: We supported half as many clouds as we do, sort of informally, today, and we've made great progress on the extraction of cloud provider. And we can come up with a strong position statement — that SIG Cloud Provider and SIG Release both agree that the desired end result is the null cloud provider, and that's what conformance will be testing against — and while that's maybe not a fully ratified plan with large buy-in, I think that'd be decent. Yeah, I would.
B: I'm just wondering, like, concretely, does this mean we might have a null cloud provider that's bundled in alongside all of the existing cloud providers that are bundled in, or do we think we might be at a point where we're ready to talk about no cloud providers being bundled in by Q4? I'm just trying to think: what's the thing you would want to talk about?
A: Yeah, and one thing that I want to remind people of, too, with regard to some of these things: we actually added code to the cloud providers in general that allows for them to be marked as deprecated. So we have actually taken concrete steps to start adding machinery to remove the providers — so, a reminder to everyone.
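(For reference, a minimal sketch of the kind of deprecation machinery being described — a wrapper that registers a cloud provider factory but warns when the provider is actually instantiated. The helper and its exact behavior are illustrative assumptions, not the actual in-tree code, which may differ.)

```go
// Package providers sketches how an in-tree cloud provider could be
// marked deprecated at registration time. Illustrative only.
package providers

import (
	"io"
	"log" // real Kubernetes code would use klog/glog instead

	cloudprovider "k8s.io/cloud-provider"
)

// registerDeprecated registers a provider factory but logs a warning
// whenever the provider is instantiated.
func registerDeprecated(name string, factory func(io.Reader) (cloudprovider.Interface, error)) {
	cloudprovider.RegisterCloudProvider(name, func(config io.Reader) (cloudprovider.Interface, error) {
		log.Printf("WARNING: the in-tree %s cloud provider is deprecated; use the external cloud-controller-manager instead", name)
		return factory(config)
	})
}
```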
A: But one of the things we should really be focusing on delivering, sooner rather than later, is updated documentation on how you load an in-tree provider and how you configure it generally, so that we can use that as a basis for the individual providers themselves documenting, you know, their particular parameters and settings, and how their users get those set up.
A
Similarly,
this
is
this
is
also
very
important,
the
external
cloud
provider
that
we
produce
a
document
that
works
with
cubed
min
to
you
know
how
you
summon
the
master,
how
you
load
the
provider
code
in
there
and
how
you
gonna
configure
it
because
there's
a
little
bit
more
work
involved.
Since
that
code
is
an
entry
and
it
you
know
it
has
to
be
loaded
in
after
after
after
the
master
has
started
out,
and
so
those
are,
the
those
are
two
things
that
I'm
that
I
think
that
I
would
like
us
to
be
working
towards.
A: Actually, you know, as soon as possible. And I guess I'm happy to help work on those, but I'm also wondering if there's anybody who wants to volunteer to help produce those documents and send them back to SIG Docs for their review, and to our community — our SIG — also for some review.
A: You know, really, what they're looking for is direct feedback on what people think is the best way to organize those pages, so that someone new to Kubernetes, or someone wanting to stand up Kubernetes on different clusters, will be able to show up and be successful in doing that, regardless of the system that they're on. So this is an ask that SIG Docs has made of us.
E: I think that is a great effort. I think some of the challenges here are that — so, you know, I have previously maintained a particular opinionated solution, and I think that was a good outcome, but other people have different opinions, and there are lots of players that want to get their own effort into that page. And so you end up with a page that has, like, "here are the 50 different options you can use," and so the user comes along and says: I want to run on platform Y...
E: ...but, you know, everyone has a solution for platform Y, and it's not clear which ones are maintained, which ones are easier — and everyone, even I, has opinions about which is right. So it's difficult to know how to curate that and give someone a good experience without favoring one project over another. That's been the sort of real meta-challenge on that, right?
A: And that's one of the things that they're hoping to get out of us — I mean, this goes back to the previous conversation of how do you discover artifacts and verify them — they're hoping that if we can come up with a way to say "this is documentation that's maintained, and we've vetted that documentation, and we have some process for that"...
A
Then
they
are
comfortable
with
with
presenting
that
as
something
that
is
less
geared
towards
individuals.
You
know
you're,
you
know
you
know,
be
it
organizations
or
individuals
themselves.
You
know
seeking
to
advance
agendas
versus
you
know
serving
a
larger
community
and
that's
what
they're
hoping
to
get
out
of
this.
D: No — I said it's a tricky question, especially with what Justin raises: avoiding the jockeying for position, or ordering, or defaulting, and these kinds of situations. Elsewhere in the code we use a lot of foo and bar, so maybe those examples — or random round-robin ordering, populating from a list as the example — are strategies that we've used in the past that might help. So I'm curious what other people come up with for those suggestions, but it is tough.
B: So this option right now, where a select group of people get to choose what shows up — yeah, you get this sawtooth pattern where it builds up and then it drops. Allowing people to sort of transparently do this, without having to keep up a website, would be the ideal solution, but we're not there. I just think that page right now is completely paralyzing to a new user. Yeah — "just choose one of these thousand."
E: I know one option that we haven't tried — as I say to my children, it's not that we tried it, right — which is package-usage reporting: we can try to get information on what is actually used out there. There's obviously very sensitive information in some regards, and also you'd need users to want to report all the versions they're running to some weird private service — but we could give...
C: We're not saying we're not gonna take feedback or anything, but I mean, there are quite a few proposals in there about how to build a cloud provider, how to set up the repo, and how we should move forward to the point where we can remove all cloud-provider specifics from kubernetes/kubernetes, whether that's deployment or, you know, cloud-provider-specific libraries. So yeah, please make sure you're looking at that.
F: So one question I had about it was: okay, it's clear that we want the cloud provider code out, but the KEP, as of right now, has all the controllers in it — all the cloud providers' controllers, so the service controller, route controller, whatever — and I'm not sure if we want to be pushing the controller code out to staging. As of right now that seems a bit unclear, and that's just kind of based on a gut feeling, but I just don't know.
C: So my only worry is that — I would like us to get to the point where the kubernetes/kubernetes core doesn't even depend on cloud provider; I would like us to be completely out by 1.19. And yeah, let's have the discussion — I think it's a great discussion to have — but the perspective I'm coming from is: if we can get to the point where nothing in kubernetes/kubernetes depends on a cloud provider, then we know we're done. And there are three controllers in tree right now that depend on cloud-provider specifics.
C: I think, in fact, the way we were viewing this — and this is mostly because I think we, Google, are the only ones who actually make use of the cloud provider interface in the node-ipam controller — was to have two versions of it: one that is in tree, where we've actually deleted all the cloud provider dependencies, and then a separate version of the node-ipam controller, which is the one that is pulled in and used by Google's cloud controller manager. And so maybe we want to look at doing something like that.
C
But
I
think
that
you
know,
and
if
we
did
that,
then
I
don't
have
any
problems
with
things
like
service
and
wrap
controller
remaining
in
tree.
But
I
really
would
like
to
get
to
the
point
where
we
can't
accidentally
have
built
any
cloud
provider
dependency
into
what
we've
been
sort
of
referring
to
as
the
kernel
modules
right.
C
So
I
would
love
to
get
to
the
point
where
the
cube
API
server,
the
KCM
and
cubelet
do
not
take
a
cloud
provider
option
and
cannot
load
the
cloud
provider
library
and
are
completely
agnostic
and
work
in
a
very
understandable,
predictable
way.
And
if
you
need
something
like
specific,
no
Taipan
behavior,
then
you
disable
the
no
type
end
controller
in
the
KCM,
and
you
run
it
in
the
CCM.
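(A minimal sketch of what that looks like from a provider author's side, assuming the k8s.io/cloud-provider staging package; the import path and the Initialize signature have shifted across releases, and the provider name "example" is hypothetical.)

```go
// Package example sketches a minimal out-of-tree cloud provider that a
// cloud-controller-manager binary could link in.
package example

import (
	"io"

	cloudprovider "k8s.io/cloud-provider"
)

const providerName = "example" // hypothetical name, passed as --cloud-provider=example to the CCM

type cloud struct{}

// Compile-time check that cloud satisfies the interface.
var _ cloudprovider.Interface = &cloud{}

func init() {
	// The CCM looks the provider up by name at startup.
	cloudprovider.RegisterCloudProvider(providerName,
		func(config io.Reader) (cloudprovider.Interface, error) {
			return &cloud{}, nil
		})
}

func (c *cloud) Initialize(b cloudprovider.ControllerClientBuilder, stop <-chan struct{}) {}

// Returning false from an accessor simply disables the corresponding
// controller loop in the CCM until that API is implemented.
func (c *cloud) LoadBalancer() (cloudprovider.LoadBalancer, bool) { return nil, false }
func (c *cloud) Instances() (cloudprovider.Instances, bool)       { return nil, false }
func (c *cloud) Zones() (cloudprovider.Zones, bool)               { return nil, false }
func (c *cloud) Clusters() (cloudprovider.Clusters, bool)         { return nil, false }
func (c *cloud) Routes() (cloudprovider.Routes, bool)             { return nil, false }
func (c *cloud) ProviderName() string                             { return providerName }
func (c *cloud) HasClusterID() bool                               { return true }
```

With a provider like this linked into a CCM binary, the core components stay agnostic: the kubelet and KCM run with --cloud-provider=external, the KCM drops the loop with --controllers=*,-nodeipam, and the CCM runs it instead.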
F: Okay, so maybe this is looking a little bit too far into the future, but where would those controllers live once they're out of staging? Where would they live eventually? Because I guess they could live in the cloud-controller-manager repo, but it seems like, from the KEP, we're saying we're gonna have a controllers repo that has, like, the service and route controllers or whatever. So — yeah.
C
I
mean
it's
a
great
question,
so
one
possibilities
we
could,
if
we
is,
we
could
do
that.
The
other
is,
if
we
said
look,
there
is
Google
controller
manager
and
a
AWS
controller
manager
and
a
you
know
a
digital
ocean
control
or
manager
repo.
Maybe
they
can
have
their
controllers
there,
their
version
of
the
controller
living
there
specifically
and
right
now
it
seems
like
we're
basically
heading
towards
those
being
CNC,
a
free
boats
that
still
open
source
code.