From YouTube: Kubernetes SIG Cloud Provider 2018-10-17
A
All right, good, great! So welcome to this bi-weekly SIG Cloud Provider meeting, everybody. Looks like we have a pretty light agenda today, but if you have any additional items that you'd like to add, please feel free to drop them into the Google Doc, and also put your name down if you're attending. It doesn't look like we have anybody new this week, unless there's someone new over at the Google offices, but if you are new, welcome.
B
So I was hoping to take KubeCon as a chance for us to reevaluate whether we want to make any breaking changes — kind of a final call for breaking changes we might want to make to the interface. And if we want to do that, then we of course have to stay beta for 1.13 and...
D
I would like to suggest that we get really crisp about what GA means and what the requirements are. We talked earlier about having some tests that we would add later; I don't know that any effort has been put into that. But planning really well, getting in early in the Q1 / 1.14 timeframe, and just sticking the landing — rather than being unsure in 1.13.
D
I think that's part of the conversation we need to have. I don't know that we're going to come to a conclusion — I guess we have time today, so we might as well start the conversation, but my intention wasn't to come to a conclusion on that today. It was mostly to say: we need some statements about the testing strategy and why we feel comfortable with it. Whether that includes a mock cloud provider or a null, do-nothing cloud provider or not, I don't know, but I think we've been pretty hand-wavy about the requirements in the past, and I think...
E
If I can add to that, I also think we should think about this with regard to conformance. I'm fairly sure that there are APIs which technically are implemented but in fact are no-ops on a good number of the cloud providers, and so we should be very clear about which of the cloud provider APIs are required to be fully implemented and which are allowed to be no-ops. Yeah.
A
Yeah, I think I was about to mention the same thing too. One of the reasons that this group was founded was an issue from, I don't know, a year and a half ago, where every cloud treats it a little bit differently, and I think it makes sense in the context of this group to establish consistent behavior across the providers where we can. Testing is the best way to do that, but also I think...
E
Maybe — I mean, I think it may be okay to have certain of the APIs listed as — "opted" is probably the wrong word, but for lack of a better one — optional. So, for instance, I think there's a sort of push-certificate method somewhere in the cloud provider interface, and I'm fairly sure that Google is the only one who implements it. Maybe that's okay, but, you know, for purposes of the other SIGs and anyone who wants to rely on some of these behaviors...
A
Testing can even be used to enforce some of this. Like, if something isn't implemented, then there should be a consistent way that the user is informed that it's not implemented, and that can be tested for, right — if you don't get the proper response back. Another option would be that you get, you know, a standard error back. Yeah.
F
The cloud provider interface — the Go one, I mean — is a Go interface, and to date we haven't made any Go interfaces stable, question mark. Whereas what we have done is say that, you know, a Kubernetes cluster running with this configuration is conformant if it does X. Which makes sense: in theory, someone could write a cloud controller manager in Java, which obviously would not use the same Go interface but could still be conformant.
E
I think that's true, but I guess I'm viewing it from the other direction. I'm thinking of someone who's modifying the service controller or the route controller: they are dependent on the behavior of the cloud provider interface, and we are trying to get to a world where that's a black box to them. And if we're saying that it's a black box they should be able to depend on, then we need to be very clear on what behavior they can depend on when they're modifying the service controller.
B
Yeah, but I think this is why a lot of people are confused: when they think about APIs, they're thinking about, you know, core/v1 or apps/v1 or whatever. So, taking that as an example: let's say we add a context parameter to a method in the cloud provider interface. That is breaking, because whoever vendors it next has to update that method signature. We should define whether we consider that breaking, or whether we define it as something else. So I feel like the current features issue we have is kind of the generic "support out-of-tree providers" one; maybe it's time we break that out into, specifically, the interface on one hand, and the out-of-tree component — the cloud controller manager, which includes all the component flags and whatever — on the other. Yes.
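To make B's example concrete: in Go, adding a `context.Context` parameter to an interface method changes the method set, so every existing implementation stops compiling until its signature is updated. A minimal sketch, with hypothetical interface and method names rather than the real upstream ones:

```go
package main

import (
	"context"
	"fmt"
)

// InstancesV0 is a hypothetical pre-change method shape.
type InstancesV0 interface {
	NodeAddresses(nodeName string) ([]string, error)
}

// InstancesV1 adds a context parameter, which changes the method set.
type InstancesV1 interface {
	NodeAddresses(ctx context.Context, nodeName string) ([]string, error)
}

// myCloud implements only the old shape.
type myCloud struct{}

func (myCloud) NodeAddresses(nodeName string) ([]string, error) {
	return []string{"10.0.0.1"}, nil
}

func main() {
	var _ InstancesV0 = myCloud{} // ok: matches the old signature
	// var _ InstancesV1 = myCloud{} // compile error: the signature changed,
	// so everyone who vendors the interface must update their implementation.
	addrs, _ := myCloud{}.NodeAddresses("node-a")
	fmt.Println(addrs) // prints "[10.0.0.1]"
}
```

This is why a signature change is breaking for implementers even when the runtime behavior is unchanged.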
D
I think that if we approached this problem today, we would have a CxI-type name and go in that direction — like CSI or CNI or whatever; CCPI, maybe, I don't know. But I think we need to be clear about this strategy before we consider it GA, and I think the right order of steps to pursue is to come up with that strategy, then demonstrate that what we have meets those criteria, and therefore that it is reasonable to be GA.
E
Yeah, that sounds right to me, but I think we've got to view that interface as a pluggable system, right? If you adhere to whatever we say you need to do, you should be able to plug in: as a new cloud provider, you should be able to write your implementation of the interface, adhere to what we've told you to do, and everything should work.
A
The way OpenStack has managed some of these changes that have been going in is that we just pinned our releases to the Kubernetes release. So we don't guarantee that the head of our branch will work with, say, 1.12 — we've already pinned it to the 1.13 alpha.
D
Yeah, I am interested in these tests, because I want to know when ours is the only one that doesn't go the way the herd is going, and not have to scramble later. So I see the clear benefit of adding some tests to understand where the implementations deviate, and I think if we can focus on that benefit and just make it useful, then we don't have to...
A
I think you can, because it is an interface, and the providers are conforming to the interface, which means that the tests can call out to the individual cloud. You should be able to run the same tests against all the cloud providers and expect particular behavior out of them.
E
I don't really disagree — or I'm only going to disagree in very isolated cases — but I know of ones like the cert push which, as far as I can tell, works like this: you give it a certificate, and it finds a way to push it into the environment on a node, and I think the keys it shows up under are very cloud-provider specific. That's the only one I can think of that's like that, but I can't swear there aren't others, right?
A
I mean, you're right, there are going to be places where behavior deviates. But in places where we want behavior to be consistent — like, if you say you want to create N nodes, you know what the expected behavior is: it doesn't matter what the naming of those nodes is, the count is what matters. So the tests are inherently going to be limited, but it's a way to...
E
I mean, that's true, but in fact I think the signal should actually be in the interface itself. I think the interface should make it very clear — as I said, for anyone who needs to use the cloud provider interface, like the service or route controller owner — which interfaces they can rely on being consistent across cloud providers and which are really, you know...
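The in-tree cloud provider interface already carries a signal of this kind: each feature accessor returns an implementation together with a "supported" bool, so controllers can tell optional features apart from required ones. The sketch below illustrates that pattern with simplified, hypothetical method names, not the exact upstream definitions.

```go
package main

import "fmt"

// Routes stands in for one optional feature group of the interface.
type Routes interface {
	CreateRoute(name string) error
}

// CloudProvider sketches the optionality pattern: the accessor returns
// the feature implementation plus whether the provider supports it.
type CloudProvider interface {
	Routes() (Routes, bool)
}

// minimalCloud supports no routes: it reports supported == false.
type minimalCloud struct{}

func (minimalCloud) Routes() (Routes, bool) { return nil, false }

func main() {
	var cp CloudProvider = minimalCloud{}
	if _, supported := cp.Routes(); !supported {
		// A controller would skip route management entirely here.
		fmt.Println("routes unsupported; skipping route management")
	}
}
```

A conformance profile could then say which accessors must return `true` and which may legitimately return `false`.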
F
There's a very — it used to be the case, at least, that UDP load balancers are not supported on AWS, and I think it just silently fails to create the load balancer. There's an event in there, but it isn't really surfaced. It's certainly one that would be great to handle consistently; that's the one difference I know of. It's not exposed at the interface level, pretty obviously, and I think our tests just skip over it — they check whether your cloud provider is AWS. Yeah.
E
I think a catch-all may be hard for everything, but I think that for the core pieces that are widely used, we should try to get something like a conformance test on them — even if it is "you run your own e2e conformance test, and this is the dictated behavior" — so that anyone using that particular method can rely on the behavior.
D
I think that's right, and I think that testing of the Kubernetes API is where the "conformant or not" discussion happens — whether it's a profile or a badge or something else saying that a user of the API can do X, Y, or Z. This is more about giving ourselves some signal, a guarantee that we're behaving consistently across this group. If that's the right way to characterize it, I agree.
B
Yeah, and I guess what I was trying to say is that the interface by definition does that, because you can't compile your code without conforming to the interface — its signature is enforced by the compiler. But obviously things are more complicated than that, and every provider has little niches, little issues.
E
Say — take someone's earlier suggestion: if we have a "create three nodes" operation and an implementation of the interface that basically creates three nodes, I would love to just see a generic test that calls the cloud provider interface to create three nodes and then verifies three nodes were created. That just seems like a good common test that we can point to and say: look, if that test passes, then we have faith that the interface has been implemented correctly.
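E's generic test could look something like the sketch below: one provider-agnostic check written against the interface, exercised here with an in-memory fake. The `NodeCreator` interface, its methods, and `fakeCloud` are illustrative assumptions, not the real cloud provider API.

```go
package main

import "fmt"

// NodeCreator is a hypothetical slice of the cloud provider interface.
type NodeCreator interface {
	CreateNode(name string) error
	ListNodes() []string
}

// fakeCloud is an in-memory implementation used to demo the generic test;
// a real run would plug in an actual provider instead.
type fakeCloud struct{ nodes []string }

func (f *fakeCloud) CreateNode(name string) error { f.nodes = append(f.nodes, name); return nil }
func (f *fakeCloud) ListNodes() []string          { return f.nodes }

// CheckCreatesNodes is the provider-agnostic test: create three nodes
// through the interface, then verify three exist. The same check runs
// unmodified against any implementation.
func CheckCreatesNodes(c NodeCreator) error {
	for _, n := range []string{"n1", "n2", "n3"} {
		if err := c.CreateNode(n); err != nil {
			return err
		}
	}
	if got := len(c.ListNodes()); got != 3 {
		return fmt.Errorf("expected 3 nodes, got %d", got)
	}
	return nil
}

func main() {
	fmt.Println(CheckCreatesNodes(&fakeCloud{})) // prints "<nil>" on success
}
```

Note the test deliberately checks only the count, not node names, matching A's earlier point that naming may legitimately differ across providers.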
B
So last week I created an issue which pretty much tracks all the dependencies we have against kubernetes/kubernetes. Walter, just correct me if I'm wrong, but the approach we're taking here is: we need to move everything that is cloud provider into staging. That way we have a way to sync the staging repos into the actual external repos, while still being able to build the providers into the kube-controller-manager in-tree, and to do the staging migration.
E
You know, we get a better ability to observe backward compatibility of changes, and consideration for whether it is safe to make sweeping changes or whether those changes need to be more considerate. For things that are in kubernetes/kubernetes but not in staging, theoretically someone should be able to make a change, change everything in kubernetes/kubernetes that depends on it, and boom, you're done — and you're assured you haven't broken anyone in staging. If you make a change to something public that's marked as stable, then you know there are potentially things in outside repos that may change, so you should be spending more time on it, spending more on release notes, etcetera. So I think it's just a much better place to be if we only depend on things that are in staging. And yes, we also get the advantage of all the tooling that's already been put in for syncing, etcetera.
B
And yeah, I'll definitely take a look. There are also a few packages that are unassigned, so if anyone wants to take on the fun task of removing dependencies there, please take them. I think one important thing to note is that we haven't really defined a strategy on how to remove those dependencies.
B
We've been kind of just winging it: take the package and see if someone will approve the PR. A common pattern that I'm seeing in all of these is that there's a kubernetes/utils repo, which is just a repo where people dump general common packages. So what I've been seeing is: we take some utils package that is shared across the providers, but also used in other places in kubernetes, and we merge it into utils.
B
And then we have to update the vendor in kubernetes/kubernetes, and then pretty much just update all the imports from all the providers and wherever else that package is used. There's going to be a bit of pushback from the owners of those common packages, and that might slow us down a little bit.
E
I think it's worth mentioning that a couple of the people on my team ran a bit of an experiment, trying to yank out all the cloud provider code that wasn't ours, and all the things that were only being pulled in for those cloud providers. What we found is we could drop between a quarter and a third of a million lines of code out of the Kubernetes project.
D
However — and that's a really good segue into the topic I added to the agenda late — there's a thread going around about the LTS working group, and it gets very much into the kernel-versus-distro discussion: how and where do we build things, what are the artifacts, how are they discoverable? I think our work on making consistent directory structures and making things consistent is very much aligned with parts of this discussion.
D
So let me move to the slides — okay. The background context is there, and I also linked to the discussion on kubernetes-dev about the formation of that working group. Okay — everyone is overwhelmed with incoming messages and it's hard to figure out what's important all the time, but I wanted to be really clear and bring it up here as well, because I think there are implications, and I think we have input into that discussion that might be useful.
D
And then I think a small subset of the folks here will be in Shanghai; we also have intro and deep-dive sessions set aside there. One thing I want to talk about, if there is quorum, is this: how do we build what we intend to build, and what is the outlook for the Kubernetes build and release process?