From YouTube: Kubernetes SIG Cloud Provider 2018-07-11
A: OK, so I think the agenda is fairly short today; I don't think we'll take the full hour. Most of the updates are just follow-up items from last week or three weeks ago. I'll start with the first thing on the list, which is the KEP we opened to introduce end-to-end conformance testing for all the providers, for anyone that hasn't been to the SIG meetings. The KEP outlines what set of tests we think is good enough to run right now, what kind of tests we expect from all the providers, and what tools to use to do that. With the KEP merged, I think the guidance on how to get this done is a bit more clear, and we can start pushing this effort forward a bit more aggressively. So I want to talk with the SIG about what we think is a reasonable deadline to start having all the providers reporting these tests on a regular basis. I was thinking one full release is a pretty reasonable deadline for all the providers to be reporting at least the base conformance tests.
A: I think the ideal place to be is every commit, but in the KEP we had said that every patch version is also okay, just because infra costs and the overhead of operating CI for every single Kubernetes commit might be a bit too much to ask upfront. So I think, as a smaller-step approach, the bare minimum is to report tests for every patch release, and then maybe, once we're more experienced with this, we can start expecting every commit out of master.
A: The KEP pretty much says to ping Ben from SIG Testing, and he says he's okay with that, so that's pretty much all we're going to do for now. Maybe in the future there will be a better way to do this, like people bringing their own public GCS or S3 buckets or whatnot, but right now the ask is for a GCP bucket owned by the SIG.
A: You know, I don't know; this is kind of what I struggle with a bit: how do we push so many people to do something, especially in the open-source community? I feel like as long as we're recording some sort of deadline showing that, hey, we want to meet these goals and we all agreed to do it, that's good enough. But I don't know, what do you guys think?
C: At least a target. There's also a lot of overlap with the conformance working group, and there's something like 60 certified Kubernetes providers that we're also encouraging to report results of the conformance test suite back to testgrid. So I think the folks who are actively working on the cloud provider extraction or evolution are highly incentivized already; I don't think we need a stick or a consequence for not meeting it. It's really just about trying to come up with a reasonable target.
E: A suggestion might also be: if there are some docs or a list of cloud providers, and you're not reporting your e2e tests by some deadline, you just don't get put in there. The reason is that it's both a carrot, and it also avoids curating a list where anyone can just submit something, it's sort of hard to say no without a reason, and the list just blooms. Otherwise any list will grow to infinity in Kubernetes.
A: Yeah, that's a good point. Okay, sounds good. So I guess tentatively we'll set 1.12 as the deadline and go from there. Hopefully everyone has enough of a reason to push for that, and if they don't, again, there's not much we can do, but as Justin said, we can set a standard for how we list providers based on whether they report tests or not.

C: I also spoke with Mitra, Ishi, and Ben about creating a dedicated dashboard for the cloud provider effort, so that we can have one top-level set of test results for each commit for each cloud provider. Each cloud provider would get a row, and the goal is to surface it early if work in one area, or someone testing on one cloud provider, inadvertently breaks all the others. Oh, Ben's got something in the chat; maybe there's already one.
C: So I think there may be value in having one other view into this for the cloud providers specifically. That one is more focused on the conformance working group and on platforms and distributions doing their own CI; maybe it's the same set of tests for the cloud providers, but this may just be another view into the data.
A: I'll get the doc updated a bit later. Next on the list is another KEP; I feel like these SIG meetings are just us talking about KEPs, which is great. This KEP was merged last week and is essentially a template for proposing new providers. I've had people from Alibaba Cloud, Baidu Cloud, DigitalOcean (where I work), IBM Cloud, and Oracle reach out wanting concrete ways to propose new providers into the ecosystem. One of the biggest benefits is having kubernetes-org-owned repos that they can host or co-own.
I: The Cluster API is basically a declarative way of mutating and managing Kubernetes clusters. It's going to be managing the infrastructure layer, managing bits and pieces that are not necessarily under the scope of the cloud provider as it stands today. These are going to be things like creating machines, configuring machines, configuring the cloud-init for those machines when we're bringing them up, and then making sure that there's a happy number of them running accordingly.
I: We started this project early last year; it was myself and some folks from Google, and it has now turned into a working group under SIG Cluster Lifecycle. We have some code that's currently in alpha: we have the API, which is running as an aggregated API server, and we have a couple of controller implementations, one of which, what's going to be the OpenStack controller, is linked below. We're currently working on AWS, and Google has an implementation out there already. So it falls in the same scope of multiple cloud providers, but it's solving a very different problem than what the traditional cloud providers are solving, and it does it in a declarative way. That's been ongoing for a while, and there's probably some overlap, so we should probably start to work closely with you folks here as our project matures, to make sure that we're working as harmoniously and as complementarily as possible along the way.
C: Thanks for coming and sharing, I appreciate it. I have spoken with some folks about the cluster API, so I have some perspective from the Google folks who have been working on it, but I'm interested to hear the trending directions on what the scope and limits of the cluster API are, as currently understood by SIG Cluster Lifecycle. I don't participate in that SIG, so I don't know that part of it. Some examples would be: is the health of a cluster in scope?
I: So I think the boundary, and we haven't written this down anywhere, so this is just my opinion of the consensus of the working group as it stands today, is basically at the machine and infrastructure layer. We don't really concern ourselves with the health of the Kubernetes software, like any of the Kubernetes components that run on top of the infrastructure; we're concerned with the health of the infrastructure.
I: You know, is the node happy? Things that would make a node happy would be: does it have available memory, is CPU available, is the software running? If the software is running, then we sort of stop there, and there's a whole other set of checks and balances for whether the software is actually going to be doing what we expect it to do.
I: It kind of boils down to machines. We have two main components: a Cluster, which is the actual definition of the cluster components and is basically a lazy, declarative way of saying "I want to run Kubernetes version N", and a Machine, which can also be thought of as a set of machines, and which just describes the size of the machine, maybe how we provision the machine, and some networking information about where the machine is going to land.
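For readers following along, here is a rough sketch of what those two objects might look like. The type and field names below are simplified assumptions for illustration, not the actual cluster-api v1alpha1 definitions.

```go
// A hypothetical sketch of the two objects described above: a Cluster that
// declares the desired control plane and a Machine that declares a single
// node. Field names are illustrative and do not match the real cluster-api
// v1alpha1 types.
package clusterapi

// ClusterSpec declares what the cluster as a whole should look like.
type ClusterSpec struct {
	KubernetesVersion string   // e.g. "1.11.1": the version the cluster should run
	ServiceCIDRs      []string // cluster networking configuration
	PodCIDRs          []string
}

// MachineSpec declares a single machine (or, via a set, many identical
// machines) that should exist to back the cluster.
type MachineSpec struct {
	InstanceType   string            // provider-specific machine size
	Image          string            // OS image used when provisioning
	KubeletVersion string            // node component version to install
	Labels         map[string]string // placement hints such as zone or region
}
```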
I: I completely agree; I think we're solving a lot of the same problems: everything from how we interface out the infrastructure bits so that they work for every cloud, to what we concern ourselves with and what we prescribe, to how the code talks to the cloud-specific bits. Do we build that in? Is it part of the same binary? Do we stick them into the same pod?
A: Yeah, so to dive a little more into the technical details there: a couple of months ago we talked about handling or extending node conditions or node states, one example being that some providers support a node being shut down, and we don't really handle a shutdown state; the node is just down, it's just not ready. If we want to support some of those unique states per provider, is that something we want the Machine API to do eventually, or is that something that we still want?
I: I think one of the things we crossed early on was making an explicit move away from the node object. Nodes right now kind of work backwards compared to the Machine API: nodes are kind of read-only and they describe the state of a node, whereas the Machine API is declarative and you say "this is what I intend to be there." I think long term it would be cool if we could munge the two together and actually make nodes declarative. I just don't know what the politics behind getting that into core are going to look like; that was a lot for us on day one, so that's why we took a step away from nodes. But if we can reasonably do it, I think the working group would be on board for it.
I: And also, if you talk to Tim Hockin and a couple of other folks, the notion of a master and a node, I think the long-term intent is for those two concepts to go away, and everything sort of becomes this nebulous machine, or whatever word you want to use, and you can describe each one independently.
I: If you click on, sorry, I can't type very much, I have a broken hand right now, but if you click on the GitHub repo that I put in the chat, there's a pretty decent diagram, and the readme gives a holistic view of the API, which I think is a horrible name by the way, but of the project in general. What we're building is more or less a framework, and the actual API definition is one component of the framework.
D: Is there anything that we can do to help the membership get involved with what your SIG is doing? I'm sure there are people here for whom this may be news, and who want to go back to their organizations and decide if it's something they want to start contributing to. To help with those collaborations, we're happy to, you know, if people want repositories.
G: I have a quick technical question while you're here. Do you expect that, long term, everything will use the same tooling, with the local VM for bootstrapping, or is it reasonable for a tool to consume the cluster API as the definition of the cluster and not necessarily use that bootstrapping system?
I: Great question. When we were originally thinking about this, we looked at a couple of other open-source projects that do Kubernetes infrastructure in a declarative way. There are sort of two extremes on that scale: on one hand, a very prescriptive way of saying these are exactly the shell commands you run to configure your nodes, and on the other side of the scale, we don't concern ourselves with that whatsoever and just say "Kubernetes version 1.2", and the controller decides everything else. The cluster API decided to go with something more idiomatic, more Kubernetes-like: the smallest possible API definition.
I: I don't think we really have a time in mind; we're pushing towards it, but we're still crossing a few major hurdles. Our sig calls were a couple of hours ago, they're every Wednesday, and just today we brought up a pretty fundamental change, which was switching from an aggregated API server over to CRDs.
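For context, here is a minimal sketch of what "moving to CRDs" implies: the Machine type gets registered as a CustomResourceDefinition in the main API server instead of being served by a separate aggregated API server. The group and version names below are assumptions for illustration, not necessarily what the project settled on.

```go
// A minimal sketch of registering Machine as a CRD; group/version names are
// assumed for illustration only.
package clusterapi

import (
	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// machineCRD builds the CRD object that would replace serving the Machine
// resource from an aggregated API server.
func machineCRD() *apiextensionsv1beta1.CustomResourceDefinition {
	return &apiextensionsv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "machines.cluster.k8s.io"},
		Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{
			Group:   "cluster.k8s.io", // assumed API group
			Version: "v1alpha1",       // assumed version
			Scope:   apiextensionsv1beta1.NamespaceScoped,
			Names: apiextensionsv1beta1.CustomResourceDefinitionNames{
				Plural:   "machines",
				Singular: "machine",
				Kind:     "Machine",
			},
		},
	}
}
```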
F: So last time we had an action item to create a KEP that would make it compulsory for all the cloud providers to maintain documentation for both in-tree and out-of-tree cloud controller managers. Basically, we took the KEP format and wrote down a proposal of what every cloud provider needs to provide with every release: we listed goals around creating documentation, maintaining the developer documentation, and ultimately having the documentation consistent enough that SIG Docs can confidently link to the cloud provider documentation.
D: And if you look at what we're asking for there, everything may not be widely applicable to all of the providers. Not all of the cloud providers have an in-tree provider in place, so it wouldn't make sense; you'd have nothing to document, and so you wouldn't document that. Likewise, if your out-of-tree provider isn't available yet, that's not something you would document.
D: But the idea is that there's in-tree code and there's out-of-tree code. Typically they're very similar, so a lot of the docs would be similar, but there are also differences in how things work if you're out of tree. So we're going to try to capture the things that are similar, like the flags that you pass to the kubelet.
D: You know, load ordering and those kinds of things, capture the similar things, and then require that all the cloud providers document the very specific things for their particular system. Our goal with this, once the KEP is approved, is to have everyone producing documentation again by the end of the 1.12 cycle, and to have that fall within the regular cadence of the SIG Docs team and the release team, so that the documentation is in place by the deadline and we have the reviews going.
A: I think with documentation, two of the biggest things that are lacking right now are what flags you need to set for every component, and what all the different annotations mean and how they map to the different providers, load balancers being a good example. I think what the KEP proposes right now is a pretty good first iteration of those two things, so it's a pretty good start. Cool.
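As a rough illustration of the annotation side of that gap, here is a minimal sketch (using client-go types) of a LoadBalancer Service driven by a provider-specific annotation. The AWS NLB annotation key is used only as an example of the pattern; which annotations a provider supports and what their values mean is exactly what each provider's documentation would need to spell out.

```go
// A minimal sketch of a Service whose load balancer behavior is tuned through
// a cloud-specific annotation, the kind of thing the documentation KEP asks
// each provider to explain.
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleLoadBalancerService builds a LoadBalancer Service carrying a
// provider-specific annotation.
func exampleLoadBalancerService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name: "web",
			Annotations: map[string]string{
				// Provider-specific behavior is driven by annotations like this one.
				"service.beta.kubernetes.io/aws-load-balancer-type": "nlb",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "web"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
}
```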
F: And I have used AWS examples just to give an idea of what needs to happen for initialization of the kubelet, the API server, and the cloud controller manager, in-tree and out-of-tree. Some of those manifests need to be tested before I write the PR; I will test the manifests and give the examples, but this is supposed to be a proposal and a format. So I would really appreciate it if folks could read it and give me feedback before I open the PR; our goal is to open the PR early next week.
A: Yeah, and to give a quick review of that: essentially we want to move everything in pkg/cloudprovider/providers into the Kubernetes staging directory. This is just a way for us to signal to the community that we want to move these out, but we still want to keep it in Kubernetes core, because trying to pull it out all at once is going to break a lot of things. So this is kind of like a staging period.
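For context, here is an abbreviated sketch of the interface those provider packages implement. The method set is a subset and the signatures are simplified, paraphrased from the Kubernetes cloudprovider package, so treat this as approximate rather than the verbatim upstream definition.

```go
// Abbreviated, paraphrased sketch of the cloud provider plugin surface; not
// the exact upstream interface.
package example

import "io"

// Interface is what each cloud provider (AWS, GCE, OpenStack, ...) implements,
// whether it is compiled in-tree or run out-of-tree in a cloud-controller-manager.
type Interface interface {
	// LoadBalancer returns the provider's load balancer support, if any.
	LoadBalancer() (LoadBalancer, bool)
	// Instances returns node/instance metadata support, if any.
	Instances() (Instances, bool)
	// ProviderName returns the registered name, e.g. "aws" or "gce".
	ProviderName() string
}

// LoadBalancer and Instances are stand-ins for the real sub-interfaces
// (EnsureLoadBalancer, NodeAddresses, and so on).
type LoadBalancer interface{}
type Instances interface{}

// Providers register a factory keyed by name; the controller manager then
// selects one of them via its --cloud-provider flag.
var factories = map[string]func(config io.Reader) (Interface, error){}

// RegisterCloudProvider is the registration hook each provider package calls
// from an init function.
func RegisterCloudProvider(name string, factory func(config io.Reader) (Interface, error)) {
	factories[name] = factory
}
```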
A: So the thing is, I was hoping Walter would give an update on that, because there were some comments and I don't think he has revisited that PR and made updates. He did mention that he was going to pull the staging initiative, the move of the code into the staging directory, into a separate proposal, and I don't think he got to that. So I'm not sure; it seems like we're a bit blocked on that.
A: Yeah, so I think that's one solution, but it's still not trivial; every provider is sort of different. I think it's a reasonable solution and we should just do it; it's just a matter of actually putting in the work, making sure that we're testing it properly and that we're not deleting anyone's load balancers. Does anyone want to take this task?
B: I just wanted to provide some perspective from AWS. I know I've seen similar issues about the load balancer names being cryptic, and yeah, I think people are annoyed. Another unrelated but semi-related issue for AWS is our node name, which is currently stuck at the private DNS name, so not the same issue but a similar pain point.
A: So I guess for actionable items here, I'm just going to put the issue out in the open and see if there are any takers; if there aren't, I guess we could talk about it at the next SIG meeting, and maybe I'll take it or something. But yeah, I've just seen too many issues about this, I just want to fix it, and I don't think it's ideal to have random names like the ones we have right now for load balancers. Cool, okay, so I think that's it for this evening.