From YouTube: Kubernetes SIG Cloud Provider 2019-03-20
B: ...a generalized release cadence that can also ensure quality of all the controllers being published for the infrastructure providers. So this was a summary of what we wrote down, and we discussed it with the leads for each of the SIGs (AWS, Azure, GCP, IBM Cloud, OpenStack and VMware), and essentially all of us agree that the idea makes sense, and that holding all the projects under SIG Cloud Provider would help us get the quality and standardization that we want across the codebase, across testing, release cadence and documentation.
B: You can have a user group session, which is more informal, and probably we should. We have to figure out exactly what the cadence will look like. There is a suggestion that SIG Cloud Provider might have a longer session, so that other provider leads, or anyone who has a general standardization effort that they're driving, can come and give an update.
B: So that's section number one: a lot of people had worries about losing the user group sessions. That's not going to go away, but we will have to discuss this with the MTF, and we'll get that done once we have approval of this proposal. The other thing is the meeting cadence and the bindings in the Slack channel, and we don't expect anything to change. All we are saying is we'll just label it as cloud provider.
B: What that means is prefixing it with the infrastructure provider's name. On the quarterly community updates, one problem has been that, you know, some SIGs have a lot to offer and others say there's not really much; some SIGs have many subprojects, others don't. So, given this approach of holding all the subprojects, we will propose that there'll be one slot for SIG Cloud Provider, and we have to figure out the exact approach to it. But instead of having three SIGs present in one community meeting, we'll just collapse it into one.
B: That also helps with discoverability. Essentially, in sigs.yaml we will list all the projects as provider-aws or provider-azure, and instead of having a specific entry for SIG AWS or SIG Azure, you will have all these subprojects entered under SIG Cloud Provider. So that's the change being discussed in items number five and six.
B: There was a discussion about subproject release leads. So the other issue is, most of the projects that are happening in SIG Cloud Provider are out-of-tree features, and the major goal of this project was to make sure that any ecosystem-related project that can add a plugin, or that lets users be more comfortable with the Kubernetes API and whatever product they're working with, has its interfaces out of the codebase, and we have to make sure that these out-of-tree features are of high quality.
B: The SIG Release team wants to only take care of in-tree Kubernetes features, which is k/k, and for all the out-of-tree features right now we don't have a resolution on who's going to take care of the release process or how it would look. The discussion has started in that issue that I'm referring to, and I'm not saying that needs to be resolved within the scope of this proposal. But the idea is that SIG Cloud Provider will have one release cadence, and the provider leads will be responsible for driving and owning that release.
B: Cadence-wise, I mean, again, we've been following it very, very carefully, going through the alpha, beta, GA release cycle for k/k, and we're hoping that the same would be adopted by every other SIG, and that the SIG Cloud Provider leads would help build that standardization. But this is also something up for discussion. And the last item is testgrid: in testgrid, each provider has their own testing suite, and we have plugins which essentially implement non-blocking jobs.
B: Coming back to the release cadence, what we're proposing is that in testgrid we should move all the tabs related to the providers under SIG Cloud Provider, and all the tests show up underneath that. So again, it's not a major change; it's just how it looks, and it's all organized by the actual interfaces linking up to k/k rather than by provider.
C: My one comment is that I think we can execute on it sooner than San Diego; I think San Diego's November or something, and I think we can get a lot of this done sooner. The one note I had about the KubeCons was the way the SIGs organize, right? So, whether or not, you know, we have meetups for individual cloud providers or sub-SIGs, that was the one point of contention that I noted.
C: Sorry, go ahead. Yeah, I think there's also gonna be a bunch of weirdness that shakes out from this, like changing repo names and figuring out what the dependencies are. This is something where, like Andrew's saying, we can start sooner rather than later, because I think we're gonna find some weirdness around this, yeah.
E: So I just got an email from the CNCF saying that they're limited on space for KubeCon Shanghai, and so they're asking us to reduce our speaking time there. We had an 80-minute session that we proposed, so I just want to get a sense from the team here whether they would prefer to have an introductory session or a deep dive in Shanghai, so that I can respond to Nancy and have the schedule updated.
A: No worries. Okay, so I wanted to spend the next twenty to thirty minutes doing backlog grooming. I think what's best for this is if we do multiple iterations of the backlog, so we have consensus, because we don't have everyone in the SIG on this call right now. So I was hoping we do one quick pass of the backlog right now, and then maybe in the next SIG call, or two SIG calls from now, we can do another iteration when we have more folks here.
A: So what I've done is, this is our current backlog. We have about twenty issues, twenty-one issues; most of these are ported straight from issues in kubernetes/kubernetes, and some of them are new issues. What I'd like to do is go through each issue and add a milestone to it. For milestones we have two options: milestone next and milestone 1.15. Milestone 1.15 means we want to get it done in 1.15, and milestone next is just kind of a placeholder milestone to punt it to some other future release. And then label-wise, there's gonna be p0, p1, p2, p3. p0 and p1 are issues that we definitely want to get done in 1.15, and p2 and p3 are nice-to-haves, but it wouldn't be the end of the world if we didn't get those into 1.15.
C: So, a question there: is there a reason we decided to go with the P-numbers? That was actually something that was moved away from in the rest of the label sets, so they use priority labels instead. Or rather, it used to be priority zero through two or something like that, and they shifted to priority/critical-urgent, priority/important-soon and priority/important-longterm. There are a few other priority labels, but those are the commonly used ones. Yeah.
A: No reason in particular; I was copying, I think it was the kubeadm backlog, and they had p0 to p4, so I used that. I have no problems using priority/critical or whatever else the rest of the community is using, whether it's priority/critical-urgent, priority/important-longterm or priority/important-soon.
A: Okay, cool. So let's start with issue 1: removing cloud provider dependencies in Kubernetes. This is something we started last quarter, which is removing all the internal dependencies in kubernetes/kubernetes. The reason we need to do this is because, if we want to stage the providers, there can't be a circular dependency back into the core repository. So we have to make sure every Go import in each provider is not linking back to kubernetes/kubernetes.
A: I just want this list to represent the current state of the world, and as of today there's still that weird import we do for the credential provider. We should leave that in there for now. I think for 1.15 this should be priority/critical-urgent; I think we absolutely want to get util/mount and the credential provider out in 1.15.
A: This needs to be fleshed out a bit more, but the idea is that we're going to publish the provider in staging to a subdirectory on all the external repos, and it essentially gives providers the option to import it or not; essentially, they can import the in-tree versions of their providers from the out-of-tree repository. I would like to put this in p0 as well for 1.15. Folks agree? Yep. Hey Walter, we're just doing backlog grooming, so just to catch up quickly.
A: So we actually removed this controller in 1.14, because we effectively found out that no one's using it and it was actually written incorrectly. So we either want to provide some sort of mutating admission webhook for this, or something similar, which effectively replaces the admission controller in the API server. Do folks have opinions on this? I know for out-of-tree providers this is a bit more important, less so for in-tree, because we have the in-tree equivalent in the API server.
F: Depends if someone wants to volunteer to do the work. I don't think anyone in API Machinery would object to someone doing it. I think you could probably fast-track a KEP that suggests doing that, but everyone I know working on API Machinery is planned out for basically the next two releases. Okay.
C: So, you know, if people are interested in doing that, I would also check with the SIG Cluster Lifecycle people, because some of the people there are working on the Cluster API, and I think the Cluster API is going to eventually move to something that's strongly webhook-driven, so working with them might be a good plan.
F: Sounds good. I mean, the big thing I'm suggesting is just that it does sound like we might want to do that. I'd also suggest talking to, I think it's Solly, whose handle is escaping me right now. DirectXMan12? Yes, DirectXMan12. He's already got some suggestions about trying to classify things like critical versus non-critical webhooks, and this seems pretty related to that.
A: This was from Q4 last year, and this is the issue to essentially use better, more descriptive names for the LBs we create for services of type LoadBalancer. So there was a PR merged that essentially lets you override the name, but then all the current providers are defaulting to the current name. So the biggest trick to this is how we account for existing LBs that are already created; we need to decide if we want to either ignore those or try to dynamically update the names.
A: Oops, typo, whatever; see you later. Okay, a KEP for migrating the KCM to the CCM. So I know Mike and I are kind of working on this already.
F: It's an excellent idea. I mean, the idea behind going from alpha to beta is always that we're allowed to change our minds on the alpha and fix the implementation. So, my perspective on this one: there's not a whole lot of difference between saying we have an alpha and saying we have a prototype.
F: And then the other thing I'll point out is that, if my memory is correct on any of our proposals on this, you need the feature to be in release n minus 1 for you to go to release n, correct? Right. So even if we say that the beta version gets into 1.16, that means that you can't actually do the migration until you're going to 1.17.
A: All right, the next one is: investigate usage requirements for cluster ID. So for context, from my understanding the cluster ID is a very AWS-specific thing. I know GCP and other providers use it, but I think it's most used by AWS, and it's essentially used for resource isolation: making sure that your control plane is not gonna delete any other resource that's in the same region but not owned by the cluster.
A: So I know Justin and I had a few chats about this here, but the idea is, it seems like in the kube-controller-manager there's a log message saying that it's a requirement to set the cluster ID, and if you don't set it, then you have to pass this allow-untagged-cloud flag. So essentially we need to figure out: do we want to keep it as a requirement, or do we want to get rid of it?
A: It would just be nice to clean up all these flags we have that are effectively doing nothing for any other provider. Yeah.
A: Looks good to me. Okay, so at least for the KEP and alpha, I don't think that'd be unreasonable. Okay, all right. So, this issue: investigate throttling in the node controller. This was brought up by the Alibaba folks. They apparently have customers with thousand-node clusters, and the node controller is getting throttled because it's doing the node address update every resync period. So they asked that we try to look into this for 1.15, and I'm gonna try to actually get the Alibaba folks to do this one.
E: OpenStack clouds have experienced this too, where the cloud-controller-manager is hitting the host cloud API a lot, and they wind up rate limiting them out. I think we might have some upstream patches that try to limit some of this, or just set it to be tunable, but I don't know if it's, you know, a general solution that we have in the controllers. Okay.
A: So the open issue is to answer exactly that question: for large-scale clusters, are we seeing this in other providers, and if so, what would a general-purpose solution for this look like? Or what can we at least do, like amend the interface or something, to make it less of a hassle to account for this?
E: And that's, I mean, I would consider it to be a cloud provider issue if there was a common solution that everybody would want to implement. But as it stands right now, if Alibaba and OpenStack are the only clouds that are complaining about this and we have solutions in place, then, you know, we should just go with the solutions that best meet our needs. I would just like to see it be a general solution.
F: I agree with that, but I mean, the other thing I would add would be: if this were something that were happening in the KCM, where the KCM needed to be throttled, I could see where it's like, okay, that has to be a Kubernetes solution. The node controller is a little more vague, but I'm...
A: Okay, yeah, I'll leave this for now then. Yeah, this is the same thing: they had the same issue with routes. I know Azure had it at one point, but then they implemented caching, so it's not really a problem anymore. So yeah, I can follow up with them, unless other providers here do see issues with routes, with the route controller being a bit too aggressive. Okay, I'll take that as a no. Okay: finalizer protection for service load balancers.
A: Okay, so there are two PRs open that have been open for like two years now, or something like that. This is essentially adding a finalizer to the services that create load balancers. This is to make sure that if you accidentally delete a load balancer... sorry, let me backtrack. This is: if you delete the service, but the operation to delete the load balancer fails, then you have a dangling load balancer that's not going to be cleaned up.
A
So
the
reason
why
this
is
a
little
bit
tricky
is
because
for
backwards
compatibility
you
have
to
add
support
and
one
release
to
actually
check
the
finalizer
and
then
in
the
future
release.
Then
you
can
actually
add
the
finalizer,
and
this
is
important
because
if
you
were
to
add
the
finalizer
from
the
start
and
then
you
rolled
back,
then
you
have
a
cluster
where
there's
a
finalizer,
but
then
there's
nothing.
Checking
the
finalizer
anymore
mmhmm.
E: I mean, I've seen the same issue with OpenStack load balancers also, and similar to Walter, our issues have gone deeper, because it exposed some problems with how the load balancers are created in OpenStack and the barrier between user and administrator privilege levels. But I don't, you know, I don't know if a solution necessarily needs to happen in here.
A: It's on the radar. I think Bowei is looking at those PRs as well, but yeah, I'll definitely go to SIG Network as well and make sure that it sits on their radar, yep. And this is gonna span multiple releases anyway, so I think at least for 1.15 we want to get in the first PR that does the finalizer check, and then for 1.16 we can try to actually add finalizer support.
A: Okay: nodes are being registered multiple times in the cloud controller manager. So someone did open a very descriptive issue on the cloud controller manager registering nodes multiple times; props for a good issue, yeah. So yeah, I'd say read this issue. I think what's happening is that, because the CCM checks the node taint to see if it should register the node, there are multiple events going through the controller before the taint is removed. So the proposed solution, in this issue at least, is to keep track of which nodes are currently being registered via a map, and then, if a node is currently being registered, just ignore it in future events.
A: Well, yeah, in that case the map would just be cleared, and then, worst-case scenario, you would re-run the node registration code paths. But that's what we have today anyway, so worst case we just fall back to whatever we have now.
A: Okay, so the next three are: removing cloud provider CloudStack and Photon. So Dims already has a PR open for that, and they've been deprecated for about a year, and it seems like people in the community are okay to remove them from the Kubernetes core. So I say we just put these all in priority/critical-urgent. Who's good? All right, I'll do this. I can do this later; we're running out of time.
A: Okay, yeah, I'll follow up with James to make sure there are the proper warnings and announcements, to make sure people are aware. Okay, cool, I'm gonna stop sharing. All right, cool, I think that's it for the agenda. You need to stop the recording, right? So there's one more thing: if anyone wants to submit a session for the contributor summit for KubeCon, let me know, and we can try to find something for that.
C: One thing to mention before we go: so we're planning, on SIG PM, that is, we're planning on running from SIG to SIG and talking about KEPs a little bit, and your experience with KEPs thus far. Y'all have been filing quite a bit of them, and doing them consistently and well, so we're interested in getting feedback. That'll be coming soon, probably around the start of 1.15, so stay tuned. Awesome.
F: Thanks. Also, Andrew, it was either done in a different SIG or you guys did it before I got here, but I think for us, one of the important KEPs is the network proxy KEP, because we are not going to get cloud provider out of the kube-apiserver until that lands, yeah.
A: I need to double-check that, but I will do that and then track them in the backlog. So I'm planning on doing another round of going through the backlog in a future meeting, in case anyone else has opinions on those things, and we can get their opinion on that. Okay, cool. All right, we're at time, so thanks everyone for coming.