From YouTube: Kubernetes SIG Cloud Provider 2019-07-10
A: All right, hi everyone. Today is July 10th, 2019, and this is the bi-weekly SIG Cloud Provider meeting. One announcement before we start on the agenda: the proposal to fold all the provider-specific SIGs (AWS, Azure, GCP, IBM, OpenStack, and VMware) is going to be happening July 12th, so that's this Friday. The plan was to allow those SIGs to fold into subprojects under SIG Cloud Provider by July 12th, and for any of the remaining SIGs, by this Friday we're just going to update the community repo to consolidate all of those into subprojects. So this is just kind of the last FYI before we do that on Friday. If you have any questions on this, feel free to ping me on Slack and I can fill you in on any context you might be missing. All right.
A: So, like last time, what I'm going to do is go over all the existing items on the backlog. We're going to set a milestone, either 1.16 or Next: 1.16 meaning we're going to try to tackle it for this release cycle, and Next meaning we're going to punt it to some future release. And then we have four priorities: P0, P1, P2, and P3. Generally, P0 and P1 are items we should really, really get done for this release; P2 and P3 are nice to get done, but not absolutely critical.
A: Okay, so let's go oldest to newest. The first item is GA cloud provider node labels. I have a PR open for this. This is the proposal to migrate the beta labels for instance type, zone, and region on nodes to the GA labels, which are the topology.kubernetes.io zone and region labels and the GA instance type label. So I think this is P1, and we should probably start work on it for 1.16, if folks agree.
A: There was some discussion, and I had a talk with Michelle from SIG Storage, because PVs also use those beta zone and region labels. The initial plan was to migrate them both in the same release, but we might actually do it separately, because the PVs are going to be migrated to use the node affinity field instead of labels. So we're going to go ahead and update the nodes, and then worry about the PVs separately.
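The migration being discussed is, mechanically, a rename of well-known node label keys. A minimal sketch of the mapping, using the upstream label keys (the helper function itself is illustrative, not the actual migration code):

```python
# Well-known beta node labels and their GA replacements, as discussed
# in the beta-to-GA label migration proposal.
BETA_TO_GA_LABELS = {
    "failure-domain.beta.kubernetes.io/zone": "topology.kubernetes.io/zone",
    "failure-domain.beta.kubernetes.io/region": "topology.kubernetes.io/region",
    "beta.kubernetes.io/instance-type": "node.kubernetes.io/instance-type",
}

def migrate_node_labels(labels):
    """Return a copy of a node's labels with the GA keys added alongside
    any beta keys, preserving the original values for compatibility."""
    out = dict(labels)
    for beta, ga in BETA_TO_GA_LABELS.items():
        if beta in labels and ga not in labels:
            out[ga] = labels[beta]
    return out
```

During a migration window both key families would typically be present, which is why the sketch adds the GA keys rather than replacing the beta ones outright.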
A: Okay, cool. The next one is investigating alternatives for out-of-tree PV labeling. Last release we said this was P1; I think P2 is actually better for this one because, like I just mentioned, PVs are not going to be using labels for node affinity in the future, so I think there's less of an incentive for providing an out-of-tree solution for PV labeling. The initial plan was to have an out-of-tree admission controller that would do the same thing as the PersistentVolumeLabel admission controller.
A: Better cloud LB names. So right now, all the load balancers created by the cloud provider use a default name derived from the service UID. There were some issues upstream on kubernetes/kubernetes around providing some mechanism to add more descriptive names. It's a complex problem, because we have to account for the existing LBs, and some providers allow renaming while some providers don't, so it's a bit of a thorny problem.
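For reference on what the default service-UID naming looks like: several in-tree providers derive the cloud LB name from the Service UID, roughly as below. The exact prefix and length limits vary by provider, so treat this as an illustration of the pattern, not any provider's exact rule:

```python
def default_lb_name(service_uid, max_len=32):
    """Derive a cloud load balancer name from a Kubernetes Service UID,
    following the common in-tree pattern: prefix with a letter (many
    cloud resource names must start with one), strip the dashes from
    the UID, and truncate to the provider's length limit."""
    return ("a" + service_uid.replace("-", ""))[:max_len]
```

Names like these are unique and stable, but tell an operator nothing about which Service they belong to, which is what the issue above is asking to improve.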
A: OK, silence is acceptance. Next: the kube-controller-manager to cloud-controller-manager migration KEP. This was the KEP that the extraction and migration subproject was working on, which was to have a way to do coordinated leader election between the KCM and the CCM, so we have a better story for how to migrate an HA cluster to the CCM. In 1.15 this slipped because we missed the KEP deadline, so I think P0 for 1.16 makes sense, and we'll try to get API Machinery to approve this KEP as early as possible.
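The coordinated leader election idea is roughly that the KCM and CCM compete for a lease guarding a migrated controller, and only the lease holder runs it, which gives an HA cluster a safe handover. A toy in-memory sketch of lease-based election (all names here are mine, not the KEP's API; the real mechanism uses coordination.k8s.io Lease objects in the API server):

```python
import time

class Lease:
    """A toy in-memory lease with a holder identity and an expiry."""
    def __init__(self, duration=15.0):
        self.holder = None
        self.renewed_at = 0.0
        self.duration = duration

    def try_acquire(self, identity, now=None):
        """Acquire or renew the lease; succeeds if it is free, already
        ours, or the previous holder let it expire."""
        now = time.monotonic() if now is None else now
        expired = (now - self.renewed_at) > self.duration
        if self.holder in (None, identity) or expired:
            self.holder = identity
            self.renewed_at = now
            return True
        return False

# During migration, KCM and CCM would compete for the lease guarding a
# migrated controller (e.g. the route controller); only the holder runs it.
lease = Lease(duration=15.0)
assert lease.try_acquire("kcm", now=0.0)       # KCM starts as leader
assert not lease.try_acquire("ccm", now=5.0)   # CCM cannot steal a live lease
assert lease.try_acquire("ccm", now=20.0)      # after expiry, CCM takes over
```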
A: Right, so the cluster ID functionality. I think that's only used by the AWS provider right now. It's a way of doing resource management on the control plane, where you can pass the control plane a certain tag to use across all the managed resources. I think this ticket was created just to figure out if we want to deprecate cluster ID, or if we want to enforce cluster ID across all the providers. I think it's in a weird state right now, where the KCM warns.
A: This is only... so you only run into this if you have a large enough cluster, and it depends on the API throttling settings on your account. So I don't know how widespread this is, but it seems like low priority until we can figure out if it's a common enough issue. Are people on other providers running into this at all?
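The throttling concern here is about controllers exhausting a provider's API quota. What "quota-sensitive" typically means in practice is putting a client-side rate limiter in front of cloud API calls; a generic token bucket sketch (not Kubernetes code, just an illustration of the technique):

```python
class TokenBucket:
    """Simple token bucket: allow at most `rate` calls per second with
    bursts up to `burst`, deferring work instead of tripping the cloud
    provider's server-side throttling."""
    def __init__(self, rate, burst):
        self.rate = float(rate)
        self.burst = float(burst)
        self.tokens = float(burst)  # start full
        self.last = 0.0

    def allow(self, now):
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A controller would call `allow()` before each cloud API request and requeue the work item when it returns False.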
A: Finalizer protection for service load balancers. MrHohn from SIG Network has picked this up, so this is mostly just a tracking issue at this point. This adds finalizer protection for load balancers, so that if the service controller crashes during a delete operation, we can still garbage collect the load balancer when the service controller comes back up. This was alpha in 1.15, and it's going to be beta once it's done, so I think we can just leave this as P1 for 1.16.
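Mechanically, finalizer protection works by placing a marker in the Service's `metadata.finalizers` before provisioning the load balancer; the API server will not fully delete the object until the controller removes the marker after cloud-side cleanup. A simplified sketch (the finalizer key matches the upstream one; the helper functions are mine):

```python
LB_FINALIZER = "service.kubernetes.io/load-balancer-cleanup"

def add_finalizer(service):
    """Mark the Service before the cloud LB is provisioned."""
    fins = service.setdefault("metadata", {}).setdefault("finalizers", [])
    if LB_FINALIZER not in fins:
        fins.append(LB_FINALIZER)

def remove_finalizer(service):
    """Remove the marker once cloud-side cleanup is confirmed."""
    fins = service.get("metadata", {}).get("finalizers", [])
    if LB_FINALIZER in fins:
        fins.remove(LB_FINALIZER)

def can_be_deleted(service):
    # The API server only removes the object once the finalizer list is
    # empty, which is what lets a restarted service controller finish
    # cleanup after a crash mid-delete.
    return not service.get("metadata", {}).get("finalizers", [])
```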
A: Right, so this is to be able to configure... so this is to change the cloud provider interface to handle multiple routing tables, like adding nodes to a given routing table, perhaps via an annotation or something; I don't know what the design would be. But do we feel like this is a common enough use case for the other cloud providers to look into? Seems like it might need more discussion.
A: Okay, decoupling cloud providers from the Kubernetes e2e testing framework. The testing-commons subproject under SIG Testing is doing some refactoring work to move the e2e testing framework to staging, and a part of that is going to try to decouple the cloud provider framework inside the e2e testing framework out of there as well. We're going to need that when we remove the in-tree cloud providers; otherwise you still have the vendored dependencies, even though we removed the cloud providers.
A: So this came up in 1.15, because there were some concerns around how we don't document API requirements for how we change the cloud config file that the providers use to toggle various integrations and features. I think the action item here was to just document it somewhere, or add a comment in the cloud config struct for each provider just saying what the API guarantees are, so that you don't have people breaking those guarantees by accident.
A: Okay, I'm going to leave this as P2 and Next. I don't think there's enough... so it seems like Alibaba wants it, and I think Timo sees some valid use cases for it too. So if we can get enough folks to chime in and outline what those use cases are, then we'll bump this into the 1.16 milestone. I'll reach out to the Alibaba shadow right now.
A: Next: cmd/cloud-controller-manager should be easier to consume. I think this came up because the out-of-tree providers are still vendoring k8s.io/kubernetes, and this is mainly because they need to import cmd/cloud-controller-manager. So I feel like we need to start somewhere: either move the cloud controller manager to staging, or just migrate it to an external repo, so that we don't require imports of k8s.io/kubernetes.
G: And in fact, I think we should really think it through, because I agree it needs to import the controllers, but the set of built-in controllers it needs to import probably varies by cloud provider. So we probably want to make sure that we can import the framework in such a way that the set of controllers is easily extended. And that may just be one example; we may need to worry about things like which flags need to be passed in, because some cloud provider may need a specific flag for the CCM.
A: Right, so I remember: I had recommended that we should prioritize making the controllers more API-quota-sensitive before we dig into designing something like this. So how about we put this as Next, and then we should go back to the other issues around API throttling and bump the priority on those.
A: Yeah, I agree. I think it's good, though. The reason why I tell people to open issues is that we then get more data points; I'd rather see multiple duplicate issues from more providers, so we have more feedback on what those issues are. And it seems like API throttling is coming up quite often, so I'm wondering if that needs to be high priority for 1.16. Craig, I think I saw you put your hand up.
G: That having been said, I think keeping a bunch of open issues is perfectly reasonable, but when we get enough of them, to Jago's point, it suggests that it's time to start gathering them, coming up with designs, and discussing alternatives. So maybe the fact that we have this many issues means it is time to, you know, begin discussing how they all hook together, what the story is, and what our design options are.
H: Going once, going twice... I can maybe take a look at this; I'm just not familiar with how deep you need to be into the test infrastructure. That's something that I haven't really touched yet. So if somebody would be around to basically give me a hand and give me some guidance, then I don't mind.
A: Cool, thanks Timo. Next: removing unnecessary flags in the cloud-controller-manager. Timo helped us figure out all the flags. Some context: in 1.13 there was a bunch of refactoring work to consolidate some of the code shared between the KCM and the CCM, and that ended up adding a bunch of flags from the KCM to the CCM that aren't necessarily needed. So Timo did some extra work to figure out what those flags are, and we should go back and remove the ones that aren't actually needed in the CCM. Timo?
H: I can totally do that, especially since you did a first pass and basically came to the conclusion that most flags should still be needed. I think in my last comment I asked some more clarifying questions, mostly, I guess, for me to gain a better understanding of why some of these flags are needed. But yeah, as soon as we have agreement or confirmation on what's really needed and what can be kicked out, I'm happy to at least try to move this forward.
A: Cloud controller manager should be able to ignore nodes. So this came up from one of the engineers working on Virtual Kubelet, where they want to set some sort of annotation so that the CCM just ignores a node, in this case the node being the Virtual Kubelet. Do folks have opinions on this one? It seems like a pretty nice use case, so I was thinking of just putting the milestone on Next until we have more user feedback on this one.
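One shape the "ignore this node" mechanism could take is a simple annotation filter in front of the CCM's node processing. The annotation key below is purely hypothetical, sketched only to show the idea:

```python
# Hypothetical opt-out annotation; the actual key would be decided in the
# design discussion referenced above.
IGNORE_ANNOTATION = "cloud.example.com/ccm-ignore"

def nodes_for_ccm(nodes):
    """Filter out nodes (e.g. Virtual Kubelet's virtual nodes) that have
    opted out of cloud-controller-manager processing."""
    return [
        n for n in nodes
        if n.get("metadata", {}).get("annotations", {}).get(IGNORE_ANNOTATION) != "true"
    ]
```

Every CCM control loop that lists nodes (node lifecycle, routes, load balancer backends) would run its node set through a filter like this first.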
A: Yeah, it's a big one. I feel like we've had conversations in past KubeCons around how you build and push the CCM image per provider. Now that we're getting more providers actually developing the CCM and running it in production environments, I feel like we need to start having design discussions around standardizing on some sort of process for how to do builds. I know, Walter, you've been looking into this a bit. What do you think?
A: Okay, I remember this one. I think it's actually safe to close this one: it's a fixed issue, and it was reopened because someone had a misconfigured cluster, so I'll follow up and close it. Next: allow discovering node changes for load balancer targets more frequently. Timo, you want to give us a quick update on this one?
H: Sure. So that one's pretty new; I just filed it like 20 minutes before the meeting, and I only discovered this, or gained some insights on it, today, so I'm very happy for any insights or feedback that you might have on this one. The story behind this is that I looked at how often load balancers are updated with regard to the nodes that they can send traffic to, and based on what I discovered, there's basically a period, it's called nodeSyncPeriod, and this is a constant fixed to 100 seconds.
H: So to my understanding, that means that even though nodes could be added or removed at any time, it can still take another delay of up to 100 seconds for that update to be propagated to the cloud load balancers. And the question that I wanted to raise is whether this is something that we could consider making configurable one way or another, or at least somehow consider options to reduce the latency here. That's all assuming that my understanding of what the current situation is in this regard is actually correct.
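To make the 100-second figure concrete: with a fixed resync period and no node watch, a node change waits for the next tick before load balancers see it. A small sketch of the worst-case propagation delay (the constant matches the nodeSyncPeriod value mentioned above; the helper function is illustrative, not the controller's code):

```python
NODE_SYNC_PERIOD = 100  # seconds; the fixed constant described above

def next_sync_delay(event_time, last_sync_time, period=NODE_SYNC_PERIOD):
    """Seconds between a node add/remove and the next periodic sync that
    would push the change to cloud load balancers."""
    elapsed = event_time - last_sync_time
    return period - (elapsed % period)
```

So a node removed right after a sync waits nearly the full period, which is the latency the issue proposes to make configurable or to replace with a watch.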
H: Right, so I'm not sure if you're referring to this one: at line 137 there are lots of add and update and delete functions, but those aren't used for this particular piece. Those are used to ensure that load balancers are created and that they are deleted, but I don't think they're being used to update the node list. That's what a different routine in the Run method is for.
A: We should investigate possible solutions for this: either make the node resync period configurable somehow, or add a node watch in the service controller. The service controller is already doing a watch for other things, so maybe doing the node watch there wouldn't be so bad. I think the tricky part is actually figuring out whether a node event requires updates to a load balancer resource, because I feel like most of the time a node update event wouldn't. So I think that's the hard part.
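The "does this node event matter" check could look something like the following: project each node down to the fields a load balancer backend set depends on, and skip the cloud API call when that view is unchanged. The field choices here are illustrative guesses, not the service controller's actual logic:

```python
def lb_relevant_view(node):
    """Project a node down to the fields a load balancer backend set
    plausibly depends on: identity, readiness, and addresses."""
    return (
        node.get("metadata", {}).get("name"),
        node.get("status", {}).get("ready"),
        tuple(node.get("status", {}).get("addresses", [])),
    )

def node_event_requires_lb_sync(old_node, new_node):
    # Most node updates (heartbeats, image lists, unrelated conditions)
    # leave this view unchanged, so no cloud API call is needed.
    return lb_relevant_view(old_node) != lb_relevant_view(new_node)
```

A watch-driven service controller would run this predicate in its node update handler and only enqueue a load balancer resync when it returns True, addressing the quota concern raised earlier in the meeting.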
I: The metrics in the legacy cloud providers are currently kind of all over the place. I don't know if there is a plan for standardizing any of this moving forward, but given that the cloud provider extraction is moving forward, and that seems to be the general intent of the SIG, I thought it would be prudent to come here, raise the issue, and start a discussion, or have people start thinking about this, if this is something that might be in scope for this project.
E: Yeah, I think it might be useful to step back and get some context around what the emerging conversation on metrics in the Kubernetes project as a whole is. Recently there was a discussion in a KEP around how the metrics being emitted are actually part of the API surface area: we can't just change them without proper communication and deprecation policies and related process. So I think it would be helpful to set the context for what SIG Cloud Provider needs to pay attention to.
I: Each of the cloud providers currently does seem to have their own set of metrics, and they currently seem to presume Prometheus. In fact, the cloud provider repo that you were doing the 1.16 work on also seems to use klog, and I don't know if this was an intentional decision, but I know that there is also some discussion about moving away from klog. If that's the case, then, I mean, if you want to use klog, that should be an explicit choice, right?
G: Oh, absolutely. On the point of not breaking the metrics API, one interesting case would actually be this KCM-to-CCM migration. I do not have a concrete example, but when we move controllers from the KCM to the CCM, I imagine that there are going to be metrics that disappear from the KCM and will then appear in the CCM, and we would love guidance on how to make that safe.
A: Yeah, being completely transparent, in terms of instrumentation, I don't think it's been something that's been on my radar. I think instrumentation was just added pretty ad hoc a long time ago, and it's just kind of there. So yeah, same boat as Walter, I feel, instrumentation-wise.
G: That cuts across some of our extraction work, and other things are cross-cutting too. So I would actually suggest, since there's been an offer to drive sort of the CCM build work, that it would be great if we could get someone from SIG Instrumentation to show up to at least one of those meetings, so that we can do some planning, some boots-on-the-ground planning.
J: Do we have any news on the potential sessions for the subprojects of cloud provider? I know that Dan Kohn replied on the original thread on GitHub, but I don't know if there was any back-channeling or side chats about this. That's the first thing. The second thing is: are we proposing anything for SIG Cloud Provider for KubeCon North America, I guess?
A: On the topic of the per-provider talks: yeah, the only back-channeling was with some folks in steering, and it's on most people's radar who is going to be coordinating those SIG sessions, so I think we're good. But once we do the updates to the SIGs, I'm going to loop back with Dan Kohn and make sure that everything's in line, so that all the subprojects have their sessions for KubeCon.
G: Yeah, on that one, for SIG Cloud Provider: it's not cloud provider specific, but I know there's a deep-dive topic proposal having to do with all of the gotchas for folks who want to set up a new cloud provider and get a provider-specific build and deployment going, and if that CFP doesn't actually go through, I would suggest that it would be a great deep dive for our SIG.
K: We work mostly on SIG Scheduling, but what we want to eventually have is pod spreading across different topologies. We already have two topologies, zone and region, but we also want to add the instance host. So we are pursuing this KEP that is linked in the doc, and basically, at this point, we only want agreement on two things.
K: Basically, the idea... I know the discussion was a little bit unclear on that part, so I'm just hoping that the KEP or the pull request is the best place to get the discussion going. Okay.