From YouTube: Kubernetes SIG Cloud Provider 2018-09-05
A: Alright, let's get started. The first thing on the agenda is whether we should add master nodes to the cloud LBs. There was a discussion with Clayton, Justin, and Tim on one of those issues, and I think also Jordan — no, sorry, Josh. Anyway, we were talking about whether we want to add the master kubelets as backends for the ELB.
A: Right now we look for a specific label on a node to determine whether it's a master — the label was created by kubeadm, I think — and if we see that label we essentially don't add the node to the LB pool. Personally, I think we should start adding those nodes to the LB pool, because we already have a label to specifically exclude nodes, and we also have the Local external traffic policy, so we don't send traffic to a master node if it isn't running a pod for that service.
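For illustration, a minimal Go sketch of the node-selection logic being described: skip nodes labeled as masters, and honor an explicit exclusion label. The label keys and the fact that exclusion is feature-gated are assumptions recalled from this era of the service controller, not its exact code.

```go
package main

import "fmt"

// Assumed label keys; the real service controller may use different names.
const (
	masterRoleLabel    = "node-role.kubernetes.io/master"                          // set by installers on control-plane nodes
	excludeBalancerKey = "alpha.service-controller.kubernetes.io/exclude-balancer" // alpha opt-out label (feature-gated)
)

// includeAsLBBackend mirrors the behavior discussed above: a node is skipped
// if it carries the master role label (current behavior), or if it explicitly
// opts out via the exclusion label (the newer, alpha mechanism).
func includeAsLBBackend(labels map[string]string, skipMasters bool) bool {
	if _, isMaster := labels[masterRoleLabel]; isMaster && skipMasters {
		return false
	}
	if _, excluded := labels[excludeBalancerKey]; excluded {
		return false
	}
	return true
}

func main() {
	master := map[string]string{masterRoleLabel: ""}
	worker := map[string]string{}
	fmt.Println(includeAsLBBackend(master, true)) // false: masters are skipped today
	fmt.Println(includeAsLBBackend(worker, true)) // true: regular nodes become backends
}
```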
B: GKE obviously doesn't expose the masters, so the issue doesn't really arise on GKE. I think there were some bugs around 1.4-ish or so where we started exposing the master, and then we changed it. Oh, actually — we didn't expose unschedulable nodes, and originally kube-up marked the master as unschedulable, and then we changed that to a taint and toleration. That was in 1.6. Traffic started going to the master, and people were like:
B: Why are you sending traffic to a master? You're going to DoS my master; my masters are undersized, and I don't want massive network traffic going through my master — which seems fair. So that's when we cleaned it up, or I cleaned it up, and added the label-selection logic. So there is a camp that believes it is a bug to send traffic through the master, and then I think some people added some labels that — I can't remember which way around it is, I'm worried.
B: It might be the wrong way around, where you can effectively override that and have, say, a node opt in or out of load balancer selection, and I think that is feature-flagged currently. So I am unclear — obviously, if we sufficiently communicate it, we can make the change in kops, in kubeadm, and the other major installers. I think the biggest risk is that we change the behavior for third parties that are rolling their own installations, yeah.
C: Let me throw in my opinion, which is that we have a nasty tendency in our community to allow the customer to shoot themselves in the foot if they're not deeply knowledgeable about the system. The idea that I can throw traffic at the control plane, which will then cause my system to stop behaving correctly, seems like another example of letting the customer shoot themselves in the foot. I'm not against allowing you to send traffic, but I think it should clearly not be the default behavior.
D: So, both for maintaining the current behavior — no matter how much we message it, it never seems to get through to everyone when we change behavior — and for requiring an explicit opt-in to traffic going to the master, it sounds like both of those are arguments for maybe adding the ability, but making it an explicit opt-in to send traffic. Does that make sense to others, and did I capture that right?
A: Yeah, my point is that the exclusion model was already added, and it's already been an alpha feature for, I don't know, three releases or something. So do we want to go with the exclusion model but have an exception for masters, or do we want to just kind of roll things back? I'm curious about VMware — and I don't know if anyone from AWS is here — but is this a common problem with those providers?
E: Not in our platform yet, but we're actually working on that, because we have a vSphere on AWS service which is actually using ELB, and we're working on implementing that part in the cloud provider — the provider there is vSphere, but it also has access to ELB. From my perspective, I think the ideal way forward would be an inclusion model, because it makes more sense to have to explicitly include masters, given the usual use case.
E: Most end users should really not send traffic to the master unless it's some very specific use case. Fred, you know, was pointing out that it might become problematic if people just allowed it by default — it's definitely not the default thing to do.
E: If you ask me, if I were to implement that, I would just have the master explicitly included for traffic instead of the other way around. Working with an exclusion model works, it's just kind of backwards when you think about it: you have it excluded by default, so you have to un-exclude it, which is backwards. But I guess the main thing is that this is the current status quo, and, you know, we might want to implement against that.
B: I think also, the way this came up was that someone was doing a single-node installation, and apparently there is an alternative, which is just not to label that single node as the master — because although it is a master, it is also a node, and a node being in multiple roles is something we don't currently have as a concept.
A: Yeah, I think also for OpenShift, they do multi-master — like API server aggregation or something like that — and then they put those behind LBs. So if you have an LB in front of the control plane, they want to be able to have a kind of self-hosted LB that can send traffic to the masters, which currently is not possible.
B: I don't think we need to talk about this much more; just one thing: there was possibly another workaround with external traffic policy Local, which is, if you set externalTrafficPolicy: Local and you explicitly schedule workloads that tolerate your master — which is presumably tainted — in your single-node model.
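A hedged sketch of that workaround using the k8s.io/api/core/v1 Go types: a LoadBalancer Service with externalTrafficPolicy: Local only receives traffic on nodes that run one of its pods, and the pod carries a toleration so it can land on the tainted master. The taint key shown is the conventional node-role.kubernetes.io/master NoSchedule taint, which is an assumption about the installer.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// LoadBalancer Service that only accepts external traffic on nodes
	// actually running one of its endpoints.
	svc := corev1.Service{
		Spec: corev1.ServiceSpec{
			Type:                  corev1.ServiceTypeLoadBalancer,
			ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
			Selector:              map[string]string{"app": "demo"},
			Ports:                 []corev1.ServicePort{{Port: 80}},
		},
	}

	// Toleration that lets the workload schedule onto the tainted master
	// (assumed taint key; adjust for your installer).
	podSpec := corev1.PodSpec{
		Tolerations: []corev1.Toleration{{
			Key:      "node-role.kubernetes.io/master",
			Operator: corev1.TolerationOpExists,
			Effect:   corev1.TaintEffectNoSchedule,
		}},
	}

	fmt.Println(svc.Spec.ExternalTrafficPolicy, len(podSpec.Tolerations))
}
```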
A: So yeah, I want to spend some time talking about whether we think 1.13 is realistic. I don't think it is, only because I think 1.13 is realistic if we're moving providers from pkg/cloudprovider/providers into a staging directory in kubernetes/kubernetes; but to actually get it out of kubernetes/kubernetes in 1.13 is pretty aggressive, because that means you have to build your binaries out of tree, and I'm sure that's a big challenge for a lot of providers.
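As a rough illustration of what an out-of-tree build involves, here is a minimal sketch of registering a provider against today's k8s.io/cloud-provider library (which did not yet exist in this form at the time of the meeting); the provider name and the do-nothing implementation are made up for illustration.

```go
package main

import (
	"fmt"
	"io"

	cloudprovider "k8s.io/cloud-provider"
)

const providerName = "example" // hypothetical provider name

// exampleCloud is a skeleton provider; every capability reports "unsupported".
type exampleCloud struct{}

func (c *exampleCloud) Initialize(cb cloudprovider.ControllerClientBuilder, stop <-chan struct{}) {}
func (c *exampleCloud) LoadBalancer() (cloudprovider.LoadBalancer, bool) { return nil, false }
func (c *exampleCloud) Instances() (cloudprovider.Instances, bool)       { return nil, false }
func (c *exampleCloud) InstancesV2() (cloudprovider.InstancesV2, bool)   { return nil, false }
func (c *exampleCloud) Zones() (cloudprovider.Zones, bool)               { return nil, false }
func (c *exampleCloud) Clusters() (cloudprovider.Clusters, bool)         { return nil, false }
func (c *exampleCloud) Routes() (cloudprovider.Routes, bool)             { return nil, false }
func (c *exampleCloud) ProviderName() string                             { return providerName }
func (c *exampleCloud) HasClusterID() bool                               { return true }

func init() {
	// An out-of-tree binary registers its provider by name; the external
	// cloud-controller-manager then selects it with --cloud-provider=example.
	cloudprovider.RegisterCloudProvider(providerName, func(config io.Reader) (cloudprovider.Interface, error) {
		return &exampleCloud{}, nil
	})
}

func main() {
	cloud, err := cloudprovider.GetCloudProvider(providerName, nil)
	fmt.Println(cloud.ProviderName(), err)
}
```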
C: I would actually like to strengthen both of those statements. I think that getting all cloud providers into staging is a really good goal, but I think even that is probably slightly aggressive. I don't know if anyone looked at the doc I tried to share, but it showed a lot of the dependencies that have to be dealt with, including things like the Kubernetes runtime packages used for things like checking cloud provider types.
C: I mean, I don't know if you saw my list, but I think I found 20 downstream or upstream dependencies that need to be dealt with from within the Kubernetes codebase. So there is a sizeable chunk of work that needs to be done to get that extraction to happen. At the same time, it's not going to happen unless we start doing it — in fact, my first couple of PRs will start rolling out this week — but we're looking at 20 dependencies at least, across seven cloud providers.
A: Backing up a bit — yeah, so I've updated Walter's KEP to include three steps for moving the provider code. The controller code — the cloud-specific controller code — I'm treating as kind of a separate problem, because it ties directly into other in-tree things that we haven't thought about; but specifically the provider code, which is under pkg/cloudprovider/providers, I think we can do in three steps: move it to staging, then move it out but still build by vendoring it from an external repo, and then move the in-tree code out.
A: Yeah, so what I'm assuming is that there's a transitional period where the provider code may live in its own repository, but we may still vendor cloud.go and all of the main interfaces. And if we do that, then it seems more doable in the next release or two.
B: For the CCM — and that's something we could do, I think, without even waiting for the freeze to lift, right, if we had a feature branch. And I think we should also talk to SIG Testing, because I know that complicated tests, like upgrade tests, have been challenging in the past, and I think we'd likely want a better, or a different, strategy for those going forward.
C: We've bounced that idea around, and I think it's worth discussing. There are some weird things that happen with leader election, though. One of the assumptions the leader election makes is that it's running across multiple controllers, so if you lose the election — if you're a leader and you lose your lease — you kill the process you're running in. So there are some weird oddities that can happen if you're holding multiple leases, because that can cause sort of cascading lease failures and some other weird conditions.
C: Also, we can't have them share the same lease, because they're different processes, and that would exclude you from having both a KCM leader and a CCM leader. What you could do is say that, rather than having one global lease they share, you have a service lease, a route lease, a KCM lease, and a CCM lease — but I think that takes some thinking, because it's not really how the lease system was designed to be used.
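A rough sketch of that per-controller-lease idea using today's client-go leader-election helpers (the lease names, namespace, and identity are made up for illustration, and at the time of this meeting the locks were Endpoints/ConfigMap-based rather than coordination.k8s.io Leases):

```go
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLease runs fn only while this process holds the named lease, so the
// service, route, KCM, and CCM loops could each elect independently.
func runWithLease(ctx context.Context, client kubernetes.Interface, leaseName, id string, fn func(context.Context)) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: leaseName, Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: fn,
			// In real controllers this is where the process exits, which is
			// the cascading-failure concern mentioned above.
			OnStoppedLeading: func() {},
		},
	})
}

func main() {
	cfg, err := rest.InClusterConfig() // assumes this runs inside a cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	// Hypothetical per-controller leases instead of one shared global lease.
	go runWithLease(ctx, client, "cloud-service-controller", "ccm-1", func(ctx context.Context) { /* service controller loop */ })
	runWithLease(ctx, client, "cloud-route-controller", "ccm-1", func(ctx context.Context) { /* route controller loop */ })
}
```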
D: Okay, so for the purposes of this discussion, the complexity is secondary, and "when can we do these things" is the primary part of this conversation. It sounds like Andrew is eager to submit a PR that is a forcing function to fix some stuff quickly; in principle I'm happy with that. But I think Walter, and from the vSphere side — I forget who — Fabio, Steven and Fabio: if you guys can do some legwork before this group meets again.
A: And to make my intentions from earlier clear: what I want is to rush the move to staging, and then, once we're in staging, start putting more effort into the migration strategy, build pipelines, and all those other things we need to worry about. And then, once we have those figured out, we can go on to the next phase, which is to start talking about actually moving the code and so on.
C: And I would actually like to emphasize that one. As part of SIG API Machinery, we were looking into some problems in the versioned cache, and we found quite a few bugs that have been fixed in the KCM but are evidently still in the CCM, by code inspection. So I think there are two points there. One is, I think, to jegos' point —