From YouTube: SIG Cloud Provider 2021-12-08
Description
Election for new SIG Cloud Provider co-chair.
Move to match cloud provider versioning with k/k versioning.
Discussion on zone label usage.
Support for mixed protocols (IPv4 and IPv6).
A: Welcome to the Wednesday, December 8, 2021 SIG Cloud Provider meeting. As always, this meeting is part of the Kubernetes project in the CNCF, and we abide by their general rules of behavior. So please be considerate, inclusive, and polite to all your fellow contributors, and with that I think we can begin the meeting.
A: All right, so please go ahead and add yourselves under the attendees.
A: I don't think we have anyone from Alibaba or Baidu here, so: subproject updates. Amazon, do we have anything?
C: We have merged a few changes, nothing specific to call out. We did get started on 1.23, so support is merged there, but we haven't made a release yet.

A: Okay.
D: Yeah, I did chat with folks; we talked a few meetings ago about making sure that the cloud-provider-azure releases were lined up numerically with the Kubernetes releases, and they are making that adjustment as of their next release, so that's follow-up based on what was discussed on this call. No other major changes, just keep on keeping on.
A: Wonderful, thank you so much for catching us up, Matt, Bridget.
A: I think I may be the only Googler on this call, so there are two things I'm aware of worth mentioning. One is: we know that for cloud-provider-gcp on the main k/k repo, if you turn on the two cloud-provider-specific feature gates for turning off the cloud provider, the cluster doesn't come up; that's somewhat deliberate.
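The transcript doesn't name the gates, but these are presumably the two alpha gates added in 1.22, DisableCloudProviders and DisableKubeletCloudCredentialProviders. A minimal sketch of enabling them on the kubelet, assuming those are the gates meant; the same gate names would also be passed to kube-apiserver and kube-controller-manager via --feature-gates:

```yaml
# KubeletConfiguration sketch; gate names are an assumption based on the 1.22 alphas.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  DisableCloudProviders: true                  # stop honoring in-tree --cloud-provider logic
  DisableKubeletCloudCredentialProviders: true # disable in-tree image credential lookup
```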
A: We are making some changes to the kube-up script so that we can actually properly run the alpha feature gates test, and the plan is to file a whole bunch of bugs, one for each of the tests that fail in that scenario. That is basically meant to drive us towards determining what we're going to do about each of those tests.
A: Quite a few of those tests in k/k only run against cloud-provider-gcp, so I think it is worth having that discussion.
A: The other piece of interest worth noting is that quite a few folks have been asking us about having official releases of the cloud-provider-gcp CCM, so we're starting to look into putting in the infrastructure to generate the builds of the cloud-provider-gcp CCM. I imagine if we're being asked to do that, other providers are also going to be; I think it's things like kind that are really driving it. I don't think we have anything from Huawei, and no updates from IBM.
E: I'm here; I'll paste it into the notes, but a release came out in the last 24 hours. I didn't personally work on it, but I can cut and paste out of the release notes to show what happened there.
A: The primary thing I'm aware of from extraction and migration is that some recent docker buildx changes broke the automated build for the apiserver-network-proxy. Jeffrey Yang is in the process of trying to get that fixed.
A: It's a little unfortunate on the timing, because there's a new version we'd like to get shipped, and we made, or I made, a small error on the Go versions: I set the apiserver-network-proxy to the latest k/k dependencies, and that's caused a few problems for the oldest supported k/k release. So we probably need to set the dependencies on apiserver-network-proxy to the oldest supported k/k release rather than the newest.
A: It's a little unfortunate. We're probably going to need to discuss whether we want to branch the konnectivity client specifically for releases, so that we can actually get the latest where we want it.
A: So I think that's it for extraction and migration; now on to the agenda. As far as I know, we only have one, excuse me, as far as I know, we only have one nomination for co-chair, and that is Nick Turner. Do we have any other nominations for an additional co-chair?
A: All right then, this is probably a relatively easy vote. I don't know: would anyone prefer that we did this vote in private?
F: Should we... should we ask Nick if he accepts the nomination first?

A: Oh, I suppose we should, yes. Hey Nick, how do you feel about the nomination?
C: Oh, thanks Andrew. Yeah, I accept the nomination. I have been coming to SIG Cloud Provider for a while, and I'd be happy to step up my commitment there and help out Walter with some of the administrative work.
A: I would very much appreciate the help, thank you. All right, so on to the question of anonymous versus non-anonymous voting: would anyone prefer that we did anonymous voting? If you don't want to say out loud that you'd prefer it, feel free to send me a ping on Slack.
A: I'm going to give everyone a minute to think about that, and then, if I don't hear anything, I will go ahead and call for a vote. But also, does anyone have anyone else they would like to nominate?
A: I completely agree. All right, I haven't heard anything about anonymity, so accepting that, I'm just going to do a quick roll call. elmiko? Yeah, big plus one for me. Bridget? Plus one. Andrew? Yeah, plus one. Richard? Fun.
A: All right, and I'll reach out offline; there's a bunch of places we need to put your name now that you are one of the co-chairs. Awesome.
G: So we had an engineer who was looking into some stuff around the well-known label for zone usage, and they were asking me some questions about the community's usage of that label, and whether there are any sort of standards around applying it, like in a blanket way or anything like that. I'm just curious if anyone here knows the history behind that label, or if there is any sort of notion of "all CCMs should be applying this" or "every provider should be using this", or is it more just...
F: So the topology labels are first-class Kubernetes labels, but there's no guarantee for any cluster that it sets the label on the nodes. But we do actually... so Tim Hockin put together a KEP on the expected semantics of the labels and the way the topology hierarchy is supposed to work. But aside from that, it's not strictly required in any cluster. Let me find that KEP for you. Yeah.
A: Just so you... you have the well-known zone label. Just to make sure I'm on the right page: is this the same thing as the failure zone?
G: I think it's the same. I'll put it in here: topology.kubernetes.io/zone is the label. I don't know if that's shared as the failure domain as well; it gets a little murky when reading through the docs about some of this stuff.
F: It is. Failure domain was what it was called initially, and it was prefixed with beta. Maybe this is my fault, because I did the PR for this, but we really wanted to rename it to signal that it's GA. It's hard to rename labels, though, because you have people with node selectors using those labels. So we just backfill it: we have both labels now, and we're slowly phasing out the failure-domain beta labels, but they're supposed to behave in the same way.
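To make that label pairing concrete, here is a sketch of what a node labeled by a CCM typically carries; the node name and zone values are illustrative:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: example-node   # hypothetical node
  labels:
    # GA topology labels (set by the cloud provider / CCM when available)
    topology.kubernetes.io/region: us-central1
    topology.kubernetes.io/zone: us-central1-a
    # Deprecated beta predecessors, backfilled so existing nodeSelectors keep working
    failure-domain.beta.kubernetes.io/region: us-central1
    failure-domain.beta.kubernetes.io/zone: us-central1-a
```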
A: Yeah, and just as a quick background for folks: a lot of the cloud providers have what is known in Google as the zone and region breakdown, where multiple zones make up a region. The idea here is to allow for certain particular behavior. So, for instance, if you have a service that you want to run in an HA environment, you want to make sure that you're running one instance of your service in every failure zone.
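A sketch of that per-zone spreading, using the zone label as a scheduling topology key; the Deployment name and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ha-service   # hypothetical workload
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ha-service
  template:
    metadata:
      labels:
        app: ha-service
    spec:
      # Keep replicas evenly spread across zones, one per failure zone when possible
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: ha-service
      containers:
        - name: app
          image: k8s.gcr.io/pause:3.5   # placeholder image
```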
A: Another use case, which hasn't been built but is on my to-do list (or for anyone I can talk into doing the work), is in the apiserver-network-proxy case, where you're using these tunnels to send traffic from the API server to one of the nodes.
A: So one of the things we would like to do is what's called a zone-aware backend manager, so that it will prioritize sending the traffic through a tunnel to an agent running in the same zone as the destination. That should allow you to not have to do any zone jumping.
A: So there are several sorts of usages where having these zones can make your life better. Having said that, exactly as Andrew intimated earlier, this is a best-effort system, and providers are not required to set these zones.
G: Yeah, I mean, that's great, and honestly, Walter, the use case that you just mentioned is... it's not the same area, but topically it's very similar to what this colleague was asking me about, because they were curious whether they could depend on this zone label being set, so that they could build automation around being able to do similar types of things, where you want to have zone awareness about the nodes that are in your cluster. So yeah, it's really illuminating.
G: To hear about, okay, this is not required, but it sounds like there is certainly interest from the community in having this be... like, it would be useful if it were more standardized, you know, or if it could be. Well, and I...
A: What is much less clear to me are the other players, and how much... especially when you start dealing with systems like bare metal, it's a lot harder to set up these sorts of zones on bare metal, unless someone really has taken the foresight to set it up.
F: Yeah, yeah. You'll see it in vSphere and bare metal installs; you'll see the zone labels being used, but it becomes a bit more arbitrary, where a zone might be a rack and a region might be a data center, or something like that. But I would say it's pretty standard; I would assume it's there all the time, and the special case would be that it's not there, if that helps at all.
E: Yeah, I can say through the VMware user group that I don't think it is guaranteed there, but it's very commonly used. Nothing forces somebody to put it in; it's up to the user to decide what their failure domains might be. You know, you could elect for it to be a single blade server, a rack, an aisle, and a lot depends on what you've got. I mean, some people don't have multiple aisles, and so the actual deployment process varies.
E: In terms of making a move to standardize it, that sort of implies that, unless, you know, vSphere is the standard, it would be a change forcing legacy users to modify that, and I think that would stir up a hornet's nest. So I'm not sure at this point whether it isn't perhaps problematic to come up with standardization.
E: And even within vSphere, the experience is that the usage of those labels... I don't think it is standardized, and that might be a good thing, just because it's up to users to decide what their failure domains are, based on their perceived risk and their particular situation. And when you get to on-prem, there might be quite a bit of variance there.
A: The other thing here, and I don't know if Andrew knows the answer to this, is: if you have very old nodes, even if you don't pin the cluster, I do not know if the relevant node controller goes back and appends the newer zone labels to the old nodes.
G: These are some great thoughts being tossed around here; I really appreciate it. And I totally get that there's also kind of an impedance mismatch at some level: this zone label doesn't apply to all infrastructure providers, and especially when we get into some of the more cutting-edge, kind of weirdo providers, where things are sliced not specifically into instances or something like that, then it looks much more different. And honestly, you know, it's interesting.
G: We were seeing it not being displayed, and this was just on our deployment. And then, looking at AWS, we saw it being labeled on the nodes, but when we looked deeper into the CCM, we saw that that label was really coming through from some sort of volume extension which was adding it on there, and I don't think this was necessarily related to the EBS CSI stuff.
F: It's probably worth mentioning, because you mentioned the CSI driver also has the zone: there's a topology feature from the CSI driver as well, and there's an API contract between those two, between the node label set by the cloud provider and the CSI driver, to standardize on the topology labels and agree on the zone and region metadata. So you're probably seeing the volume-based labels being set by the CSI topology feature, and then on the nodes it's the CCM.
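One place that contract surfaces is zone-restricted provisioning. Here is a sketch of a StorageClass pinned to zones via the standard label; the class name and provisioner are placeholders, and note that some CSI drivers historically used their own driver-specific topology keys instead of the standard one:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zonal-ssd                    # hypothetical class
provisioner: example.csi.vendor.io   # placeholder CSI driver name
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone   # the standard zone label the contract converges on
        values:
          - us-central1-a
          - us-central1-b
```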
A: Yeah, so it might be worth following up with either Andrew... not Andrew, either Nick or Justin about the node-level labels. Awesome. Bridget, I think you're up next.
D: Yes, thank you. So we've talked before on this call about the support for mixed protocols in Services with type LoadBalancer, which is currently sitting in alpha, and there are folks who want it to move forward. So I am gathering all the information needed to move it forward to beta, and there were a few remaining questions, in my mind at least. We got an answer from Tim Hockin as of September 3rd of this year: Google Cloud had not implemented anything for, you know, supporting it or indicating non-support properly.
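For reference, the feature under discussion is a Service mixing TCP and UDP behind one cloud load balancer; a sketch, gated at the time behind the MixedProtocolLBService feature gate, with illustrative names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: game-server-lb   # hypothetical service
spec:
  type: LoadBalancer
  selector:
    app: game-server
  ports:
    - name: api          # TCP and UDP ports on the same LoadBalancer Service,
      protocol: TCP      # which is what the MixedProtocolLBService gate allows
      port: 8080
    - name: game
      protocol: UDP
      port: 7777
```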
D: When I got answers back from other folks, it appears just about every other cloud wants to handle it correctly. Tim did indicate, when it was discussed before, that it wasn't going to be incredibly hard to at least error correctly, but I'm just highlighting that for our Google friends, because it looks like we have a general consensus: people would like to move this forward and/or are ready for it to move forward.
A: A quick note on that, if you don't mind me interrupting. ("Absolutely, please do.") Tim is nightmarishly busy. He does, okay, everything; I mean he literally does everything. So even though he will also be busy, you're probably going to get better traction if you engage with Bowei on this. ("Yeah, absolutely.")
D: I will mention... I will at-mention Bowei on the enhancements issue and just ask him: hey, let's take a look. And then I have a couple of other questions. There's an issue in here, or sorry, one of the items: hey, we should make sure kube-proxy doesn't proxy on ports that are in an error state. And I'm like, okay, that was in the original plan, and I'm wondering who, if anyone, you would recommend that I talk to to verify that.
D: Sorry, yeah. So, from the original plan, it said that kube-proxy should use the port status information, blah blah blah, so that it doesn't allow traffic on ports that couldn't be opened; so, basically, don't proxy ports that are in an error state. And I'm kind of wondering, because I'm trying to pick this up and move it forward from somebody who kind of left it: does anyone know who the right person to talk to is, to verify and check into that?
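The port status mentioned here is the per-port entry the cloud provider can publish on the Service; a sketch of what an errored port might look like, with an illustrative error value, since the reasons are provider-defined:

```yaml
# Service .status sketch; the idea is kube-proxy could skip ports carrying an error.
status:
  loadBalancer:
    ingress:
      - ip: 203.0.113.10    # example address
        ports:
          - port: 8080
            protocol: TCP   # provisioned fine: no error field
          - port: 7777
            protocol: UDP
            error: UnsupportedProtocol   # hypothetical provider-defined reason
```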
D: Wonderful. And then the only other question, and I see Kishore is on the call, which is great: you mentioned the ingress controller not working yet, because it assumed the protocol to be the same for all ports, and you wanted that added to the graduation criteria.
H: On the provider AWS side also, we are going to block creating the load balancer. So from the AWS side, we are good to go; whenever you decide, we are good on that front. We'll have further enhancements, but those will be out of band of this feature.
D: Taken care of, then. All right, the rest looks like exciting paperwork that I will do in the 1.24 cycle.
A: So I did a quick lookup. I do believe that kube-proxy is actually owned by SIG Network, right, right. Okay, Bowei apparently used to be one of the co-chairs, but is no more.
D: I did want to remind people, if you weren't paying attention: one of the very exciting things that happened in Kubernetes 1.23 is IPv4/IPv6 dual stack going to stable. So if your cloud provider wants to be super fancy, you could get with that. I know the cloud provider I work for is certainly going to get with that, and perhaps you could as well, and then all the customers would be happy.
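As a quick illustration of the API surface that went stable with dual stack in 1.23, a Service requesting both families; the name and selector are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-dual-stack              # hypothetical service
spec:
  ipFamilyPolicy: PreferDualStack   # fall back to single stack if the cluster can't do both
  ipFamilies:                       # preferred order of assigned families
    - IPv4
    - IPv6
  selector:
    app: web
  ports:
    - port: 80
      protocol: TCP
```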
D: All right. I just talked to a PM on ARO, that is, Azure Red Hat OpenShift (Azure plus OpenShift equals excitement), and I said: hey, can we get somebody to answer this person? So the conversation is ongoing; sometime in the next week either I or this person will answer it.
A: Yes, let's be sure. I'm not entirely convinced that this is an Amazon problem versus a SIG Network problem.
C: Yeah, I didn't see anything that suggests that it's us either, but either of us can take a closer look at it.
H: If you can assign me, I will take a look as well and see if there's anything we can do on this. Yeah, thank you.
H: Yes, that's correct. Okay.
A: Oh, and this one is assigned to Joe, so where are we at?
A: Back to the agenda. All right, cool, did anyone have anything else?