From YouTube: SIG Cloud Provider 2022-02-16
Description
Meeting time change, Node IPAM, feature graduation, annual report and more.
A: All right, hello, everyone. This is the bi-weekly SIG Cloud Provider meeting on February 16th, 2022, and yeah, I think we're following a new format where we do the triage first.
A
So
I
will
open
that-
and
I
only
see
one
issue
is
that
right,
yeah,
so
there's
one
issue
and
probably
there's
a
bunch
that
are
assigned
that
still
need
to
be
looked
at,
which
is
fine.
I
don't
think
we
need
to
be
going
through
all
those
this
meeting,
but
yeah,
let's
quickly
go
over
this
one
issue
here.
A: All right, let's just go straight into the updates then. Okay, I see an update for AWS about the load balancer controller.
E: For the backport, yes, this is the one. Thank you for taking a look at this earlier. This was a nice gnarly bug; Andrew vetted the PR and merged it. And I'm wondering, because one of my colleagues was saying in it, let's backport this to 1.22 and 1.23, and I thought, okay, great, and then there's a comment in there saying...
E: So in the issue that the PR was referring to, there was a really thorough diagnosis of when they think it came in. But one thing also in that issue, 108112, that I thought was kind of interesting, was my colleague pointing out: hey, is it possible to switch to the out-of-tree cloud provider Azure? And I thought, oh, this is relevant to the discussions that we're having here, about, you know, basically, should we be fixing this?
E: I mean, yes, we fixed it in the legacy provider, but should we not be focusing on that regardless and just, like, switching to the out-of-tree cloud provider?
F: Bug fixes are generally fine; the other providers are still merging bug fixes. I'm really curious why it wouldn't be fixed in 1.21 but should be fixed in 1.20, though, yeah.
A: Okay, because it seems like this check was always here. There were some changes to also check the label, but that check was always there. But there was a change, and I might have made the change, actually.
A: Yeah, okay, but anyways, what I was trying to say is that the service controller used to not have a node informer, and then we added a node informer because there were complaints. So basically, if a node's status changes, we don't actually pull the node out right away; there's a 60-second loop in the service controller that pulls it out. And then there was a recent change where we check for node events, and then we reconcile the load balancer if any of the nodes flip their status. It's right here, and it's possible that this snippet of code was added in 1.21, which means that before this, this check never had to happen, because we never got notified for load balancers. But then, in addition to this check, which I think was just for volumes anyways, I might have kind of triggered those two things happening at once.
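[Editor's note: for reference, a minimal sketch of the pattern being described, assuming client-go informers; this is illustrative only, not the actual service controller code.]

```go
// Sketch: trigger a load balancer resync when a node's Ready condition
// flips, on top of a periodic 60-second resync. Illustrative only.
package watcher

import (
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

func nodeReady(node *v1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func watchNodes(client kubernetes.Interface, resync chan<- struct{}, stop <-chan struct{}) {
	// The 60-second resync period mirrors the periodic loop mentioned
	// above; the event handler is the newer, immediate path.
	factory := informers.NewSharedInformerFactory(client, 60*time.Second)
	factory.Core().V1().Nodes().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			oldNode, newNode := oldObj.(*v1.Node), newObj.(*v1.Node)
			// Only reconcile load balancers when readiness actually
			// flips, not on every status heartbeat.
			if nodeReady(oldNode) != nodeReady(newNode) {
				resync <- struct{}{}
			}
		},
	})
	factory.Start(stop)
}
```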
A: I don't think so, but let me dig up the history a bit more, and I'll send you the links, and then maybe that might shed some light on it.
A: I am working on getting the e2e tests for the kubelet for the GCP cloud provider working, so that's the thing on my plate right now. I see an update for IBM.
A: That you're getting a lot of, like, the...
G: Kind of, yeah, yeah, it looks like it's better. One thing that stood out as a little bit unique is the provider ID. I didn't realize this, but the controller handling the old v1 interface would add your cloud name to the provider ID, and you have to do that yourself now in instances v2; it's not done for you. We just hit that, right? Yeah, we did. We didn't notice it until we ran some tests. It was just a surprise. I guess we got it worked out.
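[Editor's note: to make that concrete, a minimal sketch of the v1-versus-v2 difference, assuming the k8s.io/cloud-provider interfaces; the provider name, types, and lookup helper here are illustrative, not a real implementation.]

```go
// With the legacy Instances (v1) interface, the node controller built
// "<providername>://<instance-id>" for you; with InstancesV2 the
// provider must return the fully qualified ProviderID itself.
package mycloud

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	cloudprovider "k8s.io/cloud-provider"
)

type instance struct{ ID, Type string }

// lookupInstance is a hypothetical helper that would query the cloud
// API for the VM backing this node.
func lookupInstance(ctx context.Context, name string) (*instance, error) {
	return &instance{ID: name, Type: "example-type"}, nil
}

type instancesV2 struct{}

// InstanceMetadata is one of the InstancesV2 methods (InstanceExists
// and InstanceShutdown are omitted from this sketch).
func (i *instancesV2) InstanceMetadata(ctx context.Context, node *v1.Node) (*cloudprovider.InstanceMetadata, error) {
	inst, err := lookupInstance(ctx, node.Name)
	if err != nil {
		return nil, err
	}
	return &cloudprovider.InstanceMetadata{
		// Note the explicit "<providername>://" prefix: it is no longer
		// added for you, which is the surprise described above.
		ProviderID:   fmt.Sprintf("mycloud://%s", inst.ID),
		InstanceType: inst.Type,
	}, nil
}
```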
H: So the test, the Satellite one, has been running for a while, but because I'm still struggling with our testing, I need a little bit more time to get the tests green. You see the link down there? That's the dashboard; yeah, the second one is the dashboard. For now it kind of looks like somebody's report card, but for now I need...
H: I need to wait for a PR to update the job definition, and then things should start working. It works on my machine™, so yeah, I need a little bit more time. Besides that, I really want to shout out for the other cloud providers...
H: ...other than GCE, to add their parameters to the files; PRs are welcome to the repo. It shouldn't be too hard, just adding some parameters, like setting the zones; it should be a little bit of work and be okay, because it should be able to handle the differences between cloud providers. Yeah, hopefully.
A: Okay, cool. Okay, let's jump into the agenda. I had a quick PSA in case folks missed the start of the meeting: starting March 2nd, we're going to move this meeting time to 9am Pacific, and hopefully that's going to be better for the folks in Europe that were wanting to join as well. We did a poll, and I think 9am was the top-voted one. All right, and then, Jared, you have one about the node IPAM controller.
H: This is 424, yep. You see, KCM is running its own version of the node IPAM controller. So if we have a node IPAM controller in, like, the CCM, then we will have two instances running together, and this is not anything that... this is not really something... oh yeah, this one. So this is not something that leader migration can handle, because, with the default configuration, KCM is going to run this controller regardless.
H: Yeah, so I'm asking for, like, consensus about whether we should remove the node IPAM controller, or only enable it in the KCM versus the cloud provider's out-of-tree version.
A: My personal opinion is that... so the node IPAM controller is the one that's like a dual-mode controller, right? Like you're talking about: it has a cloud mode and a regular mode.
H: Sorry, so basically we want a new controller that does everything the node IPAM controller is doing right now, at least the cloud allocator part, and then we will create that new controller in the CCM, and for the node IPAM controller in the KCM we just assume it doesn't have any cloud provider enabled.
A: And then only move the cloud allocator controller out. So, to be clear, I think that's the ideal solution, but I don't know how practical it is with the current timeline and the migration that we have. Okay, like, it might make sense to just migrate, just have two copies, and then remove the range allocator in the CCM and the cloud allocator in the KCM after the fact.
A
If
you
know
what
I
mean
okay,
I
yeah.
I
don't
actually
have
a
strong
opinion
at
the
moment,
which
is
better,
but
if,
if
if
I
could
pick
the
most
ideal
scenario
without
thinking
about
like
the
amount
of
work
and
time
involved,
I
think
it
would
be
like
excluding
the
two
controllers
splitting
the
current
node
controller
into
two
based
on
the
allocator
type
and
then
splitting
the
range
allocator
into
kcm
and
the
cloud
allocator
to
ccm.
F: How would you do your other suggested solution without them interfering with each other?
A: So maybe it'll help to just enumerate the two options here.
A: So yeah, my understanding is that, like you said, Google is the only provider using the cloud allocator type, but my understanding is also that no one is running the range allocator in the CCM. Everyone runs it in the KCM.
I: Sorry, can you repeat that? Sure. So the plan of record, and I think we're going back a couple of years at this point, had been that the node IPAM controller as it exists today would get moved to cloud-provider-gcp and would be part of the GCP CCM, and that the node IPAM controller that lives in k/k would have the cloud allocator stripped out of it, so that the KCM node IPAM controller no longer needed the cloud provider.
F: I haven't heard anything either, but that's what I understood from... sorry.
A: Okay, but if GCP is the only implementation of it, I think we can just... we should definitely move the cloud allocator type, and I wonder if maybe another option is to disallow the range allocator in the CCM.
A
Maybe
we
can
just
not
worry
too
much
about
the
range
allocator
code
existing
in
the
ccm,
but
we
can
just
add
a
check
somewhere
in
the
controller
startup
to
disallow
the
range
allocator
because
everyone's
running
it
in
the
kcm
anyways
but
yeah.
Maybe
we
can
take
this
one
offline
or
jared.
Maybe
you
can
open
an
issue
you
can
discuss
it
there.
I
think
tim
had
an
issue
about
node
ipam
controller
2..
So
maybe
we
can
take
it
there.
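[Editor's note: as a rough illustration of that startup check (today the KCM copy can already be kept off with kube-controller-manager's --controllers=*,-nodeipam, and the allocator picked with --cidr-allocator-type), here is a hypothetical sketch; the config type and gating logic mirror the discussion, not any real CCM code.]

```go
// Hypothetical gate for the CCM's node IPAM controller: run only the
// cloud allocator there, and refuse the range allocator, since everyone
// runs that one in the kube-controller-manager.
package main

import "fmt"

type ipamConfig struct {
	AllocateNodeCIDRs bool
	AllocatorType     string // "RangeAllocator" or "CloudAllocator"
}

func shouldRunInCCM(cfg ipamConfig) (bool, error) {
	if !cfg.AllocateNodeCIDRs {
		return false, nil
	}
	switch cfg.AllocatorType {
	case "CloudAllocator":
		// The cloud-backed allocator is the part that belongs in the CCM.
		return true, nil
	case "RangeAllocator":
		return false, fmt.Errorf("RangeAllocator must run in the kube-controller-manager, not the CCM")
	default:
		return false, fmt.Errorf("unknown allocator type %q", cfg.AllocatorType)
	}
}

func main() {
	ok, err := shouldRunInCCM(ipamConfig{AllocateNodeCIDRs: true, AllocatorType: "CloudAllocator"})
	fmt.Println(ok, err) // true <nil>
}
```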
A: Okay, I'm gonna move on. Lebron has a question about: do we have a standard feature graduation process?
J: Specifically for... sorry, specifically for cloud providers. For example, we support dual stack in 1.22, but it's alpha. Like, should we say it can be GA in 1.23 if no one sees any issues about it?
E: I worked with the dual stack feature gate. It specifically went to GA upstream in 1.23, so you're saying that you want to use a different... like, it was... sorry, it went GA upstream in 1.23; it was not alpha in 1.23 or 1.22, it was beta in 1.22. I'm just kind of curious: are you saying that there's something different for the cloud providers than there is for GA when it comes to the feature gates, or...?
A: I think what Lebron might be saying is that cloud provider vSphere specifically might have its own feature gate to toggle dual stack, and he's asking what the guidance is for that.
A
I
see
like
it
wouldn't
make
sense
to
ga
the
dual
stack
for
a
version
that
for
kubernetes
version
that
didn't
jade
the
dual
stack
feature
as
well
so,
like.
Maybe
you
want
to
trail
behind
what
the
kubernetes
feature
gate
is.
E
I
I
can't
speak
to
other
club
providers.
I
can
say
for
azure
that,
just
because
a
feature
is
you
know,
available
upstream
doesn't
necessarily
mean
that
it
behaves
exactly
the
expected
way
without
you
know
different
settings
and
whatnot
downstream.
So
yeah,
maybe
that's
something
to
consider
because,
like
we,
for
example,
are
supporting
dual
stack
as
a
preview.
E: A public preview feature. It became GA upstream and we were like, okay, we'll turn on the public preview in our downstream. So that seems like something that you would kind of control on the cloud provider side, but yeah.
A: Okay, awesome. Peter has a topic about a new topology proposal for AWS.
K: Essentially, the situation is that in AWS accounts you have zone names, and those are randomly mapped to zone IDs, which represent, you know, the physical zones. And so up until now, the zone topology, the zone label, has referred to the zone names, and for cross-cluster communication that spans AWS accounts...
K: ...it can be helpful to know the zone IDs of the nodes in both clusters, so that you can, you know, keep communication within a zone ID. And so I'm wondering if this is something that applies, if it's applicable to other cloud providers, and if you would want to come up with an agnostic label to use, or if this is AWS-only and we should come up with an AWS-specific label.
K: You need to know the zone ID of the nodes in both clusters, rather than the zone names, because the zone names will be inconsistent in their mapping to physical zones.
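[Editor's note: to make the mapping concrete, a minimal sketch assuming the well-known topology.kubernetes.io/zone label; the zone-ID label key below is purely illustrative of the proposal, not an agreed-upon name.]

```go
// Zone names like "us-east-1a" are shuffled per AWS account, so two
// accounts can use the same name for different physical zones; a zone
// ID like "use1-az1" is stable. The zoneIDLabel key is hypothetical.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

const (
	zoneNameLabel = "topology.kubernetes.io/zone" // per-account zone name
	zoneIDLabel   = "topology.k8s.aws/zone-id"    // illustrative, not a settled key
)

// samePhysicalZone compares nodes from two clusters (possibly in two
// accounts) by zone ID when available, since names are not comparable
// across accounts.
func samePhysicalZone(a, b *v1.Node) bool {
	idA, okA := a.Labels[zoneIDLabel]
	idB, okB := b.Labels[zoneIDLabel]
	if okA && okB {
		return idA == idB
	}
	// Falling back to zone names is only meaningful within one account.
	return a.Labels[zoneNameLabel] == b.Labels[zoneNameLabel]
}

func main() {
	a := &v1.Node{}
	a.Labels = map[string]string{zoneNameLabel: "us-east-1a", zoneIDLabel: "use1-az1"}
	b := &v1.Node{}
	b.Labels = map[string]string{zoneNameLabel: "us-east-1c", zoneIDLabel: "use1-az1"}
	fmt.Println(samePhysicalZone(a, b)) // true: same physical zone, different names
}
```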
L: I don't know if it's exactly the same, but the vSphere cloud provider is definitely capable of using labels to take failure domains into account. Here, I'll put a link to the documentation for that in the chat.
F: Yeah, I think what is more important is: did any other cloud providers randomize their zone names? I'm guessing not, but yeah, I'm curious.
A: Yeah, Walter can correct me, but I think for Google, like, our zone is just a zone, so I don't think there's an underlying ID that maps to, like, a physical thing. I don't know; like, Bridget, for Azure, is that something that... is that a thing on Azure?
E: This is the part where I admit that I pay a lot more attention to upstream than I do to the specific cloud implementation of how we do zones. I can't tell you off the top of my head.
F: Okay. I mean, I would say that this kind of sounds like something that should be handled with something AWS-specific, because in reality we should probably be using the zone ID for the topology zone label, because it makes more sense and allows people to do this kind of stuff across accounts. But since we don't, in a lot of situations, or users of Kubernetes don't, then yeah...
F: Maybe we support this kind of special AWS label that allows you to keep using the zone name for the zone, but still do cross-account zonal stuff.
A: Yeah, I think that makes sense: either keep the zone ID as an AWS-specific label, or provide a toggle to set the actual zone topology to the zone ID, if it's, like, a new cluster where you don't worry about breaking compatibility and whatnot. I think either of those.
F: Yeah, I just copied a bunch of the... well, most of the questions. I figured we could tackle some of it together. We don't have to do all of it, but it would at least probably be easier to do it that way.
L: The one thing I had a question on: for the primary meeting attendee count, I just took today, assuming, hey, it's as representative as any, and I'm not going to go back in the records and count them. But if somebody wants to, be my guest. And then it says active... what is it, primary attendee count and participant count? I gather they might be trying to separate people who just lurk and are not participating, but that's just a guess; I'm not sure what the criteria is. So I declare...
E: Sure. I think that there are meetings that also broadcast their meeting to be watched on, like, YouTube or whatever, and so those people aren't in the Zoom and therefore couldn't participate. That might be what that's talking about, but it's not relevant here.
F: Yeah, so I think for some of the first questions, like "what work did the SIG do this year that should be highlighted?", I just want to make sure we don't miss anything. So, for example, you know, we can probably list all the KEPs that we worked on, and that's called out there; that's relatively easy. But what else did we do that we don't want to miss?
F: We can talk about... we can go through the subproject updates and pull updates from there, like, you know, the things that each subproject did. So that's something we can start with. Anything else that people remember us doing, or maybe that specific subprojects did?
F: So the rough draft is due... was due yesterday, and then, I guess, this is supposed to be reviewed by, I don't know, some community group, and then merged by, what is it, March 1st or something, beginning of March. Let me check.
F: Yeah, yeah. I mean, I think it's probably easiest for collaboration to just throw stuff into the doc directly, and then I'll create a PR from whatever we have. Awesome.
A
And
just
so
yeah,
maybe
we'll
set
the
deadline
as
february.
A
Maybe
I'll
say,
deadline
for
updates
in
the
so
all
that
stuff
up
until
february
25th
and
then
you
can
just
take
whatever
is
in
here
and
pr.
It.
A
Okay,
awesome
all
right,
I
think
that's
it
for
the
agenda.
Unless
anyone
wanted
to
discuss
anything
else.
Last
minute.
L
I
guess
just
an
fyi
from
myself
and
nick
we
did
get
the
kubecon
maintainer
tracks
have
been
in
awesome
and
that
whatever
that
draft
link
that
was
posted
in
the
slack
channel,
we
tweaked
that
up
to
the
last
minute.
So
whatever
is
left
in
that
dock
is
what
got
submitted.
F: Appreciate it, yeah, and thanks to Steve; I don't think it would have gotten done without him, so I appreciate that.
E: I actually was hoping we could revisit, because I think I had, like, a moment of inattention and I'm not sure I completely understood what Lebron was saying, and maybe I shut him down and we didn't fully explore what he was concerned about. But I was a little bit concerned, because I'm trying to picture, like: do we expect the in-tree cloud providers to prevent people from using GA features? I'm just not sure where that goes. So yeah, maybe Lebron...
J: Yes, so the issue is, in cloud provider vSphere, the out-of-tree cloud provider, we are adding support for dual stack, so now a node can have both IPv6 and IPv4 addresses. I know upstream already made it GA in 1.23, but our controller doesn't support it yet, and right now we've added that support.
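[Editor's note: at the cloud provider level, that support largely means reporting both address families for a node. A minimal sketch, assuming the k8s.io/cloud-provider metadata type; the provider name and addresses are illustrative placeholders, not vSphere code.]

```go
// Sketch: with dual stack, the provider reports both an IPv4 and an
// IPv6 internal address in the node's instance metadata.
package mycloud

import (
	v1 "k8s.io/api/core/v1"
	cloudprovider "k8s.io/cloud-provider"
)

func dualStackMetadata() *cloudprovider.InstanceMetadata {
	return &cloudprovider.InstanceMetadata{
		ProviderID: "mycloud://i-0123456789", // illustrative ID
		NodeAddresses: []v1.NodeAddress{
			{Type: v1.NodeInternalIP, Address: "10.0.0.10"},
			{Type: v1.NodeInternalIP, Address: "fd00::10"},
		},
	}
}
```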
A: Yeah, it's a bit of a gray area. I think, if you're an in-tree provider, there is a bit... there is an expectation from the user that, like, the Kubernetes core feature gate itself would enable dual stack for your provider.
A: So I think, especially in the external case, it's totally fine to have your own feature gate that enables the dual stack capability, because, I guess, it is separate from, like, the core APIs of Kubernetes supporting dual stack versus the provider supporting it, right? It's the same thing with CNIs: if you want to run dual stack, you need a version of a CNI that also supports the dual stack API, and if there's a feature gate for that, you would expect the same thing.
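[Editor's note: a minimal sketch of what such a provider-local gate could look like, assuming the k8s.io/component-base/featuregate package; the gate name here is illustrative, not an actual vSphere flag.]

```go
// Hypothetical provider-local feature gate that can trail the upstream
// Kubernetes dual-stack gate: registered alpha (off by default), then
// graduated on the provider's own schedule.
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

const CloudDualStack featuregate.Feature = "CloudDualStack"

func main() {
	gate := featuregate.NewFeatureGate()
	if err := gate.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		CloudDualStack: {Default: false, PreRelease: featuregate.Alpha},
	}); err != nil {
		panic(err)
	}
	// In a real binary this would be wired to a --feature-gates flag;
	// here we set it directly.
	if err := gate.Set("CloudDualStack=true"); err != nil {
		panic(err)
	}
	if gate.Enabled(CloudDualStack) {
		fmt.Println("dual stack enabled: report IPv4 and IPv6 node addresses")
	}
}
```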
B: Okay, so, okay, we're gonna also have... yeah, thanks.
I: So one thing I will mention: I don't know where we are with dual stack and conformance testing, but anything that is part of the conformance testing is obviously, you know... if you're saying you're 1.21, you know, and you're Kubernetes-compliant...
E
Yeah
and
yeah
there
are,
there
are
quite
a
few
tests
for
yeah
dual
stack,
so.
A: Yeah, I mean, the loophole to that is that there's no Kubernetes distro or version that only supports dual stack, and so your product or your managed offering or whatever is gonna be compliant as long as it's compliant on a non-dual-stack version of that cluster; it's still technically conformant. So, like, you don't have to be dual stack, you don't have to have the conformance tests pass on dual stack, but you should look into that and make sure it works if you're supporting anyone. So yeah.
A: Cool. Anything else? If not, we can get 10 minutes back.