From YouTube: SIG Cloud Provider 2021-09-01
Description
- https://github.com/kubernetes/enhancements/pull/2928 - Adding webhook extension support to the CCM
Needed to handle the PVL (PersistentVolumeLabel) cloud provider extraction case.
Please ignore my inability to spell :)
- question about `--cloud-provider` flag deprecation in API Server
- once we know how many there are we can estimate the times for each and give notice to those who are creating them
AI: before next meeting, volunteers please self-identify as to who will be creating the recordings
A
Hi, welcome to the September 1st, 2021 SIG Cloud Provider meeting. As always, this is a CNCF meeting and we adhere to all of the CNCF rules. We want to be considerate and inclusive of all of our fellow contributors, so please be polite, considerate, and inclusive of all your fellow contributors. I'm going to go ahead and share my screen.
A
And we will go ahead and get started. All right, I'm not seeing anyone from Alibaba or Baidu, but I do see some of our Amazon contributors. Does anyone have a subproject update for Amazon?
A
Here. Do we have anyone from Microsoft here?
C
There's not much in the way of updates on the provider GCP side; they are removing Bazel from cloud-provider-gcp, aligned with the main k/k repo. Other than that, yeah.

Oh, congratulations! So that's actually been fully approved and is going to happen? Yeah, I think so. It's...
A
Awesome, great news. I believe, as a side note, you've also just got the node IPAM controller in now, which is...
C
Oh yeah, it's merged. Currently, Google Cloud can run the node IPAM controller within the CCM now.
A
Wonderful, thank you so much, Cece. I don't think we have anyone from Huawei. Looks like no updates for IBM. OpenStack?
D
Yeah, so back on the 18th we released 1.22.0 of provider OpenStack, but our CI has been down for a little bit, so we're doing a bit more manual testing than normal. We're working on getting the CI back up.
A
I cannot actually query the cloud provider, the OpenStack cloud provider, test history right now; I just get an error on the date field every time, so fun and games.
A
All right, I'm gonna take that as a no. Extraction and migration: so there have been a bunch of talks about the e2e tests, and actually, give me a second, I want to ping Joe, because I thought Joe was supposed to be here today.
A
Sorry. So, just briefly, there is some stuff, while I'm pinging Joe: there are some things to do with last known good that we need to talk about with respect to e2e testing.
A
There was a little bit of a chat that we had in the cloud provider extraction meeting, and essentially what this comes down to is: there are quite a few tests still where, even if they don't officially depend on a cloud provider (and mostly when I say cloud provider, I mean GCP)...
A
Okay, so Joe should be here and can detail it in a minute. But, as a result, those tests, even if they don't directly depend on the cloud provider code, will only run on certain cloud providers. I think the SIG Storage tests are a prime example of this. Hey, Joe.
A
And so, before we can really turn the cloud provider extraction feature to GA, and I think even to beta, we need to have an answer on what we're going to do about these tests.
A
So there's, I believe, a proposal that Joe has been working on for doing last-known-good testing, and with that intro I think I'm gonna turn it over to you, Joe.
F
Yeah, I was planning on presenting this next week at the general cloud provider meeting, but I can try and give a brief explanation of what we're thinking about.
F
So the idea that we have is: say you're doing development in cloud-provider-gcp or some other cloud-provider-specific repo, and you want to make sure that your code is continuing to work with upstream Kubernetes. Then what you would want to do, basically, is have a job that is always running, basically a test job in your CI pipeline, that is always checking the latest version...
F
The latest changes on the Kubernetes master branch against your changes. But we also want to keep attribution clear, so there are two things going on. We want to run these tests, and they could fail at any time, and they could fail because somebody changed something upstream in Kubernetes.
F
That needs to get fixed, and it's not gonna... but we also have developers that are making changes to our cloud provider code, right? So we want to make it clear to developers what failed. We don't want to have our test runs just sporadically start failing for something that had nothing to do with the developer, and then they're going around looking for a bug in their code when actually it was a change upstream.
F
So we want to make those two different kinds of testing really clear. When a developer is doing their development, we want to make it clear that if the test failed, it's either because there's a flake (unfortunately, those happen), or, if there were no flakes, it was because of a change they made. So what we're thinking of is having basically two test signals. One is a job that is constantly testing our cloud-provider-gcp branch against the latest Kubernetes and then updating:
F
If that's successful, then it updates the last-known-good marker, and so then we always know how close we are to having a working copy of our thing against upstream. And when developers run their tests, they only run against the last known good, so they are only running their code changes against a pairing of cloud-provider-gcp and Kubernetes that is already known to have passed a successful test run.
F
If the tests that are trying to find a newer LKG fail, then we're going to have that be signaled differently. It's basically going to be like a presubmit saying, you know, "LKG is passing" is going to turn red, but we're going to make it clear to developers that that's not due to their change; it's just a problem in the system.
F
We're not going to let any PRs through until we get it fixed, but it still makes that attribution clear. That's kind of the high-level concept, and you can do it either way, right? So I've just described the way of doing it where we're doing cloud-provider-gcp development, but you could do downstream testing too. What I mean by that is, if you're doing Kubernetes development on the main Kubernetes repo and you want to make sure that works against cloud-provider-gcp or AWS or Azure or VMware or anybody else...
F
You could use that same approach there. Of course, there are some policy questions that need to be answered. You know, can we make it so that Kubernetes has a presubmit block on the development of some out-of-tree cloud provider? Do we want that? Do we want the signal? Do we want the signal to be blocking?
F
Those are all questions I would love to have the SIG weigh in on, but the mechanism is being worked on. Kermit Alexander is working on it now for cloud-provider-gcp.
F
There are corner cases. Like, if you're in Kubernetes, you're getting commits all the time, so you're probably going to want to run your LKG testing a lot. In the case of cloud-provider-gcp, we don't always get a PR merged every single day, so there are periods of inactivity. So we're probably going to continue to do some kind of LKG testing against Kubernetes at least once a day, to constantly bring that up to date, maybe even more often.
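As a rough illustration of the last-known-good idea described above, here is a minimal, hypothetical Go sketch. It is not the actual cloud-provider-gcp tooling; the marker file name, the `make test-e2e` command, and the repo layout are assumptions. A periodic job advances an LKG marker only after a successful run against Kubernetes master, while developer presubmits pin to the recorded LKG commit so failures there are attributable to the developer's change.

```go
package main

// Hypothetical sketch of a last-known-good (LKG) workflow: a periodic job
// tests the provider repo against the tip of kubernetes/kubernetes and,
// only on success, records that Kubernetes commit as the LKG marker.
// Developer presubmits then test only against the recorded LKG commit.

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

const lkgFile = "LAST_KNOWN_GOOD" // assumed marker file checked into the provider repo

func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

// updateLKG is what the periodic job would do.
func updateLKG(kubeRepo string) error {
	head, err := run("git", "-C", kubeRepo, "rev-parse", "HEAD")
	if err != nil {
		return err
	}
	// Placeholder for the real e2e suite run against the commit at HEAD.
	if _, err := run("make", "test-e2e", "KUBE_COMMIT="+head); err != nil {
		return fmt.Errorf("tests against master failed; LKG not advanced: %w", err)
	}
	return os.WriteFile(lkgFile, []byte(head+"\n"), 0o644)
}

// presubmitCommit is what a developer-facing presubmit would pin to.
func presubmitCommit() (string, error) {
	b, err := os.ReadFile(lkgFile)
	if err != nil {
		return "", fmt.Errorf("no LKG marker yet: %w", err)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "update" {
		if err := updateLKG("./kubernetes"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1) // signals "LKG is failing", not a developer error
		}
		return
	}
	commit, err := presubmitCommit()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("running presubmit against Kubernetes commit", commit)
}
```

The key design point from the discussion is the separation of signals: only the periodic "update" path can turn red because of upstream changes, while the presubmit path only ever exercises a pairing that has already passed.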
A
Cool, thank you, Joe. One minor detail: please take this as a pitch to come to next week's cloud provider extraction meeting. I think this is a fairly critical systemic problem that Joe is attempting to resolve. It matters to Kubernetes, it matters to each of the cloud providers, and it really matters to our ability to move forward on cloud provider extraction. So I would strongly encourage people to come to next week's cloud provider extraction meeting to help us iron out those details. And thank you for this, Joe. Sure.
G
I've got a question, yeah. For those of us who don't have it on our calendar yet, is the extraction meeting at the same time as this one? It is not, so that is an excellent question.
A
So the cloud provider extraction meeting is currently being held from 1:30 to 2:00 Pacific time on the Thursday, sort of opposite the Wednesday that this meeting is on. Did that make sense?
G
It does. I'm guessing that there is an agenda for this somewhere out there? There is an agenda for this.
A
I will go ahead and copy and paste it into this meeting's notes, because I don't see any reason why they can't both be here. This actually...
A
Oh, that is actually a pretty good question. I think it would probably make sense to...
A
I think the two people I would like to see invited would be maybe Dims and someone from testing, like maybe BenTheElder or someone like that.
A
All right, yeah, that sounds like a good AI for me to follow up on. Were there any other questions before I move forward? No? For me, all right. And did you see the agenda that I put in a comment? I did, yeah, thank you very much. Yeah, awesome, I'll do my best to be there. No worries, thank you. Thank you for trying. Agenda: so now, on to the regular agenda.
A
So, just as a quick reminder, technically KEPs are due next week, so there isn't much time left, and they need to go through things like a deployment review, and they have to get through, you know, someone from the TL or a chair from a SIG has to approve, I think possibly before they're even ready to go before the deployment review. So I will just say: you are rapidly running out of time to get any KEP you would like approved, approved.
A
I am unlikely to have a lot of cycles next week, so if you would like me to review the KEP, I would strongly request that you get it to me this week. I don't know if you have a similar idea, Andrew.
A
But again, even if you get it to Andrew on the last day and then expect to get it through the deployment review, I'm guessing that people like David Eads are not gonna have many cycles free, so the chances of getting it through will be slim.
A
So, because I would like to get my own through PR review: this is actually one we didn't manage to land; we had planned to land it last cycle, and it's essentially been open for the last quarter. I know at least one person did put some comments on it; when I went and looked at it this morning, one person had put a comment on, so I have done one or two tweaks from where it was last time. But please go take a look, especially Andrew.
A
This is specifically about the fact that the storage team needs a way to be able to run, or would like to run, a webhook. I think there are other cloud providers who would like to do it as a controller, but potentially we would like a mechanism by which we can have cloud-provider-specific webhooks that fit in the same way the CCM does. So...
A
This is a KEP for an extension mechanism to the CCM so that you can optionally also host webhooks in the CCM, and I will emphasize the word "optionally". It also has a mechanism whereby, the idea is, if you want to run both in the same process, you can; if you want to use the same code base but run controllers in one process and webhooks in another, that should also be relatively easy to do. So yeah, please take a look.
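To make the shape of that proposal concrete, here is a minimal, hypothetical Go sketch of a single CCM-style binary that always runs its controllers and, optionally behind a flag, also serves an admission webhook endpoint in the same process. The flag name, port, and handler are assumptions for the sketch; it does not use the real k8s.io/cloud-provider wiring proposed in KEP-2928.

```go
package main

// Hypothetical illustration of "controllers plus optional webhooks in one
// process". Everything here is a placeholder, not the KEP's actual API.

import (
	"context"
	"flag"
	"log"
	"net/http"
	"time"
)

func runControllers(ctx context.Context) {
	// Stand-in for the cloud controller manager's controller loops.
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			log.Println("controllers: reconciling cloud resources (placeholder)")
		}
	}
}

func runWebhookServer(addr string) error {
	mux := http.NewServeMux()
	// A cloud-provider-specific admission webhook would decode an
	// AdmissionReview here; this placeholder just acknowledges the request.
	mux.HandleFunc("/admit", func(w http.ResponseWriter, r *http.Request) {
		log.Println("webhook: received admission request")
		w.WriteHeader(http.StatusOK)
	})
	// Real deployments would serve TLS; omitted to keep the sketch short.
	return http.ListenAndServe(addr, mux)
}

func main() {
	enableWebhooks := flag.Bool("enable-webhooks", false, "also serve webhooks in this process")
	webhookAddr := flag.String("webhook-addr", ":9443", "webhook listen address")
	flag.Parse()

	ctx := context.Background()
	if *enableWebhooks {
		go func() {
			if err := runWebhookServer(*webhookAddr); err != nil {
				log.Fatalf("webhook server failed: %v", err)
			}
		}()
	}
	runControllers(ctx) // blocks; controllers can also run without webhooks
}
```

Splitting controllers and webhooks into two processes, as mentioned in the discussion, would then just mean running the same code base twice with different flag settings.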
G
Yes, I'm just writing a message here. So, a little bit of shameless self-promotion: I'm giving a talk tomorrow with a colleague, doing a deep dive into CCMs for the DevConf.US conference, and as part of our due diligence around this deep dive we were looking into where the cloud provider flag needs to be set. In the kubelet documentation, and I think even in the code, there are plenty of comments about how it's going to be deprecated and it's going away. But we were looking at the API server.
G
You know, because the migration documentation explicitly says that you should not set the cloud provider flag on the API server and on the kube-controller-manager, and we went digging through the code a little bit, and we don't see any similar mention there of those flags being deprecated on the API server. So we're just kind of curious to see if there are any more details. Is this something that maybe would be deprecated and it's just an oversight in the docs for now, or is there maybe another part of the story here?
A
So, I'd love to hear Andrew's take, but I will pitch in quickly: as far as I know, that is an oversight. I will say right off the bat, as the person who did it in the cloud-provider-gcp deployment system: when you do a kube-up, it will explicitly set cloud provider to external, I think on both the KCM and the API server, and that has been successfully running that way for a while now. So, as far as I know, that is just an oversight. Andrew?
H
So my understanding is that there are two primary reasons why you need the cloud provider flag set on the API server. The first one is the SSH tunneler, which is only implemented by Google, and which we ripped out in 1.22.
H
That code no longer even exists, yeah. So really, the only thing left is the PVL, but the PVL is specifically coded to work with the in-tree providers, and there are certain scenarios where, like, if you have the CSI driver installed...
A
Yeah, as far as I know, the PVL is specifically for if you're trying to onboard a legacy PD onto a new cluster.
H
Right, like a persistent volume, yeah, that uses the native, embedded fields for, like, Google, AWS, OpenStack, whatever. But if you are creating a persistent volume that is managed by a CSI driver, then CSI has its own capabilities for topology awareness, I believe, so it doesn't rely on the PersistentVolumeLabel admission controller in the API server.
G
Okay, this is probably something where we need to go back and update the docs, or something like that, at some point, I guess.
A
Yeah, I think this is somewhere where we need to update the docs. The other thing I will say (this is completely my fault): on the KCM, when you set cloud provider to external, it is worth being aware that it will automatically disable certain controllers, because it assumes that there is then a corresponding CCM.
G
And that sounds pretty consistent with what we're seeing. It seems like, if you use cloud-provider external on the API server and the KCM as it stands right now, yeah, it just does what you expect it to do. But it's just weird that, you know, the docs just say "don't use these flags at all", so I want to make sure we're giving a clear impression to users of what the actual story here is.
A
Yeah, no, I completely agree with that. I also wonder if it had something to do with testing, because I will say that, especially on the KCM, if you turn the flag off on the KCM, I am fairly certain that your cluster, in most scenarios, won't come up, which is not good for testing. This just has to do with the fact that you need a corresponding CCM that is doing things like assigning IP addresses to the nodes.
G
It sounds like, in some ways, on the KCM, if you set cloud-provider equal to external, does that also kind of do a little bit of what the DisableCloudProviders feature gate does? Does it actually turn off those other mechanisms?
H
So we took a bit of a, well, I don't want to call it a shortcut, or maybe it is, because it's just us being lazy, but basically, if you turn the feature gate on, it doesn't turn off behaviors of various controllers in the KCM; the KCM just exits. It just won't run if the feature gate is on and you set cloud-provider to, you know, aws or gce.
H
So basically it's forcing users to set cloud-provider external, which would then put them on the path of: then you have to deploy the external cloud provider and the CSI driver and whatever else. But it doesn't specifically let the KCM continue to run and turn off other behaviors.
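A minimal sketch of the behavior described here, assuming hypothetical flag and gate names rather than the actual kube-controller-manager startup code: if a DisableCloudProviders-style gate is on and an in-tree provider is requested, the process refuses to start instead of selectively disabling controllers.

```go
package main

// Hypothetical sketch of the startup check described above: with the feature
// gate enabled, only --cloud-provider=external (or empty) is allowed; an
// in-tree value such as "aws" or "gce" makes the controller manager exit.

import (
	"flag"
	"log"
)

func main() {
	cloudProvider := flag.String("cloud-provider", "", "cloud provider name, or 'external'")
	disableCloudProviders := flag.Bool("disable-cloud-providers-gate", false, "stand-in for the DisableCloudProviders feature gate")
	flag.Parse()

	if *disableCloudProviders && *cloudProvider != "" && *cloudProvider != "external" {
		// Mirrors the behavior described in the discussion: no per-controller
		// carve-outs, the process simply refuses to run with an in-tree provider.
		log.Fatalf("cloud provider %q is not allowed when the feature gate is enabled; set --cloud-provider=external and deploy a CCM", *cloudProvider)
	}
	log.Println("starting controller manager (placeholder)")
}
```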
G
Right, and that's the way we've been talking about it. It's like, this feature gate is so that you can absolutely make sure that none of the old stuff is on and that you're all in on the new stuff, basically. Yeah, one more question, Andrew: what are PVLs? I didn't recognize that acronym, sorry.
H
I meant to say PVLs: PersistentVolumeLabel. It's a built-in admission controller in the API server.
G
Did you have anything else on this, elmiko? No, I mean, I guess, you know, maybe one more interesting point is that, interestingly enough, in the documentation for the kubelet it says that the cloud provider flag will be going away in 1.23, and given our discussions here, I feel like that might be a version or two early.
A
Probably two, honestly. I think we've got two interesting points on that. One is the... and technically, sorry, just thinking. So there are two things in the kubelet that are cloud provider extraction material. One is some very old legacy mount/unmount volume things that almost nothing uses anymore, and so from that perspective it should be pretty easy to remove.
A
The other is the credential provider, and I think the interesting thing on the credential provider stuff is that it doesn't actually go through the cloud provider per se, in that it doesn't go through the cloud provider interface. But for both Amazon and Google, and I think VMware (although Andrew can correct me on that one), there are some very specific credential provider lookups for their registries. So if you're not using the extracted credential provider, you will end up going through cloud provider code, even if you're not going through the cloud provider interface.
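For context on the extracted credential provider mentioned above, the kubelet's out-of-tree mechanism is an exec plugin that reads a credential request on stdin and writes credentials on stdout. The following Go sketch is a heavily simplified, hypothetical plugin; the JSON field names are assumptions loosely modeled on the kubelet exec credential provider API rather than the exact types, and the registry lookup is a placeholder.

```go
package main

// Hypothetical, simplified sketch of an out-of-tree kubelet image credential
// provider: read a request describing the image from stdin, look up registry
// credentials (hard-coded here), and print a response on stdout. Field names
// are illustrative approximations, not the exact kubelet API types.

import (
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type request struct {
	Image string `json:"image"`
}

type authEntry struct {
	Username string `json:"username"`
	Password string `json:"password"`
}

type response struct {
	Auth map[string]authEntry `json:"auth"`
}

func main() {
	var req request
	if err := json.NewDecoder(os.Stdin).Decode(&req); err != nil {
		fmt.Fprintln(os.Stderr, "decoding request:", err)
		os.Exit(1)
	}

	resp := response{Auth: map[string]authEntry{}}
	// Placeholder registry check; a real provider would call the cloud's
	// metadata or token service here instead of going through in-tree
	// cloud provider code.
	if strings.HasPrefix(req.Image, "gcr.io/") {
		resp.Auth["gcr.io"] = authEntry{Username: "oauth2accesstoken", Password: "<token-from-metadata>"}
	}

	if err := json.NewEncoder(os.Stdout).Encode(resp); err != nil {
		fmt.Fprintln(os.Stderr, "encoding response:", err)
		os.Exit(1)
	}
}
```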
A
Awesome. Did anyone have any other questions?
A
Nick, yes, same here. Okay, so I'm not too worried about September 7th. However, I do agree we should agree on when we want the recordings. Frankly, I would like to have a finished version of the recordings at least a week prior to the first day of KubeCon. Does that sound reasonable?
E
Yes, and ahead of even when they're due, we should have, maybe in writing, what the allotted length and coverage is for those. I think, if we scroll down in the notes, we came up with whatever that was about; I think it was supposed to be a little bit on roadmap and status, but it should be below the notes somewhere. But if we know how many there are, we can come up with a cap on how long these little segments are.
A
Okay, so how about, as an AI for everyone (and I should probably send something out on email about this to SIG Cloud Provider): people have until the next full meeting, so two weeks, to volunteer themselves, and I'll send out a note on what we would expect to be in there, and then they can volunteer to do the recording for their cloud providers. Does that seem reasonable?
A
Awesome. Then I am going to quickly, hopefully quickly, go through the triage. Is everyone seeing my triage? Are you all still on the doc? I see the triage. I...
A
I'm gonna really wish we'd broken this one up. Okay, I'm not gonna worry about it too much.
A
This one: GCE node message of the day is unhelpful, broken download link reference.
A
All right, this one sounds, right now, like an AWS issue.
A
This one is a little interesting: credential provider infinite looping on something that is not Kubernetes, on a GCE VM.
H
But do we know if k3s picked up, like in their fork, did they pick up the credential provider plugin mechanism?
F
Kermit's already looking at this. They're running the credential provider in a pod, and they're having trouble reaching metadata endpoints, which are not accessible from the pod. It's all pretty unusual. We'll keep talking to them about it and see if there's anything that we need to surface, but right now it looks like it's mostly just a nuance of the use case. Okay, so I...