Description
- MS/Azure Rep? Craig Peters leaving.
- GCP - Patching vendor for CCM.
Use /third_party, like k/k does
Backport to k/k, pin dependency to a certain commit (hash version)
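Pinning a dependency to a specific commit in Go modules is usually done with a pseudo-version in a `replace` directive. A minimal sketch, assuming a `go.mod` in cloud-provider-gcp (the module path and commit hash below are placeholders, not the actual change discussed here):

```
// go.mod (sketch): pin a dependency to a specific commit using a
// pseudo-version of the form vX.Y.Z-0.<timestamp>-<12-char commit hash>.
// Module path and version are illustrative placeholders.
replace k8s.io/cloud-provider => k8s.io/cloud-provider v0.22.1-0.20210812000000-abcdef123456
```

In practice `go get k8s.io/cloud-provider@<commit>` computes the pseudo-version automatically, and `go mod vendor` refreshes the vendored copy from the pinned revision.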
- Please get 1.23 KEPs in early
Need alpha version of webhook server KEP.
kubelet credential provider to Beta
controller manager migration updates?
Overloaded KEP review with requests coming late. Deadline is ________
- GCE-PD Volumes and in-tree Tests (@jpbetz, @mattcary, @leiyiz)
Need to make sure that test signal is reported back into k/k
Need to integrate and test cloud-provider-gcp against k/k at sufficiently high frequency, and with sufficient automation
A: Hey folks, today is Thursday, August 12th. This is the cloud provider extraction and migration project, under SIG Cloud Provider, just FYI. This is a CNCF meeting, so please follow our code of conduct. All right, why don't I share my screen, then?
B: Yeah, I don't think that many were, and in fact we may want to copy over the agenda because, as it was a Google-only meeting last time, I don't think we actually held it. So, cool.
A: Okay, so yeah, Craig is leaving Azure. I actually had a chat with him today; some folks at VMware want to help, so I think it'll be interesting to see some VMware people maybe get involved in the extraction project from the Azure lens. So that's one note I have on that one.
B: Awesome. I don't remember the name, but I do seem to remember there was one person from Microsoft who responded to Craig's email saying that they would be willing to step up. And then, in the last full cloud provider meeting we did have, I think a product manager from Microsoft showed up and actually agreed to take a bunch of the tickets and distribute them appropriately within Microsoft.
A: Cool, cool, okay. All right, next one: patching vendor for CCM for GCP. So, I believe...
C: Yeah, we do, I think, have a pretty sane approach to this. We just needed to get a change in that was in Kubernetes, and it turns out that now that 1.22 is out, we can just do the bump to 1.22, and this works out naturally using a pretty straightforward approach. So we don't have to do anything odd.
B: And then the last item (well, the last from a week or two ago) is a call: can we please try to get the 1.23 KEPs in early? We know we've had a lot of problems getting review cycles when it's the day before KEPs are due, so I would like to make a general call-out.
B: The only KEP that I am aware of, and this one is on me anyway, is that we wanted a KEP for the alpha version of adding webhook server capabilities to the cloud controller manager. We already have a rough draft of that KEP, but I will go ahead and send it out. If there are others, then we should try and make sure that those get reviewed early. Awesome. Do you want to touch on that one, Andrew?
A: Yeah, I put the kubelet credential provider to beta. I don't recall if we actually need to update the KEP for it, because in 1.22 we updated the KEP to beta, but we just never made the milestone.
B: That would be valuable, I think, but I think we should also be spending some time thinking about what we believe the graduation criteria, the GA graduation criteria, should be. I think, to date, all of the testing on controller manager migration has been manual testing by either Google or Amazon.
A: Gotcha. Yeah, same thing with the kubelet credential provider; I think the biggest blocking thing for beta is having the CI in place, though.
A: Okay, I will find out the date and put it here later.
C: Matt, you're here: do you want to introduce this one?
D: Yep, yep, great, okay, cool. So the background here is that, for storage stuff, as you all may or may not be aware, all the in-tree storage plugins are being migrated to the CSI driver, which will accomplish cloud provider extraction, because once these plugins are migrated to the CSI driver, there won't be any in-tree use of cloud provider. The wrinkle, though, is... so, in order to switch on migration...
D: What that means is you need to have the appropriate CSI driver installed. In the case of GCE, that is the PD CSI driver, which will replace the current GCE PD volume type. In order to do this, though, you have to install the PD CSI driver in your cluster, and that's not something that people want to do in upstream Kubernetes. So instead we're going to install it in kube-up inside of cloud-provider-gcp. Okay, great, that all sounds good. The wrinkle, though, is it turns out there are hidden load-bearing uses of GCE PD; in particular, tests that assume you have a default storage class and a cloud provisioner.
D: The most notable is a StatefulSet test. These are tests that are not in storage and they aren't tagged, you know, as GCE PD tests. So our plan is actually to remove those tests from the mainline end-to-end tests that run as part of Kubernetes, and run them out of cloud-provider-gcp.
D: So that's why any such dependent tests are going to have to be moved to run out of the cloud-provider-gcp prow jobs. Probably I didn't explain that very clearly; are there any questions?
B: I don't think it's a question, more of a validation. So cloud-provider-gcp brings up the CCM, I'm guessing. I can look to you, Matt, to understand where we are with bringing it up with CSI.
B: Last I checked, we didn't actually bring up clusters with CSI in cloud-provider-gcp yet, although that is clearly part of the plan. So that is one piece of work, as I understand it, that would need to be done. Correct?
D: And we actually have a PR out for that, so that should be done shortly.
B: So if you have anything Windows-node-specific, the node IPAM controller work, I believe, would need to land first, or at least for those.
D: I don't think that is required. Our first main concern is all of the tests that are going to be broken in the main k/k end-to-end runs once we switch on CSI migration by default, and I don't think there's any Windows stuff in there. Okay.
B: And then I would suggest talking with either Anit Latvey or Ben the Elder. There is definitely a plan to start consuming the... and in fact, Joe, this is a great segue for Joe to start speaking.
B: But I think the two bits that I am not sure of are what I'm going to refer to as consuming the output of our e2e tests and bringing those back into some k/k dashboard.
B: And additionally, I believe (and I think Joe is leading this effort) there is the need to be able to do something with what I'm going to call the "last known good", and I think that then becomes fairly critical. In fact, I will just segue straight to Joe with "last known good".
C: Yeah, so just to echo back what I heard everybody saying: we've got these tests in k/k that either intentionally or unintentionally depend on cloud-provider-specific stuff. So those tests are going to have to be turned off when we turn off the cloud-provider-specific stuff in k/k; that'll make sense. Instead we're going to have to run them elsewhere. We don't want to lose that test signal entirely, and that's where this gets really interesting.
C: So what we want to do is turn those tests on in cloud-provider-gcp, but of course those aren't going to be blocking for any changes in k/k anymore. So the code that could actually cause a breakage will now not get a pre-submit check on k/k itself, right? If you change StatefulSet in some breaking way, you're not going to find out until later, when cloud-provider-gcp picks up the version of k/k that has the breaking change and then detects it. Yeah.
C: So what we're doing in cloud-provider-gcp right now is trying to improve our testing so that we're aggressively checking the latest changes in Kubernetes and seeing if there's anything breaking there when we integrate it with cloud-provider-gcp. That's a somewhat orthogonal project to this; when it comes online, that signal would at least get communicated more aggressively to the cloud-provider-gcp team.
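The kind of aggressive, routine integration signal being described is typically implemented as a periodic prow job that checks out both repos and runs the e2e suite. A hypothetical sketch of such a job definition (all names, intervals, images, and the entrypoint script are placeholders, not the actual cloud-provider-gcp configuration):

```yaml
# Hypothetical prow periodic: rebuild cloud-provider-gcp against k/k HEAD
# and run the relocated e2e tests on a schedule.
periodics:
- name: ci-cloud-provider-gcp-e2e-k8s-master   # placeholder name
  interval: 6h
  extra_refs:
  - org: kubernetes
    repo: kubernetes
    base_ref: master
  - org: kubernetes
    repo: cloud-provider-gcp
    base_ref: master
  spec:
    containers:
    - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest  # placeholder image
      command: ["runner.sh"]
      args: ["./test/run-e2e.sh"]   # placeholder entrypoint
```

A job like this is what would surface a breaking k/k change within hours rather than once per release.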
B: So I think I have one little exception that I want to make to what you just said, Joe.
B: I'm going to claim that cloud-provider-gcp folding in the k/k changes once per release, when 1.22 or 1.23 or 1.24 is published, is not nearly frequent enough signal for these tests, and in fact that we probably cannot turn off these tests in k/k until there is a mechanism to automatically and routinely consume k/k and run these tests against a recent version of k/k.
C: So yeah, I will leave that policy decision up to the SIG. What I'm offering on the cloud-provider-gcp side is that, over time, we are going to try to get the ability to test this more rapidly. I do not have a timeline for when we're going to get that.
D: Yeah, I mean, I can't argue with that. Of course, I'm concerned that this is a bunch of grungy stuff that's traditionally very hard to actually get done and get consensus on. I think the main thing is that we really need to get this on by default in 1.23; we originally wanted it all on by default in 1.21.
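For context, "on by default" here refers to the Kubernetes feature gates controlling CSI migration. As an illustrative sketch only (gate names and defaults vary by release, so check the feature-gate documentation for the exact version), enabling GCE migration manually on a 1.22-era cluster would look roughly like:

```
# Sketch: enable CSI migration for the in-tree GCE PD plugin via feature gates.
# Set on both kube-controller-manager and kubelet; flags shown for illustration.
kube-controller-manager --feature-gates=CSIMigration=true,CSIMigrationGCE=true ...
kubelet                 --feature-gates=CSIMigration=true,CSIMigrationGCE=true ...
```

Once the gates default to true in a release, these flags become unnecessary, which is the "on by default" milestone being discussed.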
D: So yeah, I agree with you, but I guess this means there will be some urgency in trying to figure that out and get consensus on it. I mean, my plan is still to turn migration on by default, and then the breaking tests will give us incentive to fix things, whereas if we wait for there to be consensus on...
B: So, as in, yeah, a couple of examples. One, we can leave the tests running in the old state, so we get some coverage in k/k, even if we have to turn off some normally-GA feature flags or whatever it takes to get the old behavior, and then we can enable those tests in GCP and work towards getting them all passing. Then, once we're all passing, we may want to do something like, I don't know, even fork.
B: I mean, basically create a release of the GCP repo, and then, as ugly as this is, and I hate the sound of it, we may even, on what I'll call the mainline or rapid branch, manually fold in the features while we try to get the stuff Joe's working on done. But I'm just saying, I think the first thing is understanding the policy, then working out how we can minimize the cost of that policy.
D: Yeah, yes. We kind of briefly looked through it, and I don't think it's actually huge. Like, as I said, there's a couple of StatefulSet ones, and obviously a bunch in storage, but I think that's okay, because that's sort of known to be off. I mean, yeah, so I guess I don't think we're going to solve this here. Who is the good person to start with on this?
B: Yeah, yeah. So I think when it comes to the GCP repo, Joe and I are probably good people, when it comes to building whatever test infra you might need, or, you know, when it comes to reporting the results back to k/k.
B: When it comes to trying to get a last-known-good run, I think it's probably Joe. Joe and Kermit.
D: Okay, okay. So it seems the main blocker is this test policy stuff, so I will reach out to Ben. You want me to loop you in on that, Joe and Walter? Yeah.
B: As one of the people who frequently has to do routing, it's helpful for me to understand what's going on, just so I route in the right direction. Absolutely.
D: I get it, I get it, okay.
B: For this policy stuff, I don't know what your reporting structure is, but it probably makes sense to make sure that you're keeping Amburish in the loop.