From YouTube: SIG Cloud Provider 2021-04-28
Description
Discussed the PersistentVolumeLabel Admission Controller; discussion will continue in the next extraction meeting.
A
All right, good thing, I'm recording. Yes, okay. Hi folks, today is Wednesday, April 28th. This is the bi-weekly cloud provider meeting, and yeah, we'll just get started here. Let me... okay, I'll wait for that to...
A
Settle. Yeah, let's just... I don't see anything on the agenda, so if folks want to add items. I wasn't here the last time because I was on vacation, but I did see some topics that were discussed. I don't know if there are any follow-up items, but happy to follow up on those as well. But yeah, let's go straight into the provider updates. I think, given Alibaba and Baidu are not here, AWS is up.
A
First, I don't see Nick. I see Kishore on the call, you want to give an update?
A
All right, idea... machine updates.
A
I don't see Kendall here for OpenStack. On the vSphere side, we need to cut our 1.20 release, which was blocked on that cherry-pick PR from, from cc, and as soon as the release is up for that one, we're gonna update our vendor, at least for k/k, and then release 1.20. That's about it from our side. Extraction, migration: I see someone added this topic.
D
So we actually had our initial discussion in the last extraction meeting. I will be putting the meeting up on YouTube, so for at least Steve, Nick and Andrew, who weren't able to attend: we had the meeting, we did an initial discussion, which was mostly outlining the problem and sort of outlining what the possible solutions were.
D
I think we came up with fundamentally two solutions, and so we're gonna have another discussion at the next extraction meeting and try and come to a decision. But myself, Michelle, Nick... not Nick, Kishore, I think, actually, my apologies... all went over the possible solutions, and I think there was sort of a divided opinion. Some people preferred the webhook solutions; some people preferred more of an operator solution.
D
There are pros and cons each way, so we just decided to let everyone discuss it, think about it, and then come back, and we would try to resolve to a decision next time. But with various folks not being there, and Dims not being there, we decided that it was better to just publish the meeting and then come back on the next one. So I should be making sure that that meeting makes it up to YouTube this weekend.
A
Okay,
I
think
the
for
this
one
I
mean:
do
we
even
need
labels
on
tvs
anymore,
because
I
thought
that
some
of
the
topology
information
was
being
pushed
into
actual
fields
in
in
pvs.
A
D
I have this cluster, but I have a pre-existing storage device that I would like to bring in, and that may just have been, like, cloud-provider-specific persistent storage from prior to your Kubernetes cluster. But for those scenarios you still need this controller, just as a way of getting them to attach, depending on which cloud provider you're in. So essentially, for those scenarios we need a cloud-provider-specific solution, and then there was split opinion.
D
Some people preferred... like I said, it's kind of an edge case. It is only needed for legacy storage, but assuming that the cloud providers want to support, you know, attaching legacy storage to new Kubernetes clusters, then we will need a solution, and there seem to be two basic ways we can go about this.
A
Andrew,
I
I
think
part
of
it
yeah.
I
think
the
part
I'm
confused
about
is
you
mentioned
like
bare
metal,
or
you
know,
people
bringing
their
own.
D
I believe you need this for that. I think AWS EBS, going into their managed Kubernetes, has similar issues. So that's sort of the scenario where we believe we need it. Okay.
D
The attachment, but yeah, okay, I'm deferring to Michelle on this. But I have very little problem deferring to Michelle on storage issues; she is very concrete. I would suggest that these are all great questions, but it probably makes more sense to bring them to the next extraction meeting.
A
Okay, yeah, we can talk in a little more detail there. I think the only thing I'm confused about is the fact that the PersistentVolumeLabel admission controller has built-in logic for the five, like, in-tree cloud providers. So it's not actually generic. I'm sure you know... sorry, I'm just kind of talking a lot at this point, but like, it's not a generic admission controller, it operates... absolutely.
D
Yeah, so I think the general vision is either this becomes a cloud-provider-specific webhook, or it becomes a cloud-provider-specific controller, and we could try and generate a certain amount of the code that is consistent across Kubernetes clusters, but, in fact, it would be a cloud-provider-specific piece. And so now we're just discussing which cloud-provider-specific path do we want to take. Gotcha? Okay, that makes sense.
A
Okay, cool, yeah: let's chat more on the next call. Anything else on extraction, migration? Right.
A
Okay, all right. So I don't see anything else on the agenda: did we want to touch on 1.22 KEPs or any of the previous topics from last week?
B
The service reconciler issue that I brought up last meeting... thanks, Andrew, for jumping in. I haven't had a chance to, like, respond back yet, but we are having some discussions on this issue.
A
Okay, I think... so the description of the issue, or the race condition, makes sense to me. I'm wondering how the issue actually, like, manifests into problems on, like, EKS or whatever platform that is seeing the problem. Like, are you ending up with, like, stale load balancers, or like...
B
Just, yeah, what I'm saying is, like, not the stale load balancer, because delete gets called eventually and that will remove it. What I'm seeing is, like, the security group rules are getting leaked and not cleaned up properly, and they build up over time, and that's causing a nuisance in some instances. So what I saw was, like, when we delete, remove the rules, there's another thread that runs concurrently that adds the rule back.
B
So that is what I was trying to avoid. So what I was hoping was, if we could queue up all the service operations into one reconciler queue, at least, like, for one service we only have one operation at any time. If we can guarantee that somehow, this issue should not happen. I mean, we could fix it in the cloud provider code, but it just becomes ugly, and, like, all the cloud providers would do the same fix. I would rather want to invest some time to get it right in the service controller.
A
Yeah, I agree with you. I think... so my gut feeling is that the right fix is what you mentioned: a work queue, where we, you know, pop off the work queue, and only one instance of reconcile happens at the same time. And then for the update case, probably at the beginning of every reconcile...
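[The per-service serialization being discussed resembles the guarantee that client-go's workqueue already gives a controller: a key is never processed by two workers at once, and re-adds during processing are coalesced into one follow-up pass. A minimal standalone sketch of that idea, with all names hypothetical, not the actual service controller code:]

```go
package main

import (
	"fmt"
	"sync"
)

// queue serializes work per key: a key that is re-added while being
// processed is only marked "dirty" and re-queued after the current pass
// finishes, so a given service never has two operations in flight.
type queue struct {
	mu         sync.Mutex
	processing map[string]bool // keys currently being reconciled
	dirty      map[string]bool // keys re-added mid-reconcile
}

func newQueue() *queue {
	return &queue{processing: map[string]bool{}, dirty: map[string]bool{}}
}

// Add returns true if the caller should start reconciling the key now,
// false if the key is already in flight (the event is coalesced).
func (q *queue) Add(key string) bool {
	q.mu.Lock()
	defer q.mu.Unlock()
	if q.processing[key] {
		q.dirty[key] = true
		return false
	}
	q.processing[key] = true
	return true
}

// Done marks the key finished and reports whether a coalesced event
// requires one more reconcile pass.
func (q *queue) Done(key string) bool {
	q.mu.Lock()
	defer q.mu.Unlock()
	delete(q.processing, key)
	if q.dirty[key] {
		delete(q.dirty, key)
		q.processing[key] = true
		return true
	}
	return false
}

func main() {
	q := newQueue()
	fmt.Println(q.Add("default/my-svc"))  // true: start reconciling
	fmt.Println(q.Add("default/my-svc"))  // false: update during delete, coalesced
	fmt.Println(q.Done("default/my-svc")) // true: one follow-up pass needed
	fmt.Println(q.Done("default/my-svc")) // false: fully settled
}
```

[With this shape, an update event that arrives while a delete for the same service is running does not start a second concurrent operation; it simply triggers one more reconcile after the delete completes, at which point the resource is gone and the pass is a no-op.]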
A
We should check if the service has a finalizer, and if it has... sorry, not the finalizer: if it has a deletion timestamp, meaning, like, it's supposed to be deleted, but for some reason there's an update that runs, we just skip the reconcile, assuming that it's trying to be deleted.
B
Actually, like, that may not be foolproof, because it just depends on how the race condition manifests. Like, it could be that after the deletion... like, update is running and then the deletion timestamp gets added later. So we never really know; like, that may not be an extremely safe way to... right, but if, if...
B
No, that's correct, that's correct. But, like, say, obviously delete gets triggered, but then those operations will, like, call the cloud provider API to actually delete resources. So that will take time, so we can't guarantee completion, right? There's no way that we can say, okay, this is going to complete in this much time. So while the update is in process and while the delete is in process, they can run into each other.
B
Like, if we could just run one operation for any resource, for a given service resource, then we're good. Even if update gets invoked after delete is complete, that's fine, because we will not see the resource anymore, so we can say, okay, it doesn't exist, and you can just get away cleanly in that case.
A
Because my thinking is that, like, you could still run into a scenario where you do the delete and an update runs afterwards, because there's still going to be delay between when you remove the finalizer and when the service object is completely deleted. And, like, during the delete there could still be an update that happens and queues up another task in the work queue in the meantime, even if the service was deleted between the time the task was added to the queue and when the finalizer was removed. I'm not exactly sure if that would fix it, but I mean, I think we can...
A
We
could
probably
open
a
pr
and
like
just
go.
We
can
probably
just
run
with
your
suggestion
and
then
maybe
we
can
test
to
release
to
figure
out
if
that
behavior
is
what
we
need.
B
Sure, that's what Walter suggested as well, like, last meeting: that I bring it up and see if it's okay to open a PR on this. I can definitely go ahead and do that.
D
I'm also just going to mention that, I mean, depending on the controllers you're talking about, updates and deletes are always going to be able to run concurrently, and there's no way of avoiding them, since they may not even be running in the same process.
D
I mean, a classic case being: the update is being done in either the KCM or the CCM, and the delete is happening in the other, and that is just the sort of thing we have to be able to handle.
D
And we should make sure the controller doesn't conflict with itself. But I'm saying, while we're doing these... these are not resources that are exclusive to this controller. So just be aware that, you know, there will be the service controller and other things playing with the service, right?
A
So yeah, I mean, let's pick whatever you think is gonna work best, and we can continue the discussion in the PR.
B
I can take it. I'll be able to put some time in next week; this week I'm fairly busy. So I can definitely take it up.
A
Okay, that'd be awesome. And I don't know... that was my recollection last time, and then now we're going through the top of the list again, based on the ones that do have the needs-triage.
B
Sources and check... this is a little... I want to take a look at it as well: nine zero two zero, okay.
A
Yeah, I think this... yes, I recall this was related to, like, dual stack on AWS. I don't know if it blocks dual stack, but I'd be curious: like, does dual stack work on AWS?
B
I mean, dual stack, yeah, it's not supported for the in-tree controller. We'd have to use the load balancer controller for dual stack.
B
I'm not certain; I need to look at it. So I can go over it and see what the issue is.
D
Sorry, just quickly: did we assign Kishore to both reviewing the fix and the bug? No? Not the fix.
A
No, good call. Okay, all right, this one I'm going to remove.
B
I'll do the needful.
A
Thanks, and yeah, you can... I think, yeah, you can probably just close it, don't even have to... okay. This is a kube-up one, I think.
A
Yep, okay.
B
Yes, I mean, this is expected, because we create health check rules for the number of subnets, or the availability zones, that are there. So in the in-tree controller we may not have much of a fix for now, unless NLB supports a security group. It should be... you can assign it to me, I will reply back further.
A
All right... you can't still... don't... service stuff updating, you know, this video, but...
A
Yeah, DC, do you know... for review, or does someone else have time to run this?
A
Yeah, I think... so I think this came out of the fact that there was that bug that cc fixed, where the cloud provider...
E
I'm thinking of something different. Okay, yeah, that makes sense. Okay, we'll follow it.
A
Yeah, this is just saying, like, integration tests to just make sure the basic plumbing of CCM is working.
C
I think we have another issue to track the umbrella test framework for cloud provider, or CCM.
C
Sorry, this... yeah, sorry, the previous one is just maybe just for integration tests, and we have another issue for, like, other test plans, yeah.
A
Yes, okay, so yeah, I refreshed it, and I see about 14 left, which is pretty good, so yeah. Maybe we can probably clear these out the next run. I think this one we can triage... hold on.
A
Okay,
cool
yeah,
let's
yeah,
okay
and
then
we'll
we'll
revisit
the
last
the
rest
of
them
next.