From YouTube: SIG Cloud Provider 2022-07-06
Description
Agenda: https://docs.google.com/document/d/1OZE-ub-v6B8y-GuaWejL-vU_f9jsjBbrim4LtTfxssw/
List of failing tests due to disabling cloud providers:
https://github.com/kubernetes/test-infra/pull/26747
[bridgetkromhout] test triage reminder - see https://github.com/kubernetes/kubernetes/issues/109913#issuecomment-1123973475 for Nick’s comment about breaking it out into multiple issues per-provider
[bridgetkromhout] cherry-pick deadline this week! https://kubernetes.io/releases/patch-releases/#upcoming-monthly-releases
A
All right, welcome to the SIG Cloud Provider meeting. It is July 6. Please be mindful of the CNCF code of conduct, and we'll go ahead and get started.
A
Add yourself to the agenda and/or the attending list, and please add any agenda items as well as subproject updates, if you have any. We will start with bug triage.
A
I don't know, we haven't really heard back. I mentioned that it was something that could be optimized and asked, you know, is it worth spending effort, tell me more about what is going on, and the person never responded. So it doesn't seem like it's burning them right now, or else they'd probably be more responsive.
A
So I guess we can just kind of sit on it. I don't know what to do, though, since it's half triage-accepted. Do we have a procedure here?
C
Sorry, quick question, Nick: is this something that remains a problem once cloud provider extraction is done?
A
So, as the cloud provider is right now, no, it shouldn't be a problem, or at least it's not guaranteed to be, and likely is not, because this is basically hitting the metadata endpoint to get addresses, which, as you know, is done by the cloud node controller.
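For context, the addresses in question are what the cloud node controller fills in on each Node object by calling the provider's InstancesV2 interface from k8s.io/cloud-provider. A minimal sketch of where that metadata-endpoint call lives, using a hypothetical provider with placeholder values standing in for a real metadata query:

```go
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	cloudprovider "k8s.io/cloud-provider"
)

// metadataInstances is a hypothetical InstancesV2 implementation. The cloud
// node controller calls InstanceMetadata to populate node addresses, so once
// extraction is done, this call path (not the kubelet's in-tree code) is
// where the metadata endpoint gets hit.
type metadataInstances struct{}

var _ cloudprovider.InstancesV2 = metadataInstances{}

func (metadataInstances) InstanceExists(ctx context.Context, node *v1.Node) (bool, error) {
	return true, nil
}

func (metadataInstances) InstanceShutdown(ctx context.Context, node *v1.Node) (bool, error) {
	return false, nil
}

func (metadataInstances) InstanceMetadata(ctx context.Context, node *v1.Node) (*cloudprovider.InstanceMetadata, error) {
	// A real provider queries its metadata service here; the values below
	// are placeholders for illustration only.
	return &cloudprovider.InstanceMetadata{
		ProviderID: "fake://" + node.Name,
		NodeAddresses: []v1.NodeAddress{
			{Type: v1.NodeInternalIP, Address: "10.0.0.1"},
		},
	}, nil
}

// main exists only so the sketch compiles as a standalone program.
func main() {}
```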
C
So my suggestion, and you are welcome to do what you want with it, but my suggestion would be to be a bit more explicit and actually say: we believe this problem will go away in a few releases; once cloud provider extraction is completed, you shouldn't face this problem if you're using the external AWS distribution found in kubernetes-sigs/cloud-provider-aws; and as such, unless there's an imminent need, we don't plan on fixing this in-tree.
A
Right, unless someone raises an imminent need, please.
A
So I'll go through subprojects. I don't have anything for AWS, and I don't think Kishore is on, okay. So let's go to Azure.
B
Hey, so the very cool issue that Nick opened back in May about cleaning up test failures: we did some work on it, and I put a link to the few that we still have that we need to diagnose and clean up. Thank you for leading the charge on that, and we're going to keep getting that testgrid to green.
D
As part of updating the repository to the 1.24 version, which is something that should happen soon, I've updated the documentation about the bumping process, which added some additional scripts that should help with the process, so it will not be so problematic next time we're handling this issue.
C
I did, I need to actually fix that PR, but I have been promising to create that PR for a while, so I went ahead and actually created it. It's essentially adding, so let me step back, sorry. There are two feature flags that we have, to test what happens when you explicitly have the cloud provider set to external, and right now we have nothing in testgrid or anywhere else...
C
...that is actually testing what happens to all the tests in k/k when those feature flags are turned on and the in-tree plugins are off.
C
Yeah, I apparently had a copy-and-paste error; I needed to fix it anyway. You are correct: yes, the second one should be disable credential provider. But the idea is, we will shortly have a testgrid that will actually show us all of the tests that should be failing under that circumstance.
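To make the two gates concrete, here is a minimal sketch of how a component might resolve them with k8s.io/component-base/featuregate. The gate names match the real DisableCloudProviders and DisableKubeletCloudCredentialProviders feature gates, but the specs and the hard-coded gate string below are illustrative; a real component parses them from its --feature-gates flag:

```go
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

const (
	// The two gates under discussion; the names match the real Kubernetes gates.
	DisableCloudProviders                  featuregate.Feature = "DisableCloudProviders"
	DisableKubeletCloudCredentialProviders featuregate.Feature = "DisableKubeletCloudCredentialProviders"
)

func main() {
	gates := featuregate.NewFeatureGate()
	if err := gates.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		// Illustrative specs, not copied from k/k.
		DisableCloudProviders:                  {Default: false, PreRelease: featuregate.Alpha},
		DisableKubeletCloudCredentialProviders: {Default: false, PreRelease: featuregate.Alpha},
	}); err != nil {
		panic(err)
	}

	// A real component would receive this string via --feature-gates.
	if err := gates.Set("DisableCloudProviders=true,DisableKubeletCloudCredentialProviders=true"); err != nil {
		panic(err)
	}

	fmt.Println("in-tree cloud providers disabled:", gates.Enabled(DisableCloudProviders))
	fmt.Println("in-tree kubelet credential providers disabled:", gates.Enabled(DisableKubeletCloudCredentialProviders))
}
```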
C
So hopefully I'll have that running this week, and we will see just how many tests are failing. Just so we're very clear about what the expectations are: it's not necessarily that all of the tests have to be fixed.
C
The first thing we need to do is get that list of tests and triage them. Some of those tests will probably need to be moved, to sort of become distro-level tests, whether it's cloud-provider-aws, cloud-provider-gcp, cloud-provider-azure, cloud-provider-alibaba, whatever, that's fine. Some of the tests we may decide we just don't need, and the rest of the tests will have to be rewritten so that they don't depend on the cloud provider.
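As a rough illustration of that triage: provider-coupled e2e tests in k/k are typically guarded today with the e2e framework's skipper, so grepping for that guard is one way to build the candidate list. A hedged sketch of the pattern (the spec text is made up; SkipUnlessProviderIs is the real framework helper):

```go
package e2e

import (
	"github.com/onsi/ginkgo/v2"

	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

// Tests carrying a guard like this are natural candidates to move down into a
// distro repo (cloud-provider-aws, cloud-provider-gcp, ...), while tests that
// can drop the guard would instead be rewritten to be provider-neutral.
var _ = ginkgo.Describe("[Example] provider-coupled behavior", func() {
	ginkgo.It("should exercise a cloud load balancer", func() {
		e2eskipper.SkipUnlessProviderIs("gce", "aws", "azure")
		// ... provider-dependent assertions would go here ...
	})
})
```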
C
As far as I can see, those are essentially our three options. All those tests pass today, but they all pass with the cloud providers. So, presumably, if turning the cloud provider off fails them, then yes, there's a very strong chance it is because we turned them off, and so we need to work out what to do.
A
I'll definitely follow that one. So once we have the list of failed tests, how are we going to get action based on that list, or get people to look at it?
C
So my general feeling is that we, as a SIG, will need to send out a mail with the list of tests, explaining the problem, and my thought is we'll start by sending it to sig-release and sig-architecture, and we need to basically get an agreement on next steps. My suggestion on the next steps is going to be that we send a shortened version of those tests that are currently depending on the cloud provider, saying: our suggestion would be one of these three things, you're welcome to come up with something we missed, but how would you like them dealt with? And it's got to be an agreement from the owning SIG, for each test, on how they want them dealt with.
C
This is then going to dovetail back to sig-release, assuming that some people want to run at the distro level, because right now there is practically no way to run tests at the distro level based on an upcoming k/k release. And just to be clear what I mean by that: if we go back to Jacob's last note on the changes that Jacob's making, he's talking about a set of scripts that will allow his team to automatically update to 1.24.
C
Now, that's great for that distro, but it sort of shows you how far behind things are. I think GCP, and in practice my guess is every one of the other cloud provider distros, is on what I'm going to call the kernel, which is the code that's actually in k/k. And so, if what we're trying to do is track whether or not a recent k/k change has broken some cloud provider dependency for testing...
C
...we need to be able to almost immediately run the latest k/k code in a distribution, and that's going to require changes to the release channels: to actually be able to run those tests with a build of Azure, vSphere, you know, GCP, AWS, that has the latest, and I do mean latest, version of the k/k code.
A
So yeah, I'm just trying to take some notes here on what we need to do. I get that we need to go to SIGs with failing tests and kind of ask how they want to proceed with those. But in the beginning you mentioned sig-architecture and sig-release, I think; did you have specific questions that need to be resolved by talking to them, or...?
C
Well, I think anything that crosses SIG boundaries, we're basically going to need sig-architecture involved. And I think, if we're talking about needing to make changes to the release channel, to be able to do things like provide, you know, pre-release k/k images that can be consumed by the distros, and then mechanisms to reabsorb test results...
B
Actually, that's raising interesting questions. I have a question: when you say latest, are we talking about a point in time, like the actual point release, or are you talking about pre-release?
C
So if I make a PR and I submit it, all these tests have been run before it even gets merged into the code base. Now we're kind of saying, well, we probably can't do that, so it'll probably merge, but at least we'd like some signal that it broke something in a reasonable time frame, even if that's just within a day, right?
C
So that's already a fairly substantial step down from where we are today. And, you know, if it's not continuous, then I'm trying to imagine what it would be like if, at the end of a release, we discover all the things that have been broken on the cloud providers in the last three or four months, and that sounds pretty horrific.
A
Okay, yeah, I don't know how to summarize that, but basically we want, as good as we can get it, feedback, test feedback, test results, whatever we're going to call it, from distros.
A
Let's say test results from distros, in place of the pre-push testing that happens today, which is better in the sense that we get, you know, a clear signal: we know when we break something. And we need to take our plan to sig-architecture and sig-release to kind of solve that problem.
A
All right, do we have a timeline on that? Like, we're pretty close to getting this list of failing tests, so I guess we'll be kind of, I guess, talking about this next meeting.
B
Oh yeah, I just added a couple, based on the things we were actually talking about when we went through the provider updates. If people are wondering what link I was using to get to all of the, sorry, Kat, what I was using to get to the list of tests that we were in the process of fixing, it was that great issue that Nick had opened.
B
I think that, especially if you were only looking at the things that needed triage, maybe if you haven't looked at this one lately, just go back and take a look. He has links to all of the awesome lists where you can go and find out what your current failing tests are, and we had already worked through a couple of them, and I think we should, well, we have, you know, a moment to work on some more of them, yeah.
A
Awesome, thanks for that link. And then: cherry-pick deadline this week.
B
I actually just went over to the release management channel on the Kubernetes Slack to remind the release team that, hey, we needed to get a couple of our cherry-picks in. I think it's one of those things where, if I forget to go and remind people to review our cherry-picks and get them in, sometimes they don't get in, and then you have to wait another whole cycle, and every once in a while it ends up being...
B
Oh yeah. Especially, one of the things I notice sometimes is, you know how, when you're reviewing all the things that you've been getting in lately, and then somebody is like, "oh yeah, we should cherry-pick that"? "We should" is not actionable or clickable. So go make sure you actually cherry-picked the things you intended, because, yeah.
A
Yeah, seriously, all right. Well, I think that wraps up the agenda for today. Does anybody have anything else they'd like to quickly chat about before we drop?