Description
- Prospective Asia/Europe contributor-friendly meeting times.
- Review the Last Known Good (LKG) proposal for e2e testing. (slides)
There are branching requirements for cloud providers. LKG testing can be done on each release branch. Version skew of both N-1 and N+1 would be ideal. There are two aspects of this: what version the cloud provider builds against, and what version of kubernetes it is tested against.
- For CSI we test against the last 3 versions of Kubernetes and have another job that tests against the head of Kubernetes (but don't block on it).
A
I'm going to switch over to present now. Hi, this is the SIG Cloud Provider cloud provider extraction working group meeting for September 9th. This is our first try at having a Europe/Asia-friendly meeting time, so we'll probably chat about that in a bit. This is a CNCF meeting, so please remember that we adhere to all the CNCF guidelines, many of which amount to: please be friendly, polite, considerate, and inclusive of your fellow contributors.
A
So please follow through with that. Our primary agenda item today has to do with how we're going to move forward with e2e testing. A broad description before I hand it over: we have feature gates that we would like to move from alpha to beta, which are going to disable all of the cloud providers in the kubelet, the controller manager, and the Kubernetes API server.
A
When we do that, we expect a lot of the tests to essentially break. This has to do with things like how you even bring up a Kubernetes cluster in many of our test environments, like the GCP test environment, and without that there are quite a few tests that just won't work. Toward that end, we need an alternate plan, and Joe and Kermit have been kind enough to come up with an initial proposal for us to walk through.
B
Thanks, Walter. I'm going to share some slides; I have a couple of visual aids, so they're going to be helpful for me. Let me make sure that's showing... okay, I think we're good. I think Walter has explained the motivation pretty well. The basic observation is that when we were in tree, it was really easy to test an in-tree cloud provider, because you had all of the latest code of the cloud provider and all of the latest Kubernetes code, and you could just make sure they kept working.
B
What we've seen is that that's prone to being done very inconsistently, or, increasingly frequently, developers find out about an incompatibility too late. Maybe they test right before they want to release, and now they're in this bind: they found a major incompatibility right before they want to do their release, they have to get it all sorted out now, and it's all an emergency-mode kind of urgent, when, if they'd found it when it happened weeks ago, it wouldn't have been an emergency at all.
B
You could have very calmly figured out the problem and gotten it sorted out; it would have been no big deal. It also forces developers to do a lot of toil: there are a lot of repetitive tasks required if you want to do it more frequently, and if developers make a mistake in any of those steps, you have to deal with that. They have to know how to do it, people are going to get frustrated, and so on.
B
It also really doesn't work if you want to do downstream testing, where Kubernetes is constantly checking whether it still works against some specific cloud provider, because Kubernetes already has way too many manual tasks as part of releases; adding another one to that is a pretty big ask and not something I want to ask them to do. The proposal instead is that we find some way to automate as much of this as we can.
B
It's probably testing against something like head of Kubernetes and the latest of yours, but it's always looking for the newest pairing it can, and whenever it finds the newest pairing it can, it's going to mark that as the last known good. So you're constantly tracking this thing, and developers can then use that last known good: when you submit a PR, some of your tests can use that last known good version of Kubernetes and test against it. That test should really only fail if one of two things happens: either the developer introduced a defect in their PR, or it's a flake. It shouldn't fail because of incompatibility, because you've already established that the base the developer is working off was known to be good with Kubernetes, so it's pretty clear what's going on. There is a problem, though, which is: what about if this process in the background that's looking for last known goods can't find one?
B
If head of your cloud provider versus head of Kubernetes becomes incompatible, then we need to somehow get people involved: the maintainers of that cloud provider really need to get alerted, they really need to get it fixed, and we really need to make sure there are good incentives in place for them not to just ignore it. One obvious suggestion would be to just block all PRs from merging on that cloud provider until it's fixed. It's up to each cloud provider to decide what to do, but that's probably a pretty sane policy. It also keeps things from compounding, right?
B
If you stop merging more things the moment you lose compatibility, then you probably have the minimal delta you can hope for in terms of fixing it. So, just to visualize that: at some point in time, on the left we have the Kubernetes commits, and on the right you have your cloud provider commits. Some pairing of those is a last known good, right?
B
Those have been tested, and we've identified that those two things were compatible up to that point in their histories. All subsequent pre-submits of that cloud provider can go against that Kubernetes version, and the only reason any of those pre-submits should fail is if any of the commits since the last known good introduced an incompatibility, because we're not actually moving the Kubernetes version there. So that's pretty stable testing, and then, of course, you also have some background process.
B
There's no guarantee that you're going to have an LKG that is just the previous commit to yours, but the more frequently you run it, the smaller the delta. And this can work both ways. I just showed it where you test your cloud provider against upstream Kubernetes, which is a fairly typical use case: you want to make sure your thing works against Kubernetes, and a lot of people need that. You could also do it in reverse: Kubernetes could actually test against a downstream cloud provider. I like to call this downstream testing.
B
There are a lot of systems that do this kind of thing when they need to. What you're saying is that this cloud provider is important enough to Kubernetes that we actually want a test signal on the Kubernetes side of whether or not we've lost compatibility with it. Whether that should be a pre-submit or post-submit failure, and how people react to that failure, is a topic for discussion, but just having the signal is often valuable, if you can afford to run all that testing. So, just to iterate some options for finding new LKGs: you could do post-submits, but if you do it on every post-submit and it's a high-activity project you're going to be doing a lot of testing, so that could become prohibitively expensive. For example, Kubernetes might not be able to test every single one of its commits against all the cloud providers in the world, right?
B
In fact it might not even be enough testing, because you're going to be jumping so far between Kubernetes commits as you do that testing. So option two would be: let's just do it periodically. Every N hours you're going to run a test to look for a new LKG. For high-activity projects that's pretty reasonable, right? You can constrain how often you're running this, so if you can afford to run it once a day, you run it once a day for low-activity projects.
B
This is kind of nice in that you get the best of both worlds, at least for a low-activity project: you guarantee that every time a developer does a commit, we quickly check whether that has introduced any incompatibilities, and we also make sure we do periodics as a kind of backstop, so we keep our testing up to date against Kubernetes changes on a relatively recent cadence. You could also do post-submit on both projects.
B
I don't know if that's necessarily needed; it's an interesting idea. Obviously for Kubernetes it would be too expensive; for two low-volume projects it might work, but that is kind of the extreme option. It means you're literally testing every single possible pairing, which is maybe too much, and I don't even know how you would hook up something to signal on both of those, but that would be a way to think about it.
B
We want to make it really clear to developers what is their fault and what is an incompatibility that was just found and has nothing to do with their PR. So when you look at your PR, what we'd like to do is have something like this: if our system that finds new LKGs is not able to find a new LKG, we'd like to say something like "the branch is frozen due to incompatibilities; go read here to learn more about what went wrong." That's very different from "your PR failed."
B
We want to make it clear that this is just a branch block, it's not due to your PR, and you just need to wait for the maintainers to get it fixed. Also, we need some way for developers to get a PR through that fixes the breakage, if needed. So we might have to introduce something like a /lkg-fix command that you can put on a PR to get it through or unblock it, or something like that; we're going to have to figure out what exact mechanism we use there.
B
I'm not sure if that's the right one, but we'll figure something out. One thing you'll notice here is that the developer still sees all of their tests passing; we're still saying "hey, your PR still passed." So in this case it's a really nice signal: you're waiting for the branch to get unfrozen, but otherwise your PR is ready to go. And with that I will hand it over to Kermit to talk about the implementation.
D
I think everyone should be able to see that. Yeah, I was just going to cover a couple of implementation details: how we're trying to approach this for cloud-provider-gcp, some thoughts about what might have to be tackled for this to get implemented in Kubernetes as well, and what other cloud providers might think about if they were to implement it.
D
The main part that underpins all of this is that we track the LKG version of Kubernetes via a file: we simply have a file, stored in git under version control, that contains the commit hash of what we consider to be the last known good version of Kubernetes. Updating the last known good version is then as simple as making a new git commit that changes the commit hash located in that file. I'll go over the post-submit and periodic tests we have in a second; essentially, they run a set of e2e tests and, when those succeed, they simply update that file. That's how we keep track of our LKG version.
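To make that flow concrete, a post-submit or periodic job along these lines could look roughly like the sketch below. This is a minimal illustration of the idea as described, not the actual cloud-provider-gcp job; the LKG file name, directory layout, and e2e test command are hypothetical placeholders.

```go
// Sketch: test the cloud provider against Kubernetes HEAD and, on success, record that
// commit hash as the new last known good in a version-controlled file.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

const lkgFile = "kubernetes-lkg-commit.txt" // hypothetical name for the LKG file

// run executes a command in dir and returns its trimmed stdout.
func run(dir, name string, args ...string) (string, error) {
	cmd := exec.Command(name, args...)
	cmd.Dir = dir
	cmd.Stderr = os.Stderr
	out, err := cmd.Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	k8sDir := "kubernetes"               // checkout of kubernetes/kubernetes at HEAD
	providerDir := "cloud-provider-repo" // checkout of the cloud provider repo

	// Resolve the Kubernetes HEAD commit we are about to test against.
	head, err := run(k8sDir, "git", "rev-parse", "HEAD")
	if err != nil {
		log.Fatalf("resolving kubernetes HEAD: %v", err)
	}

	// Run the provider's e2e suite against that commit (placeholder command).
	if _, err := run(providerDir, "make", "test-e2e", "KUBERNETES_COMMIT="+head); err != nil {
		// Tests failed: do not advance the LKG. The existing value stays in place and
		// pre-submits keep using it until a compatible pairing is found again.
		log.Fatalf("e2e tests failed against %s; LKG not updated: %v", head, err)
	}

	// Tests passed: write the new LKG commit hash and record the change as a git commit,
	// so anyone consuming the repo sees the current LKG under version control.
	if err := os.WriteFile(providerDir+"/"+lkgFile, []byte(head+"\n"), 0o644); err != nil {
		log.Fatalf("writing LKG file: %v", err)
	}
	if _, err := run(providerDir, "git", "commit", "-am",
		fmt.Sprintf("Update Kubernetes LKG to %s", head)); err != nil {
		log.Fatalf("committing LKG update: %v", err)
	}
	fmt.Println("new LKG:", head)
}
```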
D
Then there are our other pre-submit tests, which, as I think Joe mentioned earlier, use the current LKG; they source what the current LKG version is from this file as well. We considered having a global Google Cloud Storage bucket as an alternative storage mechanism for keeping track of what the LKG version is, but having that extra bucket is an additional piece of infrastructure that has to be maintained, and, potentially more importantly, it's not immediately clear to everyone else who's consuming cloud-provider-gcp what the last known good version is, because they would have to look into the bucket and get that information from there. That's part of the reason why the file solution was chosen instead. So we have a couple of different things that would have to be added into prow: a post-submit, a periodic, and a pre-submit. The pre-submit is how most developers are going to be interacting with the LKG system.
D
It will be sitting there in their PR, and they'll be able to see what the current status is with LKG compatibility. The post-submit and the periodic are the ones that are updating the LKG, but those are mostly behind-the-scenes things running in the background; they're not as visible, and they're not intended to be, from the GitHub PR view.
D
The post-submit just runs the e2e tests that we already currently have with cloud-provider-gcp against the current head commit of Kubernetes, and if those tests succeed, then it updates the LKG file that I was talking about earlier with that head commit. That's how we keep track of the LKG version. The periodic does the exact same thing.
D
It also updates the LKG file based on those e2e tests, but, as I think Joe also mentioned earlier, it ensures that we're updating our LKG version even if we don't get PRs for a little while. The pre-submit, on the other hand, doesn't do as much as far as testing goes; it's mostly just meant to be a window into the post-submit and the periodic. As long as the most recent LKG update succeeded, from either the post-submit or the periodic, the pre-submit is fine as well.
D
But if the most recent LKG attempt did not succeed, then we have a block. We can override that: we're going to have a prow plugin where you're able to type something like /lkg-fix into the GitHub PR, so that we're able to override things and basically have a way to get fixes for potential LKG issues through. So that's another part, the prow plugin.
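As a rough sketch of how that pre-submit gate and override could behave (an assumption about the eventual implementation, not the actual prow plugin; the status file and the override signal are hypothetical placeholders):

```go
// Sketch: read the version-controlled LKG file and, if the most recent LKG refresh did
// not succeed, report the branch as frozen (distinct from "your PR failed") unless an
// override such as /lkg-fix was applied to the PR.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

const (
	lkgFile    = "kubernetes-lkg-commit.txt" // hypothetical: commit hash of the current LKG
	statusFile = "kubernetes-lkg-status.txt" // hypothetical: "ok" or "failed" from the last refresh
)

func main() {
	lkg, err := os.ReadFile(lkgFile)
	if err != nil {
		log.Fatalf("reading LKG file: %v", err)
	}
	status, err := os.ReadFile(statusFile)
	if err != nil {
		log.Fatalf("reading LKG status: %v", err)
	}

	// Hypothetical override hook: the prow plugin would set something like this when a
	// maintainer comments /lkg-fix on the PR, so a fix for the breakage can get through.
	override := os.Getenv("LKG_FIX_OVERRIDE") == "true"

	if strings.TrimSpace(string(status)) != "ok" && !override {
		// A branch freeze, not an ordinary test failure: the PR itself may be fine, but
		// no head-vs-head compatible pairing currently exists.
		fmt.Println("branch frozen: no current LKG pairing; see the LKG documentation for details")
		os.Exit(1)
	}

	// Otherwise run the provider's tests against the recorded LKG commit, so the only
	// reasons for a failure here are a defect introduced by this PR or a flake.
	fmt.Println("testing against Kubernetes LKG commit:", strings.TrimSpace(string(lkg)))
}
```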
D
This
is
talking
about
kubernetes
kubernetes,
proper
and
not
in
you
know,
copyrighted,
gcp
or
something
I
think
some
different
challenges
would
have
to
be
handled.
I
think
the
community
would
have
to
make
a
decision
on
whether
to
go
the
post-submit
and
periodic
route.
The
way
that
cover
the
gcp
is
doing
or
post-emitting
periodic.
I'm
sorry
first,
is
just
a
periodic.
I
think
that's
something
that
has
to
be
talked
about.
D
I
think
there
would
also
probably
have
to
be
some
discussion
from
some
contributions
from
the
cloud
providers
on
the
provider
specific
parts,
because
each
cloud
provider
I'll
be
built
in
a
different
way,
it'll
be
tested
in
a
different
way
on
each
cloud
provider
will
typically
know
how
to
handle.
You
know
those
parts
best.
D
Additionally, on top of that, there will be an additional potential challenge in how to keep track of multiple LKG versions for the cloud providers, because everything I've discussed so far is just keeping track of one component's LKG with a single file. We would have to keep track of the LKG versions of all the different cloud providers and handle their tests independently with those separate LKG versions. So that would be something that would have to be tackled as well.
D
On a different topic, I want to talk about LKG adoption by the rest of the cloud providers. Definitely reach out if you want to, or are thinking about, adding LKG, or if this sounds interesting; I've listed some of my contact information on the slides. I'm a biased party, but I think it definitely is a very interesting thing to work on, and I think it'd be very useful.
D
In terms of what a new cloud provider looking to adopt LKG might have to implement: right now, with the current design that we've outlined here, the post-submit and the periodic are going to be relatively cloud provider specific, since those deal a lot with how each cloud provider builds and tests its individual parts. So those are going to be very specific to each cloud provider.
D
I think there could be some opportunities for code reuse, but we're probably going to need some early adoption to see which parts of the code base can actually be reused among cloud providers. I can more safely say that parts like the pre-submit, which basically checks the status of the post-submit and/or the periodic, should be fairly cloud provider neutral.
D
So I think everyone should be able to benefit from that work, as well as from the prow plugin; I think that will also be cloud provider neutral, so we should all be able to use that code. It's only the post-submit and the periodic, I think, that cloud providers would have to implement for themselves.
D
Lastly, one thing to keep in mind as we all work on this as a group: there's the matter of scaling LKG testing. One area where you might consider having a sort of group effort would be sharing Kubernetes builds, because we're going to be building a lot of different interim commits of Kubernetes that don't make it into official releases, and that could be very expensive from a compute and/or storage perspective.
D
So one thing that we, the cloud providers, might consider would be sharing or caching those builds going forward. I think there will probably be other areas where similar reuse happens, but as the design matures and the code matures, I think we'll get that figured out. I think that's all I had, Joe. I don't know if you wanted to open it up to questions, or what.
A
Yeah, thanks, Kermit. One thing to start with: there are tests which are pre-submit today which, presumably, in this scheme are going to become post-submit. When a test is pre-submit, it's going to block the PR from landing; if it becomes a post-submit test, then the code has merged. So have we given any thought to how we resolve that? Do we roll back? Do we just alert?
B
I think it depends, and this is something that we're going to very much want to learn. We're coming at this from: well, we have nothing now, so let's get this in place and then learn from there; that only strictly moves us towards a better world. But I think, if you are a cloud provider and upstream Kubernetes is broken for you, that's really up to the cloud provider, right?
B
If one cloud provider breaks, who makes the decision on whether that's important enough? I certainly don't; somebody's going to have to do that. So I think there are going to have to be a bunch of policy decisions made there. My expectation would be that most of these start as post-submit, and then, if there is interest, there could be a process by which some of them could become pre-submit, but I think the bar is going to be pretty high on what's expected of you if you're going to be in pre-submit. If you look through Kubernetes history, there's been a bunch of cloud provider testing that has been in pre-submit for periods of time and then been taken in and out again due to levels of activity from the maintainers and cloud providers. So there's a lot of history here.
B
This
isn't
actually
an
entirely
new
problem,
like
we've
been
doing
this
and
there's
already
a
bunch
of
learning,
so
I
would
expect
I
would
expect
something
along
those
lines,
but
the
community
is
going
to
have
to
make
some
decisions,
and
this
this
is
probably
going
to
have
to
lead
some
of
that.
A
If other people have questions, please chip in, but while we're here I will keep asking until someone raises their hand. Have we given any thought to standardizing some sort of interface for how you bring up a cluster? I don't think there are any cloud providers that want to do kube-up, even though that's sort of the standard that we have in k/k, and I wonder if what we want is a standard hook, for lack of a better term.
B
I think that kind of thing is awesome. There are a bunch of different frameworks that do it in different ways, and different people have different affiliations with them, so I'm going to leave that as a bit of an auxiliary or complementary task to this, but I am hugely supportive of it. I think at some point we're probably going to do something for cloud-provider-gcp where we do something better, and then we're going to have a vested interest in making sure that the capabilities are there.
B
But
we
do
want
to
make
sure
that,
like
whatever
we
do
for
lkg
testing
works
good
against
the
things
that
people
are
using
today.
So
if
you
are
using
something
that
is
not
cube
up
and
you
would
like
to
use
lkg,
I
would
strongly
recommend
reaching
out
to
kermit
and
working
with
him
as
he
implements
this,
so
that
we
can
get
full
support
for
whatever
you're
using.
C
I mean, no questions from me. I generally think this sounds like... it's not coming through.
A
Yeah, definitely, thank you both, Joe and Kermit. Just for everyone's information: there are quite a few folks invited who have indicated they will probably show up to the afternoon session, so I'm hoping that some of the architecture and builds folks, people like Dims, will be here. Some of the API machinery folks have evinced an interest.
A
I
also
have
folks
from
six
storage
and
it's
quite
a
few
of
their
tests
that
are
that
we,
we
suspect,
are
going
to
be
problematic.
In
addition,
the
csi
effort
is
in
very
very
much
in
parallel
to
the
rest
of
the
cloud
provider
extraction,
so
it
would
be
good
to
make
sure
that
we're
all
in
the
same
sync,
the
other
thing
I
will
mention
just
going
further
on
what
I
started
with.
A
We do have two feature gates that we're interested in: the general cloud provider extraction feature gate and the more specific credential provider feature gate. I would expect, no, we know for a fact from the existing alpha feature gate test runs, that when those get turned on a bunch of stuff fails. So I would suggest that one thing that might be good for Joe and Kermit would be to go back and take a look.
A
Try flipping one of those runs to turn those alpha feature gates back on, just temporarily or on a local run, and take a look at which tests are failing. I'm not saying that we necessarily need to debug them, but I think that list of failed tests is pretty indicative of the first things that we need to be fixing with this sort of effort.
C
I've got just one quick question that I thought of here, and maybe I missed this in the deck, but this testing is specifically talking about integration on one specific version; we're not talking about any sort of upgrades. We're not doing any sort of upgrade testing around this, or is that part of it as well?
A
It's a great question. I will turn it over to Joe in a minute, but there is at least some upgrade testing in the e2e suite. So the interesting question then becomes which of our tests in the k/k e2e suite can remain in the e2e suite and which ones fail if you turn off the cloud provider and would need to be migrated out; I don't know. And also, do we want to leave the possibility for some tests to be in both places, where they don't actually fail?
B
Out of these runs, if you are going to run the upgrade tests, you're going to have to be really careful to make sure that the upgrade tests understand which version to use for old and which version to use for new, so that they're testing something sensical.
B
I
wouldn't
trust
it
to
do
that
right
out
of
the
gate,
with
lkg
testing
you're
you're
going
to
get
equip
you're
going
to
get
roughly
the
equivalent
of
like
head
testing
of
the
two
projects
and
you
you
might
not
get
like.
I
don't
know
if,
like
when
you
get
close
to
an
rc
in
kubernetes,
if
you're
gonna
get
the
right
thing
or
not,
so
that
would
be
something
to
watch
out
for,
but
yeah
you,
you
could
and
probably
should
be
doing.
Some
upgrade
testing.
C
When I see "last known good" testing, I'm kind of expecting to see a matrix of all the different versions, so I can plot it out and say: okay, I was running version X of this and version Y of that, and I can tell that they work well together. And then, when we add in the upgrade mechanism, it adds an extra layer of confusion to the last known goods. I think that's just confusing.
B
Yeah, I think it's going to be confusing for people until they build their matrices out for themselves, and that makes sense, because you could do LKG testing on your master branch and on your release branches, and so your matrix might look good. This is just another dimension that we're going to add to it. So I think it's going to take a while for people to shuffle that out and figure out what other dimensions they want to get sorted.
A
Very cool. The other thing I will throw in, just as one thought: when we first do it, if you decide to go with upgrade testing, there are two versions of upgrade testing. One is very short-lived: if you're an in-tree provider transitioning from pure k/k to whatever your system is, that is essentially going from KCM to KCM plus CCM, and from kubelet to kubelet plus credential provider.
A
That is in some ways a one-off upgrade that hopefully you never have to go through again, and it's going to give you different upgrade results than once that's been done: going from oranges to oranges is different than going from apples to oranges. So I question whether you even want to automate that special upgrade; you may, but you're testing for a one-time event versus a continuous event.
C
Yeah, I think that's the one thing about all this: a year and a half or two years from now, when all this in-tree stuff is completely gone, this whole notion of the most complex upgrades from the old to the new is something we're going to have to completely de-cache from memory, or whatever.
B
Yeah, we're going to have to re-learn how to do a lot of things. I think this is a stepping stone; it's going to be a multi-year thing.