From YouTube: 20200814 Cluster API Office Hours
A: Hit record, please. Okay, thanks. Hi everyone! Welcome to the Cluster API office hours. Today is Wednesday, October 14th. Cluster API is a subproject of SIG Cluster Lifecycle. Please adhere to the code of conduct, the CNCF code of conduct; it's at the top of the document.
A: If you haven't read it: basically, just be kind to everyone, and raise your hand if you'd like to speak; you can use the raise-hand feature on Zoom. Okay, so we'll start with the PSAs. Okay, so the first one:
It came out of a conversation yesterday on Slack about the recent bump to the minimum Kubernetes version for management clusters, which was moved to 1.19.1 in the main branch of CAPI, and basically, out of that conversation, we decided...
A: We should have a discussion about the policy around minimum version upgrades and dependencies in general. So I think Vince opened an issue, and he'd like some feedback around what these changes should look like and what the policy should be, so we can have a process that strikes a good balance between allowing for rapid change, especially in periods of breaking changes like right now, when we're building up towards v1alpha4, but also giving enough time for everyone to review and raise objections, if there are any. I think that kind of summarizes it, Vince. Did I miss anything? No? Okay.
B: Yeah. One thing already was merged, which was the 1.19 minimum for v1alpha4; we're targeting Q1 2021. So if you have any questions, comments, or concerns on that, or if you want us to consider reverting it, that's open for discussion. So let's maybe take another week or so to understand if there is any drawback, and then, yeah, we can discuss it.
C: Yeah, Andy. I would say the only reason to force us to move the minimum Kubernetes version is if there's a new feature, like server-side apply, that we might be taking advantage of in a critical, non-optional manner, and I don't think that today we have made any changes in either controller-runtime or Cluster API that require functionality beyond 1.16. But I'm not positive. So I wonder if we could, or should, just roll this back now.
C: Jack, the current minimum is 1.16 because we need CRD v1, I believe.
A: Yeah, so I think one of the main concerns that came up was that some of the managed Kubernetes services don't support 1.19 yet, and so that would potentially mean you can't use a managed service as your management cluster. But that would only be true if they still don't support it when we actually release this change, which should be Q1 2021, and chances are that by then they will be supporting that version.
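For anyone scripting this check themselves: the concern above reduces to comparing the management cluster's reported server version against a required minimum. A minimal sketch, where the `ver_ge` helper and the hard-coded versions are illustrative and not part of any actual CAPI tooling:

```shell
# ver_ge VERSION MINIMUM: succeed if VERSION >= MINIMUM.
# Both arguments are "MAJOR.MINOR" strings; sort -V does a natural
# version comparison (GNU coreutils).
ver_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# In practice the first argument would come from the cluster, e.g.:
#   kubectl version -o json | jq -r '.serverVersion | "\(.major).\(.minor)"'
# (some distributions report a minor version like "19+", which would
# need trimming first).
if ver_ge "1.19" "1.16"; then
  echo "management cluster meets the minimum version"
fi
```

Note that a plain string comparison would get this wrong (`"1.4" > "1.20"` lexically), which is why the sketch leans on version sort.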
E: Mike, perhaps I missed this, but I guess: do we need to bump the minimum version to 1.19 with the changes that are coming, or could we still...
B: I can answer. So there were two reasons to update. One is generic: given that we're targeting Q1 2021, this will put us at n minus two, if my math is right, because 1.21 will be released at that point, so there is a benefit to just being on a newer version of Kubernetes; that's just good practice in general. The other thing is that we were talking about supporting server-side apply.
B
Unfortunately,
there
is
bugs
in
one
sixteen
zero,
two
sup
four
servers
that
apply,
which
we
cannot,
and
I
think,
like
117
or
118,
got
fixed
and
I'm
not
sure
if
it
was
back
quarter.
So
the
move
to
119
was
also
like
in
yeah
like
having
the
same
controller
runtime
version
for
the
bundle
dependencies.
B
There's
also
like
a
lot
of
concerns
that
I
have
like
around
testing
like
if
we
have
like
too
much
of
a
skew
like
how
do
we
test,
like
all
upgrades
like
this,
is
going
to
incur
like
it
more
and
more,
maybe
end-to-end
that
we
have
to
write
and
just
generic
maintenance.
Given
that
we're
alpha.
Personally,
I
would
like
to
reduce
the
maintenance
to
like
a
fixed
set
of
versions
that
kind
of
like
it
gives
us
like
a
little
bit
more.
B: ...and it reduces the amount of things that we have to think about in each release as well. But I'm open to writing this down and getting feedback; that's why I brought it up.
C: At the same time, I think what really matters to any project that's running on top of Kubernetes is: what APIs do you need? What feature gates do you maybe need enabled, or features that have graduated? And I think with 1.16 we're generally at a spot where we're good. Maybe we only test 1.19 going forward, and then 1.20 and whatever else, but I don't think that we need to necessarily say, at this point, that you must run Kubernetes 1.19 for your management cluster or else.
A: Yeah, also, I just want to point out that it's also possible that the minimum version we require in the main branch right now is different from the minimum version when we release v1alpha4. So we could also bump the version down the line, like in January or February, once more versions come out, because right now 1.19 is the latest minor version, right?
H: Yeah, I guess my question is: are we strictly concerned with this as a documentation issue, or is there also an intent, or a willingness, to see this as an API, so that it's discoverable programmatically? So, for example, prior to upgrading controllers on a cluster.
B: Yeah, so I wanted to answer a couple of people here. In terms of features that we use: yes, things might work. So maybe this is a matter of defining not necessarily the minimum version that we support, but more the suggested minimum version that you should run these controllers on, and that we test going forward, and then just saying: if you run on an older version, it might work, but it's also outside of community support.
B: The other thing, about adopting a set cadence (like 1.16, 1.18, and 1.20): that might not work for us, because, while we did say that we might now release two versions per year, we're not on a set cadence; we actually plan and then release, so this would actually have to be decided before planning. So that might not apply to us.
B: So for this, I'm happy to revert. At the same time, we might miss out on some things, like server-side apply, which I would personally like to push forward, given that we have a lot of custom code in the patch helper today just to handle conditions, for example. I will document this, and at the next meeting we can show the PR to the contributing guide, and we can just discuss it there, if that works for everyone.
A: Sounds good. All right, let's move on. Thank you for taking notes, whoever's taking notes. Okay, so Christopher, you have the new provider repo?
I: This is a reimagining of what you might have heard of, and probably read about on the Kubernetes blog at some point, called Virtual Cluster. It was a project that Alibaba kicked off, and we started helping them out, and so we're trying to repave the actual control plane API and provisioning to be under Cluster API. In essence, it's an interesting project because it brings nested Kubernetes clusters: control planes on Kubernetes clusters that provision into themselves, using this really fun little sync controller...
I: ...that will take a workload and sync it down to the supercluster to actually schedule it. So it brings on some new worlds. I wanted to announce here that we actually have the repo, and we're polling for an actual weekly time slot. If you're interested in being a part of it, please go fill out that Doodle, and we'll get a weekly call kicked off, and then we'll start dev'ing on this thing.
A: Super cool, thanks for sharing. We'd love to see a demo at some point, if that's something you'd consider.
A: Going once, going twice... all right. Fabrizio and Warren, an update on the management cluster operator?
J: We resolved some of the comments, and we kept some of the comments in there, but right now, yeah, we've filled out most of the operator lifecycle sections, and the idea is that hopefully by early next week we can move the contents into an actual PR and sort of get started on the next step in the CAEP process.
J: No, I think, regarding the scope, at least in general, we didn't see any major blockers. The big things were that we put the move operation, and also the change to multi-tenancy, in the future goals section or the non-goals section. I think those are the two big things, but not much pushback or anything, yeah.
J
If,
if,
if
these
words
have
have,
you
know
tickled
your
ears
a
little
bit,
you
know
change
to
multi-tenancy
and
move.
Please
go
to
the
dock
yeah.
I
think
we've
kind
of
explained
it
and
yeah
shouldn't
be
no
major
blockers.
So
far,
though,.
A: Okay, thank you. Any questions for Warren, our favorite zoo?
A: Okay, James, do you want to update us on Windows?
L: Yeah, thanks. So I think we're... A couple of weeks ago, we talked about the retry support and adding an OS type field.
L: If the retry support in kubeadm for 1.19 is sufficient, then maybe we don't need to add the OS type field to the infrastructure machine as an optional field. Yeah, I just thought I'd get some thoughts there, and other than that, I think we've addressed all the issues, and it's probably ready for final review.
K: We asked, for instance, Naadir to try again, now with the new release of kubeadm, to see if this is enough or not. What is really important for me is that, in case we see failures in Kubernetes...
M: Since my name was referenced: yeah, I'm going to remove the retry joins from CAPA, so we start getting signal for 1.19 releases. Given that Windows is planning GA for 1.20 (yes, if I'm correct), then we don't need to support the older versions anyway. So yeah, I'm going to start getting signal on kubeadm without the experimental retry joins, and then raise any other issues in kubeadm; if we do need further fixes, we can get them on track for 1.20.0.
A: We would have done that a while ago, but yeah, if we can get rid of the experimental retries, that would be great.
K
Yes,
we
we
had
a
first
set
of
of
retries
and
basically
increase
the
time
out
in
equivalent
mean
one
119.
F
So
sorry,
we
backported
all
the
requested
fixes
to
the
support
skill,
which
was
at
the
time
117.
Sorry
it
was
116,
but
we
backported
117
as
the
minimum
version.
B: This is a good example of how we should document our support matrix, and when we should deprecate features or add new features, because we span across a lot of versions of Kubernetes.
M: Yeah, I think it's okay to say that we keep the script for the lifetime of v1alpha3, but we start removing it from testing as we test newer versions of Kubernetes, with the plan that for v1alpha4 we shouldn't need it anymore.
B
So
that
depends
on
like
do.
We
want
to
keep
like
allow
users
to,
for
example,
spin
up
a
1,
16,
0
workload,
cluster.
A: Okay, cool. All right, so I think that covers it for now. Let's move on: clusterctl as a kubectl plugin, check-in?
D: Hey, good morning. This one will be quick. This is essentially me announcing that I'm standing down from this issue. After formulating some problem statements and thinking about it, it seems like moving clusterctl to a kubectl plugin right now would be putting the cart before the horse, kind of thing (the horse being kubectl). That's it for some background.
D
So
I
think
that
if
we
were
to
put
cluster
ctl,
as
is
into
cube
ctl,
it
would
solve
one
problem
potentially,
which
is
that
we
could
offer
users
a
single
cube,
ctl
front
end
for
doing
these
ux
gestures,
but
cluster
ctl,
as
is,
is
not
in
my
observation,
totally
standardized
across
cloud
providers.
So
I
think
that's
the
thing
that
we
should
address
first,
so
really
I'm
just
saying
that
I
think
we
shouldn't
do
this
work
right
now
and
I
have
a
closed
issue.
D
So
if
someone
else
disagrees
and
wants
to
take
this
on
or
I'm
happy
to
also
do
it
if
I
can
be
convinced
otherwise,
that's
it.
A: Cool. Okay, Jen: an option for scale-in for KCP?
N
Okay,
thanks
and
hello,
everyone
yeah
we've
been
discussing
about
adding
scaling
for
for
kcp,
to
able
to
scaling
during
upgrading
and
so
there's
a
there's,
a
link
to
issue
and
also
linked
to
google
google
doc.
That
I
would
definitely
would
like
to
see
more
comments,
and,
and
probably
we
can
starting
to
do
a
actual
proposal
against
the
current
kcp
proposal
later
on,
and
but
if
you
have
any
ideas
or
concerns,
what
might
be
the
while
we
are
scaling
in
this
is.
A: Thanks. Could you please add the link to the open proposals section at the top of this doc? And the same for anyone else who has an open proposal at the moment: if you can track them here, that'd be great, thanks. Any questions on that?
A
All
right,
fabricio,
brainstorming,
doc
for
qbm
library.
K
Yeah,
thank
you.
I
will
only
need
to
point
out
that
I
started
this
document,
which
is
basically
around
the
idea
of
the
kubernetes
library
and
being
a
cluster
api,
one
of
the
main
consumer.
I
I
really
would
like
to
get
feedback
from
this
set
of
people,
and
the
document
is
focused
basically
in
in
two
topics.
A: Oh, is the issue the doc? Or no, the doc is here. Okay, got it, thanks. Any questions for Fabrizio on that?
A
All
right
andy
call
for
proposal
implementation,
help.
C
Thanks
yeah
we've
got
a
fantastic
group
of
folks
here
and
we've
got
lots
of
ideas
and
proposals
for
v1,
alpha,
4
and
beyond,
and
so
I
just
want
to
reiterate
some
calls
from
previous
meetings
that
if
you
are
interested
in
helping
shape
a
feature
request
or
a
proposal,
you
know
we
have
a
roadmap.
C
Vince
does
have
an
open
pull
request
to
make
some
changes
to
the
road
map,
to
clarify
what
we're
looking
to
do
for
alpha
4.
So
we'll
get
a
link
in
the
dock
to
that
in
a
minute.
But
if
you
have
a
chance,
please
take
a
look
if
you're
interested
in
working
on
a
proposal
working
on
code
working
on
documentation
just
helping
out
in
general,
I
think
that
we
have
a
lot
of
folks
too.
C
Hopefully
can
make
some
time
to
mentor
you
if
you
need
it
or
if
you've
already
got
experience,
that's
awesome
and
if
there's
something
that
interests
you
please
feel
free
to
comment
on
the
issues
or
reach
out
here
on
slack,
basically
just
hoping
to
get
more
folks
involved
if
you're
looking
for
it.
Thanks.
A: Yeah, thanks for bringing that up, Andy. Does anyone have any questions about this, or in general, like if you don't know where to start, or how the proposal process works, or anything like that?
A
Okay,
if
you
have
any
questions-
and
you
don't
want
to
ask
here-
I
feel
free
to
like
ask
offline
on
slack
or
just
dm
me.
Any
one
of
us
will
be
happy
to
help
you
all
right
and
I
think
we
have
a
demo
at
the
end
ben.
O
Yeah,
so
we
can
actually
play
it.
I
just
recorded
it
and
put
it
on
ascii
cinema
that
link,
so
we
can
actually
play
it
through
there.
I
don't
even
need
to
share
my
screen,
but
or
I
can
share
my
screen.
A: Oh, oops, I think... hold on.
O: Let me close some other tabs. I've been working on getting the cluster autoscaler to work with CAPI, taking over, or helping with, some work that Jason and Mike McCune have been working on, and one of the things we're trying to do is get the upstream testing story a little bit improved. I had this idea to try and use Kubemark as a CAPI provider, to be able to spin up what they call hollow nodes.
O: So basically, Kubemark acts as a kubelet to give you nodes in a cluster, but without actually spinning up a real kubelet; it doesn't spin up real pods and doesn't need real VMs or anything. So it's nice for scale testing. And yeah, it's at the point now where it works; it does the job. I'm sure there are still a lot of rough edges. So I put together this little demo: spin up a kind cluster. It's a very accelerated demo.
O
This
is
actually
one
of
the
slowest
parts
because
of
the
spinner
here,
but
so
the
way
it
works
with
cappy
is
we
still
need
a
real
control
plane
in
order
to
for
the
workload
cluster.
So
what
I
do
in
this
demo
is
install
both
kappa
and
this
cube
mark
provider.
O: But that's just because this demo cut out a bunch of waiting. And so this (let me pause it here), this is just what I have in my clusterctl config file, pointing at the GitHub repo, which I linked in the notes, to add support for it, because it's not in clusterctl by default right now. But then, yeah... so I, oh yeah.
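For context, this is roughly what registering an out-of-tree provider in the clusterctl config looks like. The entry below is a sketch: the provider name and repo URL are placeholders, not taken from the demo or the linked notes.

```yaml
# ~/.cluster-api/clusterctl.yaml
# Registers a provider that clusterctl does not ship by default, so that
# "clusterctl init --infrastructure kubemark" can resolve it.
providers:
  - name: "kubemark"
    type: "InfrastructureProvider"
    # Placeholder URL: point this at the provider's released
    # infrastructure-components manifest.
    url: "https://github.com/example/cluster-api-provider-kubemark/releases/latest/infrastructure-components.yaml"
```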
O: I kind of skipped that too. Using clusterctl, I'm now creating a MachineDeployment using the Kubemark infrastructure provider, and using the same cluster that I already created in the previous steps.
O
So
it's
kind
of
this
weird
thing
where
it's
a
hybrid,
like
provider
cluster
like
one
machine
via
aws
and
then
other
machines
via
cubemark,
and
so
what
we
can
see
now
is
that
it
started
a
cubemark
pod
on
the
management
cluster
in
the
same
namespace
as
where
the
machines
are
being
created
and
then
on
the
bottom.
Here,
I'm
targeting
the
workload
cluster
and
you
can
see,
there's
a
cubemark
node,
that's
joined
it
in
addition
to
the
aws
one,
and
then
we
can
scale
it.
O
And
this
part
I
mean
it's
still,
this
part
actually
is
really
fast,
even
though
I
think
the
demo
is
still
cutting
out
time.
The
way
ascii
cinema
works,
but
yeah,
so
you
can
see
like
more
pods
are
being
created
on
the
management
cluster
and
as
they're
coming
up.
The
notes
are
appearing
on
the
workload
cluster.
A: Thanks. Any questions for Ben? I can't see the participants, hold on.
A
I
have
one:
how
did
you
get
the
hybrid
cluster
thing
to
work
like
having
mixed
notes,
because
that's
not
something
we
a
lot
of
people
typically
ask
about
this
doing
this
with
like
other
providers,
and
it's
not
something
that
we
support
right.
O
Yeah,
so
I
think
it
kind
of
takes
advantage
of
something
that
we
have
in
cappy,
which
is
like.
We
have
that
guarantee
that
the
management
cluster
should
always
have
connectivity
to
the
workload
cluster,
and
so
because
of
that,
and
because
these
cube
mark
pods
are
running
on
the
management
cluster,
they
can
talk
to
the
api
server
to
register
themselves,
yeah,
that's
all
that
they
really
need
and
like
for
cluster
cuddle,
I
guess
it
just
was
a
happy
accident
that
this
guy
just
works
out
of
the
box.
A
Got
it
andy.
C
So
what
are
your
next
steps
or
plans
with
this?
Are
you
gonna
try
and
I
guess
where's
the:
where
should
the
code
live
and
are
you
gonna
try
and
get
this
integrated
for
auto
scaler
testing
or
something.
O
Yeah,
so
I'm
I
pushed
it
up
to
just
a
personal
repo
for
now,
but
happy
to
move
it
over
to
like
a
official
repo,
so
that
others
can
start
contributing,
and
you
know,
standardize
it
and
all
and
with
the
auto
scaler
we're
I'm
working
with
sig,
auto
scaling
to
kind
of
redefine
how
they
want
to
do
integration
testing.
But
right
now
I
have
it.
O
You
know
like
again
a
forked
repo
of
autoscaler
tests,
but
there
we
have
like
a
good
amount
of
tests
that
are
passing
using
this
and
I
imagine,
would
pass
with
like
a
regular
provider
too.
But
this
just
gives
us.
You
know
much
faster
feedback
and
we
don't
have
to
worry
about
timeouts,
quite
as
much.
O
And
yeah,
I
don't
know
what
else
to
that
this
project
needs.
I
mean
the
code
isn't
very
pretty,
but
other
than
that
I
don't
know
it
works
for
my
use
case
anyway.
E: ...some of that old autoscaler testing out of the kubernetes repo, so we're trying to define a pathway forward where we could use some of the new kubetest2 infrastructure, and maybe have a separate repo for these tests and everything. So it's a little complex, and I think it'll take us a while to get there, but I think it'll be really cool in the end.
M
Yeah
on
the
related,
though
I've
so
for
those
who
don't
know,
cubecast
2
is
basically
part
of
the
attempt
to
get
rid
of
the
slash
cluster
directory
and
the
kubernetes
repo
and
to
make
so
a
lot
of
the
testing
assumes
gcp
right
now
and
cube
test.
2
makes
that
more
pluggable
I've
opened
an
issue
in
the
cube
test,
2
repo,
that
maybe
we
should
put
a
cluster
api
deployer
for
cube
test
2
and
that's
particularly
useful
for
other
bits
of
the
kubernetes
project.
E: Right now (and I think Ben's making great progress on this as well), in the end, we'd like to get to a place where we just have a binary and a kubeconfig that we could hand over to kubetest, and then it could just run. I think we're hitting that so far, and we'll keep pushing in that direction. Hopefully, we'll have a generic provider interface for the tests, so that anyone who's interested in writing a provider for those tests could, and, obviously, we'll have a CAPI provider that covers everything that we do as well.
A
About
the
upstream
testing,
I
really
love
the
idea
of
like
going
towards
that
and
like
being
able
to
use
cluster
api
providers
to
test
like
different
parts
of
the,
not
just
kubernetes
itself
but
like
csi
drivers
etc.
Like
you
said,
we
did
have
a
meeting
about
this
at
one
point
last
year,
when
we
were
talking
about
like
how
to
expend
conformance
to
be
able
to
test
other
upstream
repos
and
one
conclusion
that
came
out
of
that
was.
We
should
not
use
cubetest
at
the
time
we
didn't
have
keep
test
two.
A: It was kubetest one, or just kubetest, but I think I took some notes, and the meeting recording is in an issue. I can send it to you afterwards, but it'd be interesting to revisit that and see if those are still valid reasons.
E: Yeah, I'd love to watch that. You know, we talked with the SIG Testing folks, I think a week or two ago, and when we talked about just this kind of "a binary plus a kubeconfig" thing, they were like: oh, that's exactly what kubetest2 is all about. So yeah, it seemed like the path forward, I guess.
A
I
think
one
of
the
points
was
that
cube
test
doesn't
really
bring
anything
in
like
in
what
we
were
trying
to
do
at
the
time
and
the
current
state
of
keep
test
like
because
we
weren't
sure
if
it
was
going
to
stay
and
stick
around
or
it
didn't
really
make
sense
to
like,
invest
in
building
it
there,
but
because
we
can
really
reuse
anything.
That
was
already
exist
like
any
of
the
common
code
in
cube
tests,
but
yeah
all
right.
Oh
good,.
A: Okay, if not, we'll see you all on Slack and next week, and have a happy day, like Warren says in the chat. All right, bye!