From YouTube: 20191016 - Cluster API Office Hours
C: Last week we talked about the clusterctl design proposal, and there was a document out. Thank you, everyone, for the feedback; I sorted through and addressed most of it, and today I moved the Google Doc into a PR in the Cluster API repository, and it is there for final review.
A: As usual with things like this, we typically do a one-week lazy consensus. So please take a look at the pull request. If you have any serious concerns that would block us from proceeding with it, please add your comments; certainly add non-blocking comments as well, but do so by next Wednesday. We'll take a look at all the comments and, assuming there's nothing that is a major showstopper, we will go ahead and merge the PR and then proceed with implementing it.
A: Vince, you are up next.

E: Yeah, I have two PRs. The first one is that the kubeadm bootstrap provider will be shipped as part of Cluster API for v1alpha3, and the PR has already been merged, so feel free to take a look. It's going to be one manager and one artifact with all the core components plus kubeadm. The second thing I had was that the cluster reference is not required for Machines and MachineDeployments; this PR has also been merged, and it's going to be under spec.
E: They're going to have owner references to the machine, so the machine controller, or other controllers, can set the label if we want to, but I don't think we need to add spec.clusterName there as well. But yeah, the label sounds good to me; I can open an issue about it. Cool, thanks.
D: Kind of following up on what Vince said: we've merged CAPD and CABPK into the Cluster API repo, so any work going forward on those will happen in the Cluster API repo. Any critical bug fixes for the previous versions will happen in the original repos, and those will eventually be frozen once we don't support those releases anymore.
A: Thank you, Chuck. I will follow up on that and just reiterate what I think I said last week, which is that we are really striving to backport bug fixes as needed, on a case-by-case basis, for anything that's in the v1alpha2 version scheme. Any new feature development we will be doing in the master branches of the repos, and we will not be backporting to v1alpha2 unless we have some really compelling case to do it.
A: So I would highly encourage you, if you have small things that you're looking to get done or fixed, please open issues. And with that, back to you. Alright, my Catalina config woes are over. So where was I? MachinePool. MachinePool is basically a provider-agnostic way of representing a scale set.
A: The idea here is that many cloud providers or infrastructure providers provide an abstraction for being able to manage a set of machines as a single configuration, and these abstractions very often include functionality for update strategies as well as autoscaling. So initially, the MachinePool for v1alpha3 will encompass just the functionality for being able to configure a set of machines as a single configuration, and perhaps also the update strategy. Then, in the future, there is autoscaling, which is a little bit more of a complicated concern to address.
A: The plan there is to delegate the autoscaling to the underlying infrastructure. So that's the idea. The doc is right here, so if you haven't looked at it, please take a look and, if you're interested, add some comments. MachinePool ends up looking pretty similar to both MachineDeployments as well as MachineSets in some ways.
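As a rough illustration of the shape being described, a MachinePool-style spec carries a replica count and one shared machine template, and delegates the individual instances to the provider's scale group. The Go field names below are hypothetical, not the actual proposal:

```go
// Illustrative sketch only: field names are hypothetical, not the actual proposal.
package v1alpha3sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MachinePoolSpec describes a set of identically configured machines.
type MachinePoolSpec struct {
	// Replicas is the number of desired machines in the pool.
	Replicas *int32 `json:"replicas,omitempty"`

	// Template describes the machines that will be created; every instance
	// in the pool shares this single configuration.
	Template MachineTemplateSpec `json:"template"`

	// ProviderIDList mirrors the instances the infrastructure scale group
	// (a VMSS, an ASG, or a managed instance group) reports back.
	ProviderIDList []string `json:"providerIDList,omitempty"`
}

// MachineTemplateSpec is a trimmed-down stand-in for the real machine template.
type MachineTemplateSpec struct {
	ObjectMeta metav1.ObjectMeta `json:"metadata,omitempty"`

	// InfrastructureRef points at the provider-specific pool object,
	// for example an AzureMachinePool backed by a VM scale set.
	InfrastructureRef corev1.ObjectReference `json:"infrastructureRef"`
}
```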
A: Very specifically, there's just the single control plane here, but I have this worker pool of two machines right here, and if I come over and look in the browser, this is the Azure portal and this is a VM scale set. This is Azure's abstraction for a scale group; what people might know on Amazon is the ASG, and on Google I think they have managed instance groups. So this is the Azure equivalent, and you can see here there are a couple of instances, and all of these instances share the same configuration.
A: One thing to note is that the AzureMachinePool implementation is quite similar to MachineSet, in the sense that it's managing a group of machines. The main difference here is that we don't have a Machine object that is referenced and used to provision and track individual machines; the MachinePool exists and has an infrastructure ref directly to the AzureMachinePool.
A: All of the handling that MachineSet does for individual standalone machines is actually delegated to the cloud provider in this case. All right, so it looks like something's happening: the VM scale set provisioning has happened, so most likely what's happening right now is that those VMs are being bootstrapped. You can see here that the instances have been reported to the MachinePool type, and I think fairly soon...
A: And I guess maybe, while we wait for this, if anybody has any questions, I can field a couple of questions.

I have one: the providerID on the MachinePool, does that represent the ID in Azure for the machine pool? So it's not a machine-related identifier, is it?

A: Yeah, that's right. Right now the POC is using the Azure VM scale set's ID as the key.
A: Now, the Azure ID is a URI and is hierarchical, so all of the instances would be under that. One of the things that I realized, and will probably change in the near future, is that I'm not sure that assumption can be made across cloud providers, so being able to select nodes based on the providerID might be a fragile approach; something similar to how a MachineSet labels machines, or has a selector for machines, is probably needed.
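A minimal sketch of that label-selector idea, assuming a made-up label key rather than anything from the proposal:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// nodesForPool lists the workload-cluster nodes that belong to a machine pool
// using a label selector rather than providerID prefix matching.
// "cluster.x-k8s.io/pool-name" is a hypothetical label key used only here.
func nodesForPool(ctx context.Context, c client.Client, poolName string) ([]corev1.Node, error) {
	var nodes corev1.NodeList
	if err := c.List(ctx, &nodes, client.MatchingLabels{"cluster.x-k8s.io/pool-name": poolName}); err != nil {
		return nil, fmt.Errorf("listing nodes for pool %s: %w", poolName, err)
	}
	return nodes.Items, nil
}
```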
A: In terms of scale-down, the functionality that I understand will need to be implemented, and that I don't have implemented in the POC yet, is really the ability to select nodes for scale-down, and that functionality does in fact exist in the VM scale set abstraction. So my plan is that we have some way of communicating that these are the nodes that we would like to select to be scaled down.
A: Maybe we take a similar approach to the delete policy that exists on a MachineDeployment, but I think that'll need to be a key factor, and I have not written about that yet in the design doc; I will be adding a section to address it, as I understand that's required for integrating with the cluster autoscaler.
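A loose sketch of what such a delete-policy selection could look like; the types and the oldest-first behaviour here are illustrative only, not part of the proposal:

```go
package main

import (
	"sort"
	"time"
)

// instance is a hypothetical view of one member of the scale group.
type instance struct {
	ProviderID string
	Created    time.Time
}

// selectForScaleDown returns the instances to remove when shrinking the pool,
// oldest first, mirroring a MachineDeployment-style "Oldest" delete policy.
func selectForScaleDown(instances []instance, toRemove int) []instance {
	sort.Slice(instances, func(i, j int) bool {
		return instances[i].Created.Before(instances[j].Created)
	})
	if toRemove > len(instances) {
		toRemove = len(instances)
	}
	return instances[:toRemove]
}
```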
A: Yes. The proof of concept is based on Azure, because those are the accounts that I have access to, but in figuring out this design, what I actually did is go through the APIs for each of these: the ASG from AWS, as well as the managed instance group from Google's cloud. The types that I put into the MachinePool abstraction are the fields that are common across each of these clouds, so really this is targeted to at least take into consideration the three major clouds. If there are other infrastructure providers or environments that people are aware of that I didn't consider, please add it to the doc as a question or a comment and take a look at that. But yes, the idea is that this is an abstraction across providers, and the commonality between all of them really is what influenced this design. Frankly, I was quite surprised by how similar the abstractions that each cloud provider put forth are; I'm guessing this is mostly because everybody copied AWS, but I can't confirm or deny anything like that. All right, well, okay.
A: So yeah, yes, I think it's open. There was some feedback that I incorporated last week, so if you would like to go and comment on it, I think it's a good time. I added a bunch of detail about state transitions; there's still some detail, as I mentioned before, around selecting for deletes and things like that, but I think for the parts that are complete, we're at a good place to be able to...
C: If I got this right, you are not creating Machines in this case. That's great, okay, because the machines are managed by the group, so by the underlying infrastructure. This is similar to what happens for static pods, but in that case the API server creates a mirror pod, so the user can have a good perception of what's going on.
A: I think that's certainly something that I'll be exploring there, and I think what you're getting at is that even though we don't have standalone machines to manage, it still might make sense to have a placeholder for those machines, so that some of the same things that rely on them still work; and maybe that's this functionality, like being able to do a cordon and drain on a specific machine, or delete a specific machine.
A: That's certainly one of the paths forward in terms of being able to manage that. The current POC just uses arrays in the MachinePool itself to track those instances, but I've also been going back and forth on whether or not the placeholder approach is better in terms of being able to support the whole gamut of functionality that we'll need to support.
H: I just wanted to say thanks to Naadir and others. The control plane CAEP is in a pretty fleshed-out state right now; we're trying to identify any missing gaps and to-do items that we have in there right now. So please do provide feedback on the doc. We are hoping to have it in a PR a little later, either by the end of today or early tomorrow.
A: Did we resolve the state of the upgrade CAEP? There was a proposal in Slack on Monday, so two days ago, to essentially get rid of the upgrade CAEP, move any content related to control plane upgrades into the control plane CAEP, and then have documentation issues, and ultimately documentation, for how to upgrade MachineDeployments and for the fact that Cluster API out of the box doesn't support in-place upgrades for machines. Did you all resolve that decision?
H: I can say, with respect to the items that you mentioned that are with the control plane CAEP specifically: yes, we've been incorporating feedback that was originally on the upgrade CAEP into the control plane CAEP, for the section that we already had there. So we've been fleshing that out in a lot more detail and capturing those bits. I cannot speak to the tracking issues or the final state of the upgrade CAEP, though.
A: So I'd say, unless there are any objections, I'd like to propose that we move forward with shuttering the upgrade CAEP and making sure that we create issues for the items that I mentioned. I think that when we took a look at the upgrade CAEP and the actual meat of what was going in there, the bulk of it related to control plane upgrades. Vince, I see, just pasted a link in the chat to the conversation that we had in Slack.
A: So we don't need to make a final decision right now, but if you all have a chance, please take a look at the conversation in Slack, and if you have any strong disagreement, let us know. I don't think this is going to stop us from working on anything, because we still need to do the control plane work, and the MachineDeployment-related bits are generally implemented. So I think that's all I have on that.
A: I do want to take a look at the other CAEPs, though. We just talked about MachinePool, and you said the doc is open for comments, right?

I believe so. I think a few people did comment, but I don't have the ability to change those permissions.

A: So I'll just take a look at mine. Let's see; yeah, looks like I can add comments.
A
How
much
more
time
do
you
want
for
comments
on
the
Google
Doc,
so
I
can
I?
Can
I
can
make
a
work
in
progress
PR
by
next
meeting?
If
people
want
to
comment
by
the
end
of
the
week,
that
would
be
it.
You
know
for
people
who
feel
very
strongly
about
this
particular
cap.
It
would
be
great
to
get
the
feedback
by
Friday.
G: Nothing to share at this moment. We've discussed it internally, so it's on the agenda for us to get to in the very near future.

A: Okay.
A: All right. Yes, this one I would like to talk about. There was a period of time, with v1alpha1, where the infrastructure provider was required to create the kubeconfig secret in order for the node ref controller to function and for us to be able to map a node to a machine.
A: With v1alpha2, they have a case where the management cluster is the same thing as the workload cluster, and so when talking to the API server you can just use an in-cluster config instead of needing a kubeconfig. So the request was to not need the kubeconfig for the node ref controller. Vince is pointing out that the cluster is now required. I think we can probably give it another day or two and then close it out, and we can address it in the future if needed.
J: This maybe could be reconciled with another issue, one that I haven't found an issue for, but something I've been thinking about for testing: pre-creating the kubeconfig secret, so that it doesn't error out if you pre-created it. What if we pre-create one that's simply empty? I mean, it would be a secret kicking about, but it wouldn't have any secret data. If you supported that... I know right now you check to see if the data is nil and you throw an error, but maybe only for testing.
J: Sure, but I was simply saying, more at a high level: if the secret already exists, then you don't necessarily care what data is in it, and that way, in just this case, he could simply pre-create something that's empty. He's decided he doesn't care about the cluster; it's the secret that he doesn't want bouncing around. And I assume he actually means with secret data: what if it just doesn't have secret data in it?
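A minimal sketch of that behaviour, assuming the usual <cluster-name>-kubeconfig secret naming; the generation helper is a hypothetical placeholder:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// reconcileKubeconfigSecret leaves any pre-existing secret alone (even an
// empty one the user pre-created) instead of erroring out, and only
// generates a kubeconfig when no secret is found.
func reconcileKubeconfigSecret(ctx context.Context, c client.Client, namespace, clusterName string) error {
	var secret corev1.Secret
	key := client.ObjectKey{Namespace: namespace, Name: clusterName + "-kubeconfig"}

	err := c.Get(ctx, key, &secret)
	if err == nil {
		// The secret already exists: skip generation and don't treat
		// missing data as an error.
		return nil
	}
	if !apierrors.IsNotFound(err) {
		return err
	}

	// No secret found: fall back to generating one as usual.
	return createGeneratedKubeconfig(ctx, c, namespace, clusterName)
}

// createGeneratedKubeconfig is a placeholder for the real generation logic.
func createGeneratedKubeconfig(ctx context.Context, c client.Client, namespace, clusterName string) error {
	return nil
}
```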
A: Let's see, what was the end result here? We wanted to use master, or something like that, to mean the unstable tip of the code base, and latest would be the latest stable, following precedent from, say, the Go image. So if you have comments, I'd say please add them. I think it would be nice to do this sooner rather than later.
E: Yes, that's correct. We don't promote them, and they get pruned at some point; I think 30 or 60 days, something like that. So if we want, we can push latest to also be the release in this case, or we can leave things like this; and if we want to leave things like they are today, we should just close this issue, Jason.
H: I think I'm good with potentially just neglecting latest, because anything that we do is going to be confusing for some set of users. The other thing I want to highlight is that we need to make sure it is clear that these images are only meant to be used for testing, qualification, and development.
A: Not to flip-flop, alright, but in my very wordy comment here, the situation I was trying to avoid was: today latest is 0.2.7, tomorrow latest is 0.3.0, and if you're a user and you just always pull latest, you would potentially have a broken environment if there are breaking changes between 0.2 and 0.3. So I was saying, well, maybe eventually we have something like v1, v1.0, and v1.0.1, and then as specific as you need to get, but I don't know that this is necessarily causing anybody problems.
J: Like I'll see on Docker Hub all the time, where they will have variations of images: you'll have go 1.15, but then go 1.15 on Debian, slimmed down, yada yada. Would it make sense, since I know we don't have a major component yet, for the minor component to go into the actual image name, so it includes v2 or v3, and then the tags within that refer to whatever is latest, with latest referring to the latest release within that? That way people would just be pulling capi-v2:latest or something, if you're worried about that.
A: Yeah, maybe I would.
G: Yes. Well, in the discussions we were having about integrating drain into the machine controller, there was a little bit of back and forth about 'what about this, what about that', and obviously not everybody's use case is exactly the same, but sometimes a Machine object might get marked for deletion, and you might want to do things before the machine controller actually drains the node and also, probably more importantly, actually deletes the instance from the cloud.

G: I'll leave it up to your imagination what those use cases might be; I outlined a couple of them. But you could also, with this same kind of annotation or finalizer, basically provide a hook into the lifecycle of an individual machine; I think that is a higher-level way of describing it. So this just ensures that you can have some kind of process happen before the machine gets cleaned up, whatever need that might be. Ideally we do it in a way that's transparent: we don't have to have any foreknowledge about any other components managing or adding these finalizers; we can just describe an interaction. I don't know whether finalizers allow lists, but if they do, we could just use a list: if your thing is not in the list, add it; after you're done doing your thing, remove it from the list; and the machine controller just asks, is there anything in this list?
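A rough sketch of that list-style check; the annotation prefix here is made up for illustration and is not an agreed design:

```go
package main

import (
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hookPrefix is a hypothetical annotation prefix; external components add an
// entry under it while they still have pre-delete work to do.
const hookPrefix = "pre-delete.hook.cluster.x-k8s.io/"

// hasPendingHooks reports whether any component still holds a hook on the
// object; while true, the machine controller would requeue instead of
// proceeding with drain and instance deletion.
func hasPendingHooks(obj metav1.Object) bool {
	for key := range obj.GetAnnotations() {
		if strings.HasPrefix(key, hookPrefix) {
			return true
		}
	}
	return false
}
```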
G: I don't have a need for this at the moment, and this looks like an issue that might be good for somebody that's new to the project to take on, if somebody is interested in contributing. Of course, if it comes down to it and we really need this and nobody else wants to do it, I could probably do it, but it's not a super high priority for me either.

A: Okay.
A: I marked it long-term. Next: thanks, Chuck, you have a docs one about making sure that we can clearly define what versions of Kubernetes we support.
H: So it's definitely not available now, but it's supposed to be able to fall back gracefully, so we could potentially add the support and then just have fallback behavior, through something like a validating webhook, for Kubernetes versions that don't support it. So I don't think it's one of those things where we have to make 1.17 a minimum requirement before we can adopt it.
A: Makes sense, yeah. It would just be for when we remove the validation code and have that minimum feature set or minimal version. Okay: define a generic pivoting process. I know that we're going to need this, or assuming we merge the clusterctl v2 proposal and we do it for alpha 3, we will need this, so I'm going to put this in the milestone and mark it important-soon.
A: Yeah, I don't have anything for or against Weave or Calico. Is there any risk that we have one thing with Calico and another thing with Weave, or does that not matter?
B: We saw bugs at some point when running Docker in Docker with Weave, and I created a bug report for it; the Weave maintainers said they couldn't find the reason for it. My point is that all CNIs have problems, so unless we do a comparison for this particular provider, I don't think we should switch unless we have a really good reason.
H: I was just going to say that what sparked this discussion was the requirement to add the fields for configuring Calico as a CNI in the example, which leads to some confusion. If we were to use Weave or one of the other CNI solutions that can use the default kubeadm arguments, it would simplify the quick start and the examples used in the quick start, compared to having just the arbitrary bits in there for Calico without really explaining why those are there, causing some user confusion.
A: Then we have some clusterctl ones. clusterctl fails if any of the resources don't have a namespace defined in the YAML that you pass to it. I don't know if this is a regression, but I do think that it should be fixed for alpha 2, and I know that Fabrizio mentioned to me earlier today that this is not an issue in the prototype he has for the next-gen clusterctl.
A: Deletion command hangs: this one I have triaged, and I'll get the labels on there appropriately; it's actually an issue that we already fixed. Figure out what to do with sub-module Makefiles: there was some back and forth on this one. I know we now have the kubeadm provider and CAPD merged in. I don't remember, basically, the outcome of this discussion, whether we wanted to keep submodules or not for CAPD and CABPK within the Cluster API repo. Jason or Vince, did you all have thoughts?
A: See, I don't know. It kind of depends on the deployment scenario: if you have one CAPI manager per namespace, watching a namespace, then it obviously can't see any of the clusters in any other namespaces. If you have a CAPI pod watching all the namespaces, then we could potentially try and guard against this. So, sort of: maybe we can do it, maybe we can't.
H: Yeah, I think it definitely affects CAPA, but it's likely to affect other cloud providers too if they use the same cluster name for the tagging that's used by the cloud providers, either external or internal. So I think providing this assurance, if we don't provide a better workaround or a better solution, is probably the minimum that we can do.
A: I mean, we probably could update the Cluster API code, even if it's reconciling clusters in a single namespace, to look at other namespaces for validation only.
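A minimal sketch of that validation-only check, assuming the manager's cache is allowed to list Clusters across all namespaces:

```go
package main

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha2"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// clusterNameInUse reports whether a Cluster with the given name already
// exists in a namespace other than the one being reconciled.
func clusterNameInUse(ctx context.Context, c client.Client, name, namespace string) (bool, error) {
	var clusters clusterv1.ClusterList
	// Listing without client.InNamespace scopes the query to all namespaces.
	if err := c.List(ctx, &clusters); err != nil {
		return false, err
	}
	for _, cl := range clusters.Items {
		if cl.Name == name && cl.Namespace != namespace {
			return true, nil
		}
	}
	return false, nil
}
```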
A: All right, I'm going to put this in the milestone, and priority-wise I think soon makes sense. We are almost out of time, and we have one docs issue left: update the quick start guide for the cert-manager changes. This definitely needs to be in the milestone and it needs to be done soon. And we're out of time, so thank you, everybody.