From YouTube: 20200930 Cluster API Office Hours
A
Hello everyone, and welcome to the September 30th, 2020 Cluster API office hours. Cluster API is a project of SIG Cluster Lifecycle. We have a meeting etiquette for this meeting: please use the raise-hand feature in Zoom if you want to speak up; you can find it under the participant list. I will also try to moderate the chat if you have any questions or comments. I'll post a link here if you want to add your name to the attendee list; we have a live agenda today.
A
Or if you're not new and you want to say hi, that's okay as well. Moving on: hi, Cecil. All right, so good morning and good evening to everyone. Let's start with 0.3.10: the milestone is almost done. We have five issues and PRs in here, three PRs, which will also close the issues. Most of them are actually quite ready; I see a few LGTMs and approvals.
A
This is a pretty big release. We have had a lot of bug fixes in 0.3.10 coming from 0.3.9, especially with regard to memory leaks and things like that. So I'm looking forward to cutting the release, probably either later tonight or tomorrow morning; probably tomorrow morning. I want to watch the signal from our end-to-end tests for a little bit, to make sure there's no additional flakiness, before cutting the release.
B
Yeah, hi, this is Jan. I was just wondering: I saw Andy left some comments on my external remediation PR. Is it possible that I finish it tomorrow morning, or what is the schedule for this one? Can I still push something on that PR?
A
Yeah, I think we have until tomorrow morning, more or less.
B
Okay, but... yeah.
A
Yeah, tomorrow morning our time, I guess; it's going to be the middle of the day for you, so you should be fine. If you don't have time, we can also do a quick follow-up; someone from our side can take care of it. We can work it out; that's why I reached out on Slack.
B
Yeah, that would be great. I was wondering: I have time, but tomorrow morning, so it will be midnight for you guys anyway when we are waking up here in Finland. I'm able to do that, and we can obviously discuss later during the day, when you guys are awake, if something's left to be done. I was just wondering whatever suits you: what would be the best and easiest way to get this merged? I'm open to whatever works.
A
We can sync back up tomorrow, our time. Yeah.
C
Good, yeah. We've been testing rc0 in CAPZ with our end-to-end test suite. Would it be possible to make sure that we do a code freeze and have another RC before the final release is cut, given that there are some big changes going in right now? I just want to make sure we have a bit of time after the last PR lands to run the tests again.
A
We can; that would put us into... yeah, that's probably fine. I can cut the RC tomorrow morning first, we run a few tests, and then we'll release the final tag, I guess.
C
Yeah, the tests should only take a couple of hours, but ideally we would have those as release gates in the future. For right now, we're doing that manually.
A
Yeah, maybe we should put this in writing so that, going forward, we can actually do this more often. I think that's more valuable, to have that signal. Agreed.
A
I see Naadir is saying that CAPA is also running its e2e tests against CAPI 0.3.10; I assume that's the current tip of the main branch, right? Yeah. So let's try to get these merged. There's only the one for external remediation that will wait until tomorrow, but if we get all of the others merged by today, we should get to run more tests, then the external remediation one, and then we can cut the two tags.
A
Any other questions on the release? As a reminder, after 0.3.10 we're going to move to alpha 4. So my other PSA, which I forgot to put here, is alpha 4 planning. We have a roadmap document in here; I would like to start removing the alpha 3 things, and if you have anything that's a big-ticket item that will probably warrant a proposal, like a contract change, you're probably putting it into the 0.4 roadmap. The date on it is wrong.
E
Yeah, thanks Vince. Just a quick question about getting stuff on the roadmap for 0.4: do we open a PR against the docs repo, or just work with you to get it on there? What's the best way, I guess?
A
Yeah, everyone can open their own PR, I guess; we'll have to do some rebasing if that happens. The other way we could do this is to have one PR (I was prepping one, but I got distracted yesterday), and I still need to scan the backlog, which is going to take some time. But either way is fine.
A
The most important thing is to open the RFE issue, or prep a document for proposals, which we'll probably track in the meeting notes here; we usually track them up here. Yeah.
A
Good. The other thing on alpha 4: in the next couple of days (I don't have it scheduled yet; either tomorrow morning Pacific time or Friday morning) we'll do another backlog grooming, but this time for the next milestone, alpha 4. I'll send out an invite on Slack; feel free to join if you can. This is mostly to re-prioritize what's in the "next" milestone. It might take a while, because there are a number of issues open, and it's usually boring, but yeah.
A
If there are changes that are going into the release branch, so in this case 0.3, we plan for a monthly release; if there are no changes, we're just not going to make the release. But I'll open a PR to the contributing guide first so that we can discuss it, if that works for everyone.
A
All right. Yeah, alpha 4: we're going to target March 2021, hopefully earlier than that, but it depends on how long the reviews take, and how long the code takes to merge as well.
A
Let's move to discussion topics, if there are no other questions. David, go for it.
G
Thanks, Vince, appreciate it. All right, so I'll lead with the problem statement and then let's work backwards from there and discuss. So, in the Azure provider:
G
The way Azure works with the Azure provider is that we take the cloud-init bootstrap data and we feed it directly into the API; we just say, here's the user data for the VM or the VM scale set. What that ends up doing is that cloud-init is then executed on those hosts, and the data is actually available in the Azure APIs.
G
So if you go to the portal and you have access to view that machine, you could then view any of the secrets in plain text, any of the PKI from that cloud-init, which is not what we advise. You really want only the folks that have access to the actual host to be able to get at that data. So to make that the security boundary, you'd use a vault: put it into an encrypted store,
G
a secure store, and then mount it onto the VM, mount it onto the host when it's ready, and then maybe do some decryption or whatever you need to do to actually process it. And that's kind of what the AWS provider is doing.
G
If you look over there, there is some work done to load it up into a secret store, then pull it down to the machine, run a multi-part cloud-init, and execute from there. That way the secrets aren't exposed. What would be nice, especially as we start to get into other bootstrap providers, is to have some more uniform way of dealing with these secrets.
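[Editor's note: a minimal sketch of the secret-store indirection described here. This is illustrative only, not the actual CAPA implementation; SecretsBackend, ShrinkUserData, and the fetch-secret helper in the stub are all hypothetical names.]

```go
package bootstrap

import (
	"context"
	"fmt"
)

// SecretsBackend abstracts an encrypted store (for example a cloud
// secrets manager) that holds the full bootstrap payload.
type SecretsBackend interface {
	// Create stores the payload and returns an opaque reference to it.
	Create(ctx context.Context, payload []byte) (ref string, err error)
}

// ShrinkUserData uploads the raw cloud-init payload to the backend and
// returns a tiny stub to use as the instance user data, so the cloud
// API only ever exposes a pointer, never the secrets themselves.
func ShrinkUserData(ctx context.Context, b SecretsBackend, payload []byte) (string, error) {
	ref, err := b.Create(ctx, payload)
	if err != nil {
		return "", fmt.Errorf("storing bootstrap payload: %w", err)
	}
	// The stub runs on the host at boot, fetches the real payload with
	// the instance's own credentials, and hands it to cloud-init.
	// fetch-secret is a hypothetical on-host helper.
	stub := fmt.Sprintf("#!/bin/bash\nfetch-secret %q > /run/real-user-data\n# ...then re-run cloud-init against the fetched payload...\n", ref)
	return stub, nil
}
```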
G
So we could do the same thing the AWS provider is doing right now: store it in a secret, then load it back up into Azure and do that same kind of dance. But what happens when we move into the Ignition bootstrapper, or others? Is it still going to make sense? Is it still going to work the same way? So I just, I don't know, kind of wanted to bring it up.
G
Could we provide a secret, or a data encryption key, to a bootstrap provider, with the understanding that we could decrypt stuff on that host? Is there any kind of more general idea?
H
Yeah, I definitely want to do something about this before we run alpha 4. The Ignition thing is coming up quite quickly, and Ignition doesn't support multi-part cloud-init. And when I looked at the Windows cloudbase-init, although we could add support quite easily, it also doesn't support the multi-part stuff; I can't remember if it did or not. So I don't think we have the boundary of what belongs to an infrastructure provider figured out.
A
Yeah, so one thing to think about around this particular topic: we have talked about having a node agent as part of that, something that gets shipped with our images as part of the image builder, which then hooks up into the whole Cluster API ecosystem and has plug-ins for either the cloud provider, or data storage, or whatever else.
A
It does need a full proposal, because we need to understand the boundaries; the boundaries are not perfect, far from it. For example, we have also been talking about having the infrastructure provider signal back to the bootstrap provider when it's allowed to generate the bootstrap data, because it needs some data back. So this is kind of a cycle, a loop of data required to actually generate the correct bootstrap.
H
Go ahead. Yeah, so I think there are probably still going to be challenges with the user data for a node agent anyway. The other part of that is: we've just come out of the kubeadm meeting (I will throw a link into the issue in a minute), but we've been talking about making kubeadm consumable as a library. I suggested that, so I think it's interrelated to this idea of having a node agent.
I
Hi, so I think we're getting pretty close on the Windows proposal. I addressed, I think, most of the comments yesterday. The one thing that came up that I did add, that's new to the proposal, is one of the requirements that came to light: for the Azure Stack HCI provider, they implemented Windows already, and they found that the experimental retry actually improved the reliability of the Windows node joins quite a bit. So adding that into the kubeadm bootstrap provider would be beneficial from a Windows perspective.
I
The challenge there is that the Linux shell script that is provided won't run on the Windows nodes, so we need a way to tell the kubeadm bootstrap provider what the OS type is, so that we can generate the right script. There was a little conversation that happened here, and I think the idea that Cecile put forward was that this information is really tied to the infra machine, and we could make it part of the contract between the infra provider and Cluster API. I essentially proposed that we have an OS type that would be an optional field; an infrastructure provider could implement it, and if it was provided, then we'd look it up and use it to generate the retry script.
I
I think, though, the longer term kind of goes back to what Naadir had said: if kubeadm becomes a library that we can reuse, then this maybe wouldn't be as necessary. But there are probably other places where the OS type would be valuable for generating the right information as we add Windows support here. So I tried to outline it; I'm still getting used to how all the pieces come together, so I'd appreciate a look.
C
Go ahead. Yeah, just sort of a tangential question: did the conversation you mentioned earlier about kubeadm as a library include anything about retrying, or how we move this forward in v0.4? Because when we first added the experimental retry join, it was supposed to be temporary and not really a long-term solution. Do you think we have plans to remove that and rely more on kubeadm going forward?
H
I think Fabrizio is probably in a better position to answer, but I have a feeling that kubeadm as a library might not be completely ready by the time we run alpha 4; it's unlikely to be ready, so we'll have to figure something out. Even if we do a node agent and we still execute kubeadm, that might be sufficient to get rid of that script. But long term, we definitely just want to consume it as a library.
F
At this stage it's basically a collection of requirements, and once we have a collection of requirements, we can decide if some of them can be addressed tactically, while others instead require more work. So if this is a use case, let's write it down in the issue. I will link the issue in the document, and then try to move forward.
A
So, back to James's question about having the OS type on the infra machine spec: this is usually what we define as a contract, so it should definitely go into the contract, the controller documentation for infrastructure providers. We need to say: this is an optional field that you can have under spec; if it's not there, we default it to Linux, which is what we have been doing today.
A
Or you can set it to Windows and it will take another path; and these are the valid choices that you have. This sounds good to me. I don't even think we need to sync this field back onto the Machine. A discussion that came up on the thread that was linked here is that we want the Machine to look like a Kubernetes node, and everything that's infrastructure-specific goes into the infra machine.
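[Editor's note: to make the contract concrete, here is a rough sketch of what such an optional field could look like on an infra machine type. FooMachineSpec and the exact field name are illustrative, not the agreed API.]

```go
package contract

// FooMachineSpec sketches a hypothetical infra machine spec carrying
// the optional OS type discussed above; names are illustrative only.
type FooMachineSpec struct {
	// OSType tells consumers (e.g. the kubeadm bootstrap provider)
	// which flavor of bootstrap script to generate.
	// +kubebuilder:validation:Enum=linux;windows
	// +optional
	OSType string `json:"osType,omitempty"`

	// ... provider-specific fields ...
}

// ResolveOSType applies the contract default: an absent field means Linux.
func ResolveOSType(spec FooMachineSpec) string {
	if spec.OSType == "" {
		return "linux"
	}
	return spec.OSType
}
```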
I
Okay, so I'm not sure who the Talos folks are, but maybe we can start a thread on Slack for that, make sure they get some eyes on it and agree to it. So it sounds like the contract... yeah, I wasn't sure if it had to actually go back to the Machine for the kubeadm provider to pick it up. It sounds like that's not necessary, so I think I'll take another look and try to articulate it properly.
A
Given that we're trying to make everything optional, it's probably fine to add the contract changes to the documentation within this proposal. I would actually prefer that, because then all the changes get merged together. Yeah, this seems like a good place. Ben?
J
Just on that: I guess it would be a new binary that we assume is on the machine (part of the image builder or whatever), but it wraps kubeadm with some kind of retry behavior, which is what I assume: basically translate the bash into Go, and then it's cross-platform.
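[Editor's note: as a rough illustration of "translate the bash into Go", a cross-platform retry wrapper might look like the sketch below. This is purely hypothetical, not an agreed design; it assumes kubeadm is on the PATH.]

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// Re-run kubeadm with the arguments we were given until it succeeds,
// mirroring what the experimental retry-join bash script does today,
// but compiled per platform so it also runs on Windows nodes.
func main() {
	const attempts = 10
	for i := 1; i <= attempts; i++ {
		cmd := exec.Command("kubeadm", os.Args[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err == nil {
			return // join succeeded
		}
		fmt.Fprintf(os.Stderr, "kubeadm attempt %d/%d failed, retrying\n", i, attempts)
		time.Sleep(time.Duration(i) * 5 * time.Second) // simple linear backoff
	}
	os.Exit(1)
}
```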
A
Yeah, that's probably something we're trying to push forward for alpha 4; this is not something for alpha 3, given that requiring a new binary is not something we should take lightly. We also have to think about where this binary lives: does it ship with Cluster API, or does it ship outside?
A
So there are a lot of things to figure out, and if kubeadm, by the time we get there, has a library that we can use, we won't have to shell out, which would be great.
A
I think we're okay, James, to proceed with this; this seems a good place to start. Definitely feel free to reach out if the CAPBK bits, how to wire everything together, aren't clear; I can help over Slack or something. Okay, thanks.
K
Hello, so I just wanted to provide a status update for the management cluster operator, formerly known as the clusterctl operator. I've been working on that; the current document is still kind of sparse.
K
I'm still trying to gather my thoughts in a separate document and structure it, and then essentially copy-paste things over as I'm ready. One of the things, though, that Fabrizio and I came across as we were trying to build out this API:
K
We came across the topic of multi-tenancy, and how clusterctl currently manages multi-tenancy. We have this concept of management groups, and we currently allow multiple provider controllers to be installed on a cluster, and some new information has come to light that the community might not like that. This issue is the perfect place for discussion.
K
So if there are people who currently have opinions on the current state of multi-tenancy affairs, please feel free to comment on that issue; and Vince, if you can, please add it to the doc so I can take a look at it too. I just wanted to raise this so I can collect more user information and understand the problem space a little better. Fabrizio, if you have anything to add, feel free to chime in.
F
We decided that the only way to address the problem of using a provider with many different sets of credentials was to have a different instance of the provider. Over time this leads to a layer of complication: if you look at how we are deploying webhooks, for instance, it is overly complicated by the possibility of having many providers. Also, there are several issues open about how we should manage upgrading in this scenario. So in parallel, starting from CAPA, we started a discussion on how to have a single instance of a provider manage many credentials, which is basically a more elegant solution to the multi-tenancy problem. It seems that all the providers (as far as I know, at least Azure and AWS, and probably also CAPV) are converging on this idea of having one instance of the provider supporting multiple credentials. This basically gives a cleaner option: to have only one set of providers installed per management cluster, with only one version.
F
So we should not have problems with upgrades, and all the work will be cleaner. The assumption that we are basically taking, or that we would like to take, for the operator is: given that we are moving to a model where there is a single instance of each provider per management cluster, let's start implementing the operator supporting only this use case.
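[Editor's note: the "single instance, many credentials" model usually hinges on clusters referencing an identity object that the controller resolves at reconcile time. A minimal sketch, with illustrative names rather than any specific provider's API:]

```go
package identity

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// IdentityRef points a cluster at the credentials it should use;
// the type and field names here are hypothetical.
type IdentityRef struct {
	Name      string `json:"name"`
	Namespace string `json:"namespace"`
}

// CredentialsFor resolves the referenced Secret on the fly, letting a
// single controller instance act on clusters owned by different tenants
// instead of baking one credential into each controller deployment.
func CredentialsFor(ctx context.Context, c client.Client, ref IdentityRef) ([]byte, error) {
	var secret corev1.Secret
	key := client.ObjectKey{Namespace: ref.Namespace, Name: ref.Name}
	if err := c.Get(ctx, key, &secret); err != nil {
		return nil, fmt.Errorf("resolving identity %s/%s: %w", ref.Namespace, ref.Name, err)
	}
	return secret.Data["credentials"], nil
}
```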
A
If you absolutely need it, the only difference is that it won't be something we can give support for, because we want to simplify our operational model and what we support when we have to do upgrades and things like that. The other thing I wanted to point out is this other issue, which goes hand in hand with this: simplifying clusterctl move.
A
If this is implemented as described in here, move would probably become more similar to something like backup and restore, because clusterctl move today has a lot of code to handle the move of a single namespace, which we won't do anymore if we go with the single-controller approach that watches all namespaces.
A
So when you do move, we'd move the entire management cluster every time, and in the future we can think about how we can use move, or extend it, to do backup and restore, stuff like that. So, something to think about; definitely feel free to reach out with more feedback. We have this slated for alpha 4; that's one of the things we would like to work on. Yeah.
D
Yeah, I had a quick question about that. So when you're talking about running an operator, I'm just wondering: is it the provider, like an infrastructure provider operator, you're talking about, or is it the CAPI operator, or the bootstrap operator? Right now, for example, in some clusters that I'm testing, I've got a single instance of my infrastructure provider operator running, and I've got a single instance...
F
The operator we are talking about, we call it the management cluster operator, and it was formerly called the clusterctl operator. It is an operator whose goal is to manage the providers: it basically should manage installing providers, upgrading providers, and changing their configuration. That's me trying to articulate the idea.
F
So the operator should manage the providers, and the idea is that we then have only one instance of each provider: one instance of CAPI, one instance of the infrastructure provider, one instance of the bootstrap provider, watching all the namespaces and being able to use different credentials if required. This specifically applies to whichever infrastructure provider you want to use.
D
Okay. And is anybody concerned about blast radius? Like, if someone goes in and misconfigures an infrastructure provider, all of a sudden it affects more people, because now, instead of running one of these providers per namespace, you've misconfigured the global one. Is that not a concern, or does it just not happen very frequently?
A
I would like either Cecile or Naadir to answer that question.
H
Okay, yeah. When it comes to blast radius, if you're concerned about the installation of infrastructure providers: well, two things. The teams who are creating clusters are not the same teams who are installing the infrastructure providers (which is why I'd briefly point to the CAPA proposal linked here): the credentials are named and globally created by people who have admin rights over a set of accounts, and then teams create clusters.
H
And then, if they want to limit the blast radius, it probably makes sense to have completely separate management clusters, and then they can upgrade their infrastructure providers across a set of management clusters: shard X number of teams onto one management cluster, and put another set of teams on another management cluster.
D
Okay, I see. So we've been operating under the model where we can have a CAPI instance running in a namespace; so we can have 10 namespaces and 10 CAPI operators, and then, if we have to upgrade one of those, we can.
D
I mean, it's fine, I can take this to Slack. I don't think I'm stating the problem clearly enough here, but it is a space I'm interested in. It seems like there's a ticket that somebody opened for multi-tenancy, so I'm also happy to bring the discussion there too. But yeah, the primary concern that we have is around blast radius.
D
So anyway, thanks for the discussion; I don't want to hold up the call.
A
One thing that I wanted to mention (this might be a little bit philosophical) is that controllers should be built in a way that, when you reconcile a cluster, the work is scoped to only look at that cluster. If there is something that actually affects global resources, that is something that needs to be fixed in code. I cannot stress this enough: our modus operandi here is to build controllers that way, and we want to make sure that that is actually the way going forward. If there are globals that could affect multiple clusters, that's usually something I really want to look into, because that's usually a design smell. We have tried really hard to get here; we didn't even go to beta until we could make sure these are the pillars our controllers sit on. The current approach of having one controller per namespace was required because you had to have the credential per controller; there was no way to actually load credentials on the fly or associate credentials with clusters.
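[Editor's note: the per-cluster scoping principle above translates roughly into the usual controller-runtime shape, where each Reconcile call touches exactly one object. A minimal sketch; Cluster API's own Cluster type is used purely for illustration.]

```go
package controllers

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ClusterScopedReconciler illustrates the principle above: each
// Reconcile call is scoped to the single object named in the request.
type ClusterScopedReconciler struct {
	client.Client
}

func (r *ClusterScopedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var cluster clusterv1.Cluster
	// req identifies exactly one namespace/name; everything below acts
	// only on that cluster, so a bad spec cannot affect other tenants.
	if err := r.Get(ctx, req.NamespacedName, &cluster); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// ... reconcile resources owned by this cluster only ...
	return ctrl.Result{}, nil
}
```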
A
CAPA can do this today, I believe; Naadir, right? Like, 0.6 can do that today. There are new CRDs and resources that are going to be introduced so that you can load up credentials on the fly, and you can associate those credentials with namespaces, I believe. So yeah, there's going to be a lot of work coming that way for all of the infrastructure providers, and the goal is to simplify the operational model. But I'm eager to hear more about it; we can discuss in the issue.
A
Thanks. June had a question: "Even with multi-tenancy, will the current model of CAPI per namespace continue to work?" That is the goal; the goal is not to remove any namespace flag. That said, we won't necessarily support it. So if you want to use clusterctl and the operator, you have to opt in to the single controller for all namespaces.