A: Welcome, everybody. Today is Wednesday, the 12th of May, 2021, and this is the Cluster API project meeting. Cluster API is a Kubernetes SIG project, and we're following their community guidelines in this meeting. So please treat others as you would expect to be treated, and raise your hand if you'd like to talk and I'll call on you.

A: I don't know if everyone here is familiar with Kubemark, but Kubemark is a testing application; or rather, it's a Kubernetes hollow-node server that you can use to do testing with. Several months ago, Ben Moss started a Kubemark provider that would essentially use Cluster API to deploy these Kubemark hollow nodes.

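For context, a hollow node is essentially a pod running the kubemark binary, which registers with the API server as if it were the kubelet of a real node. A minimal sketch in Go, with an assumed image reference and flag set (the real deployment details differ):

```go
package kubemark

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hollowNodePod builds a pod that impersonates a kubelet. Scheduling many of
// these gives you a large apparent cluster without real machines.
func hollowNodePod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "kubemark"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "hollow-kubelet",
				Image: "k8s.gcr.io/kubemark:v1.21.0", // assumed image reference
				// --morph tells the kubemark binary which component to imitate.
				Command: []string{"/kubemark", "--morph=kubelet", "--name=" + name},
			}},
		},
	}
}
```
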
A: So recently I had been updating that provider and trying to think about some future work that I wanted to do there, mainly in terms of integrating it with the autoscaler testing that I'm looking at. And when I brought this up, Fabrizio mentioned that he was doing some work to create a hollow-node, Kubemark-type interface for the CAPD provider that we have.

A: I wanted to bring it to the group; Fabrizio and I were talking about this, so I'd be curious to hear Fabrizio's thoughts about whether we should keep the separate provider or not.

B: First of all, I totally agree that this could be useful, and I'm even doubling down, because I think this is useful not only for the autoscaler. It could be used for the CAPI end-to-end tests, because I want to start stress-testing CAPI, and it could be useful for developer experience: whenever a developer wants to develop locally and doesn't actually need to run workloads on the worker nodes, hollow nodes give an alternative which is super fast and lightweight.

B: I don't have strong preferences on the provider, to be honest. The provider repo was open but not active, and I had just forgotten that it was there, so I don't have a strong opinion about where we want to host this. But I'm trying to figure out some requirements that we should, let me say, address no matter where we keep the code.

B: So there is definitely overlap; I don't want to keep the same code in two places, so we will have to choose one. The kind of requirements that I have in mind: the first one is that if we want to rely on this provider for the CAPI end-to-end tests, that means we have to assume, and be sure, that this Kubemark implementation for machines is well maintained and also kept up to date with Cluster API's master branch.

B: Otherwise our end-to-end tests will run into problems. So point number one is that we have to be sure we're able to keep up with maintenance and with changes in the project. The second point, let me say it's a minutia, but let's keep all of this in mind, is that if it is a separate provider, it is a little bit more complex to set up our tests, because we have to install two providers.

B: In my opinion, another kind of requirement is that we have to get people from SIG Scalability on board; otherwise there is the risk that we are not following the Kubemark changes. Tomorrow I'm going to the SIG Scalability meeting to basically present the idea and try to get them on board.

B: First of all, this will help to solve the initial problem, because right now the provider kind of works, but there are some flakes, which is not ideal. And second, not really my top priority but a great opportunity: it would be super great if we could get SIG Scalability to use Cluster API for their own tests. That would be a huge achievement. Let's see tomorrow how it works out.

A: Okay, thanks a lot, Fabrizio, that was a great explanation and some great points there. I tend to agree with you on a lot of what you're saying, and Vince has given a plus-one to keeping it in-tree. I think it makes a lot of sense to keep it in the CAPD stuff, for all the reasons that you mentioned.

A: On the scalability stuff, I'm also interested in seeing SIG Autoscaling use Cluster API for one of the testing paths there as well, so I think there's a lot of extended testing that we could do with something like this. One of the things I was wondering while looking through the Kubemark provider itself: at this point, for me, the main difference seems to be that if we put the hollow-node stuff in with CAPD, it's pretty much tied to the CAPD implementation, whereas the Kubemark provider by itself seems a little more flexible to deploy onto pretty much any cluster that you could create with CAPI. I don't know that anyone would actually use that, though, because I don't know that you would create, say, a GCP node just to run Kubemark on. But one of the things I was wondering was: is there a way for us to encapsulate the Kubemark logic inside of Cluster API in such a way that, if we wanted to keep the provider around, we could actually just use a library coming out of the main repo or something? That would maybe be a way for us to keep the implementation in one place and still keep the provider.

A: If we wanted to. So, Fabrizio, you've got your hand up; go ahead.

B: This could be interesting. I'm not sure how important this option of using it standalone really is. What I'm sure of right now is that in order to keep it separated, you basically have to maintain a lot of scaffolding.

B: The entire scaffolding that Kubebuilder puts on you, and also an entire repo, which means separate Prow jobs, testing, and stuff like that. So there is a lot of work just to get, let me say, an empty provider up.

C: Hand up; I was going to ask: do you have a requirement to have Kubemark as a library in a different repo, or if it becomes a library within CAPD, would that be okay as well? Because I'm thinking, right now I can't think of any other users apart from our own testing, and maybe someone else, you know, vendoring CAPI downstream.

A: Yeah, I mean, I personally don't have any hard requirements to have Kubemark as a separate provider the way it is currently; we just kind of ended up there organically. And yeah, to what Naadir is saying (your volume's a little low, Vince): I don't necessarily have a problem if we centralize the work in the CAPD provider. In fact, I think that seems to make the most sense, and if there ever arises a need to have the separate Kubemark provider on its own, then we could try to use some sort of packaging to get the library bits out of there.

A: I don't necessarily think we need to keep it separate, and honestly I think it makes more sense for us to focus the work inside the Cluster API repo, in the CAPD provider. If we decide to go that way, though, I would like to see us archive the Kubemark provider, just so we don't create confusion in the community. I do want that.

B: Yeah, well, I think that we need to wait for SIG Scalability's feedback, because if they become an actor in this story, this would be a huge win for both sides, but it will probably require something to accommodate their needs as well. So let's see what their reaction to the proposal is.

A: Okay, Vince, you still have your hand up; did you have something else? No? Thank you. All right, cool. Are there any other opinions or objections to what we're talking about here?

B: Okay, so I'm starting to work on some of the changes required in v1alpha4 for basically making clusterctl move support the new model of moving credentials from one cluster to the other, and I was taking a look at this code.

C: Vince, go ahead. So the labels, though, are indexable, and we could query on labels through the client. I'm wondering if there is any downside here, because you can't just query all the objects with a specific annotation, but you could do that with labels.

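For reference, a minimal sketch of the point being made, assuming a controller-runtime client and a hypothetical label key: List calls can be filtered on labels by the API server, while there is no equivalent ListOption for annotations.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// listLabeledSecrets asks the API server for only the Secrets carrying the
// given label; annotations have no equivalent ListOption, so filtering on an
// annotation would mean listing everything and sifting client-side.
func listLabeledSecrets(ctx context.Context, c client.Client, ns string) (corev1.SecretList, error) {
	var secrets corev1.SecretList
	err := c.List(ctx, &secrets,
		client.InNamespace(ns),
		client.MatchingLabels{"example.io/move": "true"}, // hypothetical label key
	)
	return secrets, err
}
```
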
B: Yeah, this is a good point. To be honest, I don't have in mind a use case where you want to search for this CRD or for this object, but yeah, I'm fine with either one. Because now, for a little bit more context, I have to introduce a new label or annotation to allow moving not only a single object, but the object and its entire hierarchy: for instance, an identity, together with the secret that is linked to the identity. And even that I want to...

B: I'm not aware of it. This is why I proposed the change, but if we are fine to keep the labels, it's not a big problem for me; I will just add another label. That's it.

C: Okay, cool, I was just curious.

A: Go ahead.

B: The label can be applied both at the CRD level, for instance if I want to move all the IPAM claim objects, or it can be applied on a specific object, so that I move only this one secret. For instance, we are using this to move the cloud-config secrets. So the label can be at both levels, CRD or object.

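A minimal sketch of that contract, assuming the label names: the first is clusterctl's existing opt-in move label, the second is an assumed name for the hierarchy variant being proposed here.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Labels clusterctl inspects when building the move graph. The first is the
// existing opt-in label; the second is an assumed name for the hierarchy
// variant discussed above.
const (
	forceMoveLabel          = "clusterctl.cluster.x-k8s.io/move"
	forceMoveHierarchyLabel = "clusterctl.cluster.x-k8s.io/move-hierarchy"
)

// cloudConfigSecret opts a single object into clusterctl move; setting the
// same label on a CRD would instead opt in every object of that kind.
func cloudConfigSecret(ns string) *corev1.Secret {
	return &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "cloud-config", // illustrative name
			Namespace: ns,
			Labels:    map[string]string{forceMoveLabel: ""},
		},
	}
}
```
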
B: Currently in clusterctl we are already scanning the set of objects, and that means we are not doing a separate operation, a separate List with a label selector. So having a label is not a requirement for clusterctl, given that it is already reading the objects for other reasons.

F: Yeah, so a couple of weeks ago we decided to try and formalize some of the guidance around controller multi... not controller multi-tenancy, but provider multi-tenancy contracts: for Azure, being able to provision into different subscriptions; for AWS, being able to provision into different AWS accounts; and for vSphere, connecting to different vCenters.

F: I've got two main questions I want answered before I go further with that guidance; I'll go with the easier one first. There is a doc that explains more about what the use cases are, if you're not familiar with them. The first question is from David Justice, around how label selectors work.

F: So, for instance, the AWS provider uses the sort of default label selector mechanism, which is, well, Sedef can correct me if I've got it the wrong way around, but: if the label selector is nil, it applies to everything, and if the label selector is empty, then it applies to nothing. And there was a question: that's not very explicit for the end user. Now, this is a normal Kubernetes construct; do we want a more explicit behavior?

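For reference, the upstream apimachinery helper actually has these the other way around (a nil selector matches nothing, an empty one matches everything), which is exactly why the behavior is easy to get backwards; individual APIs sometimes layer their own defaulting on top. A minimal sketch:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	nilSel, _ := metav1.LabelSelectorAsSelector(nil)
	emptySel, _ := metav1.LabelSelectorAsSelector(&metav1.LabelSelector{})

	ns := labels.Set{"team": "cluster-api"} // arbitrary labels to test against
	fmt.Println(nilSel.Matches(ns))   // false: a nil selector selects nothing
	fmt.Println(emptySel.Matches(ns)) // true: an empty selector selects everything
}
```
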
F: I've got a suggestion from the new Gateway API; they do have an explicit field for this now, so maybe we want to do that. So that's one question. And then the second question is: in the guidance we say, for cluster-scoped resources, if you're referencing, say, a secret in that CR... So the AWS example is the easiest, or at least the one I know well enough: there's a cluster static principal that references an AWS access key and secret access key.

F: We said we're not going to support this for v1alpha4, but some people might still use separate instances of the controller per namespace. If they have deployed multiple instances, they might end up referencing the same cluster-scoped resources, and then which secret gets read is kind of weird. So there are basically two options.

A: Yeah, we've got a couple of hands up, and Vince added a comment in chat as well. Go ahead.

E: Yeah, so for cluster-scoped resources referencing secrets in different namespaces: I guess the fact that you're able to create such resources is in itself a highly privileged operation. So I guess that if you have the right RBAC, this means that you are pretty much a trusted user of that cluster, so referencing a secret within another namespace might be a legitimate use case for you.

D: Yeah, I just had a clarifying question around running multiple instances. At least from the docs (I don't know if they're outdated), it said that it's a way to basically support multiple credentials before multi-tenancy is properly supported.

F: I'll just reply to that one. I think the main reason I had was around scalability: you might want multiple instances for resource reasons, or you might want a higher level of isolation. I know Red Hat had that concern; I'll leave it to others to say how they feel about that.

B: This becomes kind of weird and complex as soon as, basically, your instances of the provider start running different versions, and this is why one of the big themes we want in v1alpha4 is to remove the possibility of having multiple instances of a provider, and instead support a different way of expressing multi-tenancy; because, let me say, the original multi-instance support was a workaround for solving multi-tenancy.

B: Now we can solve multi-tenancy in a proper way, and so basically we are trying to avoid complexity in the deployment.

C: If I can add to that: the webhook code (validation, defaulting, and conversion) is used and considered as part of the controller code, so it runs before an object actually gets reconciled. For example, in some places we don't do nil validation because we defaulted the field in the webhook code, and I mean, that's expected, because that code is part of the chain.

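A minimal sketch of that coupling, with an illustrative type (not a real Cluster API resource): the defaulting webhook fills a field, and later code is written assuming the default already ran.

```go
package v1alpha4

// Widget is an illustrative API type, not a real Cluster API resource.
type Widget struct {
	Replicas *int32 `json:"replicas,omitempty"`
}

// Default is the defaulting hook (as in controller-runtime's
// webhook.Defaulter): it runs on admission, before the object is persisted
// and therefore before any reconcile sees it.
func (w *Widget) Default() {
	if w.Replicas == nil {
		one := int32(1)
		w.Replicas = &one
	}
}

// Reconciler code can then skip the nil check, but only if every running
// controller version registered a webhook that sets the default; that is
// precisely what breaks when differently-versioned instances run side by side.
func desiredReplicas(w *Widget) int32 {
	return *w.Replicas
}
```
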
C: So if you have a patched version that does that and an old version that doesn't, you're getting two issues, and these webhooks are global. Oh, and I forgot to mention: I wanted to touch base on the label selector. So far we have never allowed (or actually, it was allowed and then disallowed) a label selector that would match everything. This goes against the Kubernetes conventions, but we don't have to follow all of the conventions that they use if we have reasons not to.

C: So I'm wondering: do we want to do this 100%? Because you could label things that have to get default credentials; you could just say, hey, this label is for my default credentials, and apply it to all the namespaces. I guess you would want to get the label in.

F: So, concretely, for AWS we did it for the cluster controller principal, which is a singleton CR, and it's there to allow the upgrade path. If you've already got a load of AWS clusters, they're already using a set of credentials, so you want a seamless upgrade: we need something that can default to all namespaces, and we have a flag in the controller that automatically creates that resource with a default allowing all namespaces.

F: I think what I'm hearing, though, since we don't like that behavior in Cluster API, is that maybe we should go for the more explicit option that the Gateway API has opted for, which is a "from" field that is an enumeration. Now, there are a whole bunch of differences with the way Gateway API works, because they have a many-to-many relationship between gateways, which are load balancers, and HTTP routes, which is not quite the same use case.

F: But all of those resources are namespace-scoped, and they have a "from" field, an allowedNamespaces equivalent that we copied, where you can select the same namespace, or a list of namespaces, or a selector. So I guess, if we're sticking with the cluster-scoped resource, we would have an allowedNamespaces "from" as an enumeration of either selector or all, and then that would make the sort of defaulting behavior explicit.

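A hedged sketch of the shape being described (field and type names are assumptions, modeled on Gateway API's enumeration rather than on any final CAPI type):

```go
package v1alpha4

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// NamespacesFrom enumerates how namespaces are granted use of an identity.
type NamespacesFrom string

const (
	NamespacesFromAll      NamespacesFrom = "All"      // any namespace may reference the identity
	NamespacesFromSelector NamespacesFrom = "Selector" // only namespaces matching Selector
)

// AllowedNamespaces replaces the implicit nil-vs-empty selector convention:
// behavior is driven by From, and Selector is consulted only when
// From == "Selector", so nothing hinges on a missing versus empty field.
type AllowedNamespaces struct {
	From     NamespacesFrom        `json:"from"`
	Selector *metav1.LabelSelector `json:"selector,omitempty"`
}
```
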
C: Okay, but even though you have that option, couldn't you just pre-apply that label to all the resources?

F: Yeah, that's right, so we could then put in other defaulting that would set that explicitly. But then, basically, we definitely move away from the standard Kubernetes core behavior in favor of something that's a bit more explicit and doesn't rely on you knowing the difference between nothing and an open bracket followed by a closed bracket in a YAML file.

F: Okay, thanks for that; so I think that's question one resolved. Question two: we've discussed the badness around having the multiple instances. Do we definitely want to allow you to select a secret from any namespace in a cluster-scoped resource, given that it is generally a super-privileged operation (Vince is shaking his head saying no), or do we just document that this is basically how it's going to work, and you're on your own if you deploy multiple instances?

A: I see Fabrizio's and Vince's hands up. I don't know who was up first, though; please, go ahead.

C: Go ahead, Vince. So, generally, crossing namespace boundaries is a big no-no, so I would keep that boundary. As for deploying multiple controllers, as we have mentioned, that would be an unsupported path, and we kind of agreed that if we want to go in that direction, we need to make changes throughout the code base, and there needs to be a proposal that actually spans across controllers, solves the webhook question, and covers how we deal with CRDs.

C: So I'm inclined to just, you know, put that boundary in place here, because we already decided on that bit, and yeah, we shouldn't do cross-namespace pulling of secrets, especially.

B: Yeah, and I think, on top of that, if we allow secrets to be in any namespace, that means that when we implement clusterctl move, we have to read from all the namespaces, which is not nice. So the solution that we discussed is that we read only from the namespace that we are moving and from the provider namespace, which is something that we already do; so it is kind of better scoped to the Cluster API world inside a cluster, instead of all around the cluster.

A: Thanks, everyone. Cool, cool; that's what I was going to say. It sounds like we got some answers to your questions, and I guess, in terms of the multiple instances of providers, it sounds like the next step, if we want to keep pursuing that, is to create an enhancement proposal and really have the discussion there about the details and everything.

A: Okay, well, there are no more topics on the agenda and no one has added anything. Are there any last-minute questions, comments, or concerns people want to bring up, or should we end the meeting?