From YouTube: 20210106 Cluster API Office Hours
A
Hello everyone, and welcome to the first Cluster API meeting of 2021. Happy new year to every one of you, and welcome back; excited to start a new year with all of you. So the first thing that I wanted to mention is that we have a new meeting notes link. This is the new notes document for the new year; we usually start a new document at the start of each year. I put that in the chat.
A
We have a meeting etiquette: if you would like to speak up or ask questions, feel free to add an agenda topic down here and use the raised-hand feature in Zoom if you want to speak up. All right, let's get started. We don't have any PSAs for today; things have been ongoing. We have two branches ongoing in the repository: v0.3.x is our current stable release, and the main branch right now is open for breaking changes for the v0.4 release, which is going to be a few months away. Any questions on that before we go to discussion?
A
All right, so let's start with the first discussion topic: Fabrizio and Sagar.
B
Hi everyone. While working on end-to-end stuff, specifically around the Cluster API Provider Docker (CAPD) end-to-end tests, I noticed that in the v0.3 branch we still have support for kind v0.7, which is something that has been dropped in the main branch.
B
The side effect of this is that CAPD in v0.3.x is going to use the kind network instead of the bridge network.
B
I guess this is not a problem, because CAPD is a development provider we are using only for tests, so clusters are ephemeral; this should not break anything existing. There is also a nice side effect: after the move it will no longer be necessary to export the KIND_EXPERIMENTAL_DOCKER_NETWORK variable, which basically simplifies the steps in the quick start.
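For context, KIND_EXPERIMENTAL_DOCKER_NETWORK is the real kind environment variable being referred to; the Go sketch below only illustrates the network-selection behavior under discussion. The function name and fallback logic are assumptions, not CAPD's actual code.

```go
package docker

import "os"

// defaultNetwork illustrates the network selection being discussed.
// Previously, CAPD assumed Docker's "bridge" network, so users on kind
// v0.8+ had to export KIND_EXPERIMENTAL_DOCKER_NETWORK=bridge to force
// kind onto the same network. Once both sides default to the "kind"
// network, that export step disappears from the quick start.
func defaultNetwork() string {
	// Honor an explicit override if the user still sets one.
	if net := os.Getenv("KIND_EXPERIMENTAL_DOCKER_NETWORK"); net != "" {
		return net
	}
	return "kind" // the network kind v0.8+ creates and uses by default
}
```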
B
So I don't see a big problem, but given that this is a change on an old release, I preferred to ask the community whether you agree on this change or not, or if you see problems.
A
There are a couple of +1s and a question in chat: is kind v0.9 supported? There's no mention of it on the quick start page.
A
Okay, the change sounds good to me. It is a breaking change; I want to put that a little bit in quotes, because technically our test apparatus is not under guarantees, but we will make sure that the release notes have enough text that warns the user that kind v0.7 has been deprecated for the testing environment and that we're now using v0.9. Given that we're planning to support v0.3.x for a lot longer, we should definitely upgrade, so it sounds good to me.
A
All right, so Andrew, you have the next two topics.
E
Hi guys. I want to share with you the work that we've been doing at Capital One. I don't want to take up too much time in this meeting, because it may not be something everybody's interested in, so I'll just try to give you an introduction and a summary, and if there's alignment and people want to talk with us further later, let's sync up on Slack.
E
We've been working on an operator which seems to have a lot of overlap with one that I think is at the proposal stage, a cluster management operator in the Cluster API space. We started a similar thing a year or two ago, where this operator can drive Cluster API to provision clusters on demand, to facilitate a Kubernetes platform as a service.
E
So people can sort of throw applications into our platform, and our operator will provision clusters and decommission clusters as necessary.
E
I think the stated goals have a lot of overlap with some existing proposals, so we'd really like to seek as much alignment there as possible. Our goal would be that we follow the intentions and the design patterns that you folks are building out closely enough that eventually we can just switch to your operator without too much pain, and feed our learnings into that process as well.
E
So that's our aim there. This operator currently has two concerns that we want to split out into separate operators. One is managing clusters; the other is managing applications for users: making sure that their applications are assigned to a cluster, have a namespace on that cluster, and that there is a way for the people that own an application to access their namespace and deploy things into it.
E
We want to split those things out, so I think there's an opportunity for a couple of operators there that could be of interest to this SIG. We're also working on a node pool management solution, so that we can provision clusters for applications that have specific needs with respect to compute, like GPU capabilities, or that need to be isolated to particular nodes within a cluster.
E
Whatever cluster an application is running in, we want the platform to have the ability to provide that isolation, to provide that specific infrastructure for those applications, so that we can have different kinds of workers within a single cluster, and all we have to do is manage templates. We're thinking along the lines of storage classes.
E
These are the kinds of nodes that should be available for applications, so any application that requests this, or expresses that it has this need, will use that node class, something like that. Those are the two major initiatives that we wanted to bring up, hopefully to kick off or feed into some existing conversations. We've been beavering away on our own for quite a while.
E
I think it's past time that we start to feed in what we can into this SIG and hopefully add some value to the community here. There's a question in chat: how can we learn more about the operators described? That's a great question. We work for Capital One, and being a financial institution, everything we're doing is on a private GitHub, and there are very strict rules around what we can share. So we're going to need to work with our internal open source and, I guess, legal teams if we want to publicly share documentation about what we're doing; we're going to need to work on that and come back to you. But in the meantime we can have Zoom sessions where we share what we're doing on an ad hoc basis.
E
On Slack my name is Andrew Meyer, my full name. I think the best, and at this point the only, way would be to reach out to me directly, and we can get some collaboration going.
F
You mentioned that there's some overlap with some existing proposals. I'm just curious whether you reviewed the management cluster operator proposal, and exactly how much overlap there is and where, because what you're describing sounds more like managing the workload clusters, while the proposal is aimed more at managing the Cluster API providers themselves.
E
If I'm off base about that, that's cool; maybe there's an opportunity for a new proposal here. We'd be happy to provide that.
A
Thank you, thanks for pointing that out. It would be great if you folks could review that; it sounds like there might be some overlap, but it might be very thin. One other thing that I wanted to share is this brainstorming doc on CAPI that we did a while ago, which introduces the concept of node pools in Cluster API.
A
This is very high level and very hand-wavy about how it would work; we're just starting to think about how we make the user experience better in Cluster API. So all of these things that you just shared are actually really great concepts that we can bring upstream, in the form of new proposals or by making sure the proposals we already have can be adapted and extended. So yeah, that sounds great.
E
Okay, so the best way to proceed would be for us to provide new proposals for, I think, these three different items: workload cluster management, application management, and node pool management. We provide those, and then we can go from there.
A
Yes, I think so. I was thinking about the application management, because there's a lot already in the Kubernetes space that we could reuse, so we just need to make sure that we don't reinvent the wheel and that we collaborate with the other SIGs, if that makes sense.
E
That's right, yeah. Somebody just pointed out that this might not be the right SIG for all of these things. We really need to improve our understanding around all of that as well, because, like I said, we haven't been collaborating enough, and we probably don't have all of the context for all of the pieces that people are thinking about.
A
Yeah, the application management definitely feels outside the scope of the SIG, but the node pool and the workload cluster design proposals are things you might want to share; we can work on some proposals together and keep brainstorming on how to make the user experience better, but also add new features built on the current building blocks.
G
Right, okay. I just want to jump in for a moment on the application management. I don't think we have the best naming around it at the moment: it's namespace allocation across fleets of clusters, so it's not the actual deployment of specific applications, like what Admiral or Rancher's Fleet does; it's a bit different.
G
So I understand that maybe it isn't supposed to be here, but I think it's also poorly named, so there may be something we're not explaining well.
A
Okay, yeah. That was another thing that I was actually thinking about. There is also this other KEP upstream in Kubernetes from SIG Multicluster, and, Craig, I see you were talking about multi-tenancy. SIG Multicluster is actually working on this KEP for cluster identification, which I pointed at in a different topic, but it seems like this is a recurring topic: okay, now we have all these workload clusters.
A
How can we manage them properly? For example, allocate namespaces, but also understand that this cluster is part of this management cluster rather than that other management cluster. There was also the concept of a ClusterSet: identifying, in one way or another, a set of clusters that you want to perform actions on. So take a look at it.
A
We probably need to collaborate with the other SIGs on these things, but for the other two points we can definitely put some proposals together. Excellent, thank you very much. Cecile, I thought that you had your hand raised before.
F
I don't know, it's okay. I was just going to say that even if they're not a good fit within the cluster-api repo, it doesn't mean they don't belong in the SIG itself; they might be subprojects, like Jason mentioned.
A
Yeah, perfect. Thank you all for sharing, and I'm eager to collaborate on these things. I think David and Cecile are next.
H
Yeah, so we've been working on machine pools and cluster autoscaler, respectively, and in the Azure machine pool, we do not get all of the events that we would like to get from the VMs.
H
For example, when we go to upgrade, we don't have a local event that tells us that a VM is going to be provisioned and then cleaned up. What that makes tough is understanding, from the VM side, which one in the pool is getting replaced, so that we can safely drain load off of that machine. So to coordinate a drain, we need to have that event.
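For reference, the cordon-and-drain step being discussed looks roughly like the sketch below, built on the kubectl drain helper package. The missing piece David describes is exactly the trigger: knowing which node to pass in. The function name and wiring here are illustrative, not CAPI's actual drain code.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/kubectl/pkg/drain"
)

// cordonAndDrain cordons a node and evicts its pods: the step a machine
// pool implementation would run once it knows which instance is being
// replaced. Illustrative only; CAPI's Machine controller has its own logic.
func cordonAndDrain(client kubernetes.Interface, nodeName string) error {
	node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	helper := &drain.Helper{
		Ctx:                 context.TODO(),
		Client:              client,
		Force:               true, // also evict pods without a controller
		IgnoreAllDaemonSets: true,
		GracePeriodSeconds:  -1, // respect each pod's own grace period
		Timeout:             2 * time.Minute,
		Out:                 os.Stdout,
		ErrOut:              os.Stderr,
	}
	if err := drain.RunCordonOrUncordon(helper, node, true); err != nil {
		return err
	}
	return drain.RunNodeDrain(helper, nodeName)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := cordonAndDrain(kubernetes.NewForConfigOrDie(cfg), "node-being-replaced"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```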
H
We were going to drive the cordon and drain very similarly to what's done for Machine, and the state there would need to be tracked; tracking it on AzureMachinePool is just going to be terrible, so we want to track it on individual resources. I know I've brought this up in the past, and I think others have brought it up as well: modeling the machines that actually make up the pool as individual resources.
H
We didn't need to right away, but now we're starting to need to. We're also realizing that there are platform-ish things that we would love to have that just aren't there yet, like maxSurge and maxUnavailable, these kinds of things. There are some platform features that we want to expose, but there are a lot of platform features that you kind of want to keep hidden under the machine pool. So this differs.
H
It still differs a little bit from MachineDeployment, but I'm not sure exactly how much. And then Cecile has been looking at cluster autoscaler. Cecile, would you like to talk about your discoveries in cluster autoscaler land?
F
Sure, yeah. I worked on a spike right before the holidays to try to integrate MachinePool in the Cluster API provider for cluster autoscaler, and it's working pretty well, except for one small detail which is actually a big blocker: cluster autoscaler expects you to be able to delete individual instances for the delete-node implementation, and we're not able to do that currently with MachinePool. For a Machine, we just mark the machine for deletion and then Cluster API deletes it.
F
But for MachinePool we can't say "delete this specific instance"; we can only scale down or scale up. So that's a problem in terms of implementing cluster autoscaler all the way. And Fabrizio is saying it's the same problem for MachineHealthCheck; yeah, that's a good point.
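For context, this is roughly the contract the autoscaler imposes. The sketch below is a simplified excerpt of the cloudprovider.NodeGroup interface from the cluster-autoscaler codebase (the real interface has more methods); the comments tie it back to the MachinePool gap being described.

```go
package sketch

import apiv1 "k8s.io/api/core/v1"

// NodeGroup is a simplified excerpt of cluster-autoscaler's
// cloudprovider.NodeGroup contract. DeleteNodes is the problematic method
// for MachinePool: it names specific nodes to remove, while MachinePool
// only exposes a replica count.
type NodeGroup interface {
	// IncreaseSize scales the group up by delta instances.
	IncreaseSize(delta int) error
	// DeleteNodes deletes these specific nodes and shrinks the group.
	// A Machine-backed group can implement this by marking each Machine
	// for deletion and scaling down; a MachinePool has no per-instance
	// handle, so the infrastructure picks the victims.
	DeleteNodes(nodes []*apiv1.Node) error
	// DecreaseTargetSize lowers the target size without deleting nodes.
	DecreaseTargetSize(delta int) error
}
```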
H
Yeah, so one thing we've started down the path of in the Azure provider is experimenting with modeling these individual machines as machines themselves: AzureMachinePoolMachines. Now the question becomes: if we go down this path, we run into a few problems. For example, with the Azure implementation, at the AzureMachinePool level we're specifying zones, and then the actual infrastructure provider handles them.
H
Behind the scenes, the actual resource provider in Azure is responsible for balancing machines across those availability zones. So if we start deleting things without that kind of intelligence at the CAPI level, it gets a little scary: you may lose out on some zonal redundancy.
H
When you go and add a machine, you can't tell it what zone to be in at that point, so you end up in some weird situations.
F
Another thing I just want to add: there's also the problem that different infrastructure providers have different capabilities with machine pools, and we want to enable the ones that don't have certain capabilities to still be able to leverage the ones they do have, without blocking the ones that have everything from using everything.
F
Jason?
I
Yeah, so I think this is one of the challenges that we faced with cluster autoscaler with the initial integration as well. The way it operates, it really operates on a per-node level, and everything else is just kind of mapped back onto that. We started out with a weird initial impedance mismatch for the Cluster API integration there, because obviously it's interacting with MachineDeployments and MachineSets today.
I
Hopefully MachinePools in the future. We worked around that by adding the delete annotation to a Machine.
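The mechanism Jason refers to is the Machine delete annotation (cluster.x-k8s.io/delete-machine), which the MachineSet delete policies honor when replicas are reduced. Below is a minimal sketch of that two-step scale-down; the annotation key is real, but the function and client wiring are illustrative rather than the autoscaler provider's actual code.

```go
package sketch

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// deleteMachineAnnotation is honored by the MachineSet delete policies:
// annotated Machines are picked first when replicas are scaled down.
const deleteMachineAnnotation = "cluster.x-k8s.io/delete-machine"

// markAndScaleDown annotates a victim Machine, then decrements the owning
// MachineSet's replica count, so the scale-down removes that machine
// rather than an arbitrary one. MachinePool has no equivalent today.
func markAndScaleDown(ctx context.Context, c client.Client, m *clusterv1.Machine, ms *clusterv1.MachineSet) error {
	if m.Annotations == nil {
		m.Annotations = map[string]string{}
	}
	m.Annotations[deleteMachineAnnotation] = "yes"
	if err := c.Update(ctx, m); err != nil {
		return err
	}
	replicas := *ms.Spec.Replicas - 1
	ms.Spec.Replicas = &replicas
	return c.Update(ctx, ms)
}
```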
I
But that's not saying that has to be the way we interact with a machine pool; we can figure out a similar type of mechanism for doing that. That said, it does get really weird, because you end up with a weird differentiation between the scaling operations, and between what you determine to scale down and scale up for manual operations versus automated operations. And I think, longer term,
we need to figure out how to solve that better, because, like you said, it would be great to defer some of that to a system that has a better understanding from the infrastructure side.
I
The autoscaler does things around pod disruption budgets and the like, where you don't want to yank out infrastructure that would violate a pod disruption budget or something like that. As for how we find the right balance, I think there's going to be a short-term solution, how do we make it work, and then the longer-term solution is going to be broader collaboration: figuring out how we enable better integration with things like the autoscaler to handle these types of decisions, which sometimes require scheduler input and sometimes infrastructure input, and how we make the right choices.
B
Yeah, it is an idea that I would like to explore, and I'm happy to join if we want to explore this. I agree with Jason that there are probably so many problems that we have to take a progressive approach to get there.
A
One question that I had: when you mentioned that you tried to create machines under a MachinePool object, I would assume that you hacked this by having an infrastructure reference that was pointing to yet another resource?
H
An AzureMachinePoolMachine, yes. It wasn't really a Machine; it was an AzureMachinePoolMachine. Really, all it had in the spec was a provider ID, and the status just tracks things like: are you draining, are you running, what are your conditions?
H
So no, it didn't tie back to a CAPI-proper Machine yet, but we could structure something like that, or something like kind of a shadow machine; I don't know, there's a lot that we could do.
H
I think what's going to drive it is our use cases: should the cluster autoscaler just operate at the replicas level, instead of saying "hey, I have more intelligence about deleting an individual node"? Now, with machine health checks, this becomes a little more significant, and this is something that we might be able to actually put into the provider itself, like the cloud provider; these are such overloaded terms.
H
We could program Azure to do things similar to that and have it plugged into a health-check API at the Azure level, where the scale set would take care of the remediation, or something like that.
H
So there are ways, and this gets back to what Cecile was saying: we don't want to cordon in a provider implementation for machine pool based on the lowest common denominator.
A
Yeah, so I was asking because this is the second time I've heard a use case similar to this one, and it got me thinking: one way to solve this is to actually remove the requirement that we have on infrastructureRef today, and let a Machine also operate just on having a Kubernetes node attached to it.
A
One benefit of this is that you could reuse the cordon and drain that we already have, and other features, for example the ability to label nodes that we're adding now. The machine controller then becomes a little more unaware of the infrastructure, but we effectively offload the infrastructure all the time anyway; we don't inspect it.
A
Apart from the provider ID, it goes back to just having a node: we just need a different way to associate that machine with a node; that's the end goal that we want to achieve. If we do that, then MachinePool could just create Machines with the provider ID already filled in and operate on those. But I don't even know if that's where we want to be. Jason?
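A minimal sketch of the idea being floated: a Machine that carries only a pre-filled provider ID so the controller can associate it with a node, with no infrastructureRef. This is hypothetical; today's API requires Spec.InfrastructureRef, and relaxing that requirement is exactly what is being discussed. The helper name is made up.

```go
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/utils/pointer"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
)

// machineWithProviderIDOnly builds a Machine whose spec carries only the
// provider ID needed for node association. HYPOTHETICAL: the current API
// requires Spec.InfrastructureRef, so this object would be rejected today;
// the discussion above is about relaxing that requirement.
func machineWithProviderIDOnly(clusterName, providerID string) *clusterv1.Machine {
	return &clusterv1.Machine{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "pool-machine-",
			Labels:       map[string]string{clusterv1.ClusterLabelName: clusterName},
		},
		Spec: clusterv1.MachineSpec{
			ClusterName: clusterName,
			// The node association the machine controller actually needs:
			ProviderID: pointer.StringPtr(providerID),
			// Spec.InfrastructureRef deliberately left empty.
		},
	}
}
```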
I
So, while I'm all for reusing implementation as much as possible, I would like to see some type of clear distinction that's easily noticeable, to differentiate between a resource that we're creating out there just to act as a machine versus a machine itself, to avoid a similar kind of situation.
A
Yeah, that's what I was trying to say: we could remove the infrastructure requirement if you can provide a provider ID out of band. That would be the minimum viable product to get the machine to a node, because in the end we just want machines to become Kubernetes nodes; we have had that link for a while.
A
So maybe this is the time to open an issue about this, to see if we can split or relax the requirement on infrastructure a little bit more. There has been another requirement that came up from CAPN, the nested provider; I don't know if the folks from there are on this call right now. It was to attach arbitrary Kubernetes nodes that already exist to a cluster whose control plane runs on the management cluster.
A
That's what the nested provider is going to do, but to do that you need to get away from requiring the infrastructure ref to be there and to be the thing that gives you the provider ID.
C
Hello, happy new year. I just wanted to surface this again; this is regarding the management cluster / providers operator. Clearly there's some confusion with the naming here as well, which we are discussing, but I just wanted to bring up the CAEP.
C
It's gotten a lot more mature since it was initiated, and my only request is: there are a couple of open questions, and if the provider authors, or people who are working on providers, could take some time to look at some of the discussions on there regarding credential management, and also the other one.
C
I think the other main open question was whether the CRDs should be namespace-scoped versus cluster-scoped. I'm hoping to wrap it up at the end of the week. Thanks to all of you who have looked at it; I'm actively going through the comments and responding to them, but I just thought I'd raise this again.
A
Thank you, Warren. Any questions on the management cluster operator CAEP? This has been open for a while, so I would love it if, in a couple of weeks, we could actually merge it, start writing down some code, and try things out. But thank you to all who have already reviewed it.
A
All right, are there any other questions, comments, concerns, or last-minute topics?
D
Yeah, maybe not really related, or only tangentially related, to the management cluster operator topic: in the doc I had opened discussing how to upgrade controllers and how to run multiple controllers or operators at the same time, there's been a lot of discussion until now.
D
I know that the proposal for the management cluster operator kind of defines upgrading controllers as well. I was still wondering how I could push forward the idea of multiple controllers running at once.
D
What would be the next step for me to engage here? Because I still think that the concerns in the doc are valid, and we have discussed quite a lot there. I'm just wondering what a good way to proceed would be now.
C
Yes, you're correct: the operator and the proposal clearly identify a specific direction on multi-tenancy and how to manage the providers, namely, like you said, a one-to-one relationship.
C
I did take a look last year at Marcel's Google doc; I'll take another look to regain context and refresh my thoughts on it, but you have other community members besides myself. If you get the chance, can you put the link to that Google doc in here, just for easy access, since this is a new meeting notes doc for 2021 for CAPI? Then I'll take a look and comment.
D
The main point of why I'm bringing this up is that, while the management cluster operator proposal says there should only be one provider per management cluster, it doesn't really solve the question I've raised, in my opinion, namely how to prevent the sprawl of management clusters. I still believe that, with the hard one-to-one relationship we would have here, if you wanted to run Cluster API in a production-safe environment, you would not get around creating a lot of management clusters.
A
So, if I remember correctly (I was trying to pull up the notes from last time), the main worry that you folks had was that breaking changes in releases could impact a running production cluster, which is a completely valid point, and one that could be solved with more tests.
A
We do need help with that. In our experience, we have been very, very careful, and we do have extensive end-to-end tests in a lot of areas, including the control plane parts. The thing that I wanted to point out, though, is that we are adding support to move a single cluster from one management cluster to another.
A
Correct me if I'm wrong, but you would be able to do some of that by having two different management clusters and moving things along, which would actually be in line with what we are trying to achieve. One of the things we're trying to push forward is reducing the number of controllers running in a single cluster, which could otherwise have skew between CRDs and things like that, and having more of a lockstep behavior; so we promote that, and it has actually worked very well in past environments.
D
Yeah, I think we should then continue the discussion in the doc and the proposal, because the concerns you brought up I have tried to address, at least in the comments: in our experience there is no simple A/B testing; for production clusters it's usually a longer process, so you would have multiple versions sprawling out, not just an A and a B. The second issue, which in my opinion complicates this: sure, we treat clusters like cattle, but if you have a lot of management clusters you need to manage those management clusters as well, and you suddenly get into really complex scenarios without really intending to.
B
So
sorry,
I
only
would
like
to
to
add
that
the
management
cluster
operator
is
not
that,
in
my
opinion,
the
trigger
for
bdf
having
only
having
a
single
list
of
the
provider,
the
the
the
managing
class
operator
is
just
leveraging
on
a
decision
that
was
taken
for
other
reasons
and
and
is
leveraging
this
decision
for
basically
everything
a
simpler
code.
B
The decision was due to the problem we had in managing different versions of the same provider at the same time, and especially in managing the parts of a provider that cannot be instance-specific, which are CRDs and webhooks. The problem is that Kubernetes does not support instance-specific webhooks or instance-specific CRDs, and basically the work of trying to overcome this limit of Kubernetes resulted in complications in how we build our deployment YAML and our kustomize machinery, and, in the end, in a lot of complexity in clusterctl.
B
So the operator is not the root cause of the decision; the decision is driven by other topics. I would also like to add another point with regard to this: I think that improving testing to try to fix this goes in the same direction that Kubernetes itself goes.
B
In the end it is a simplification, so it is welcome, in my opinion. If we want to make it possible to run multiple instances of a provider, that is something we can certainly discuss, but I would advise not making it the default in Cluster API.
A
Yeah, that makes sense. So the operator won't necessarily support that, but it would not stop you from doing something like that; that's one way to put it, I guess. Maybe we should add some text about it; actually, if I remember correctly, we already have something like this.
A
Yes, we only support managing one instance of a provider, but if folks want to go another way and install Cluster API differently, they should be able to do so, and I don't think there's anything in there that stops them.
A
Actually, nothing stops you, because, for example, we're not going to drop the namespace flags; we're actually going to keep them, so if you do want to go down that path, you or anybody else can; we're not going to stop you. This is the same approach we took with mutability: if you are mutating, for example, Kubernetes nodes underneath Cluster API, Cluster API won't see those changes, but we won't block you.
A
It's not like we actively check that everything matches and overwrite all of the changes; some of them we do, but that's the same approach we took. I'm interested to see if we can actually do an amendment to the proposal after it gets merged, to allow these other use cases, and we can collaborate with you and help you drive that forward.
D
That sounds good to me. And just to make it clear, I'm not hell-bent on running it the way I'm describing; I mostly see those concerns, especially with this sprawl of management clusters, and I don't currently see them being handled efficiently in the proposal or in the direction we're going. But maybe that's just not even that big of a concern.
B
Hey, yeah, a quick note: also, this week there was a problem on the e2e tests, and after discussing the problem someone expressed interest in a walkthrough of the end-to-end machinery. I'm happy to do that, and to share not only how it works but also where we are going. So if someone is interested, let me know, or write your name below the entry in the meeting minutes, and some time next week I will send out a Doodle to organize this walkthrough.
A
Sounds great. Let's also send out an email to the mailing list; maybe there are some folks that can't come to this meeting who would be interested.