B
I just want to give it a second, okay. Thank you. So I just opened an issue for the conversation we had last week about ClusterClass and managed topologies. If you can open this... So, just to clarify a little bit of the higher-level goals: we are trying to introduce, as a non-breaking change in a 0.4.x release, two new concepts.
B
Well, one, a new CRD, and one, a new managed topology inside of the Cluster object. The new CRD will be what we call ClusterClass here, and it's really a way to stamp out clusters of the same shape. What that means in practice is that it's going to be a collection of template references, like we do for machine deployments and, I think, machine sets. So, for example, here, if you look at the spec, we have a template for the infrastructure.
B
So we have the cluster template, which could be an AzureCluster or an AWSCluster, for example. The control plane ref would be, for example, KCP, which means we will need, I guess, to introduce new template types, so that's one of the things we might want to discuss. That's kind of the first iteration; I don't know yet about the machines.
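A rough sketch of what such a ClusterClass could look like, based on the discussion above. The field names, the API version, and the AWSClusterTemplate and KubeadmControlPlaneTemplate kinds are illustrative placeholders for the new template types being discussed, not a settled API:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha4
kind: ClusterClass
metadata:
  name: example-class
spec:
  # Template for the infrastructure cluster, e.g. AzureCluster or AWSCluster.
  infrastructure:
    ref:
      apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
      kind: AWSClusterTemplate            # hypothetical new template type
      name: example-cluster-template
  # Template for the control plane, e.g. KCP.
  controlPlane:
    ref:
      apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
      kind: KubeadmControlPlaneTemplate   # hypothetical new template type
      name: example-kcp-template
```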
B
Maybe that's unnecessary, because we already have templates, and templates for machine deployments are really just infrastructure and bootstrap references, and those templates are already there. The thing here that was a little bit interesting, though, is that if we define the name of a role, then you can reference that role name inside of the Cluster object and say, hey, for this role I want 10 replicas. We'd already know how to reference the role, but again, this might be unnecessary.
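A sketch of how such named roles might be expressed in the class, reusing the machine deployment templates that already exist today (bootstrap and infrastructure references). Again, the section layout and names are illustrative assumptions, not the proposed API:

```yaml
# Hypothetical workers section of the ClusterClass above.
spec:
  workers:
    machineDeployments:
    - class: default-worker        # the role name a Cluster can reference
      template:
        bootstrap:
          ref:
            apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
            kind: KubeadmConfigTemplate
            name: example-worker-bootstrap
        infrastructure:
          ref:
            apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
            kind: AWSMachineTemplate
            name: example-worker-machines
```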
B
Maybe we can just do it by convention; the details of that will be discussed during the proposal. The Cluster object today is probably only a little useful: it's used, I guess, to hold a bunch of references and then to kind of group all the resources together. In fact, every object that is associated with the cluster has labels with the cluster name and has to live in the same namespace.
B
So the cluster can reference a class, and for now we just make that reference immutable, to not go into all the mutability factors of, you know, changing a class. For example, the cluster controller will know how to create the resources for that cluster. So then we can say something much simpler here, like: hey, I want a control plane with three replicas, and I want these node pools, like, say, a set of workers, and then reference those roles.
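On the Cluster side, that simpler shape might look something like the sketch below: an immutable class reference plus replica counts for the control plane and for each named role. The topology section, its field names, and the version value are illustrative assumptions:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha4
kind: Cluster
metadata:
  name: my-cluster
spec:
  topology:                    # hypothetical managed-topology section
    class: example-class       # immutable reference to the ClusterClass
    version: v1.20.1           # illustrative version value
    controlPlane:
      replicas: 3
    workers:
      machineDeployments:
      - class: default-worker  # the role defined in the ClusterClass
        name: md-0
        replicas: 10
```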
B
And the idea here is, for example, that a lot of companies have operators or administrators who could set these classes up before giving users the ability to create their own clusters. Then they know how that cluster is shaped, and that shape is always going to be the same. In the future, we'll probably want to allow the referenced templates to be mutable and then propagate those changes down into the cluster.
B
So then imagine an administrator wants to push a change, for example better logging, or to update the configuration of all the worker nodes, etc. We could let folks change that template reference and then maybe draw up a plan of actions, like: hey, I want to make these changes. Then they apply that plan, the changes just go out, and we'll use conditions to inform the user when that's done.
B
So that's the v1.1 idea, but for the first iteration it would probably all be immutable, except for a few values that we want to change. The other super useful thing in here, though, is the version management.
B
Today, to change the version of a cluster, you have to change KCP, every machine, and then every machine deployment. The idea here is that if we do know what's managing the cluster, under this managed spec, then the version could be propagated first to the control plane, for example, and then to the worker nodes. We could also manage the timing, and we could make sure to have rate limiting and not upgrade everything at the same time, to limit the upgrade blast radius. So, I talked for a bit; I hope this is a little bit interesting for folks.
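For contrast, today the version is spelled out independently on each object, and an upgrade means editing all of them. A trimmed illustration (names and the version value are placeholders; unrelated required fields are elided):

```yaml
# Today: each object carries its own version field.
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane
spec:
  version: v1.20.1        # edit here for the control plane...
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: my-cluster-md-0
spec:
  template:
    spec:
      version: v1.20.1    # ...and here, for every machine deployment.
```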
C
(Brian, with his hand raised.) Hi, yeah, this feels like a big improvement to me, thanks. Can you clarify where the reference to the infrastructure-specific pieces goes?
B
That would be the ClusterClass. Today, if I remember correctly, the Cluster object has the infrastructure ref, which is required, and then this control plane ref, which is optional, but I think we still want to make that required at some point. So if the class has the template resources for those references, then when the cluster controller first reconciles and sees, oh, I do have a class but I don't have these resources, it will create them.
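For reference, the two references being discussed on today's Cluster object (trimmed to just those fields):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: my-cluster
spec:
  infrastructureRef:            # required today
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSCluster
    name: my-cluster
  controlPlaneRef:              # optional today
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    name: my-cluster-control-plane
```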
D
I have a question, actually. So you mentioned that the cluster's control plane ref might be made required later on. Why is that so?
B
It's just something that we've talked about; I don't know if we want to do it. But control plane management before alpha 3 was done with single machines, so not machines in a machine deployment, but actually creating three (or however many) separate Machine objects and then making sure that they were all in a control plane. This was causing a lot of issues that you can probably imagine.
B
With upgrades, for example, it would be much harder to actually check that an upgrade has been successful, and so we introduced KCP and the concept of a control plane provider. Going forward, we probably want to allow only control plane providers, but again, we've only talked about it. We would probably not make this change for alpha 4, since you'd then have to create those control plane resources.
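For readers unfamiliar with KCP: a trimmed KubeadmControlPlane, which replaced the hand-managed control plane Machines with a single declarative object, with the replica count and version in one place (names and version value are placeholders):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane
spec:
  replicas: 3                    # the controller manages the Machines
  version: v1.20.1
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSMachineTemplate
    name: my-cluster-control-plane
  kubeadmConfigSpec: {}          # kubeadm init/join configuration, elided
```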
E
So technically, or maybe, if my understanding is correct, the way we do that right now is that the Kubernetes binaries are built into the image itself, and we just happen to reference that image when we're actually creating the machine template or the machine. So if we just happen to update the version that is specified in the Cluster object, how would that version get propagated over to the machine templates?
B
So, usually for finding an image, on the default path (and folks from the providers, correct me if I'm way off here), we use the version in there to actually find that image, most of the time. Now, it's true that folks can just say, hey, I want to use this image directly.
B
In that case the version will not be supported: you would have to change the image id yourself, because we probably wouldn't be able to find it. But I'll let Cecile chime in.
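As an illustration of the two paths being described, using the AWS provider's machine template; the exact field names and values here are an assumption for illustration, not taken from this discussion:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSMachineTemplate
metadata:
  name: example-worker-machines
spec:
  template:
    spec:
      instanceType: t3.large
      # Default path: leave ami unset and the provider looks up a reference
      # image matching the Kubernetes version. Pinning an id, as below, opts
      # out of that lookup, so bumping the version alone would no longer
      # change the image.
      ami:
        id: ami-0123456789abcdef0
```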
F
Yeah, so actually we're trying to encourage folks not to use that default path. The defaulting of the image is really for the reference images that we publish, but we don't expect most use cases to fall under that category; we expect folks to build their own images and use those. So I actually asked the exact same question in the thread. That's something that I would like us to think about
F
If
we
go
about
doing
this,
because
if
there
isn't
a
good
way
to
like
update
your
image,
which
is
currently
very
tightly
coupled
with
the
kubernetes
version,
in
my
opinion,
that's
one
of
the
biggest
areas
for
improvement
and
that's
something
we've
talked
about
in
other
issues
as
well.
I
think
3203
had
a
good
thread
on.
G
Yeah, plus one to what Cecile said. So, regarding the version matching with the machine image: yeah, we still have a dependency there from before we held off on the machine bootstrapper proposal.
G
One of the goals at some point, long term or mid term, was that through the CLI slash agent that we would embed on the machine side, we would be able to download Kubernetes components on the fly, without having to pre-bake them into the machine images. So that might be something we want to explore, whether it be downloading from a public endpoint or a private endpoint if someone is deploying using air gap.
H
Okay, yeah, I realize there's not long left, so I'll try and keep this brief. So, the experimental operator branch that we've been working on: we tried to do a rebase to get it all up to the latest master, and that got merged.
H
And basically that didn't happen quite as we expected, and now the history has sort of broken: you've got some of the old master, then some of the commits from the experiment, then some of the new master, and then some of those commits from the experiment again, and it's all kind of mixed up. So at the moment there's not a clean way to merge that branch back into master, which is what we were hoping to do eventually.
H
So there's a conversation that started in Slack, and I just kind of wanted to highlight this to people who have some ideas about how we can fix it. Alex has actually come up with a branch that would be clean: he's got a rebase with only the experimental commits cherry-picked on top. One of the options is we can try and force-push that, which is obviously not great. I'm not sure what else we have in terms of options to clean that up now that it's on the fork.
H
So if anyone has any ideas, I think we're all open to ideas. Yeah.
B
Could we... I mean, we could probably keep working on this branch for now, but maybe at the end we don't just merge it back, and we just cherry-pick those commits. Is the merging back the concern, or is it more that now, by updating the branch, we're going to get in trouble with rebasing?
H
I think it's a bit of both, because, yeah, I think the original idea was fine: we'd merge back in. And yeah, we could do a cherry-pick, create a new branch to bring it back, and I don't know if that's going to cause issues with the rebase... sorry, not the rebase, the review. We can come to that at that point, I guess.
H
I think at the moment we can continue working on it, but it's just a bit difficult to work out what the actual conflicts with master are right now. So yeah, I'm not fully up to speed on this, actually; Alex knows it a lot better, but he's not on the call today. So I just wanted to highlight this conversation, and it's probably best to continue the conversation in Slack.
I
Yeah, I just wanted to follow up on what Joel was saying. Myself, Joel, Alex, and a few other people at Red Hat had kind of looked at what was happening with that branch, and there was some sort of weird interleaving of commits that had happened. I tend to agree with Joel: with the new branch that Alex has created, probably the easiest fix here, if we can agree on it, is just to force-push that branch. Then it would all be cleaned up, and we'd be able to bring it back to the main branch when we wanted to. That's all I had to say.
A
Okay, thank you, Mike. I think it makes sense to take this offline and see if we can go on with this proposal, so thank you again. If there are no last-minute topics: thank you again, Jason, for the demo, thank you everyone for the interesting discussion, and see you next week.