From YouTube: 20200826 Cluster API Office Hours
A: Cluster API is a project of SIG Cluster Lifecycle. We have a meeting etiquette, so please use the raise-hand feature; you can find it under the participant list. If you have any topics, feel free to add them to the agenda document. If you don't have access to that document, join the SIG Cluster Lifecycle mailing list and you should get access soon. I'll post the link to the meeting notes in chat.
A: So if you want to add anything, definitely add your name to the attendee list. And let's start: does anybody want to say hi or introduce themselves before we start with the PSAs?
A: Let's see, a few new names. Well, if this is your first time, welcome, and let's get started. I have the first PSA: the 0.3.9 release. I wrote "soonish" because there are still eleven open items here, and there's a lot of in-flight PRs.
A: If anything in here is assigned to you, definitely feel free to comment on it; if you can't get it finished, we can move the milestone. I don't think anything in here is necessarily...
C: ...blocking, but...
A: Yeah, let's try to get them in sooner rather than later. Any questions on the release? And just as a reminder: after 0.3.9 we're probably going to have 0.3.10 as the last release for 0.3, and then we'll start working on 0.4. If you have anything for alpha 4 that you would like to propose, definitely feel free to open an RFE issue.
A: We'll start the proposal process in probably the next couple of weeks and start working, you know, all together on proposals. We'll definitely have more 0.3 releases if bugs come in that have to be backported, but that should definitely be the exception and not the rule. Do you want to add anything?
D: Yeah, or you can go ahead.
D: I'm sorry, I was going to say that it's probably worth mentioning, if you didn't already cover it, that we wouldn't be backporting any features to 0.3. That means that if we're going to be adding new features, they'll go in the main branch for 0.4, and bug fixes and security fixes can be backported.
A: That said, did you want to add anything else? All right. Fabrizio, you have another PSA here, go for it.
E: Yeah, so basically how it works now is that conditions, especially the warning ones like Deleting, bubble up through the chain. Basically, if we implement this but the providers are not ready for it, we will get, for instance, a cluster in Deleting while the machine is still Ready, and this might be confusing to the user.
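To make the bubbling-up concrete, here is a minimal sketch, assuming the util/conditions helpers that ship with Cluster API v0.3.x; the reason string is illustrative, not one of the project's actual constants, and this is not the real reconciler code.

```go
package example

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
	"sigs.k8s.io/cluster-api/util/conditions"
)

// markInfrastructureDeleting marks a provider-owned condition False with
// Warning severity, then recomputes the Ready summary so the Deleting state
// bubbles up the chain. If a provider never adopts this, its Machine keeps
// reporting Ready even while the Cluster reports Deleting, which is the
// confusion described above.
func markInfrastructureDeleting(m *clusterv1.Machine) {
	conditions.MarkFalse(m,
		clusterv1.InfrastructureReadyCondition,
		"Deleting", // hypothetical reason string
		clusterv1.ConditionSeverityWarning,
		"infrastructure for this machine is being deleted")

	// Roll the provider condition up into Ready.
	conditions.SetSummary(m,
		conditions.WithConditions(clusterv1.InfrastructureReadyCondition))
}
```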
A: Got it. So, given that infrastructure providers will need to adopt this, maybe this should go in 0.3.9, so that they can release a new version based on 0.3.9. Let's see what you think, yeah.
B: Yeah, definitely. I think we should get it in earlier rather than later, so that we have time before 0.4 to get that adopted on the infrastructure side. That'd be great, yeah.
E: Yeah, there are a few things that need to be addressed from the last review, but I think we are good.
F: Hi everyone, my name is Kalia, I'm from Microsoft. If you've ever worked in the Windows space, or the kubeadm-for-Windows space, you might have seen some of my work. James and I paired to create this proposal, which will eventually turn into a proper CAEP (I think that's how it's pronounced), but I think the first step is putting it into Google Docs so that we can all comment on it.
F: I don't know what the format is. Do I go through the whole thing, or do I just give a brief overview of our plans?
A: You should probably give a brief overview and introduce the problem. But yeah, starting with the Google Doc is usually what we do, and then we'll probably move to a PR. Cool.
F: Okay, so basically the crux of the problem here is that, to get Windows support for Cluster API, we're a bit limited, because we don't have privileged containers on Windows as of yet. But in the past couple of weeks there has actually been a proposal for privileged containers for Windows, so that's being worked on concurrently with this.
F: Due to the timing, we think that we'll be able to get an alpha implementation of Windows in CAPI before then. So, as a stopgap, we're going to use a project that was developed by Ben Moss at Rancher called wins. Basically, what it does is proxy commands from a container through a named pipe, so that those commands can be run on the host itself. This proxying model has been used in a few different scenarios to model privileged containers.
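As a rough illustration of that proxying model (not wins itself, which has its own protocol and authentication), a client inside a container might dial the host agent's named pipe like this; the pipe path and wire format are made up for the example, and go-winio only builds on Windows.

```go
package example

import (
	"fmt"
	"time"

	winio "github.com/Microsoft/go-winio"
)

// runOnHost sends a command to a host-side agent over a named pipe; the
// agent, running on the Windows host, executes it outside the container.
func runOnHost(command string) error {
	timeout := 5 * time.Second
	conn, err := winio.DialPipe(`\\.\pipe\example-host-agent`, &timeout)
	if err != nil {
		return fmt.Errorf("dialing host agent pipe: %w", err)
	}
	defer conn.Close()

	// A real agent would authenticate the caller and validate the request
	// before executing anything on the host.
	_, err = fmt.Fprintln(conn, command)
	return err
}
```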
F: So basically, our plan is to use wins in alpha. Our implementation is also going to be using cloudbase-init, which is a product by Cloudbase, and we have a few prototypes using cloudbase-init showing that we can get Windows nodes to join a CAPZ cluster. I think we've also had success with Cluster API on Amazon; Ben Moss had a prototype for that.
F: So we're just going to build upon those prototypes and create a solution. Our goal with this proposal is really to create a solution that will be extensible for all of the infrastructure providers, and that includes creating a set of scripts so that providers can set up their own VHDs. That's pretty much it; the rest of the details are in the document. So if you'd like to take a look through it and add your comments, I'll be reviewing it throughout the week.
F: James is out of office this week, but he'll be back soon; he and I co-authored this together. So, yeah.
G: And I guess, to add a couple more things: first of all, what we're asking from SIG Cluster Lifecycle is a few things. You know, we're looking for the team to review this document and provide feedback, but also to come along on this journey with us. As part of this work, a big component is reviews, architectural guidance, and getting PRs merged, and we know that that's a significant investment from any SIG in Kubernetes.
G: So what we're looking for is also for you to budget some of your time in the 1.20 time frame to dedicate to this. Even though you may not have to write code, it's still significant time for PR reviews and other activities.
A: As usual, we can send the document to the SIG Cluster Lifecycle mailing list to get more attention. If anybody here has experience with Windows, definitely look at the proposal. I'll take a look from a Cluster API point of view; I don't have much experience with Windows directly, but if anything pops up, I'll definitely comment on the document. Then we wait for comments and then... oh, thanks, James already sent it to the mailing list last week.
A
That
perfect
wait
for
comments
and
then
we'll
transform
it
to
a
pr
for
cluster
api,
and
I
guess,
depending
on
the,
if
there
is
breaking
changes
that
need
to
be
made,
but
I
don't
think
so.
I
think
this
can
probably
be
adapted
right.
G: And I see that Ben just came online. You know, Ben is part of the Cluster API team, so we'd love for him to spend some time on this as well, with James and Kalia.
E: Thank you. So, I would like to start a group discussion around this topic, Cluster API and GitOps, because it was one topic that popped up in the questions at KubeCon after the Cluster API talks, and it is also an RFE that we have in the queue for v1alpha4.
E: What is my take on this topic? My take is that, as of today, Cluster API can be operated fairly well in a GitOps model for what regards the workload clusters, but this is not so true for what regards the management cluster, because for the management cluster we have clusterctl, which is an imperative tool. So, given this background...
E: I would like to start by hearing people's opinions about the idea of a clusterctl operator. My view on this operator is that, basically, we should introduce a new abstraction, which is the provider, or the provider instance, in a management cluster, and use the operator, the clusterctl operator, leveraging the clusterctl library, in order to make it possible to install and manage all the providers in a fully declarative approach.
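A hedged sketch of what that provider abstraction could look like as a CRD type; every name and field here is hypothetical, since the actual API was still to be designed at the time of this discussion.

```go
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ProviderSpec declares which provider, at which version, should be present
// in the management cluster; an operator reconciles this instead of a user
// running `clusterctl init` imperatively.
type ProviderSpec struct {
	// Name of the provider, e.g. "aws" (illustrative value).
	Name string `json:"name"`
	// Version of the provider to install, e.g. "v0.3.9".
	Version string `json:"version"`
	// TargetNamespace for the provider components.
	TargetNamespace string `json:"targetNamespace,omitempty"`
}

// Provider is the declarative counterpart of `clusterctl init`, suitable for
// being stored in Git and applied by a GitOps tool.
type Provider struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ProviderSpec `json:"spec,omitempty"`
}
```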
H: Thank you. So, I'm Bryan Boreham, I work for Weaveworks. My boss invented the word GitOps, and so we do a lot of CAPI, a lot of installing of clusters using GitOps, and we completely ignore clusterctl, because it just doesn't fit the model. So I'm interested in what you might come up with. Yeah, we kind of... I guess we just do pragmatic things, bend the rules, you might say. And there's a bunch of my colleagues that we should pull into this discussion, like Richard Case, for instance.
H: It's confusing to me that we're talking about it as a clusterctl operator. To the extent that I understand it, which maybe is limited, it seems like it's about sharing certain configuration of your clusters, so that, instead of them being files on your local file system, like the clusterctl config files, they're stored in CRDs.
A: I know Jack and Andrew have their hands raised here. I just wanted to take a step back: we're talking about an operator to manage Cluster API, not an operator to manage clusterctl directly. And apart from the GitOps use case, there are other problems that we probably want to solve with an operator, like the CLI.
A: Today it's versioned, but its own APIs are not; there's not a declarative API that we can then convert between one alpha and another. Other than that, it's really hard to propagate changes: we have had multiple PRs that went into clusterctl, for example, just to support one little change for move, or to change an aspect, or to support a new feature.
A
This
applies
to
like
a
bunch
of
different
things,
because
there
isn't
the
right
abstraction.
So
this
is
kind
of
like
an
effort
to
create
that
abstraction.
First
jack.
H: Yeah, I mean, you know, a thousand flowers have bloomed; we have people who've tried to do the whole thing with Helm charts. It's... yeah, we could. We could.
H: Your own? Not much, no. No, I mean, we started with CAPI...
H: ...you know, with v1alpha1, like two years ago or more, and did, like I say, kind of pragmatic things: brought stuff up, wrote programs, wrote scripts, wrote tools, most of which is open source, but some isn't. And part of why I'm kind of hedging, why I'm not coming straight out and saying this is the one way to do it, is because all of it has issues.
H: You know, that's why I say we can talk about it, but I don't have the answer.
H: Sorry, I don't know if I'm helping to answer your question. Yeah.
H: So, you know, that's an example, though. Yeah, in that case we didn't use the library; we take the API as a standard, and we take the spirit of declarative config as, like, an iron rule. But then we get creative.
L: Hi, so, a few things. The first thing is that we at New Relic are huge consumers of clusterctl move, which is effectively pivot v2, and we have a strong desire to not have to call clusterctl move in, like, a script. We would prefer the ability to shift at will via cluster configuration, and this has to do with the way that we orchestrate, let's say, environment tear-down or environment stand-up.
L: We have some use cases where, for clusters that we pivot, we want to temporarily pivot back, things like that. So clusterctl move is one thing where we would definitely be interested in more of a declarative solution. The other thing is flavors.
L: clusterctl has the concept of flavors, and we have worked around the flavors idea with operators in our environment. So, for example, we have, let's say, many AWS accounts, and we know for a fact the VPC for every... or, let's say this: we know the subnets that we want to use for any machine, but we have development teams that can create their own machine objects.
L: Well, we have to basically put a defaulting webhook in place that says: hey, you're in this AWS account, in this VPC, you have to use one of these subnets. And we could use flavors as a way to say: hey, the flavor for this AWS account defaults to this, and anything else you provide will override it. So that's kind of the angle from which an operator for clusterctl, or for management, could be potentially useful.
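A minimal sketch of that defaulting-webhook workaround, assuming the CAPA AWSMachine type from v1alpha3; the per-account subnet table and the function name are hypothetical, and in practice this logic would sit in a mutating admission webhook's Default() hook.

```go
package example

import (
	infrav1 "sigs.k8s.io/cluster-api-provider-aws/api/v1alpha3"
)

// allowedSubnets is a hypothetical per-account allow list baked into the
// webhook deployment for a given AWS account/VPC.
var allowedSubnets = []string{"subnet-aaaa1111", "subnet-bbbb2222"}

// defaultSubnet fills in a subnet when a dev team's AWSMachine omits one;
// anything the team sets explicitly is left alone, i.e. it overrides the
// default, as described above.
func defaultSubnet(m *infrav1.AWSMachine) {
	if m.Spec.Subnet == nil {
		m.Spec.Subnet = &infrav1.AWSResourceReference{
			ID: &allowedSubnets[0],
		}
	}
}
```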
A: Awesome, yeah. Let's get started on a working group for this, because this is probably one of the best things that we could do to manage Cluster API in an easier way, and in a more declarative way as well. It doesn't mean that we have to tackle everything at once; as soon as we have more of an idea of what we want to solve, we can split it up into maybe multiple alphas and, you know, take a more iterative approach.
A: Cool, let's move on: the clusterctl rollout proposal.
M: Hello, I just put together this proposal recently. I just wanted to ask folks to take a quick look at the Google doc.
M: So that's where things sort of got started. Then Jason mentioned that it would be nice if we supported something like the kubectl rollout command, so that kind of broadened the scope of the things that were possible. And that's sort of where this doc ends up: I tried to look very carefully at how kubectl rollout works, with the long-term goal being that we would have kubectl rollout support.
M: But the short-term method that's proposed here is to have a sub-command, or a command, for clusterctl that's called rollout.
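For a flavor of what a clusterctl rollout restart analog could do, here is a hedged sketch borrowing kubectl's trick of stamping a restartedAt annotation on the template; the annotation key is invented for this example, not part of any released API.

```go
package example

import (
	"time"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
)

// restartMachineDeployment bumps an annotation on the machine template;
// changing the template causes the MachineDeployment controller to roll out
// a new MachineSet, the same way a pod template change rolls a Deployment.
func restartMachineDeployment(md *clusterv1.MachineDeployment) {
	if md.Spec.Template.Annotations == nil {
		md.Spec.Template.Annotations = map[string]string{}
	}
	// "cluster.x-k8s.io/restartedAt" is a hypothetical key for this sketch.
	md.Spec.Template.Annotations["cluster.x-k8s.io/restartedAt"] =
		time.Now().Format(time.RFC3339)
}
```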
M: I'm not going to go through every one of these, but I think some of the stuff is already there; there are a lot of things that could be done today. There are some additional small tasks, low-hanging fruit you could call them, that we could do to add the support that's required for these commands, these sub-commands rather. Then there are maybe some slightly bigger items that I don't really discuss here, which, I think, definitely require some deeper discussion with the core members.
M: For example, the MachineDeployment/MachineSet model fits really well with the kubectl rollout command, in the sense that you have your pod Deployments and you have your ReplicaSets, and the MachineSet/MachineDeployment model fits really well there. Unfortunately, the same model doesn't fit so well with the control plane, the kubeadm control plane.
M: So that would be a big-ticket item that's not really discussed here, but something for people to think about as well. But overall, yeah, please let me know; that's all I really have to say. If you have any feedback, I'd appreciate it.
A: Thanks, Arvin. Are there any thoughts, questions, or concerns on clusterctl rollout?
B: Sorry, yeah, I still need to go through the doc, but I guess my initial question is: it seems to me like clusterctl is more meant to, you know, manage the management cluster and the Cluster API installation and all of that, and we've never really crossed that line of managing the workload clusters themselves, as in doing day-two operations on the workload clusters from clusterctl. So it seems like it's kind of breaking that principle to me, and it doesn't really fit with the rest of clusterctl usage. So I think kubectl rollout, if we were able to get that working the same way we have kubectl scale work with MachineDeployments, that would be really awesome.
M: Yeah, so I looked at kubectl scale. Unfortunately, it's not as simple; at least that's my take on it. There are some comments at the very bottom of the document, but it would definitely require upstream changes, possibly proposals, to kubectl to make this work. And I wonder if, for example... if you look at something like the kubectl rollout restart command, right, where you can pass in a deployment...
M: ...in order to really follow that same model with kubectl, you would have to have MachineDeployments getting pulled into the kubectl code base, potentially, and I'm not sure how open they would be to that; I've never really worked with that community. My point being that with scale it was really nice, because the abstraction they provided (the scale subresource) was really nice: anyone with their own CRDs could incorporate that.
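For context on why scale generalizes so cleanly: kubectl scale works against the scale subresource a CRD declares, so it needs no type-specific code compiled in. A minimal sketch of that declaration using the apiextensions v1 Go types (the paths shown are the conventional ones):

```go
package example

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// scaleSubresource shows the bit of a CRD spec that makes `kubectl scale`
// work for a custom type such as MachineDeployment: kubectl only needs the
// declared replica paths, not knowledge of the type itself.
var scaleSubresource = apiextensionsv1.CustomResourceSubresources{
	Scale: &apiextensionsv1.CustomResourceSubresourceScale{
		SpecReplicasPath:   ".spec.replicas",
		StatusReplicasPath: ".status.replicas",
	},
}
```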
M
Unfortunately,
that's
not
how
the
cube
kettle
rollout
code
is
structured
today,
so
it
would
require
some
upstream
and
cube
cattle
changes
at
that
level
to
make
it
more
flexible
to
you
know
various
resource
types
that
people
declare.
A: The reason to prefer kubectl plugins instead of putting everything in clusterctl is generally to integrate better with the larger community, because, you know, then the functionality is right there; you can just use it with a tool that you already know how to use, for folks that are already using kubectl and are familiar with it.
N: David here; kind of to parlay off of that: has that been a discussion item? kubectl seems like it would be a really nice landing spot for these cluster CLI kinds of operations.
M: Not that I know of, if the question is directed at me.
N: More to the general group. I mean, having a one-off command line is all fine, well, and good, but having that one anchor point for the community does seem to have its advantages as well.
A: So it seems like we kind of need to have a larger discussion on what we define as, you know, day-one and day-two operations, and where they should live in the long term. I'm honestly fine with experimenting with clusterctl.
A
It's
probably
the
quickest
way
to
achieve
something
and
try
it
out,
although
if
there
are
like
concerns
that
they
like,
then
we
kind
of
like
have
a
president,
and
we
keep
doing
this,
then
we
might
alienate
like
cluster
api
with
the
rest
of
the
community.
So
we
have
to
be
mindful
of
that
as
well.
A: We will definitely keep this in mind. And yeah, as Jason mentioned, clusterctl could be a kubectl plugin in the long term, but we'd need to make that happen, and we'd need to make sure that we also stay on schedule with Kubernetes releases; so there is a lot of other stuff to think about there.
A: In terms of this proposal, I'm not opposed to saying we can do the minimum necessary to see this through, with the caveat that if this grows too much, we'll have to revisit, pull the code out, and put it somewhere else. But we're in an alpha phase, so we should experiment whenever we can. And then, can someone take the action item to open a separate issue regarding the kubectl plugin? Because that probably needs a little bit more investigation as well. Who's gonna do it?
A: In general, I think it's a more general question. As Jason mentioned, we should probably see if, longer term, talking probably six-plus months out... well, I don't know how much we're going to have.
A: Awesome. Any other comments before we move on?
B: Thanks. I just wanted to bring up this thing that we said we would talk about at office hours, from a Slack conversation. Basically, right now the CAPI book is based on the master branch of the repo, and sometimes that's confusing to users, because something will get merged and the documentation will get updated, but the actual, you know, feature or thing that is documented isn't available yet, because it hasn't been released.
B: We had that happen with the clusterctl get kubeconfig command: it wasn't in the latest release, but it was in the book, documented as being available as a feature. So I think we brought up potentially having the book be based on the release branch instead.
B: I'm not 100% sure; I'm kind of split, because I think, at the same time, having it based on master allows us more flexibility in terms of documenting things and not having to wait for a release to fix documentation bugs. Or even, sometimes we'll have something that's changing on the infrastructure provider's side and we have to change the quick start.
B: I've had that happen a few times, and because the CAPI release usually happens before the infrastructure release, we have to wait for the CAPI release, then release the infrastructure provider, and then update the book so it's accurate. So having it based on a release branch would mean we'd have to update the book preemptively, before the thing is actually released. But I guess we have the same problem now with CAPI. So, does anyone have thoughts on that?
A: I do, but if someone wants to chime in, please do... All right. So, when we were working on alpha 3, the book was actually based off the release-0.2 branch, so to update it we would merge PRs, or backport things, from the main branch back to the release-0.2 branch. We could do the same; it's then a matter of fast-forwarding.
A
So
if,
if
the
main
branch
is
kind
of
like
the
same
as
the
release
branch,
we
could
keep
working
on
the
main
branch
and
then
backboard
stuff.
When,
when
we're
ready,
we
can
just
fast
forward
the
release
branch
and
then
cut
the
release
there.
This
one.
A: This would be fine, and it would probably cover this exact use case, where we merged something but we're not ready to release yet, and we don't want to show those changes.
A: That said, it wouldn't work in the case where we have the main branch on alpha 4 and then a fix on release-0.3, because if we have to backport fixes that include documentation, we can only PR against that branch. So, while there's a window where the main branch and the release branch are out of sync, we'd still have that problem, but it's probably much less of an issue in that case. I'm not sure if I confused things more, or hopefully that helped.
A: So, on this, we actually already do this today. If I go here, we have the legacy docs on the release-0.2 branch (and I don't know if this actually works; we don't have a release-0.3, never mind), but there's also alpha 1 as well, which is the really old doc. So we're already doing this in one way or another. But having a release branch and maintaining it over time is just more work for the maintainers, even if it would be kind of straightforward.
A: I'm fine with that if we can share the responsibilities for backporting and releasing; that's probably fine. Fast-forwarding a branch is not a big deal, so we can make that a policy.
A: We could; it's a lot more maintenance, so I don't know off the top of my head. The other thing is that other projects have done this in the past. Kubebuilder has done this in the past, and now they kind of do what we do today, because maintaining a different branch means we also have to maintain the OWNERS files everywhere, and it's a long-lived branch alongside the main branch. Yeah, anyway...
A: Yeah, we could do that as well.
O: Yeah, I almost want to invert it. I do think having a release-0.3 branch that we fast-forward would fix the majority of the issues. It does put an onus on us to produce docs that go along with the release, which is possibly not even a bad thing, and then have a subdomain that goes to the head, to the main branch, like latest.cluster-api. I've seen that model used for lots of other projects as well.
A: Yeah, we could do that. And... Andy?
D: Wouldn't we still have the same problem with the release branch if we are cherry-picking or fast-forwarding things there that aren't in a tag yet that's been actually released to GitHub? So take the clusterctl get kubeconfig example, and let's pretend that the main branch is for v1alpha4, and that's where that commit goes, and we have a release-0.3 branch: as soon as that gets picked to that branch, the book will get updated, but we haven't released it yet.
A
Yeah,
that's
that's
what
I'm
trying
to
capture
before
it's
like
this
would
solve
the
problem
when
the
main
branch
and
the
release
branch
are
kind
of
like
on
the
same
track,
but
in
that
case
like
it,
wouldn't
solve
that
problem,
but
that's
a
much
less
problem.
In
that
case,
I
guess,
because
if
we're
backboarding
something-
and
we
have
a
bunch
of
bug
fixes,
we
should
probably
release
right
away.
A: That's pretty straightforward, yeah.
A
Cool
in
terms
of
policy
change,
which
probably
document
this-
and
I
can
take
that
action
item
unless
to
say
you
want.
A: I won't have time until next week, but yeah, let's definitely work together on the policy change and get it documented. Cool. Any other questions on the docs?
B: Basically, the question is: as we get near to v1alpha4 and we think about releasing a v0.4 version of clusterctl, should that be compatible with 0.3.x versions of Cluster API, and how do we see clusterctl version support going forward, in terms of Cluster API releases?
C: Regarding the actual stance on the version for clusterctl, I think that's a good point, and it's a larger discussion. I don't know what we want to support moving forward, but it seems like there were discussions on that thread regarding it.
C: I'm happy with whatever; I'm happy to discuss more with the community to see what the expectations are for clusterctl as a CLI, because if we do end up with this management cluster operator, then the CLI would have to be compatible with that operator as well. So I think there's a lot more discussion that would be required.
E: Yes, my take on this problem is that it really depends on the velocity. We do expect that clusterctl will change in the future, and as we extend the compatibility skew that we support, we basically reduce our ability to change it. So my opinion, for now, while clusterctl is still changing and settling down, is that we should not extend the compatibility too much.
B: Go ahead. Yeah, I think the three options that we need to choose from, sorry, are either, one, say we're always going to support every version of Cluster API with any version of clusterctl, and have tests to make sure that that's true and we're backwards compatible; or, two, say you should use the same minor version, and document that but not strictly enforce it; and the third would be to strictly block mismatched versions.
A: So you're saying, just to make an example scenario: the version of the providers you have installed is 0.3.x, and then you're using the CLI at 0.4, or vice versa.
B: Yeah, yeah. So basically, that summarizes it well: should we strictly block, should we just recommend, or should we support backward and forward compat every way? We don't have to decide that now, but that's something we should think about as we're approaching, you know, a 0.4 version.
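The "same minor version" option could be as simple as the following sketch, using the blang/semver module; the function and error text are hypothetical, not clusterctl's actual checks.

```go
package example

import (
	"fmt"

	"github.com/blang/semver"
)

// checkSkew implements the strict "block" option: the CLI refuses to manage
// providers whose minor version differs from its own. The "recommend" option
// would print a warning here instead of returning an error.
func checkSkew(cliVersion, providerVersion string) error {
	cli, err := semver.ParseTolerant(cliVersion)
	if err != nil {
		return err
	}
	provider, err := semver.ParseTolerant(providerVersion)
	if err != nil {
		return err
	}
	if cli.Major != provider.Major || cli.Minor != provider.Minor {
		return fmt.Errorf("clusterctl %s is not meant to manage providers at %s (different minor version)",
			cliVersion, providerVersion)
	}
	return nil
}
```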
A: I would say we should probably block them, because we could get into really weird scenarios that would be unsupported, and our goal should not be to break things, I guess. But if we had to support the full matrix... yeah, it could even impact how the clusterctl, or management cluster, operator would work. Andrew?
L: So, just to clarify: might we want, you know, the first release or two of the v1alpha4 clusterctl to know how to upgrade from a 0.3 to a 0.4? Because this has really bitten us before, across the two major versions of clusterctl: the existing clusters were basically not manageable by the newer version of clusterctl, and the newer clusters were not manageable by the older version of clusterctl, so you end up in a weird place there.
D: I was just gonna say: upgrade must work, yeah, and to echo what you said, we don't want people using 0.3 to init a 0.4, and vice versa: you wouldn't use 0.4 to init a 0.3. But upgrades most definitely must work. Also, I don't know if we covered this, but the change to the format of the variables, to use the bash-style substitution in the templates, broke older 0.3 clusterctls with newer 0.3 templates.
D: I think that situation should be something that we strive to avoid within a minor release series of clusterctl, so for all of 0.3, ideally. And we may want to consider a breakage like that a significant enough regression that we need to issue a new version of clusterctl that addresses it. We may have to take that on a case-by-case basis, but ideally, going from 0.3.4 to 0.3.7, you should just be able to init clusters without having to worry about, you know, problems like that.
C: Yeah, so just to clarify, when we say clusterctl upgrades should work: how is that going to work? If I have a v1alpha3 cluster, would it be the clusterctl from v1alpha4 that's responsible for, or capable of, upgrading from a v1alpha3 cluster to v1alpha4, or are we also expecting the clusterctl CLI of v1alpha3 to do it?
A: I'm getting lost at this point, and we're at minus one minute, so why don't we take this either on Slack or on an issue. I think the TL;DR is: upgrade must work; you can init within a 0.3.x series, so it has to be the same minor version, and the patch version can differ.
A: That's fine. Upgrade must work across clusterctl versions, but you need a new clusterctl version to upgrade to alpha 4, for example, and that's fine. But let's document all of this, and then we can discuss, on an issue, the blocks that we want to put in place.
A
Does
that
sound
good
to
everyone
all
right,
perfect,
all
right,
long
meeting
today?
So
thank
you
all
and
see
you
next
week,
bye,
bye,.