A
This is a meeting of the Cluster Lifecycle SIG, and today we are going to discuss the idea and the goals of the management cluster operator. This meeting is governed by the CNCF code of conduct, so be kind to each other. Let's start. I started to write a draft, taking a start from a similar document that was initially started by Matt, and the goal of today is just to have a quick look at the summary and motivation, and then discuss goals and non-goals. So, the summary — and this is from the document — is to provide, in addition to the clusterctl CLI, an operator that handles the lifecycle of a management cluster like we do today with clusterctl, but based on CRDs.
A
Okay, this design was intentional, and it's also looking at the results of the recent SIG Cluster Lifecycle survey. clusterctl was instrumental in expanding the Cluster API user base and also in triggering an overall rationalization of Cluster API provisioning, because clusterctl is used by more or less 50 percent of the respondents.
A
Hello, thank you for joining. The second use case, which is not really well addressed today by clusterctl, is the GitOps workflow for operating the management cluster itself. We are fine for operating the workload clusters, but the management cluster is kind of difficult to manage in a GitOps way, because you basically have to replicate all the logic that is implemented in clusterctl init or move.
B
Yep, I think it's pretty much the same as the GitOps model: effectively running the same playbook in dev or staging and then promoting that up to production, especially if you're a large cloud provider and you're running a management cluster in your dev environment and a management cluster in your production environment.
C
Okay, so at Weaveworks we got the management cluster pretty much working in a GitOps way, except for upgrades and the move.
C
We had the provider components in a Helm chart — the CRDs and the manifests in the Helm chart — and we had the CAPI components in a Helm chart as well, and then the cluster file split out on its own in its own Helm chart. Where we struggled with the management cluster was around the move step and also upgrades.
A
Yeah, looking at the comments on the issue, it seems to me that many people are doing this. I have only one comment: right now, in order to do so, you are basically forced to take charge of managing all the components of a provider — so you are taking charge of the deployment of the webhooks and so on and so forth — and I think that moving forward we should make this simpler and basically provide another level of abstraction instead.
D
But one of the things that we talked about was that the first version of the clusterctl... well, the management cluster operator, which is kind of the new name, wouldn't include move, for a reason: to simplify the move operation, because it will probably become backup and restore, and to reduce the scope of the operator rewrite for the first version. From what I've heard, though, I think Charles is saying we should prioritize the move.
B
Yeah, I think one of the complexities, at least in my mind, that follows from not including move is — take Helm, for example, or similar deployment tools that set config maps or other kinds of variables during the deployment of those artifacts. If you effectively deploy without them, and then try to go back at a later date and do an upgrade or something similar, the resources would not be under the ownership of that tool.
A
Yeah, this is definitely a point. Move is a kind of odd operation, because it is the only operation, conceptually, that handles both providers and Cluster objects — or is interested in both — while all the other operations fall clearly into either the first category or the second. So I'm definitely plus one on keeping the scope as small as possible, in order to make this happen and get it out the sooner the better.
A
But yeah, feel free to chime in. My idea, to help in defining the scope, was to try to get agreement on what are the objects, or the entities, that we should include in the CRD that describes a management cluster.
A
Yeah, we should decide — in my opinion, yes — but I would like to get feedback on whether cert-manager and its lifecycle (so upgrade of cert-manager, for instance, or deletion of cert-manager) are under the responsibility of this operator, or maybe they are an optional step of this operator.
B
Given that cert-manager is pretty effectively deployed in lots of other clusters that are not the management cluster — at least speaking from personal experience, we are deploying cert-manager in lots of other places, and we kind of have our own way of deploying cert-manager elsewhere — it might even be more convenient if Cluster API had a range of supported versions of cert-manager instead, and sort of said: it's your responsibility, as a prerequisite, to manage cert-manager.
A
Yeah, the goal is to help the user get to a management cluster which is up and running, so I would like to maintain the current experience, which is basically: in two commands you are up with your first management cluster. So I think that we should manage cert-manager, but make it optional.
A
Okay, I see that Richard just arrived. Richard, we are starting to look at this document; it started from yours. What I would like to try to figure out is what the goals are, and the first idea that comes to me for defining the goals is to try to define the entities that should be managed by this operator.
A
So the first entity that we discussed is cert-manager, and we agreed that we should support scenario one, where cert-manager is missing and we want to install and manage it, and scenario two, where cert-manager is already there and we should basically use the one that is provided by the administrator.
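For illustration, a minimal sketch of the decision flow just agreed on, assuming hypothetical helper names on the operator's reconciler (`findCertManager`, `ownedByOperator`, and `installOrUpgradeCertManager` are illustrative, not settled API):

```go
// Sketch only: the two cert-manager scenarios discussed above.
func (r *Reconciler) ensureCertManager(ctx context.Context) error {
	existing, err := r.findCertManager(ctx) // hypothetical: detect an existing installation
	if err != nil {
		return err
	}
	if existing != nil && !r.ownedByOperator(existing) {
		// Scenario two: cert-manager was provided by the administrator;
		// its lifecycle stays outside the operator's responsibility.
		return nil
	}
	// Scenario one: cert-manager is missing (or already managed by us);
	// install it and keep handling upgrades and deletion.
	return r.installOrUpgradeCertManager(ctx)
}
```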
A
I think that this is more or less the core of the operator. What is still to be defined, in my opinion, is all the details that go with a provider, or a provider instance. The first point is that in clusterctl there is the repository configuration: typically we pull our provider components from GitHub, but you can add more repositories, use a local repository, whatever.
C
I was going to say: is there an alternative to having the repository for...
A
Okay. Basically, this allows clusterctl to provide an abstraction, which is the provider, and allows the user not to care about all the components that make up a provider — the deployment for the webhook, the deployment for the manager, the RBAC rules, all this stuff. These things, which we call the components YAML, are fetched by clusterctl and managed by clusterctl.
A
If we drop that, we are removing an important part of clusterctl — a big part of clusterctl, actually — because if you work at this lower level, it's difficult to understand what is in your cluster, how to plan upgrades, or how to check whether your management cluster is consistent. So, in my opinion, managing things at this higher level is required.
B
Could I give a counterpoint, then? Are you saying that the clusterctl operator would have to be able to authenticate to a git repository? Because now you'd have to cover use cases of SSH not being accessible and having to do username/password.
B
Maybe it's not a git repository; maybe it's an HTTP endpoint. I mean, these are problems that things like kapp-controller and Argo CD and Flux are designed to handle, right? So I would say that you might be overcomplicating things. Where I would really want the clusterctl operator is in knowing the order of operations and the things that should occur, versus fetching the artifacts and bringing them to the cluster that it would then be conducting those operations on.
C
I'm inclined to agree with Robert on that. We have some very large banking customers that, while they deal with Weaveworks, simply want nothing to do with GitOps. They want a Jenkins operation that pushes manifests with kubectl, and the operator should respect that change from the manifest and handle the operations in order, rather than having implicit GitOps in the cluster.
C
With Flux or Argo — any GitOps-style tool — we would immediately be put on the defensive with very many change management teams at the sort of customers we want to get this tool into. So there needs to be a clear definition of how the changes are applied into the cluster and how the operator acts on them. In my humble opinion, the operator should see that the changes have been pushed into the cluster and operate against them.
C
How they get into the cluster should be external to clusterctl, and clusterctl should not do that. That way, one customer that wants to use Argo, one customer who wants to use Flux, one customer who wants to use Jenkins, and another customer who wants to use whatever other shell scripting or delivery mechanism — they can all do that, and we don't fall afoul of those very large, especially banking, customers that don't like being forced to use this sort of thing.
G
Hi, I work at Capital One with Andrea. We just launched a platform that is many clusters with centralized management clusters. We're not yet at a point where we can share that out, but we would like to align with and contribute to this effort, and our delivery mechanism for these types of resources is not using GitOps.
G
It's simply something we can't use; for our enterprise requirements we use a custom delivery mechanism, and we operate on resources once they enter the Kubernetes system.
B
So, bringing us back to the idea of having a CRD: what I liked about the idea of introducing a CRD is that, if you had the template for a provider in the form of some kind of CRD, then clusterctl would take what it currently grabs from a git repository and instead grab it from that definition and apply it. And maybe version 0.0.1 of that CRD is basically just a YAML in a config map, or something. I guess what I'm trying to say is: with regard to what would be applied to the cluster, a banking customer or anyone else could say — you're still kubectl-applying it, you still have full control over the fetch and the application to your cluster, but now you have something running inside that cluster that effectively takes that and does it in the correct order.
C
Yep, that would work for some of our banking customers. For defense-type, military-type customers... I mean, we are the GitOps company, and these customers simply say no: we want Weaveworks in there, but we want nothing to do with GitOps. So we provide them some other sort of toolchain. It's a case of education and getting them comfortable; in two or three years' time they'll come round to it.
A
Well, I kind of agree; the design is not going to force the use of any particular tool. And just to go back: if I look at the proposal that was linked to the issue, it kind of defines a provider spec, which is really simple — I want this provider; the type is a core provider; the version I want is v0.3.9.
A
Oh, it's from Max — sorry, okay. So this is a simple abstraction that hides all the complexity of what a provider is, and it sits at more or less the same level of abstraction that is used by clusterctl.
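For illustration, a minimal sketch of what such a provider spec could look like as Go API types, in the operator style discussed here; every field name below is an assumption drawn from this conversation, not the final API from the proposal:

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ProviderSpec declares the desired state of a single provider instance,
// mirroring what clusterctl takes today as CLI arguments.
type ProviderSpec struct {
	// Type of provider, e.g. "CoreProvider" or "InfrastructureProvider".
	Type string `json:"type"`
	// Version to install or upgrade to, e.g. "v0.3.9".
	Version string `json:"version"`
	// FetchConfig optionally overrides where the components YAML comes from
	// (an internal repository, an in-cluster ConfigMap, etc.), covering the
	// air-gapped cases raised in this meeting. Hypothetical field.
	FetchConfig *FetchConfig `json:"fetchConfig,omitempty"`
	// SecretName optionally names a Secret holding the variables that
	// clusterctl would normally read from its local configuration.
	SecretName string `json:"secretName,omitempty"`
}

// FetchConfig mirrors clusterctl's repository configuration.
type FetchConfig struct {
	// URL of a repository hosting the provider components.
	URL string `json:"url,omitempty"`
	// Selector matches ConfigMaps carrying the components YAML in-cluster.
	Selector *metav1.LabelSelector `json:"selector,omitempty"`
}

// Provider is the object a user (or their delivery tooling) applies.
type Provider struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              ProviderSpec `json:"spec,omitempty"`
}
```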
A
Okay, and in fact this is a small variation of the current CRD used by clusterctl, and the idea is that when a user applies this object to the cluster — no matter whether they apply it using GitOps, or using kubectl, or using a program — then the operator will take charge of installing all the components, and from that moment on it will take care of managing upgrades and so on and so forth.
A
In that way — because clusterctl already supports all the use cases for people that don't want to fetch from the internet and want to manage their own copy of the component manifests and whatever, so we already support this — the point is whether we want to embed this kind of flexibility into the operator or not. In my opinion, yes, because I want the operator to provide the same functionality that we already have, just in a different way.
H
Yeah, I think that proposal that Max has written is a good first stab. It would solve a lot of the GitOps scenarios, where you just apply an instance of that and it fans out to all of the providers; it would be good to have those other config options available.
G
Thank you. So, for my clarification: am I understanding correctly that, okay, maybe it's not tied necessarily to GitOps, but it still needs to pull from somewhere?
I
To clarify — I think there's some confusion in terms. When we talk about pulling from GitHub, we mean going to a repo, checking the releases, and getting the latest release, which is what clusterctl is doing right now.
C
Another option is also a Helm chart for the cluster components and the provider components, and having the management operator be able to be notified that a new version has been made available and that it should upgrade to that new version.
C
So if you say you don't want to do the move at the moment, and the move should be switched to a backup/restore, that's acceptable. But for the upgrade: if the operator can just handle the upgrade, and handle it either from, like you say, an HTTPS git repo or a Helm repository and so forth, then that would allow organizations to handle every possible scenario.
C
Because some organizations just don't have access to GitHub externally. But if they could have an internal Harbor or Artifactory or whatever, where they can point at the Helm chart for the provider components and the CAPI components, that solves that issue — whether it's a Helm chart or just a flat-file-system endpoint or that sort of thing; some organizations don't even have a Helm repository.
C
So those scenarios work, but it should be an explicit push by the customer — a kubectl apply: I want this version, and you can find it here.
A
Okay, I think that now we have a kind of agreement that this is something that is required. We are not imposing constraints; we are just moving what clusterctl is doing now to a GitOps-compatible approach, and we are basically splitting out the fetch part — and this is the configuration for the fetch part.
A
And the last point is that when I install a provider, I usually apply a set of variables or flags to the provider instance, and this should be part of the API that we are going to define. I don't know if it should be a specific API type — we could use Secrets or whatever — but it should be part of the domain. Basically, the set of all this information will be recorded, or defined, when I do init, and will be the base for doing upgrades. So this is the minimum base.
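Reusing the hypothetical types and import from the earlier sketch, this is roughly how init-time variables could travel with the provider object via a Secret reference, so the operator can record them and reuse them for later upgrades; all names are illustrative:

```go
// Sketch: a provider instance whose init variables live in a Secret the
// operator reads, instead of clusterctl's local configuration file.
func exampleAWSProvider() *Provider {
	return &Provider{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "infrastructure-aws", // illustrative name
			Namespace: "capi-system",
		},
		Spec: ProviderSpec{
			Type:    "InfrastructureProvider",
			Version: "v0.6.0",
			// Credentials, feature flags, and similar variables are set at
			// init time and become the base the operator reuses for upgrades.
			SecretName: "aws-provider-variables",
		},
	}
}
```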
A
Then I wrote down two other possible entities that are in the scope of this operator.
A
One is the management group, which is basically a set of providers that work together. Today it is implicit, and it only surfaces when you do upgrades, but in my opinion it could be interesting, because it would allow operating on a group of providers: I want this group of providers to upgrade from v1alpha3 to v1alpha4. This allows me to change only one object in my cluster and then have the change propagate.
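A hedged sketch of how that management-group entity could be expressed, again with purely hypothetical field names in the same sketch API package — a single object that fans one change out to a set of providers:

```go
// ManagementGroupSpec groups providers that are operated on together, so a
// single edit (e.g. a contract bump) propagates to every member.
type ManagementGroupSpec struct {
	// Providers lists the Provider objects belonging to this group.
	Providers []string `json:"providers"`
	// Contract every provider in the group should converge to,
	// e.g. "v1alpha3" today, "v1alpha4" after the upgrade.
	Contract string `json:"contract"`
}
```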
A
The last entities are cluster templates. Cluster templates today are managed by clusterctl, but in light of what Vince was saying before, I'm personally leaning towards keeping them out of the scope of the management cluster operator initially, because cluster templates are not part of the management cluster — they are for workload clusters — and we already have many ways to manage them.
A
Okay, trying to move on — we are nearly at time, but hopefully we can continue this already-started discussion. So, assuming that we have a high-level agreement on what this could be in terms of API...
A
I think that some of the operations are really easy to agree upon. One is init, which means: install an initial set of providers, or add a new provider to the initial set — this is the installation part. Then there is a group of operations on providers, which are upgrades: for instance, upgrade a full management group to the latest version within the same contract, which is the current behavior of clusterctl upgrade.
A
Then I think that we should also start thinking about upgrading to the next contract — so, I want to upgrade my management group to v1alpha4 — which is another operation supported by clusterctl upgrade. And: upgrade a specific set of providers, or a single provider — so, I want my AWS provider instance to move to v0.6.1.
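Under this declarative model, the upgrade operations above could plausibly reduce to patching the desired version on the object and letting the operator reconcile. A sketch using the Kubernetes dynamic client, where the API group, resource, namespace, and object name are all assumptions:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// upgradeProvider bumps the desired version of a single provider; the
// operator is expected to notice the change and perform the actual upgrade.
func upgradeProvider(ctx context.Context, c dynamic.Interface) error {
	gvr := schema.GroupVersionResource{
		Group:    "operator.cluster.x-k8s.io", // hypothetical API group
		Version:  "v1alpha1",
		Resource: "providers",
	}
	// Patch only the desired version; everything else stays declarative.
	patch := []byte(`{"spec":{"version":"v0.6.1"}}`)
	_, err := c.Resource(gvr).Namespace("capi-system").
		Patch(ctx, "infrastructure-aws", types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
```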
A
I want to change the sync loop because I'm debugging, or things like that. And then we get to move.
A
I tend to agree that we can defer move, because if we look at move now, it basically only moves the workload clusters, and the workload cluster is kind of out of scope for all these entities that we have discussed so far.
F
The only challenge or concern I see is that the API we're going to surface for this will likely need to be pretty much declarative, making sure we're not embedding any operations or anything like that. If you're familiar with the kubeadm operator, that was one of the challenges we've seen there.
A
I agree, and this is why I tried to define the scope of the API as the first thing, because modeling the API in a declarative way will take some effort.
A
Okay, we are at time. I'm going to post the recording to the channel, and my main takeaway is that, starting from this initial surface, I should strive to draft an API next. Then we can iterate offline on the document or, if people prefer, I can arrange another meeting.