From YouTube: 20200217 Cluster API Operator Sync Meeting
Description
Ad-hoc meeting to discuss work to be done on the CAPI operator
A
Yes, yeah, it seems to be recording now. Cool, okay, hey everyone. So today we're going to talk about the CAPI operator. There's a proposal for it that's already merged, and it came up on the call last week that Warren is no longer able to work on this and lead that effort. So some of the folks from Red Hat and some other interested parties are still interested in this operator and want to take over and keep going with that project.
A
So I've attached a document today, which hopefully means we can discuss, you know, what the current state is, what work is needed going forward, what the dates are, and basically how we can continue with this project and where we want to take it in the future.
C
I think it's at least two weeks before that, in terms of the code freeze date, if I'm remembering correctly.
A
And in terms of the current state of the implementation, I don't know if we'd be able to talk about what's already happened, so that we've got an idea of where we're at, in terms of how much of it is implemented.
B
I have added some notes; maybe Warren can then recall the details. I mean, so the design of the operator builds on top of the new model for managing multiple credentials, and the work for this new model is almost done, both in CAPI and, as far as I know, also in some providers. So that means that all the dependencies, or the prerequisites, are already in place and the operator work could actually start.
B
We have basically been preparing a work breakdown, and this is documented in a doc which is linked here and also linked to an issue. The TL;DR of this document is that basically the work was organized into macro phases. The first one was basically to build the actual operator, and this work is out of the critical path, so it's not blocking everything else and it's not making the tests fail.
B
And then, as soon as the operator was basically ready, we were planning to make clusterctl use the operator. This part is a little bit more critical, because as soon as you start touching clusterctl init, basically you get tests failing, and so this one requires a little bit more attention.
C
At some point we have to make the switch. I guess we just don't have enough lead time to make the switch for v1alpha4, given the timeline that we have, so maybe end of the year or early next year. So kind of phasing it in a little bit more would be best.
A
Okay, okay, so I see there are three PRs linked, which obviously are implementing part of phase one. Warren, are you happy for us to pick those up, and the changes you've done, and continue with those?
D
Yeah, so the first three essentially just stack on top of each other, so once the first PR, the initial PR, gets merged, then we just start rebasing the next two. But most of that is essentially just scaffolding work, you know, some of the Makefiles and the kubebuilder generated work; there's no actual logic in there, but yeah, I mean.
D
It gets you to a good state where you can build the Docker files and actually start, you know, deploying things to a cluster and testing it out. So yeah, definitely feel free to. I think right now the PRs are still open, but they're in draft mode; I'm sure the maintainers have access to do whatever and can give you all the permissions.
D
And I think they're ready to be merged in. The reason I did it that way is because I was trying to do it as incremental additions. Obviously, in the beginning, because most of the kubebuilder output is generated, they're still extra-extra-large size PRs.
D
But if you look at the commits, I guess the only thing I could do is squash the commits before merging them in, but they're already... I think, at least I'm confident, they're ready, but obviously PR reviews, right, so yeah.
D
If the community is okay with it, I'm okay with sort of shepherding a bit of that work in; I have a little bit of downtime right now anyway. But if there's somebody else who wants to, essentially if they want to have context on what was done before, then they can pair on it too and take responsibility for merging. I'm okay either way.
D
Unless there's major rework or changes for those PRs, I'm fine fixing a few things here and there and then just tying them off.
C
So if we merge them into the main branch, though, it becomes incomplete code that might have to be shipped out with 0.4.0. So then that puts us in a weird spot where we're release-blocking, if that makes sense. So either, if we're confident we can ship by April, or like late March, I'm not opposed; but if we're not, we should either use a different branch or, yeah, especially because there might be a lot of changes and a lot of iterations on it. So I'm happy to just have a different branch with different approvers so that things can proceed smoothly.
D
So yeah, my suggestion there: at first I wasn't sure myself whether the operator should be in an experimental folder. If you look at the first PR, it's actually merging straight into the top-level directory of cluster-api.
D
So that could actually be one of those changes, if we feel like we can't ship everything by, whatever, April 15th, or rather April 1st, for the v1alpha4 delivery, and then just, you know, work towards phase one. Because it's not just the features, it's also having relevant tests and everything for these things, and it's quite a bit of work.
D
And if you look at that HackMD doc, I've sort of tried to break it down into bite-sized chunks, so it could be parallelized to a certain extent. But yeah, I would say either, if you're putting it in the main branch, then let's look at the scope of work that needs to be done and reduce that to something that's doable, or just drop it in... I don't know what the other path forward is, like either another branch or an experimental folder.
C
As long as the code doesn't touch the rest of the code base, we don't have to merge the branch itself. We could just, you know, copy the folder and open a new PR with the operator work. I don't see many issues with that; we have prior history here.
C
I think Zach has mentioned the upgrade tool, which then became KCP, but the upgrade tool itself was in a different VMware repo, and then it kind of got deprecated, and part of it, in terms of code, came out to be KCP. But it actually changed, because one was a CLI and the other one was an operator.
D
I can update my PRs then; I can create another branch and then have them merge to that branch.
C
Yeah, okay, superpowers. But I was thinking, even for a separate repo you actually do have to set up testing, so maybe... if we prefer the separate repo we can go down that path, we'll just need SIG leads to sponsor that temporary repo for now. It might also give us more room for experimentation without, yeah, especially for upgrades, so from v1alpha3 to v1alpha4.
C
It does... I don't think it... It was originally supposed to be controllers rather than, you know, an app; like an operator for Cluster API that manages its lifecycle, if that makes sense, with types that are included in Cluster API and ship with it. We can see that too, but then I guess we'll have to add a feature gate to Cluster API.
C
That says: is the operator enabled. The thing is, the CRDs will get installed regardless, and that's what I don't like about the experimental folder: there might be clusters that have dangling CRDs and then we'll have to overwrite them, which the clusterctl CLI does today when you upgrade. But given that this is a management layer on top of it, that's why I was thinking maybe a separate temporary repo, or a branch, would be the way to go.
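To make the feature-gate idea concrete, here is a minimal sketch, assuming the operator controllers would live behind an experimental gate the way other Cluster API experiments do. This is not from the meeting or the proposal; the gate name `Operator` and the package layout are hypothetical. It uses the Kubernetes component-base feature gate machinery:

```go
package feature

import "k8s.io/component-base/featuregate"

const (
	// Operator is a hypothetical feature gate guarding the in-tree
	// operator controllers; off by default while experimental.
	Operator featuregate.Feature = "Operator"
)

// MutableGates holds the feature gates for the manager binary.
var MutableGates featuregate.MutableFeatureGate = featuregate.NewFeatureGate()

func init() {
	// Register the gate as alpha and disabled by default.
	if err := MutableGates.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		Operator: {Default: false, PreRelease: featuregate.Alpha},
	}); err != nil {
		panic(err)
	}
}
```

The manager would then only register the operator controllers when `MutableGates.Enabled(feature.Operator)` is true; as noted above, the CRDs shipped in the manifests would still be installed regardless of the gate.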
G
I'm just thinking that if we have time pressure and we think we can get something we want into v1alpha4, then adding on the additional work of setting up CI, looking for sponsors and so on... then we can just basically stop thinking about v1alpha4 altogether, in my opinion, because it's just going to reduce the time we have for the actual...
C
...implementation. I mean, from everything that we're talking about here, it just seems like the timeline is actually going to be the next release, yeah, so maybe v1alpha5, maybe a beta one, we'll see how things are.
A
Okay, I wonder if we should table this and move on to some of the other stuff. Does anyone have anything else they want to add about the history, or any questions, before we move on?
A
Nope, okay. So the next part I wanted to discuss was kind of the future, in terms of what the requirements are, because I know we've got various interested parties in this from various different companies. So what are we actually looking for from this operator long term? In terms of Red Hat's interest in this, we want, long term, to be able to manage multiple CAPI installations in multiple namespaces. So if no one else wants that, it's probably not worth us getting involved here.
G
This is what I'm looking for mostly, and I know it's currently not in the proposal, because I participated in the original PRs. But then again, I'm open to helping even if we don't include this in the immediate goals, but just have this on some kind of roadmap, or the idea that sometime in the future we might support this.
A
When I was reading the proposal this morning, I sort of got the impression that it was designed in a way that this could be a future goal. Is that something... yeah, Warren's nodding, as if that's something he had in mind when he was writing it. Obviously we're happy to get involved and keep that as a future goal for now, but what we don't want is for the community, six months down the line, to be like:
A
Oh,
no,
we're
going
to
change
our
mind
on
this.
So
I
guess
we're
kind
of
looking
for
some
commitment
that
that
is
something
we're
going
to
be
able
to
do
long
term.
B
So, in VMware we were using multiple installations of CAPI, and we moved away from that because it basically brings problems back, so we are not interested in this. At the same time, at least in my opinion, I don't want to block on this, but what I really would like is to see a proper design, because we faced many problems, and the clusterctl code is still full of parts of this code...
B
...this initial implementation that we didn't find robust enough. And basically the most important, the really nasty problem, is that in a cluster there can be only one version of the webhooks, which is the major blocker for this solution. So if the community wants to go for this, I'm not opposed. What is important is...
B
And
that
we
have,
let
me
say,
someone
that
take
charge
of
the
additional
complexity
that
we
are
putting
in.
A
So
I
think
that
it
sounds
like
you've
got
some
really
useful
insight
into
why
that
kind
of
cappy
multi-name
space
thing
may
not
work
like.
I
think,
we're
totally
happy
to
start
investigating
those
issues
and
try
and
come
up
with
some
solutions,
but
obviously
we
don't
have
that
same
context.
So
it'd
be
really
great.
If
we
could
understand
a
bit
more
about
the
issues
you've
seen
and
then
we
can
start
investigating
those.
C
Yeah, just building on top of what you mentioned, I think in the fullness of time we'll need to nail down the actual limitations that we want to put in place if we want to actually support running multiple things in multiple namespaces; as in, the limitations should be codified with regard to certain scenarios. For instance, you can only have one webhook, but the webhooks, both the defaulting, validation and conversion webhooks, are all...
C
Consider
part
of
the
code
so
like,
for
example,
when
you
create
a
machine
or
a
cluster
defaulting
is
even
for
like
a
like
one
of
the
fields
like
that
might
be
a
pointer
just
throw
in
an
example
is
defaulted
to
like
a
maybe
destructive
like
we
could.
C
We
could
actually
use
it
in
code,
as
you
know,
just
without
like
checking
nil
values
and
things
like
that,
but
it's
also
like
more
complicated,
defaulting
or
validations
that,
like
is
in
place
in
the
web
books
that
are
yeah,
they're
gonna
be
hard,
especially
because
sometimes
we
push
effects
in
the
web
book.
And
then
there
is
like
an
affix
needed
in
the
controller
as
well,
so
those
lock
step
behavior
are
going
to
be
really
really
hard
to
actually
think
about.
C
The other thing is, on the validation side, over time we have allowed more and more things to be mutable. We usually start with a lot of immutable fields that you can only set once; KCP is a good example of that. If those fields become mutable and one of the controllers does not understand that, there might be unexpected behaviours that come up. So these are all things that kind of need answers, and that we need to try to enumerate.
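As a rough illustration of the lock-step problem described above (a sketch of my own, not code from the meeting; the type and field are made up), a defaulting webhook typically fills in pointer fields so that controller code can dereference them without nil checks, which only stays safe if the webhook and the controller are upgraded together:

```go
package v1alpha4

// MachineSpec is a made-up, simplified spec used only to illustrate the
// webhook/controller lock-step issue discussed above.
type MachineSpec struct {
	// Replicas is a pointer so that "unset" can be told apart from 0.
	Replicas *int32 `json:"replicas,omitempty"`
}

// Default mimics what a defaulting webhook does: it fills in the pointer
// so that downstream code can use it without nil checks.
func (s *MachineSpec) Default() {
	if s.Replicas == nil {
		one := int32(1)
		s.Replicas = &one
	}
}

// Controller code like the following silently assumes the defaulting above
// already ran, and ran at a compatible version:
//
//	replicas := *machine.Spec.Replicas // panics if a different webhook
//	                                   // version served the request and
//	                                   // left the field nil
```

If several CAPI installations share the one cluster-wide webhook configuration, only one version of this defaulting can be in effect, which is the blocker mentioned earlier.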
A
Yeah, this is all really valuable for us, so we can then go away and try to resolve some of it. We've done some internal research on this kind of thing as well; some of the limitations we were thinking about were, you know, what if everything has to be the same version even though you've got these multiple copies, or something like that. So I guess that's something we can discuss in the future.
C
The
other
requirement
from
vmware
would
be
extensive.
Testing
like
we
get
for
cluster
curl,
and
all
the
other
controllers
like
we'll
need
something
similar
for
the
operator
that
that's
like,
for
example,
upgrades
test.
Like
all
these
scenarios
like,
if
you
want
to
add
a
feature
like
we
want
to
install
multiple
copy
or
kcp
or
whatever
like.
C
We
should
have
proper
tests
to
do
so,
and
this
could
be
both
as
like,
periodics
and
like
pr
blocking
we're,
also
able
to
kind
of
like
say
the
pr
blocking
only
run
when
cluster
credit
code
changes,
for
example.
So
what
can
be
flexible
in
those.
G
Just a question from my side here: when we talk about testing here, are we talking about full end-to-end testing, where we spin up an actual CAPI cluster and check that it works, or are we talking about having an input for the operator and validating the output of the operator, which doesn't necessarily create a true CAPI cluster afterwards?
C
Yes, everything that you just said. Okay, we do have a lot of things like these, though, so you should be able to copy-paste a lot of that code, and the jobs that actually do spin up CAPI clusters do so a lot of the time using the clusterctl CLI. So those will have to move to the Cluster API operator to make sure that they behave the exact... well, not the exact same, but similarly at least. Fabrizio, you can speak more to those, and Warren as well, I think.
B
We have both. We have tests, let me say: we have unit tests, we have integration tests using controller-runtime's envtest, and we have end-to-end tests.
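For reference, a minimal controller-runtime envtest setup looks roughly like the sketch below; this is my own illustration rather than the actual CAPI test suite, and the CRD directory path is a placeholder. envtest starts a local kube-apiserver and etcd so CRDs and reconcilers can be exercised without a full cluster:

```go
package controllers_test

import (
	"path/filepath"
	"testing"

	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

func TestReconcileAgainstEnvtest(t *testing.T) {
	// Start a local control plane (kube-apiserver + etcd) with our CRDs.
	testEnv := &envtest.Environment{
		// Placeholder path to the generated CRD manifests.
		CRDDirectoryPaths: []string{filepath.Join("..", "config", "crd", "bases")},
	}
	cfg, err := testEnv.Start()
	if err != nil {
		t.Fatalf("failed to start envtest: %v", err)
	}
	defer testEnv.Stop()

	// A client pointed at the test control plane; a real test would also
	// register its scheme and start the controllers under test.
	if _, err := client.New(cfg, client.Options{}); err != nil {
		t.Fatalf("failed to create client: %v", err)
	}
}
```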
G
My only concern, from personal experience, because internally we have similar operators that create CRs for other operators or controllers, is that the flakiness of the end-to-end tests seems to increase when you start chaining them together like this. It's something we will have to see; I don't think it's necessarily a big problem, but that's just something we observed internally: because each controller is eventually consistent, this can sometimes build up.
B
Yeah, agreed. So this is a potential problem in the Cluster API end-to-end framework. There are a lot of tricks to get around this problem, so we are using different namespaces, stuff like that. But in the end, when using Cluster API and CAPD, there is a limited number of scenarios that you can run in parallel, and you have to break the scenarios down into different jobs. Yeah, okay.
D
I think the end-to-end testing is sort of critical for something like this, because, for me, in the beginning when we were having these discussions, one of the concerns with the current proposal, or with moving towards the simpler model, was that there was no confidence even in upgrading between patch releases.
D
So,
if
we're
going
to
add
this
complexity
of
having
multiple
controllers
in
the
same
cluster,
then
we
need
to
have
the
self-confidence
in
in
our
tests,
right
and
and
in
the
output.
G
Don't
get
me
wrong,
don't
don't
get
me
wrong.
I
completely
agree
just
my
point
of
view
would
be
to
invest
more
into
a
lower
cost
test
testing
like
more,
inter
different
variations
of
unit
tests,
different
stages
before
full
end
to
end
tests
simply
because
they
can
get
similar
results,
usually
quicker
and
without
as
much
flappiness,
at
least
in
our
experience.
D
Yeah, that makes sense. I mean, if you follow the testing pyramid, obviously you start with a solid base of unit tests, and probably leverage envtest as well to test things.
A
Okay, I did add some extra questions in this section, but I feel like we've covered a lot of those with the conversation we've already had. Yeah, it doesn't sound like we've got any blockers for potentially going down this route; it's just a case of making sure we've got a thorough proposal and have thought through all of the issues that have been highlighted.
A
Okay, in terms of the implementation, I just wanted to check if there's any guidance or anything we should know before we start trying to hack on this. You know, is the idea that we can start reusing bits of code from clusterctl, for instance, or is it pretty much a from-scratch build to the design?
B
If I can add something: at least in the discussions with Warren while working on the proposal, the idea was to basically reuse most of the work in clusterctl, and in fact the operator, at the end of the story, is going to replace a bit of clusterctl. Take, for instance, the code that installs a provider.
B
Now it is in clusterctl. Initially it will be used both in clusterctl and in the operator, and at the end of the story clusterctl will use the operator. So the code basically moves into the operator, but the operations are the same. So the idea of reusing most of what is in clusterctl still holds for me.
E
I have a question: in the end, which part will be responsible for deploying cert-manager? Will it be the operator or clusterctl?
B
So, there are people asking for the possibility to skip the automatic install of cert-manager because they want to bring in their own version, and there are people asking to skip upgrades of cert-manager. So the TL;DR is that clusterctl will be responsible for cert-manager; I'm not sure if it will work in exactly the same way as it does today.
E
And one more question from me. Imagine I am an end user of Cluster API and I want to bring it up, I want to install it. We now have the operator and I have clusterctl; which command should I run to get it running? Will it be as easy as it is right now, or how will that work?
B
So the proposal talks a little bit about this, if I remember well, so please, Warren, chime in if I'm saying something wrong. At the end of the story there will be two possible approaches for spinning up a cluster. One is, let me say, imperative, using clusterctl, and the workflow should be as simple as it is today; possibly better, but let's keep "as simple as today" as a goal. And then there will be a second workflow, which today is not possible, which will be fully declarative.
D
Yeah, so it will be provided. The idea is that, as part of the make commands, right, it will be part of the release assets of Cluster API.
D
So if you had, let's say, an empty kind cluster, you could just kubectl apply the operator assets, which would include all the provider CRDs, right; the provider CRDs, sorry, the CRDs that are defined and that will be used by the operator.
D
Yeah, so the way I was sort of breaking this down, and this is the HackMD thing, was how I tried to break it down in my head. Whoever is leading this, you know, Joel, if it's you and the rest of Red Hat, feel free to do it however you feel is best. My background is usually, you know, break everything down into small bite-sized chunks. So, for example, in some of the HackMD... let me pull that up quickly.
D
Yeah, so the strike-throughs. The workflow is just how I broke it down in my head: implementing the operator first, then the APIs and CRDs, and then eventually the clusterctl integration comes in last. The stuff that's crossed out is the PRs that are in there already. But as you can see, I broke it down by what is the simplest thing that can be done, in small chunks.
D
So the first thing obviously is installing some of the providers, and that can be done in two ways, right; you have these different fetch configurations.
D
If you look at some of the other examples, if you scroll down, yeah, there are stories that, in the API, right, from the proposal feedback, they were saying: okay, we want to add tolerations, affinity and node selector. So those could be added into the API separately, and then somebody else could say, oh, I need to add in things for, you know, the fetch configuration selector, or we need to have this API handle whatever the next one is, right. As an admin, I would like to install... yeah, the provider status contract, so the contract version needs to be part of the provider status. So I tried in my head to break it down into chunks of work that don't clobber each other; you know, they're just additions to the API, right, you're not just jamming everything in. That was my way of doing it in my head.
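To give a rough sense of the kind of API additions being described, here is an illustrative sketch, assuming a provider-style CRD; these are not the actual proposal types, and the field and type names are made up. The spec grows scheduling knobs (tolerations, affinity, node selector) plus a fetch configuration, and the status carries the supported contract version:

```go
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ProviderSpec is a hypothetical sketch of an operator provider API,
// collecting the additions mentioned above.
type ProviderSpec struct {
	// Version of the provider to install, e.g. "v0.4.0".
	Version string `json:"version"`

	// FetchConfig describes where the provider components are fetched from.
	FetchConfig *FetchConfiguration `json:"fetchConfig,omitempty"`

	// Scheduling knobs for the provider Deployment, as requested in the
	// proposal feedback.
	NodeSelector map[string]string   `json:"nodeSelector,omitempty"`
	Tolerations  []corev1.Toleration `json:"tolerations,omitempty"`
	Affinity     *corev1.Affinity    `json:"affinity,omitempty"`
}

// FetchConfiguration selects the source of the provider manifests: either a
// URL or in-cluster objects selected by label.
type FetchConfiguration struct {
	URL      string                `json:"url,omitempty"`
	Selector *metav1.LabelSelector `json:"selector,omitempty"`
}

// ProviderStatus is a hypothetical status carrying the contract version, so
// consumers know which Cluster API contract the installed version implements.
type ProviderStatus struct {
	// Contract is the Cluster API contract supported by the installed
	// version, e.g. "v1alpha4".
	Contract string `json:"contract,omitempty"`
}
```

Each of these additions can land as its own small PR with its own tests, which is the incremental approach being described.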
D
Obviously, doing it this way is very slow and tedious versus just putting in all the APIs and then fleshing everything out, but the way I thought of this is that each story then comes with its own set of tests, and that way the PRs can be chunked up properly, yeah.
A
Yeah, I'm just wondering if it makes sense for us to use this as, you know, the sort of list of tasks that need doing, and as people want to tackle a part they assign themselves by putting their name next to it, and then when it's done they cross it out. Maybe, because the alternative I see is creating a whole bunch of issues to track this, and I don't know if that's just going to add clutter.
D
So
so
that
was
that
was,
I
was
discussing
with
fabrizio
and
vince
so
still
like,
because
I
this
was
like
a
pretty
big
chunk
of
work
that
I
hadn't
really
been
part
of
before,
and
these
open
source
projects,
my
idea
was,
was
initially
was
like
yeah.
Somebody
would
sort
of
create
these
issues
or
whoever
would
take
on
the
work
would
actually
create
a
detailed
github
issue,
because
in
the
end
this
hackmd
should
just
like
you
know,
can
be
lost
and
people
can
still.
D
I think I used area api, and I did area operator, but I don't know if that operator label has been created, so I think it was throwing errors. That way, you know, people can at least group by the label, area operator for example, or api, and get to see all the work that's being done. The other thing I think she was looking at was using GitHub projects.
D
Yeah, and that's again up to the community, for you to figure out if that's a viable path. But otherwise, just take each one of these stories and write detailed GitHub issues of what's expected. And, you know...
D
Yeah, if you're taking the issue from the HackMD, just strike it out, just say, okay, you know, it's been done, or put the issue number in the HackMD if you are starting from there. I foresee, though, as y'all go through some of this stuff, you'll be like, oh, I don't know about this thing; clearly there's more stuff that needs to be done.
D
That can be broken up further. So either you all rally around the HackMD doc as your sort of synchronization point, or we'll find a different sort of tool, you know, a standard project-management story tracker of sorts.
A
Yeah, I guess we can just try it out. And yeah, I guess my suggestion would be similar to Warren's: if we create an issue, link it and then use the issue to track it from there, and cross it out when it's done. Yeah, we can give it a go and evolve this if it's not working.
A
Okay, the only other question I had is, in terms of Red Hat commitment, I'm hoping Alex can spend pretty much all of his time over the next couple of weeks on this. Do we need to have a primary contact for the project, like someone to report back or be responsible for this, or anything?
B
I think it would be nice to keep the broader community up to date on how this effort is going, especially if we are moving it out of the main repo or into a separate branch, so that it is not so visible to everyone. So having, I don't know, a report every two weeks or every week would be nice, just to keep the community informed.
G
I have one really technical question, really briefly: have we thought about RBAC permissions and what they're supposed to look like? Because we're basically creating a whole bunch of CRs, right, and at least RBAC should be something we think about, because I foresee problems like: how do we define the permissions beforehand, or how do we alter them if there are different providers that maybe are in a different API group or whatever?
A
In terms of the way I was looking at this before: if you are using the CAPI operator and you don't know ahead of time what kind of bootstrap provider, infrastructure providers and so on you're going to install, there's no way to predict the permission set. So in terms of RBAC, the only way it would be able to work is to basically give it the cluster-admin role. In terms of how we were actually going to use it in the product, we would know ahead of time what API groups and so on we need to give it, to be able to grant the permissions for the things it's deploying. So I think that's kind of up to whoever's running the operator to come up with whatever they're comfortable with in terms of permissions. I don't think there's anything we can do blanket-wise as a recommendation, apart from cluster-admin.
B
Let me give an example: if someone decided to use StatefulSets in a provider, or, I don't know, PersistentVolumes or PVCs or whatever, there is not yet a filter on what we are installing. So in this sense the operator and clusterctl are only a proxy for what each provider decides to do.
G
I agree. The thing I've just been thinking about is that we have these fetch configurations, which basically fetch stuff for our providers from the internet, and if a provider were to be hijacked and we don't have any restrictive RBAC permissions, then this is obviously a security risk, because then we would have a cluster-admin-level tool installing whatever we fetched from the provider.
A
Okay, I'm conscious that we've only got four minutes until the community meeting. So is there anything else we want to discuss now, or would everyone like a five-minute break?
D
Well, regarding whatever we decide on, whether it's a separate repo or a separate branch or whatever, regarding the PRs, feel free to either ping me on GitHub or @ me on Slack; I still log into Kubernetes Slack on my phone, so I should get the notifications. That would be the fastest way to reach out to me if you need any sort of input on those PRs. Great.