#sig-cluster-lifecycle #capn #capi
A: All right, good morning, everybody. This is January 12th, 2021, and this is the CAPN office hours call. As always, this is a recorded meeting, so don't say anything you wouldn't want to be put up on YouTube later. I've dropped a link in the Zoom chat, and I'll drop another one in case you folks who have just joined don't have it yet: it's a link to the agenda doc.
A: Cool. The first thing is: there's one new folk right now. Scott, if you'd like to, you can give yourself an introduction.
B:
A: Cool, thanks for that. I don't have any PSAs. Fei, I don't know if you have anything you want to bring up to the group before we go into the two agenda items.
C: No... well, yeah, I was just looking at the thread in the Slack channel you were discussing in with, I think, Breit, right? Yeah, I probably will follow up on that. If I didn't misunderstand, what it said is that CAPN points to CAPI.
A: Yeah. I had that as the first agenda item because I wanted to start us off with it. It sounded like Breit was trying to figure out whether we need to move all of VC's functions related to syncing into this project. They were saying that... let me see. Yeah, if you click on the link in the agenda here... I can actually pull it up too. Let me share my screen.
A: So I think there's a good... oh, I actually can't, because it's in Slack, and I'm not going to open up Slack. If you click on that link you can go see the conversation, but Breit was bringing up that all of the syncing logic and the custom stuff doesn't technically have to be homed within CAPN, because realistically, the way that VC is built, and the way that you all have implemented it already, doesn't actually use the same provider work that we're doing from the upstream to provision clusters, for example. And so Breit was kind of saying: what if there was a different split in the project, where you still have a new way to provision nested control planes, but then also a separate project doing the syncing logic, so that it didn't really matter where the control plane provider was coming from?
C: Yeah, that's... I didn't realize he meant that. I was thinking he just wanted to say we should kind of extend the current tenant operator, because the current tenant operator has two modes, right? One is the cloud mode, the other is the local mode. So we were trying to replace the local mode with CAPN, and maybe he suggested that we should continue the cloud mode, extend the current cloud mode to different cloud providers.
A: Yeah, it's a different approach; I'm not against it.
A: I think my current thought process is that we're probably going to have a couple of iterations on this whole project as we go. Giving it a place, or at least giving us a place, solves some of the problems we had early on with VC, where we couldn't actually set up proper build pipelines, plus the problems of just basic development within the multi-tenancy repo, and lets us get to an actual VirtualCluster sub-project that's separate.
A: I think this at least gives us a progression where, once we have a CAPN provider built and in place, and we have some of the functions around syncing and all that re-implemented, and it works for both the older functions and the new stuff, we could potentially propose another project. That's the way I was approaching it, or the way I was expecting we would approach it.
C: Yeah, I agree. I think we need a single place to combine our efforts, because multi-tenancy is not really a good place to hold everything; we'd be sharing the repo with other projects anyway. So yeah, I agree. I think the first approach should be moving everything out.
C: Yeah, so maybe we should have another item on whether it's worth pretty much just moving the VC stuff to another repo, if that is possible or worth doing. Because I see these two things as tied together. I don't have a strong feeling that the syncer... because the syncer alone doesn't work. I mean, if you only have a syncer, you don't have all the surrounding CAPN stuff.
A: I wonder, as we go down this path, if we do end up with something where you could deploy a nested cluster, and a nested cluster is just a reference to a control plane endpoint type, then it doesn't really matter what the provisioning tool is. We just need that one CR that represents where you're going to get your kubeconfig file and where you're going to get the endpoint that you're going to connect to for that nested cluster to work.
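To make that idea concrete, here is a minimal sketch in Go of what such a CR's types could look like. The type and field names (NestedCluster, KubeconfigSecretRef) are hypothetical placeholders, not CAPN's actual API:

```go
// Hypothetical types sketching a CR that only references an existing
// control plane, however it was provisioned: an endpoint to connect to
// plus the Secret holding the kubeconfig. Not CAPN's actual API.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// APIEndpoint mirrors the Cluster API notion of a control plane endpoint.
type APIEndpoint struct {
	Host string `json:"host"`
	Port int32  `json:"port"`
}

// NestedClusterSpec says where the control plane lives and how to reach it;
// it deliberately says nothing about which tool provisioned it.
type NestedClusterSpec struct {
	// ControlPlaneEndpoint is the host/port clients connect to.
	ControlPlaneEndpoint APIEndpoint `json:"controlPlaneEndpoint"`
	// KubeconfigSecretRef names the Secret holding the admin kubeconfig.
	KubeconfigSecretRef string `json:"kubeconfigSecretRef"`
}

// NestedCluster is the single object a syncer or tenant would reference.
type NestedCluster struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              NestedClusterSpec `json:"spec,omitempty"`
}
```

With a shape like this, the syncer depends only on the reference, so the control plane could come from CAPN, a managed service, or anything else.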
A: Yeah, because if you don't need to actually go and do that, and you want to use a managed control plane, it should be totally fine to do that. So as long as we build it that way, it doesn't seem like we're stopping that from working.
A: Yeah, cool, okay. I don't know if you want to respond.
A: Cool. As well, if anybody else is interested in adding anything to this and has seen that conversation that's going on, you're more than welcome to chime in.
C: Yeah, maybe I'll just give some background. Currently, the way I implemented the tenant operator is that I have two modes: the local mode and the cloud mode. So it's just a CRD that decides, exactly as Chris said, who gives you the credential to access the control plane. That's all it's about.
C: With CAPN we are trying to replace the entire local mode kind of thing, because in local mode there is no way to do upgrades, and we use, you know, in-memory etcd; there's no persistent storage at all. It's just for prototyping purposes. But we do have some internal usage, because the internal VC has been in some of the cloud products already, and there we use the cloud provider mode, so we don't have the persistent storage problem.
C: How do you manage it? It's all separated. It's like, you know, the VC product with EKS, you can think about it that way. But currently we don't have support for EKS yet, because you need to write a bunch of new controllers to call the EKS API, which we haven't done, by the way. If anyone wants to try that, to write a small controller to call the EKS API, that's more than welcome.
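For anyone tempted to pick that up, here is a rough sketch of the core call such a controller would make, using the AWS SDK for Go. The region and cluster name are made-up values, and this is only the read side, not a full controller:

```go
// Sketch: ask EKS for a cluster's endpoint and CA data, which is what the
// operator would need to hand tenants their credentials.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/eks"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
	svc := eks.New(sess)

	out, err := svc.DescribeCluster(&eks.DescribeClusterInput{
		Name: aws.String("my-tenant-cluster"), // hypothetical cluster name
	})
	if err != nil {
		panic(err)
	}
	// Endpoint and CertificateAuthority.Data are the pieces a kubeconfig needs.
	fmt.Println(aws.StringValue(out.Cluster.Endpoint))
	fmt.Println(aws.StringValue(out.Cluster.CertificateAuthority.Data))
}
```

A full provider would wrap calls like this in a reconcile loop and translate the response into the credential CRD Fei described.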
A: I think that's one of the nice things about kind of shifting us towards CAPI's APIs as well: if we use the standard location and path for kubeconfig files from a management cluster, it makes it pretty easy, because every provider should be using that same secret name and location associated with their cluster. That helps, because then we don't even need anybody to implement a special provider for any of the managed solutions, or even something like kops.
A: We could just use off-the-shelf tools and just say: here's the new endpoint. And because it's CAPI-deployed, you already know what the secret name is going to be for the kubeconfig file, for the admin config at least, which I think is beneficial.
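For background, the convention being referenced is that Cluster API stores a cluster's admin kubeconfig in a Secret named "<cluster-name>-kubeconfig", under the "value" key, in the cluster's namespace on the management cluster. A small sketch of a generic helper that leans on it; the function and cluster names here are illustrative:

```go
// Sketch: fetch a CAPI-managed cluster's admin kubeconfig relying only on
// the standard secret name, regardless of which provider created it.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func getClusterKubeconfig(ctx context.Context, mgmt kubernetes.Interface, ns, cluster string) ([]byte, error) {
	secret, err := mgmt.CoreV1().Secrets(ns).Get(ctx, cluster+"-kubeconfig", metav1.GetOptions{})
	if err != nil {
		return nil, fmt.Errorf("reading kubeconfig secret: %w", err)
	}
	data, ok := secret.Data["value"]
	if !ok {
		return nil, fmt.Errorf("secret %s/%s-kubeconfig has no %q key", ns, cluster, "value")
	}
	return data, nil
}

func main() {
	// Build a client for the management cluster from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Fetch the kubeconfig for a made-up cluster "my-nested-cluster".
	kubeconfig, err := getClusterKubeconfig(context.Background(), client, "default", "my-nested-cluster")
	if err != nil {
		panic(err)
	}
	fmt.Printf("retrieved %d bytes of kubeconfig\n", len(kubeconfig))
}
```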
D: Hey Chris, what do you mean? What are you referring to by CAPI?
A:
D: Okay, gotcha. Yeah, I'm just thinking about... I'm familiar with the autoscaler and how the different cloud providers each provide their own implementation according to the interface. I thought that was a nice solution. It seems like you're describing something where you wouldn't even have to do something like that. But that's also, I thought, a nice option.
A: Let's see, yeah. This is kind of what I'm getting at. I'm still sharing my screen, I believe, or at least Chrome. The thing I think is interesting here is that when you go and generate your kubeconfig, all of the providers should have some sort of path like this for the generated kubeconfig file; there are standard locations for them.
A: Cool, all right, I feel like that makes sense. I also wanted to bring up another thing: our agendas are super lightweight right now. I think it's the beginning of the year and all that as well, but I'm contemplating whether we should move these to bi-weekly, because we don't have a ton that we're bringing up, and there's only a handful of us working on the project right this very second. I'm interested in everybody else's thoughts on that.
C: Every two weeks, bi-weekly... I mean, I understand, but I would still prefer weekly for just a couple more weeks, because I think we'll have more things to talk about after he comes back from his vacation. I think we do need to push on the control plane design; I think we still have about four things to discuss. I would suggest that once we're in real development mode, we probably can switch. Yeah, if we can predict that in the next two weeks we are just going to be coding, then that's okay.
A: All right, that sounds good. Yeah, I just wanted to throw that out there as a suggestion, because we kind of go back and forth on what our agendas really look like. All right. And I don't have anything to share on the NCP side of things; I'm still dragging my feet on that, so I need to get on to writing that doc a bit more.
C: Yeah, since there are some new folks here, I'll probably give a heads up. If somebody watches the multi-tenancy repo, you may have noticed that I'm currently working on an experimental project, in terms of VC supporting multiple super clusters instead of one. So yeah, I think I can expose more.
C: Maybe next meeting I can show more details about what I'm trying to do. Yeah, just to give you guys a heads up, so you can think about it. I would define it as another way to do multi-cluster management, which is different from KubeFed, the federation approach, in that it fits more the VC scenario. It's like serverless.
C: You don't need to understand the underlying cluster details from the tenant perspective; it's more a multi-tenant solution. Federation is slightly different: you already know everything, you know how many clusters you have, but you just need a convenient place to manage those clusters, like specifying how many replicas go in cluster A and how many in cluster B. So in VC with multiple clusters...
C: I was trying to hide that kind of complexity, in the sense that you only work on your virtual cluster. You just see one cluster, but underneath, from the provider's perspective, there are multiple clusters. Yeah, that's my goal, and there are indeed some use cases for that.
B: Cool, yeah. And Chris, just one note, back to CAPN and using something like CAPA or CAPV as the control plane. The one issue that I can already think of with that, just in terms of complexity in the design, is that with CAPI you can currently only have one CAPI provider watching a single namespace, because it also uses the CAPI base CRDs for Machine, control plane, things like that.
B: The secret for the kubeconfig is going to be in a different namespace, because if you have CAPV, for example, on a specific namespace, that's where the kubeconfig is going to be, not in the CAPN namespace. So you're going to have to copy over the secrets, or whatever it is, in order to use them.
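To illustrate that workaround, a hypothetical helper (not existing CAPN code) that copies the kubeconfig Secret from the provider's namespace into the one where it is needed might look like this:

```go
// Sketch: duplicate "<cluster>-kubeconfig" from the namespace a provider
// like CAPV wrote it to (srcNS) into the namespace where the nested
// control plane components expect to find it (dstNS).
package capnutil

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func copyKubeconfigSecret(ctx context.Context, c kubernetes.Interface, cluster, srcNS, dstNS string) error {
	name := cluster + "-kubeconfig"
	src, err := c.CoreV1().Secrets(srcNS).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Build a fresh object rather than reusing src, so management fields
	// like resourceVersion don't leak into the new namespace.
	dst := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: dstNS},
		Type:       src.Type,
		Data:       src.Data,
	}
	_, err = c.CoreV1().Secrets(dstNS).Create(ctx, dst, metav1.CreateOptions{})
	return err
}
```

In a real controller this would be a watch-and-reconcile loop rather than a one-shot copy, but the namespace hop is the essential part.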
A: Yeah, and I think, Fei, that also aligns with what we're doing currently, at least with VC, because all of the secrets for that end up being within the...
A: After that, if there's still some time, I'm going to get back to starting on the nested API server implementation, at least because I can build that without the etcd implementation in place, since I can just pass in those values and we'll sub them out later. So as long as I can get the NCP doc in place, I'll get working on that.