Description
#sig-cluster-lifecycle
#capn
#capi
A: Morning, everybody. This is the CAPN call for January 5th, 2021. Happy new year, everybody. We don't really have anything on the agenda, but please make sure you add yourself to the attendee list, and if you have anything to talk about, add it to the list as well. I really just wanted to keep this call going.
A: I know all of the Apple folks weren't online until basically this morning, so there aren't many updates from us. Fei, I saw you did a bunch of things on virtual cluster over the last couple of weeks; I'm not sure if you wanted to talk about anything you did there, but I'm just going to leave the agenda completely open if anybody wants to talk about anything.
A: Just a quick update from me as well. Right before the break I started working on the kube-apiserver controller, just to see what that was going to look like using the cluster-addons declarative patterns project, to feel out that side of things, but I haven't gotten anywhere close to a PR yet.
C: Yeah, so I think Chris and I should start working on the CAPN controller CRD design, to get that stack kicked off. Nothing else from our side.
C: Either the etcd or the API server CAPN controller implementation. Real quick, on my side: I think I have discussed this with Chris offline. I was trying to add support in the syncer for multiple superclusters, which is a request I heard both internally and externally. From time to time I get Slack channel pings asking me whether VirtualCluster can connect to more than one supercluster, so I think that's an interesting problem and I'm trying to resolve it.
C: But again, this is a rough idea; that's the reason, if you look, I'm creating an experimental directory in the vc repo. Basically, what I'm thinking is: I will have another kind of scheduler, but this scheduler is not scheduling pods across multiple clusters. Instead, we will probably schedule a namespace across multiple clusters, and I'm going to do it in a much more flexible way, in the sense that I don't have a strict restriction saying that one namespace has to go to exactly one cluster.
C: Besides the cloud vendors, where you can probably build a supercluster with excellent autoscaling capability, in many other cases the supercluster size is more or less fixed; it's not easy to autoscale a supercluster. In that case, if we want to host a tenant: because the supercluster is transparent to the tenant, the tenant does not have any knowledge about the size of the underlying resource providers, so the tenant has to tell us the resources that they want.
C: Then we can do the arrangement. So we have to gather some requirements about the capacity and do the capacity planning carefully. For that reason, in my model I would ask the tenant to specify the quota of the namespace.
C: We will have a kind of scheduler trying to distribute the namespace quota across multiple clusters. That's the high-level idea, and the syncer will support syncing the pods from one tenant namespace to multiple superclusters; that part is kind of orthogonal.
C: They're not exactly separable, so I will split the problem into two parts: one is the scheduling part, and the second is the syncing part. On the syncer side, we are trying to support having more than one supercluster syncing from one tenant namespace; that is something the syncer simply needs to implement. Beyond that, we will have a separate scheduler to decide which supercluster needs to handle, or sync, the resources from the tenant namespace.
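The scheduling half of this idea can be sketched in a few lines. This is a hypothetical, greatly simplified model, not any real VirtualCluster code: the `SuperCluster` class and `place_namespace` function are illustrative names, and the quota is reduced to a single CPU number.

```python
# Hypothetical sketch of the namespace-quota scheduler discussed above:
# a tenant namespace declares a CPU quota, and the scheduler spreads that
# quota across superclusters with free capacity. There is deliberately no
# "one namespace goes to one cluster" restriction. All names illustrative.

from dataclasses import dataclass

@dataclass
class SuperCluster:
    name: str
    capacity_cpu: int          # total schedulable CPU in this supercluster
    allocated_cpu: int = 0     # CPU already promised to tenant namespaces

    @property
    def free_cpu(self) -> int:
        return self.capacity_cpu - self.allocated_cpu

def place_namespace(namespace: str, quota_cpu: int,
                    clusters: list[SuperCluster]) -> dict[str, int]:
    """Greedily split a namespace's CPU quota across superclusters.

    Returns {cluster_name: cpu_slice}; a single namespace may span
    several superclusters, matching the flexible placement above.
    """
    placement: dict[str, int] = {}
    remaining = quota_cpu
    # Fill the clusters with the most free capacity first.
    for cluster in sorted(clusters, key=lambda c: c.free_cpu, reverse=True):
        if remaining == 0:
            break
        slice_cpu = min(cluster.free_cpu, remaining)
        if slice_cpu > 0:
            cluster.allocated_cpu += slice_cpu
            placement[cluster.name] = slice_cpu
            remaining -= slice_cpu
    if remaining > 0:
        raise RuntimeError(f"not enough capacity for namespace {namespace}")
    return placement
```

For example, with an 8-CPU and a 4-CPU supercluster, a 10-CPU namespace would be split 8/2 across the two, rather than rejected.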
C: So that's the high-level idea, very high level. I will probably have a document later to explain the details; that's what I'm working on currently. This is not a very easy one: the idea is kind of simple, but implementation-wise it may take some time. For example, for virtual clusters we have the VirtualCluster CRD, but for multiple superclusters we currently don't have any CRD to represent them.
C: I will probably start by just using the CAPI Cluster CRD. I don't know if Chris has a better idea, but somehow we need to have a CRD, some kind of object, to represent a supercluster. So what would be the best way to do that? I don't want to create a new one.
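For illustration, the "dummy CRD" alternative mentioned below could look something like this. This is purely a hypothetical sketch: the `SuperCluster` kind, API group, and fields are made up here, and the option actually favored in the discussion is to reuse the existing Cluster API `Cluster` CRD instead.

```yaml
# Hypothetical sketch only: a minimal CRD to represent a supercluster,
# as an alternative to reusing the Cluster API Cluster CRD.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: superclusters.experimental.virtualcluster.example.com
spec:
  group: experimental.virtualcluster.example.com
  scope: Cluster
  names:
    kind: SuperCluster
    plural: superclusters
    singular: supercluster
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                kubeconfigSecretRef:   # how a syncer would reach this supercluster
                  type: string
```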
D: I think these are interesting ideas. Are you going to work on this in this community, or in the multi-tenancy community?

C: Definitely here.

D: Yeah, okay, that was my question.
D: Yeah, but when we talk about multiple superclusters, how are you going to figure out the networking problems?
C: The service part has to be done with an external SLB, a service load balancer. We cannot use the traditional ClusterIP for sure, because if the pods of the same workload are spread across clusters, the plain in-cluster service just won't work.
C: Currently the syncer only syncs everything from one tenant master to the one supercluster. Now we have to support selective synchronization: for one tenant, the syncer should sync some of the namespaces, but not the namespaces that belong to another supercluster.
C
You
know
the
sinker
shooting
the
synthetic.
So
the
idea
is
we
will
have
a
per
super
cluster
sinker.
So
so
each
super
cluster
will
have
a
sinker.
It
only
think
the
name
space
that
belongs
to
that
supercluster.
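The selective-synchronization idea can be sketched as follows. This is a hypothetical illustration, not the real VirtualCluster syncer (which is an informer-driven controller); the label key and class names here are invented, and it assumes the scheduler records each namespace's assignment as a namespace label.

```python
# Hypothetical sketch of selective synchronization: each supercluster runs
# its own syncer, and each syncer only processes objects whose tenant
# namespace was assigned to *its* supercluster (recorded here as a
# scheduler-set namespace label). All names are illustrative.

SCHEDULED_CLUSTER_LABEL = "scheduler.vc.example.com/supercluster"

class SelectiveSyncer:
    def __init__(self, supercluster_name: str):
        self.supercluster_name = supercluster_name
        self.synced = []  # (namespace, object_name) pairs this syncer downsynced

    def should_sync(self, namespace_labels: dict) -> bool:
        # Sync only namespaces the scheduler assigned to this supercluster.
        return namespace_labels.get(SCHEDULED_CLUSTER_LABEL) == self.supercluster_name

    def on_tenant_object(self, namespace: str,
                         namespace_labels: dict, object_name: str) -> None:
        if self.should_sync(namespace_labels):
            self.synced.append((namespace, object_name))

# Two superclusters, each with its own syncer watching the same tenant:
syncer_a = SelectiveSyncer("sc-a")
syncer_b = SelectiveSyncer("sc-b")
assignments = {
    "ns1": {SCHEDULED_CLUSTER_LABEL: "sc-a"},
    "ns2": {SCHEDULED_CLUSTER_LABEL: "sc-b"},
}
for ns, labels in assignments.items():
    for syncer in (syncer_a, syncer_b):
        syncer.on_tenant_object(ns, labels, "pod-0")
# syncer_a ends up with only ns1's objects, syncer_b with only ns2's.
```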
A: Yeah, you and I have talked about this a little bit. It's definitely interesting, and I think solving it, at least at the level of saying "you can provide the underlying networking however you need to make this possible," is at least a reasonable approach for the first version. From a registry perspective, a cluster registry or supercluster registry perspective, I might think about looking into what the multi-cluster SIG is doing.
A
I
know
there
was
a
cluster
registry
project
at
one
point
that
was
being
built
out
of
the
box.
I
can't
think
of
how
we
would
do
this
directly
with
with
cluster
cluster
api,
because
because
the
management
clusters
are
technically
the
scheduling
domain
for
some
of
these
environments,
so
it
gets
a
little
bit
weird
where,
where.
C: Yeah, I just need one CRD to represent the supercluster, that's all. I can create my own, which is super easy; it can just be, you know, a dummy. I'm just deciding between creating my own or using some existing one.
C: We'll see how it goes as it develops. And one more thing: Chris, when I was discussing this with you, you had a concern that you don't want me to limit it to one namespace mapping to one supercluster. I think that's a reasonable request, because there are cases where even a single workload is just too big, so if no single supercluster can support it, you have to split it. I have to think about that problem.
B: I'm also looking forward to seeing the design. Are there any relations to the multi-cluster work, like Federation v1 and v2?
C
Yeah
yeah,
the
difference
is
federator
on
v2
and
there's
no
scheduling
part
in
it,
so
the
user
has
to
specify
so
there's
no
scheduling
so
user
has
to
to
say.
I
want
the
x
replica
in
class
a
I
want
to
wire
replica
in
class
b.
This
is
everything
is
transparent.
Even
the
entire
topology
is
transparent.
C
That's
so
in
my
in
currently
so
you
might
saw
this
in
the
virtual
cluster
context.
Again,
it's
a
tenant
that
doesn't
know
the
super
so
the
whole.
This
is
the
whole
idea,
so
the
you
can
connect
as
many
supers
as
you
want.
If
you
can
but
the
tenant,
what
they
want
to
say
tell
me
they
just
need
to
specify
the
quota
and
just
use
it
like
like
like
running
natively.
That's
that's
the
whole
thing
and
there's
no
split
workload,
kind
of
thing.
They
don't
need
to
think
about
at
all.
B: Okay, cool. I'm interested to see how that works with the need to actually schedule things together: if you don't make it explicit where workloads are going, and the networking, then obviously we have issues with clusters communicating with each other.
C
And
that
that
is
a
kind
of
chris
and
I
will
discuss
it,
you
have
to
use
external
loader
balancer
for
sure,
but
idea
is
that
another
another
assumption
is
the
rose
cluster
has
to
be.
You
know.
Pinnable,
like
you
cannot
say
that
the
one
the
cards
are
partitioned,
there's
no
way
they
can
communicate.
So
there
are
some
assumption
there,
but
assuming
they
are
you
we
have
a
very
big
flag
network
is
just
accessible.
C
Then
we
can
have
slb
to
do
the
you
know
load
balancing,
because
in
a
talent
from
a
tenant
perspective
they
still
know
which
endpoint
it
is
and
what
is
the
ip
they
are.
So
you
can
use
the
slb
you
watch
for
the
tenant
that
you
get
all
the
ep
mapping
and
the
due
diligence
is
still
possible,
but
non-native
way
you
have
to
do
that.
That
is
the
limitation.
You
have
to
do.
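The endpoint-mapping step can be sketched roughly like this: collect the pod endpoints backing a tenant service from every supercluster and merge them into one backend list that an external load balancer can be programmed with. This is a hypothetical illustration (the function name and data shapes are invented), and it assumes the flat, routable pod network described above.

```python
# Hypothetical sketch: aggregate a tenant service's pod endpoints from
# every supercluster into a single backend list for an external load
# balancer, since an in-cluster ClusterIP cannot span clusters.

def aggregate_endpoints(service: str,
                        per_cluster_endpoints: dict[str, list[str]]) -> list[str]:
    """Merge per-supercluster endpoint IPs into one flat, ordered list.

    per_cluster_endpoints maps supercluster name -> pod IPs backing
    `service` in that supercluster. Sorting by cluster name keeps the
    result deterministic across re-syncs.
    """
    backends: list[str] = []
    for cluster in sorted(per_cluster_endpoints):
        backends.extend(per_cluster_endpoints[cluster])
    return backends

# The external SLB would then be programmed with the merged list:
backends = aggregate_endpoints(
    "tenant-svc",
    {"sc-a": ["10.0.1.5", "10.0.1.6"], "sc-b": ["10.1.2.7"]},
)
# backends now spans both superclusters.
```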
A: Cool. And then on the vc side of things, there are still open PRs I haven't had a chance to go check on; is there anything you needed any of us to take a look at?
C: Well, at this moment, probably not. Jerry and I will do most of the early prototyping: I will probably focus on the scheduler part, and Jerry will focus on making the syncer support, you know, selective synchronization. On that part, I don't see other requirements yet. I think the implementation should be kept separate from the main cluster code, so even if I have a very dumb, very bad scheduling design.
C
Then
you
can
just
get
rid
of
it,
and
yet
you
implement
your
own
scheduler
and
just
the
the
thinking
of
synchronization
part
should
be
separated,
and
I
wish
that
product
can
be
as
clean
as
possible,
but
for
roads
change
a
crazy.
You
may
help
to
review,
because
that
is
not
directly
related
to
the
scale
of
car.
C
All
right
wait,
so
actually
we
so
crd,
the
crd
sinker
part.
Actually
we
probably
want
to
do
that
very
similarly
internally
because
we
already
got
you
know
strong
requests
to
sync
crd,
so
I
have
told
my
internal
guys
about
what
you
guys
do.
They
think
they
probably
will
follow
you,
your
you,
your
approach
at
the
first
place
saying
that
they
probably
will
deal
with
a
separate
thinker
for
crd
owning
at
this
moment.
C
Instead
of
you
know
directly,
you
know
working
changing
the
code
of
the
system
currently
because
I
think
the
imc
controllers,
then
it
has,
has
some
changes.
It
supports
the
idea
right.
I
I
really
remember
you
using
the
unstructured
type
for
mc
controller
to
support
crd.
I
just
give
you
a
heads
up.
So
if
we
have
some
more
questions,
maybe
we
will
pin
you
yeah.
D
No
problem,
no
problem,
I
also
on
the
on
my
plate
to
modify
this
sinker
to
more
universal,
so
I
can
use
the
the
which
we
discussed
last
meeting
for
the
first
things
that
we
can
make
mix
them
together.
So
this
is
still
on
my
plate.
When
I
got
some
chance,
I
would
work
on
that
and
just
do
some
more
on
that.
Whatever.