From YouTube: Centaurus Monthly TSC Meeting 3/30/2021
A
Today, based on what is registered in our meeting working doc, we have two agenda items: one is the SIG update and review; the other is a project proposal from dpac.
A
Regarding the first one: in the very first TSC meeting we discussed our SIG planning, based on what we are working on in the project. We currently divide the entire community into four different SIGs. These four SIGs are scalability, networking, edge, and AI.
A
This will be the first one; it's about the scalability SIG review. Today we have Ying Huang, the SIG lead of the scalability SIG, here for this review. We will focus more on the update and the high-level technical directions and architecture.
A
In the last TSC meeting we had a leftover item from Sania: she asked about the project governance details. Last time we didn't get a chance to discuss that, because she wasn't in the meeting.
A
So we put together some information about how we run the projects, and some links regarding the project governance. Because this topic is just information sharing, we think we'll probably first go through this and then start the two agenda topics. Hey Sania, here is some information that we put together so far: our four SIG community meetings, the community meeting for the first SIG, our release plan, the roadmap, and some other stuff.
B
Yeah, actually I had some thoughts, especially on the ecosystem and the technical operation, but what I will do is go through what is available, because otherwise it would be repetition. So I will go through the information you provided today, and then, if there are some suggestions, maybe I will send them across to the TSC mailing list and we can discuss them in the next meeting. Okay.
E
Okay, hello, can you hear me now? Yes? Okay, thank you. Hello, this is Ying Huang; I'm the lead of the Arktos scalability project. Today we're going to talk mainly about our scale-out design at a high level. Arktos scalability has two parts: scale out and scale up.
E
Scale out is mainly about changing the overall architecture of Arktos to make it able to handle tens of thousands of nodes, compared to the existing single cluster. Scale up is mainly about scalability within a single cluster. Today we're going to focus on scale out, and after that we'll talk about our current status, including what we already achieved in the previous release and our goal for April 30th. Okay, so here's a high-level introduction to the scale out.
E
First, Arktos is a tenant-based resource management system. On top of the existing namespaces we put tenants, so the authentication and authorization part is mainly RBAC on top of tenants. Within tenants they have their own namespaces, services, pods, etc., and we also isolate each object within a single tenant.
E
Regarding that, we call it a partition: each of our partitions has a single etcd cluster as its backend storage. We'll talk about the tenant partition later, in case you are not sure what tenant partition we're talking about here. The essential idea of Arktos scale out is this:
E
Is
we
have
a
single
flat
control
plane
to
manage
all
hosts
in
our
region,
meaning
our
behan
is
a
gateway,
everything
that
all
the
hosts
is
be
able
to
assign
due
to
different
tenants
or
they
can
be
assigned
to
our
single
tenants.
But
this
is
all
like
behind
the
screen.
It's
not
viewable
by
the
tenant,
but
it
is,
it
is
within
the
internal
architecture
by
itself.
E
So,
even
though
we
can
have
multiple
partitions
the
cover
overall
overview
here,
it's
it's
easier
to
explain
with
this,
but
okay,
behind
this
api
gateway,
we
actually
have
all
those
tenant
partition.
E
Each tenant partition has its own etcd instance, its own API server, its scheduler, and its kube-controller-manager, so they are self-hosted and have their own objects. Between tenant partitions they are totally isolated; nothing crosses. So when a request arrives from outside the API gateway, internally we have a mapping, and a tenant doesn't know which partition it belongs to.
E
That is managed by the API gateway. The tenant partition is the tenant-facing side; behind it we're going to have multiple resource partitions to manage nodes. Each resource partition has a resource manager to put them together, and collectively they manage all the hosts. So from this graph we can see that each tenant partition...
E
They have a global resource view over all those resource partitions, which means their resources (mainly pods running containers or services, and other objects that need a pod object) can be scheduled in any of those resource partitions. For them it's one big back-end resource pool; nothing is partitioned at all from their point of view, and they can access any resource behind the global resource view shown here.
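The "global resource view" just described can be sketched as a simple merge of the node inventories reported by each resource partition. This is an illustrative sketch only; the function name, data shapes, and numbers are assumptions, not Arktos APIs.

```python
# Hypothetical sketch: a tenant partition's view merges the node lists
# reported by every resource partition into one flat, un-partitioned pool.

def global_resource_view(resource_partitions):
    """Merge per-partition node maps into one flat {node: free_cpu} pool."""
    pool = {}
    for rp in resource_partitions:
        # Resource partitions never share nodes, so a plain merge is safe.
        pool.update(rp["nodes"])
    return pool

rp1 = {"name": "rp-1", "nodes": {"host-a": 16, "host-b": 8}}
rp2 = {"name": "rp-2", "nodes": {"host-c": 32}}

view = global_resource_view([rp1, rp2])
print(sorted(view))  # every host appears in one flat pool
```

The point of the sketch is that the tenant-side scheduler sees one pool, even though the nodes live in separate, isolated partitions behind it.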
E
Regarding access control right now: in our overall system architecture, the tenant partitions and the resource partitions are internal to the system. Customers can only access the regional API gateway; they cannot directly access the resource partitions. Sorry, the authorization and authentication are already enforced when they access their tenant partition.
E
Remember, the tenant partition (sorry, maybe I haven't mentioned this) can have multiple tenants in each partition. So even within the tenant partition, each tenant has its own authorization and resource control, and they can only access the objects that belong to their own tenant. Within that, just like Kubernetes, we also have namespaces, with RBAC within a namespace and different users, so the authorization within the tenant is inherited as well.
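The tenant-scoped access rule described here, tenant isolation first, then namespace-level RBAC inside the tenant, can be sketched minimally. This is not the Arktos implementation; the identity and object shapes are made up for illustration.

```python
# Minimal sketch of tenant-scoped access control: a request may only touch
# objects whose tenant matches the caller's identity, one level above
# ordinary namespace RBAC.

def can_access(identity, obj):
    """Allow access only within the caller's own tenant and namespaces."""
    if identity["tenant"] != obj["tenant"]:
        return False  # cross-tenant access is always denied
    # Within the tenant, namespace-level RBAC still applies.
    return obj["namespace"] in identity["namespaces"]

alice = {"tenant": "t1", "namespaces": {"dev"}}
print(can_access(alice, {"tenant": "t1", "namespace": "dev"}))   # allowed
print(can_access(alice, {"tenant": "t2", "namespace": "dev"}))   # denied
```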
E
Right now we are not considering tenant partitions accessing the resource partitions to be a security threat, so we don't have a particular access design for that.
A
Yeah, my understanding is: first, we have this authentication and authorization mechanism for the customer, like RBAC or other authorization mechanisms. That is for the customer. What about the internal traffic between the tenant partition and the resource manager?
E
Right now, when we talk about service accounts...
E
...they are able to access the API server in the resource partition with their own identity. Right now we have SSL enabled, so it's controlled by the internal RBAC.
E
Sorry, the permissions are assigned to each individual component. For example, in the tenant partition we have the scheduler and different controllers, and those can access the resource partition's API server with their own identity and permissions. Right now we are not asking them to have additional permissions to access the resource partition's API server.
A
Yes, my answer is this: internal traffic to the resource manager is actually also secured. Let's say you have a random machine in the data center and you want to access the resource manager directly and bypass the upper layer. You won't be able to access the resource manager unless you have the service account.
D
So basically it is done the way it traditionally should be done, so to say. The only thing that can happen is that, say, my container can sit next to some other tenant's container, but nobody knows that, right?
E
For the customers, the clients: they don't know how many hosts they are currently using. They can use 10 percent of one host and 90 percent of another, in different resource partitions; it's totally transparent to them. They only know how much they need for running their service.
B
Yeah, so based on the discussion, what I understand is that this resource is kind of infra, and once it is allocated to the tenant, the access is already controlled. Because it is infra, you are saying that there is probably no separate access control, since it is already on the infra side, right? That's what you're trying to say: any tenant partition can access the global resources and allocate them, and once a resource is allocated to a tenant, the access and everything is managed, because this is kind of infra. Is my understanding correct? That's why you are saying there is probably no separate security or access control required?
E
So, for this: this is a scale-out design, right? In this scale out, the tenant partition has its own etcd, API server, scheduler, and controller manager, and...
E
...the resource partition also has its own etcd, API server, and kube-controller-manager, and it has a lot of hosts. Right now it does not have a scheduler, but later we will consider adding a scheduler for managing additional system resources.
E
So, as you can see, each of them is almost an entire Arktos single cluster, except that the tenant partition is missing hosts: it doesn't have any hardware, the actual hosts to run those pods. And the resource partition doesn't care about pods or the other resources aside from hosts; it only manages the host part. So, going back to the tenant partition and the resource partition together...
E
...added up together, they are actually a single Arktos cluster. We call it one plus one: one tenant partition plus one resource partition can actually form our cluster. This is the minimal architecture of scale out, but as we can see in this diagram (this is more the design part), each of those tenant partitions can access the same resource partition.
E
They can have their resources allocated in the same resource partition. If we get rid of this controller and scheduler split and combine them, this is the original single cluster. That's why there are actually no additional permissions that need to be controlled: they naturally belong to each other.
B
I have one more question. If you go to this next diagram, I think here the tenant partition accesses the resource through the API server again, right?
E
The API gateway will check each tenant: when they access, they're going to have their own token. On top of that, there is an additional authorization component, the permission control component, which is not drawn here. Sorry, for the token management part: they're going to have their own access tokens generated, and when a request goes to the API gateway, the gateway is able to identify...
E
...sorry, to get the identity of the client from the token. Based on the token identity, it will only redirect the request to the tenant partition it belongs to, so clients won't be able to access other tenants, or rather other tenant partitions, at all.
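The gateway behaviour just described, resolving the caller's tenant from its token and forwarding only to that tenant's partition, can be sketched as a two-table lookup. The token values and partition names here are invented for illustration; they are not Arktos identifiers.

```python
# Hedged sketch of token-based routing at the regional API gateway.
# Both tables are internal to the gateway; tenants never see them.

TOKEN_TO_TENANT = {"tok-123": "acme", "tok-456": "globex"}
TENANT_TO_PARTITION = {"acme": "tp-1", "globex": "tp-2"}

def route(token):
    """Return the tenant partition a request with this token is sent to."""
    tenant = TOKEN_TO_TENANT.get(token)
    if tenant is None:
        raise PermissionError("unknown token")  # request is rejected outright
    return TENANT_TO_PARTITION[tenant]

print(route("tok-123"))  # requests for acme only ever reach tp-1
```

Because the mapping is resolved before any forwarding happens, a tenant cannot even address another tenant's partition, which matches the isolation claim above.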
B
Okay, so when you see this resource on the south side...
B
Because, as you mentioned, when you scale out, the hosts and the resources are kind of decoupled to make the scale-out design work. Now, can these resources be in different regions?
E
Yes, we can have them in different availability zones. But wait, actually, I don't think we are crossing multiple geophysical regions; that's not planned. This is the main point: it's not because of scalability, it's because of the network delay.
B
I thought the global resource view means resource managers one, two, three, four... n. The whole resource pool behind one resource manager can be in one geo location or availability zone, and another can be in another availability zone. Is that a possibility? Is that a use case?
A
Yeah, it's possible. But so far in our design, this entire diagram is for one region, and...
A
...that's why we call it the regional API gateway. These four (at least) different resource managers can be in different data centers or different availability zones, but we are using Raft to synchronize the state.
F
So the tenant does have visibility into all the resource managers, because...
A
No. Tenants know they can specify a region or an AZ.
F
So there's a logical box, which is not in the picture, that logically connects the multiple resource managers, or one, or whatever. You can have one AZ consisting of more than one resource manager, I think. So yeah, this is internal.
A
Okay. And coming back to your question, I think you are right: each tenant partition or resource partition is actually a one-to-one mapping to what we call a small cluster or mini cluster, but inside, that cluster itself is also highly available.
D
Is AZ1 always the same physical hardware, no matter who the tenant is, or is it randomized and dependent on the tenant?
A
Yeah, so far in our system, what we present to tenant A as an AZ is the same as what we present to tenant B, but that's a minor change, because depending on the resource planning there could be some tweak on the front end to make sure resources are used in a balanced way. For example, in AWS your AZ1 is not necessarily the same as someone else's AZ1.
D
Yeah, and there are also some security reasons for having this randomized.
E
So this is the actual layout of how the resource partitions and the tenant partitions are going to be laid out in the entire region. We can see that the tenant partitions can span multiple AZs, and they can utilize resource partitions in different AZs as well.
E
Okay, so back here. Sorry, do you have any more questions regarding this overall design?
E
Okay, thank you. So here is the component view of our scale-out architecture. As we previously mentioned, each partition, including the tenant partition and the resource partition... In this diagram we only draw one resource partition; there are actually multiple resource partitions with the same structure, but that would just make the graph more complicated. So we have only one resource partition here, to show how they connect with each other and how it works.
E
Basically, the nodes are managed by the API server in the resource partition, and their status reports are sent to that API server. This is the reason we split this part out: the node status report is the heaviest traffic in the Kubernetes architecture.
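A quick back-of-the-envelope calculation illustrates why node status reports dominate control-plane traffic: every node posts a status object every few seconds regardless of workload, so the load grows linearly with node count. The status size and interval below are assumed round numbers for illustration, not measured Arktos or Kubernetes values.

```python
# Illustrative arithmetic: aggregate node-status write load on an API server.
# status_bytes and interval_s are assumptions, not measurements.

def status_traffic_bytes_per_sec(nodes, status_bytes=8_000, interval_s=10):
    """Bytes per second of status writes from `nodes` heartbeating nodes."""
    return nodes * status_bytes / interval_s

for n in (5_000, 20_000, 50_000):
    mb = status_traffic_bytes_per_sec(n) / 1e6
    print(f"{n} nodes -> ~{mb:.1f} MB/s of status writes")
```

Under these assumptions the write load grows 10x from 5k to 50k nodes, which is why moving node status onto dedicated resource-partition API servers relieves the tenant-facing control plane.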
E
That's why we have controllers there, and the schedulers are going to run some system-level objects scheduled onto the resource partition. The main part is the tenant partition: its API server has its own controllers that manage all sorts of objects for the tenants within the single tenant partition. As we said, one tenant can have its objects located in only a single tenant partition, but there can be multiple tenants within a single tenant partition, and each of them is totally isolated from the others' point of view.
E
So we have this access model, the secure mode, to ensure that they won't be able to see other tenants' objects at all.
E
Most of these objects are hosted in the etcd of the tenant partition. For the pods, the pods that need to be scheduled on hosts, they are published to the scheduler of the tenant partition, and the tenant partition's scheduler actually listens to the API servers of both the tenant partition and the resource partition. From the tenant partition's API server it knows the resource allocation requests; from the API server of the resource partition...
E
It
will
decide
which
host
to
put
this
part
on
and
it
will
send
this
product
banked
request
to
the
api
server
in
the
resource
partition
and,
in
those
cases,
those
kubernetes,
that's
running
in
the
nodes
within
this
resource.
Partition
gonna
be
able
to
watch
those
and
actually
make
those
parts
running
in
those
hosts.
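The two-sided flow just described, pending pods from the tenant-partition API server, node capacity from the resource-partition API server, and a binding written back, can be sketched in a few lines. The object shapes and the first-fit placement rule are simplified stand-ins, not real Arktos types or the real scoring algorithm.

```python
# Sketch of the tenant-partition scheduler's core loop: match pending pods
# (from the TP API server) against node capacity (from the RP API server)
# and emit bindings (sent back to the RP API server).

def schedule(pending_pods, nodes):
    """Return {pod_name: node_name} bindings; nodes is {name: free_cpu}."""
    bindings = {}
    for pod in pending_pods:
        # First-fit for brevity; real schedulers filter and score nodes.
        for name, free in nodes.items():
            if free >= pod["cpu"]:
                bindings[pod["name"]] = name  # becomes a bind request to the RP
                nodes[name] -= pod["cpu"]     # the kubelet there will see it
                break
    return bindings

pods = [{"name": "web", "cpu": 2}, {"name": "db", "cpu": 4}]
print(schedule(pods, {"host-a": 4, "host-b": 8}))
```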
E
Once it's running, the status update will be written back to its own etcd, as well as published back to the scheduler, because the scheduler listens to the API server of the resource partition as well.
E
After that, once the node's resource usage has been updated, the scheduler also watches that and, based on it, makes a decision for the next pod.
E
Yes, that's right. It actually has one API server cluster, meaning the API servers in a single cluster run in active-active mode, but they are the same copy of each other. It's like HA, but they both take traffic; they serve the same requests and they actually behave the same to the same client.
E
It actually connects to multiple resource managers. Also for this scheduler, going back to the global resource view: for the resource managers within each tenant partition, all the resources are available to them; they are listening to all these API servers.
E
As for the location of each host: it's a flat host list. Based on its algorithm, the scheduler chooses a random host from all the hosts that satisfy the criteria, and it will send a request to the API server that the host belongs to, so that the host will get a notification from that API server.
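The host selection just described, filter the flat host list by the pod's requirements and pick one of the survivors at random, can be sketched directly. The field names are illustrative assumptions, not Arktos data structures.

```python
import random

# Sketch of flat-list host selection: filter by a capacity predicate,
# then choose randomly among the eligible hosts.

def pick_host(hosts, cpu_needed, rng=random):
    """Return the name of a random host with enough free CPU, or None."""
    eligible = [h for h in hosts if h["free_cpu"] >= cpu_needed]
    if not eligible:
        return None
    # The bind request then goes to the API server this host belongs to.
    return rng.choice(eligible)["name"]

hosts = [{"name": "a", "free_cpu": 1},
         {"name": "b", "free_cpu": 8},
         {"name": "c", "free_cpu": 16}]
print(pick_host(hosts, cpu_needed=4))  # always "b" or "c", never "a"
```

Random choice among eligible hosts also spreads load across resource partitions without the scheduler needing any placement state.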
A
And there could be some other constraints. For example, as was mentioned before: my pod specifies that it can only be placed in AZ1, and AZ1 has resource manager one. In this case the request will only be sent to resource manager one, because there's an extra constraint that the scheduler is aware of.
D
I have one question about scheduling. It's not clear to me whether one resource, one physical node, can belong to multiple schedulers, or, when I say belong: can it be used for scheduling by multiple schedulers?
E
I think there are two things. One thing is that each tenant partition...
E
That's possible, but it's also theoretical, right? So, theoretically, there are going to be some race conditions.
E
The conflict rate is very minimal. When we analyzed the logs, there were only something like 100 conflicts.
F
I'm assuming we'll just use the optimistic way of resolving them.
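The optimistic resolution being assumed here can be sketched with a version check: each bind carries the resource version the scheduler observed, the store rejects a bind whose version is stale, and the losing scheduler retries. This mirrors Kubernetes-style optimistic concurrency in simplified form; the class and field names are invented for illustration.

```python
# Sketch of optimistic conflict resolution between concurrent schedulers.

class Node:
    def __init__(self):
        self.version = 0   # bumped on every successful write
        self.pods = []

def try_bind(node, pod, observed_version):
    """Bind `pod` only if the node hasn't changed since we observed it."""
    if observed_version != node.version:
        return False       # another scheduler won the race; caller retries
    node.pods.append(pod)
    node.version += 1
    return True

n = Node()
v = n.version
print(try_bind(n, "pod-1", v))  # first writer succeeds
print(try_bind(n, "pod-2", v))  # second writer holds a stale version: conflict
```

No locks are taken; the rare loser simply re-reads the node and tries another placement, which is why a low conflict rate (the ~100 conflicts mentioned above) makes this approach cheap.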
A
I also want to add, in addition to what he said: this scheduling conflict or race condition is a fact of life whenever you run multiple instances of schedulers. It's not introduced by the partitioning itself; it arises whenever you run multiple scheduler instances.
A
Because now our entire control plane is really scaled out: when you have more hosts, you deploy more resource managers, and when you have more tenants, you deploy more tenant partitions. They are totally decoupled.
A
Before we move on, just a quick time check for your topic: I propose we push it to the next meeting, actually. Okay, yeah, and Nikita also. Awesome, yeah.
E
So here is the most detailed architecture view in this entire talk. Any other questions?
E
We can always come back if you're interested in more questions. Okay, so this one I showed briefly when we were talking about multiple-AZ availability, but mainly our clusters are still a single cluster. We are not talking about cross-region; there's no plan for cross-region, but there are plans for cross-availability-zone. Okay, so back to this: here is a high-level comparison between Kubernetes and our Arktos scale-out design.
E
First, at the architecture level: Kubernetes is a single cluster. It can have multiple etcd instances, but they have to be in a single cluster, meaning they are actually copies of each other, and that means each of those copies has to hold a full copy of all the objects. The API servers are also copies of each other, and for the API server...
E
...we know it has a huge internal cache, and it has to cache everything in one API server, so the different API server instances actually hold the same copy in their memory caches. The kube-controller-managers run in active-standby mode, so at any time there's only one kube-controller-manager that is active, meaning it can actually make changes to the objects, and the others just stand by.
E
They take over when the active controller manager dies, so there's only one single instance that is active at any given time. The same goes for the scheduler. Kubernetes can support multiple schedulers, but during our performance analysis the scheduler was not the bottleneck at all.
E
Right now the bottleneck is the API server. In our Arktos scale out, we separate all the resources and major components into different clusters, and each has its own etcd cluster, API server cluster, kube-controller-manager, and scheduler. They have different views of the objects: they are logically separated, and they can also be physically separated. So we can see that the TP, the tenant partition...
E
...its etcd holds only the tenant objects for its own partition. There can be multiple tenant partitions, so the tenants' objects are logically and physically separated from each other, and the resource partition's etcd holds the node objects for its own partition, so the nodes are also physically and logically separated from each other. As we can see, based on this object distribution, each partition will host far fewer objects than the original Kubernetes single cluster.
E
At the node level: in Kubernetes, all the nodes are in a single cluster, and the nodes are available to all the namespaces, because Kubernetes only has the notion of namespace; it doesn't have the tenant concept. In Arktos scale out, each tenant will be able to...
E
...have their resources scheduled in all the resource partitions. So, theoretically, one tenant in a single tenant partition can take over all the resources, restricted only by the capacity of the tenant partition. Right now we don't have a test to experiment with...
E
...the scalability of an individual tenant partition yet; we haven't been there yet. But one thing we discovered is that after splitting out all those clusters, we have much more capacity, and our memory and CPU usage is much lower than the single-cluster usage, which means we have a lot of potential to host many more objects than the original Kubernetes single cluster. Regarding...
E
...availability: the HA of Kubernetes is achieved by duplicating components, but if something goes wrong it can potentially bring down the entire cluster. In Arktos scale out, since we have multiple tenant partitions and resource partitions, with careful design each single TP or RP partition has its own HA. In addition to that...
E
...the TPs and RPs are independent from each other, so we have two levels of high availability. Regarding the scalability part: right now the official number for Kubernetes is that a single cluster can support 10k nodes, but for Arktos scale out, as we can see, in release 0.7, which came at the beginning of this past February, we already achieved 20k nodes, and now we are working on targeting 50k nodes per cluster.
E
This is for release 0.8, and the target date is April 30th, one month from now. That's our target. Any questions?
F
One comment I have: this is very important, and I'm pretty sure Sunil and Stefan will have a lot of input on it. I'm assuming we're going to put the slides on our GitHub repos, right? Because I think they should look at it more and then provide feedback, because this is really very important for Centaurus, and you folks are our ambassadors.
D
Guys, I'm sorry, this might be a little bit off topic, but it's very related to this. Do we have some kind of language or material, something we can share on our social networks to show to the world? Because this previous slide would be perfect, obviously with maybe some additional explanations, et cetera, but something where we can really show the advantages of Centaurus.
F
So if you go to the FAQ section on the web page, there is tons and tons of information about that, about how Centaurus is different from Kubernetes, if this is what you're asking. I know they're all fairly basic questions, so look at it, and if you have any suggestions, or if you are asking for something more, let us know.
A
Yeah, yeah. Oh, sorry, you go. Sorry, no, no. We actually were planning on writing another article to promote the advantages of this scale-out design, and we are also considering submitting a talk to KubeCon next month, but we want to do it after we finish the 50k test, because that is more impressive, right? 50k.
D
Some social network accounts for the project where we can... I mean, I'm really not experienced in operating open source projects, but this is something I have really seen a lot, and I think it would definitely be very useful, yeah.
A
We can submit some talks together; a talk is more easily accepted if it's a collaborative talk.
E
Okay, so this is some summary status. I don't think that's as important as the architecture part we've been talking about. Our overall design doc is in our Centaurus Arktos repo; there's our scale-out design, originally authored by Xiaomi. We also have a Slack channel; if you are interested, please join. And the SIG meeting is on Monday, 3 p.m. PDT. So, yeah.
B
Maybe whenever we have releases, I think it is better if we announce them; maybe we can have a channel on Twitter also.
F
Right, so yeah, I'll cover my topic next week. Next... I mean, next time.
A
Okay. Okay, thank you. Thank you for a great presentation. Before we close today's meeting, is there anything else you want to discuss? Do you have any other topic you want to quickly discuss today?
K
Yeah, I was just wondering, from an ecosystem-building standpoint, in addition to our participation in those events, and obviously we need to write more white papers and stuff, but from a user-recruitment standpoint, do you guys have any idea who we should go after? Because obviously, with what we have right now...
K
...it might not be enough to build a solution. So I'm just wondering if you guys have any ideas, because it's kind of a catch-22, right? You have to have enough for the users so that they will participate. So do you have any suggestions? I think having at least one user would be very powerful.
F
I think one of the things Prashanth volunteered for, actually, would be a very good reference architecture. If you can build the telecom reference architecture on top of Centaurus, that would be a very good starting point. I think he volunteered, and he said somebody was going to bring in some of his customers and all that. So that would be a very good use case, actually.
K
Okay, that's actually good. Because I kind of want us to be more focused instead of having no plan, maybe we can just start focusing on telecom: for our white papers, for our use cases, building a use-case white paper, or just creating a kind of storyline for a telecom solution, like a blueprint kind of thing. Yeah, exactly.
K
Yeah, yeah, and then we'll go from there, and then we can take that blueprint and go talk to China Mobile, China Telecom, Deutsche Telekom, Orange, these guys. So at least we have something for people to visualize how this solution is going to help them. Okay, I'll work with Prashant and Rupa on that, and maybe next week we should have an agenda item to talk about our telecom strategy to build the ecosystem.
F
The other one I can think of is financial services, but I think you're right: it's a lot of work if we're going to build a reference architecture for even one telecom use case; it's a lot of effort, actually. So maybe we should just focus on that. Obviously we don't have a lot of resources as well, so I think that would be good.
K
Telecom, and then create a storyline around it, and maybe we can have a blueprint, something people can at least visualize: how Centaurus can help build a telecom solution. Okay, yeah. We should have an agenda item next month to go over our telecom strategy.
D
What they actually have is their own cloud; they have now built several data centers across Europe. So the question would be: is there a way we could maybe frame this... I mean, this might be a really huge step, but kind of trying to position this as a managed service or something like that, instead of a vertical?
D
So instead of a use case that is specific to a certain vertical, do we have a plan going in that direction? Because, for example, I know that these guys have their own managed Kubernetes service, similar to, for example, EKS, which is operated by AWS. So do we want to go in this direction as well, or do we really want to focus on some specific use cases?
A
We can do that now. Since we work with Click Cloud, I will let her check with you.
K
Yeah, I'm just saying from an ecosystem-building standpoint, our challenge is that we need at least some kind of storyline so we can go talk to users. Another thing is we probably need some sort of proof points for why Centaurus is better than the Kubernetes approach or the OpenStack approach. I understand we don't have to build out the whole solution, which is fine with users.
K
They don't expect the whole thing, but if we can give them some kind of glimpse, some light to see "oh yeah, if we put in resources, this can be something really useful for us." It would be great if we could have some proof points, especially around scalability; if we can show some kind of demo or proof point just to build confidence. Otherwise, we're up against two very mature open source projects, right, so yeah.
F
It's very important to contextualize all of these capabilities. If we can somehow demonstrate why, in the telecom industry for example, you would need this kind of scalability, why you would need this kind of multi-tenancy, and how you would do it, and put together an overall vision, because it's going to take a lot of time and effort, then once they have that kind of picture, they can start contributing as well.
F
That's very important actually, because at a high level the question comes up: I'm not going to build a public cloud like that, so why would I need this kind of scalability? You need to articulate that, okay, in this case a telecom operator would need that, for example.
K
Yeah, okay, let me take a stab at this. I'll work with Prashant's team and Rupa, then we'll come back to you guys, you can review our draft, and we'll go from there. So: number one, demonstrate the need; number two, give some kind of blueprint overview; number three, show some proof points like scalability. If we can get all three of these things, then I'll be a lot more confident talking to users.
F
As a generic platform, we're going to have a challenge getting users on board. Unless we can contextualize it and go with some kind of reference architecture for an industry segment, I think we'll have a challenge otherwise.
K
Yeah, and then maybe we should think about a demo. Can we prove our scalability? Maybe we can work in that direction, somehow do a kind of side-by-side comparison with Kubernetes, or maybe with OpenStack. If there's a way we can demonstrate the scalability portion with a simple demo; I don't know how easy it is.
A
We're targeting the 50 benchmark test. We can take some of the data, results, and a demo as material for marketing, for when you talk to customers.
K
Yeah, so we have time. I'm thinking we can get all of these together: once we have the white paper to describe the need, the reference architecture slash blueprint, and the demo, maybe by August, because August is when people start traveling to conferences. According to the Linux Foundation, they're not doing any physical conferences until probably August. Then we can start approaching prospective users, but we have some time to prepare our kind of ammunition.
B
So even if you have the technology ready and the competency ready now, the real deployment challenges are very different. It can be business reasons; it can be migrating current deployments to a new solution, which people are hesitant to do. All these points and challenges, the real challenges, can come only from users.
B
So can we think of, maybe in due course of time, some kind of user committee as part of Centaurus, and then onboard some of the potential users so that they have some positioning within the project?
B
The community and users can work together to fine-tune and refine it into a real deployment.
K
Yeah, absolutely. I think that's why I'm trying to get some users; without users we can't have a user committee, so we just need to go find users. The thing is, once we have at least a couple of users, vendors will come in, including our sponsors. They'll come in, close the gaps, and make a solution out of the technology.
F
Prashant did mention that you can bring on a couple of users, the operators or whatever. That would be a good starting point, yeah.
L
I heard that conversation about the business use case, right. Yes, I think that's why we are working with Click Cloud right now, and hopefully we get something established that they can work on. But the first thing we need to work on is to make sure we have a good, easy-to-use deployment; right now, if you deploy Centaurus, it's going to take a while.
L
So I think the first step, as I talked about, is to make it more usable: easy to use, easy to install. And then we work with other potential customers to look at that, whether it's telecom or not, whoever wants to use it first and can demonstrate the capability and the benefit you'd get from it. Yeah, normally telecom is good, but they're pretty slow.
K
Show them something to help them understand the value of Centaurus, and then they'll want to come on board. With our contacts and our knowledge, telecom is kind of the low-hanging fruit, but if there's some prospect in the banking industry, or even education or non-profit, anything that's willing to take a chance to work with us, we can definitely help them build a use case too.
L
Yeah, so maybe we need to talk with Click Cloud more on this question.
K
Yeah, maybe we should have an agenda item next week to talk about our user recruitment strategy.
D
Okay, it would be really useful to have some sort of discussion on this, as you said, because honestly I'm struggling a little bit myself to understand the customer profile. If we're not going to do managed-service stuff or something like that, aimed at the people who are going to offer the infrastructure, I'm struggling to understand who has such a huge infrastructure and is consuming it only in-house, so to say. Maybe there is.
D
Maybe I'm missing some bigger picture here, but for me it would be really nice to have these questions discussed.
K
Actually, from what I've heard lately, from the Gartner conference and the Linux Foundation conferences I've been to, edge is huge; in fact, that's the growth area now. I think operators are going to leverage their real estate, their data centers, to build edge nodes, and those are going to require scalability too. It might not be as big as the cloud, but I think there's potential there, and edge is hot.
D
Basically, honestly, for this type of scale, and that's also where I was going with this remark, I think the only valid, or let's say satisfactory, use case would come from the edge. This is important also because I don't think Kubernetes itself is necessarily our competitor, but rather these Kubernetes distributions specifically targeting the edge. So this is something I also came up with, but then, yeah.
D
I think this brings a whole, or at least a somewhat different, set of challenges and considerations that we need to take into account.
L
Yeah, that's part of the discussion we had, yes. We're also including the portal for Centaurus and easy deployment, the one-click deployment, which we don't have yet. So we're going to get that, build the edge there, and at the same time see whether there are customer use cases, scenarios.
A
Okay, we are already running 20 minutes over, so let's wrap up there. I just want to make sure people are okay with this, or whether we want to have the detailed discussion over email.