From YouTube: Centaurus Monthly TSC Meeting 4/27/2021
A: Yeah, okay, here it is. Hi everyone, welcome to this monthly TSC meeting. As usual, this meeting will be recorded. For the agenda today we have two items. The first item is a project proposal from Deepak. Do you need a TSC vote for this item?
C: Well, let me present it first and then we can vote. Yeah, okay.
A: After that we have Pengdu, who is leading our edge SIG. He will present the high-level architecture and the plan for the edge work in the next few months, and he hopes to get some feedback and advice from the TSC members. Those are the two items today; let's start with the first one.

C: Okay, can I share my screen? Okay.
C: Can you see it? Okay, good. So I think it was at the meeting before the previous meeting that I mentioned the whole scalability issue in a Kubernetes environment, and Apache Ignite as a Kubernetes API server data store. So I'm going to cover what the issues are, what kind of work we did, and how we enabled a scalable Kubernetes.
C: Yeah, there you see it, okay. So the key thing: I think most of you folks are familiar with Kubernetes. The API server is the heart and soul of any Kubernetes cluster; it's equivalent to the syscall interface of a Linux kernel, so it's very important. Essentially, the way it works, just as a high-level, 30,000-feet description:
The API server in Kubernetes has pretty much all this time relied on etcd as the underlying schematized key-value store, and the controllers operate on it in order to drive the current state to the desired state. So all this time etcd has been used as the underlying data store for the API server. Now, because of the non-partitioned nature of etcd,
it places a lot of limits on how far it can scale. Currently etcd doesn't scale horizontally; you can only scale it vertically. You can have multiple instances of etcd, but that is for high availability: only one member is the leader and active at any one time. So what that does is:
it makes etcd highly available, but not scalable; you cannot horizontally scale it. And the limit of how far you can scale etcd is essentially that it freezes after some point. In recent experiments in the community, they have gone up to about 10,000 nodes, and after that etcd pretty much freezes. So your cluster cannot go beyond 10,000 or 15,000 nodes.
So overall, this non-partitioned nature of etcd contributes to the lack of Kubernetes cluster scalability, and to the overall inefficiency in a hyperscale Kubernetes cluster environment. If you want to have a bigger cluster, with fifty thousand or a hundred thousand nodes, you can't do that with the current etcd as the key-value store.
So what we did was start exploring: what if we have a truly partitioned, distributed data store underneath as an alternative to etcd, and on top of that, a data store that is in memory? Let's see if that allows a Kubernetes cluster to scale. To that end, a couple of months ago,
C
Actually,
we've
we
finished
our
collaboration
with
the
grid
gain
folks
actually
to
see
if
we
can
replace
the
apache
ignite,
open
source,
in-memory
partition
data
store
to
replace
fct
in
kubernetes
and
see
how
you
know
the
how
it
helps
you
know,
scalability
in
in
a
community's
cluster
environment.
So that's the work we did, and this is the high level. You can see there are three pictures on the screen. What we did, in order to demonstrate the horizontal scalability of the Ignite data store: the very first screenshot you see here is an experiment we ran in Google Cloud.
C
Essentially,
we
ran
kubernetes
cluster
by
limiting
the
the
using
fct
and
using
with
by
limiting
the
the
cpu
and
the
and
the
memory
require
configuration
as
four
four
gig
of
memory
and
then
four
hundred
milli,
milli,
cpu
and
then
this
is
essentially
running
on
e2
high
mem
to
machine
type,
so
so
by
restricting
this
limit
on
on
on
the
of
the
cluster,
so
lcd
chokes.
Basically
so
when
we
ran
a
density
test
with
this
memory
and
the
cpu
configuration
scd
choke.
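For reference, the kind of cap described here, 4 GiB of memory and 400 millicpu, would be expressed like this with the Kubernetes Go API types. This is a minimal sketch for illustration only; the container name and image tag are placeholders, not the actual test setup.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// The cap from the talk: 4 GiB of memory and 400 millicpu.
	limits := corev1.ResourceList{
		corev1.ResourceMemory: resource.MustParse("4Gi"),
		corev1.ResourceCPU:    resource.MustParse("400m"),
	}
	// Container name and image are placeholders, not the real deployment.
	etcd := corev1.Container{
		Name:  "etcd",
		Image: "k8s.gcr.io/etcd:3.4.13-0",
		Resources: corev1.ResourceRequirements{
			Limits:   limits,
			Requests: limits, // requests == limits so nothing is overcommitted
		},
	}
	fmt.Printf("etcd capped at %s memory / %s cpu\n",
		etcd.Resources.Limits.Memory(), etcd.Resources.Limits.Cpu())
}
```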
A: Hello, yeah. How many nodes were in this test? I mean, how much capacity was on the Kubernetes side, the API server side, for this test?
A: That's the etcd configuration, I mean. On the Kubernetes side, let's say when you ran the test, how many nodes did you put in that cluster? Yeah, I mean the nodes: how many nodes, how many pods?
C: Yeah, I don't have the exact number, but I think maybe 500, or not even that, because we couldn't really push it. Even less than that, actually. I see, so...
C: We didn't really try to scale it. We wanted to, and I'll get into why we didn't do that kind of scalability testing, but the key goal initially was to demonstrate the limitation of etcd. So you can see that with this configuration etcd chokes, and there's nothing
you can do about it at that point; the whole cluster kind of times out, so every API server call starts timing out. Then what we did, in the screenshot in the middle: we replaced etcd with Ignite, with the same memory and CPU configuration, and that chokes as well with a one-node Ignite cluster. And then, because Ignite, as opposed to etcd, is horizontally scalable,
there was no problem: the Kubernetes cluster density test completed successfully. The key thing to highlight here is that there is a limit to how far you can go scaling etcd vertically, and at that point there's nothing you can do about it; whereas a truly partitioned data store underneath, like Ignite, essentially addresses that issue.
A: So each one of them has the master/slave configuration?

C: Yes, yes, they do, so they have redundancy at every level. Right, yeah, that's true. Now, one of the things we didn't do: we know that Ignite is horizontally scalable, but we didn't really go to the extent of doing high-scale testing,
testing with a bigger cluster, with 20,000 or 30,000 nodes in it. One of the reasons we didn't do that was budget and resource constraints, and at the same time we found out that at some point, even if your data store is partitioned and scales horizontally, the API server itself chokes: the Kubernetes API server has a built-in map cache in memory, and that starts choking, so scaling the store alone doesn't really help, you see.
So we decided that even if we spent the time, and the money, to do that level of testing, we were not going to get beyond a certain limit. Essentially, it's not just a matter of scaling your data store; some work needs to be done at the API server level in Kubernetes as well. That's when we decided not to do the high-scalability testing at this time, and to do it as part of Centaurus, where we have
addressed that issue by partitioning the API server itself. And that's what our project proposal is, actually: we plan to replace etcd with Ignite and, at the same time, leverage the partitioned API server capability that is part of Centaurus, and then demonstrate it and see how high we can go. We're pretty optimistic that we can easily go to something like 50,000 nodes by partitioning the API server, and the data store is automatically partitioned anyway.
C: Yeah, so just to summarize: as part of the collaboration we demonstrated the horizontal scalability, but we didn't really do large-scale testing, for the reasons I mentioned. And this is the high-level architecture, just like any partitioned data store. I think Amazon Dynamo works this way, though that's not in-memory; any partitioned database works the same way, actually.
C: So the way we replaced etcd with Ignite was such that Kubernetes doesn't even know; the API server doesn't even know. The API server still talks the etcd APIs, and we built a shim layer, called the Ignite etcd shim layer, that does the conversion from the etcd API to the Ignite API underneath it. So from the Kubernetes API server
standpoint, it still thinks it's talking to an etcd data store, but the shim layer intercepts that and uses the underlying partitioned Ignite data store instead. And Apache Ignite is horizontally scalable, so that solves all the issues of vertical scaling.
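As a rough sketch of the shape of such a shim layer: the API server keeps issuing etcd-style Put/Range calls, and the shim forwards them to a different backend. The types below are simplified stand-ins for the real etcd gRPC messages, and an in-memory map stands in for the actual Ignite client; the real conversion code lives in the repo mentioned later in the talk.

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// Backend is whatever the shim translates to; in the talk it is Ignite.
type Backend interface {
	Put(key, value string)
	GetPrefix(prefix string) map[string]string
}

// mapBackend stands in for the Ignite client in this sketch.
type mapBackend struct {
	mu   sync.RWMutex
	data map[string]string
}

func (b *mapBackend) Put(k, v string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.data[k] = v
}

func (b *mapBackend) GetPrefix(prefix string) map[string]string {
	b.mu.RLock()
	defer b.mu.RUnlock()
	out := map[string]string{}
	for k, v := range b.data {
		if strings.HasPrefix(k, prefix) {
			out[k] = v
		}
	}
	return out
}

// EtcdShim exposes etcd-shaped operations and forwards them to the backend;
// the API server keeps believing it is talking to etcd.
type EtcdShim struct{ backend Backend }

func (s *EtcdShim) Put(key, value string) { s.backend.Put(key, value) }

func (s *EtcdShim) Range(prefix string) map[string]string {
	return s.backend.GetPrefix(prefix)
}

func main() {
	shim := &EtcdShim{backend: &mapBackend{data: map[string]string{}}}
	// The API server writes objects under /registry/... exactly as it would to etcd.
	shim.Put("/registry/pods/default/nginx", `{"kind":"Pod"}`)
	fmt.Println(shim.Range("/registry/pods/"))
}
```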
C: Remember the thing I mentioned: at some point with vertical scaling, the whole cluster just chokes; by scaling horizontally you address that issue. And everything goes through the API server traffic.
C: All the calls to the underlying data store go through the Ignite client, which is basically a proxy. That proxy is aware of all the partitions, and depending on your partitioning logic it will route the API call to the appropriate partition. And this is all redundant as well: you can have multiple copies of this proxy.
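A simplified illustration of that routing behavior: the proxy hashes the key and forwards the call to the partition that owns it. Apache Ignite's real thin client uses its affinity function and partition map rather than a plain hash-mod scheme, so treat this only as a sketch of the idea (10800 is Ignite's default thin-client port, but the addresses are made up).

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Partition is one data partition in the cluster.
type Partition struct {
	ID   int
	Addr string
}

type Proxy struct {
	partitions []Partition
}

// route hashes the key and picks the owning partition.
func (p *Proxy) route(key string) Partition {
	h := fnv.New32a()
	h.Write([]byte(key))
	return p.partitions[int(h.Sum32())%len(p.partitions)]
}

func main() {
	proxy := &Proxy{partitions: []Partition{
		{0, "ignite-0:10800"},
		{1, "ignite-1:10800"},
		{2, "ignite-2:10800"},
	}}
	for _, key := range []string{"/registry/pods/a", "/registry/nodes/n1"} {
		part := proxy.route(key)
		fmt.Printf("%s -> partition %d (%s)\n", key, part.ID, part.Addr)
	}
}
```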
C: And this is what we did, essentially, as part of our collaboration so far: we have achieved functional parity with Kubernetes. Essentially we enhanced Ignite, because when we started this collaboration,
the Ignite data store did not support a lot of the features that etcd supports as the underlying Kubernetes data store, so we had to plug all those holes. This is all of the functionality we had to build out in order to replace etcd as the data store. Okay, so this is what we have done.
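Two of the etcd semantics that Kubernetes leans on, and that this kind of hole-plugging has to reproduce, are a global monotonically increasing revision and revision-stamped watch events. Below is a toy sketch of those two pieces, with no claim that it matches how the actual Ignite extensions implement them.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Event mirrors the shape of an etcd watch event: every change carries the
// revision at which it happened.
type Event struct {
	Key, Value string
	Revision   int64
}

type Store struct {
	rev      int64
	mu       sync.Mutex
	watchers []chan Event
}

// Put stamps every write with the next global revision and fans it out to
// all registered watchers.
func (s *Store) Put(key, value string) int64 {
	rev := atomic.AddInt64(&s.rev, 1)
	ev := Event{Key: key, Value: value, Revision: rev}
	s.mu.Lock()
	defer s.mu.Unlock()
	for _, w := range s.watchers {
		w <- ev
	}
	return rev
}

func (s *Store) Watch() <-chan Event {
	ch := make(chan Event, 16)
	s.mu.Lock()
	defer s.mu.Unlock()
	s.watchers = append(s.watchers, ch)
	return ch
}

func main() {
	s := &Store{}
	w := s.Watch()
	s.Put("/registry/pods/p1", "v1")
	ev := <-w
	fmt.Printf("event %q at revision %d\n", ev.Key, ev.Revision)
}
```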
C: As in the previous slide, I showed that Ignite scales horizontally; we demonstrated that, and then we demonstrated parity by running three different test suites: the density test, which is part of Kubernetes, the load test, and the integration tests. They all pass, basically, and what that means is that we have functional parity with etcd. And then these are the security features, actually, yeah.
E: I'm sorry, one question on the previous slide: is this now part of Ignite, or did you fork?
C: Yeah, so this is currently in this repo right now. All the work we did extending Ignite has not been upstreamed to Ignite yet; all the extensions to Ignite are currently sitting in this GitHub repo. And by the way, we didn't touch Kubernetes at all: all the extensions, all the functionality we built, was done by extending the Ignite data store, and it's sitting in this GitHub repo. And the plan, the proposal,
the reason I'm presenting this today, is to incorporate this work into Centaurus and at the same time demonstrate the scalability of Centaurus beyond ten thousand nodes. Okay, does that answer your question?

E: Thank you, yes. Thank you.

C: Nice, okay, good. So, quickly: these are the inherent security capabilities within Ignite that we can tap into once
we replace etcd: you can encrypt the data itself. And then we had to build out all these metrics and so on too, in order to be fully compatible with the etcd functionality in Kubernetes. So currently, as I mentioned, this is where all the code sits; anybody can go in there and play around with the PoC. But the goal, the proposal,
as part of the Centaurus proposal, is to include this work as part of Centaurus and then demonstrate the high scalability of the Centaurus cloud platform. Basically.
C: Yes, that's the plan. Once we successfully integrate with Centaurus (currently it's integrated with Kubernetes, so obviously it should run seamlessly as part of Arktos, or Centaurus, as well), and once we successfully demonstrate all that, the plan, the proposal, is to move all this into Centaurus itself. It would be a project just like in Kubernetes there are different projects; this would be one project underneath Centaurus. That's the proposal.
A
Do
we
want
to
do
we
want
it
to
be
a
stand-alone
project,
side
to
side
with
actors
or
a
part
of
actors?
I
mean
it's
definitely
a
standalone
ripple.
Do
we
want
to
make
it
a
standalone
project
anywhere.
C
Well,
so
I
think
so
so
we
so
so
currently
in
our
tools,
we
already
have
partitioned
xcd
functionality.
Basically,
yes,
you
need
to
kind.
C
C
We
have
ignite
datastore
as
an
option
as
a
project,
so
we
need
we
need
to
decide
one
way
or
the
other,
so
we
can
either
replay
it
so
just
just
to
let
the
other
folks
know
stefan
and
prashanth
and
everybody
else
that
currently
in
our
toes,
we
have
done
some
work
as
far
as
you
know,
partitioning
the
lcd,
but
which
is
kind
of
not
really
a
true
partitioning,
but
we
did
that
and
if
it
works
and
we
are
able
to
get
pretty
good
numbers
at
least
so
now
so
one
way
would
be
to
kind
of
not
just
replace
the
work
which
we've
done
in
ignite
and
replace
that
you
know
on
top
of
or
the
work
we
have
done
or
have
ignite
datastore
as
an
option.
Basically, as part of Centaurus you could choose one or the other. So we need to decide; I'm not really sure what we want to do, so we can have that discussion. But in either case we will move the code in, either as an alternative data store in Arktos, or Centaurus, or maybe to just replace the partitioned etcd work we have done. Basically.
A
Yeah,
I
think
the
first
step
we
can.
We
can
move
this
repo
under
the
centers
right,
so
yeah
exactly
ripple.
Let's.
A: Yeah. To accept a new project, or a new repo, actually requires the TSC to vote. Today we have Deepak, Stefan, Prashanth, and me joining this meeting, so we have already reached the quorum and we can vote on this. Basically I'm asking if the TSC members are okay with accepting this Ignite repo into Centaurus.
D: Deepak, one quick question for you, on the etcd versus Ignite thing: what about the upstreaming effort? I mean, when you guys did that, I can see that it does horizontal scaling, but was the value add really above and beyond what you expected when you measured it?
D
Why
did
we
decide
to
do
that
or
did
you
give
the
feedback
to
the
to
the
kubernetes
community?
Oh
yeah
yeah.
So
yes,.
we did, we did. We were in touch with the API machinery group, and they're very happy to see any data. I think the AWS guys did the same thing, actually: they tried replacing it. I don't think they have published that information, but there was a discussion; they replaced etcd with Dynamo, actually. Yeah.
C: There was another effort as part of k3s. k3s, you know, is the smaller version of Kubernetes; in a project called Kine, they replaced etcd with a stripped-down version of the etcd API using a SQL database, MySQL actually, underneath as the data store. But that's part of the k3s project itself.
So there have been two efforts like what we did. One effort is part of AWS, which is not open source, and the other kind is part of k3s, which is part of CNCF. Most likely we're going to reach out to CNCF as well, to see if the Kubernetes community can benefit from this effort of ours too, just like Centaurus.
C
So
potentially
this
could
be
a
cncf
project.
You
know
itself,
you
see
now
the
the
the
there's
a
difference
with
the
work
with
the
k3s
guys
did
was
the
the
effort
they
did
was
part
of
the
k3x
itself,
but
in
our
case
it
will
be
a
standalone
project,
a
data
store
replacement.
So
we
need
to
kind
of
explore
that
we
haven't
done
that
yet,
but
that
will
be
part
of
fossils
here.
And then, yeah, the Kubernetes community: we've been working with them as well, and they suggested that we should make it a project as part of CNCF, as opposed to inside Kubernetes itself, just like other people have done.
A: Okay, great, great. With the fourth member here we reached the quorum, and we approved this etcd-Ignite project to be a new sub-project under the Centaurus project. In a later part of the meeting we'll work out who will move this repo to the Centaurus org.
C
So
one
of
the
effort
which
we're
gonna
do
to
do
the
the
scalability
testing,
so
we
would
need
to
involve
great
gain
folks
as
well.
So
I
was
hoping
that
nikita
is
in
the
call
and
they'll
be
very
happy
to
participate
in
that
because
it's
in
their
interest
as
well.
Basically,
you
see
they
can
go
and
tell
the
world
that
you
know
we
can
be
the
underlying
data
store
for
cloud
environment.
C
Yeah
yeah,
I
think
so
he's
traveling
so
yeah,
so
I
think
that
they
did.
They
showed
a
lot
of
interest
it's
in
their
interest
as
well,
because
they
want
to
do
a
lot
of
marketing
stuff
and
all
that
you
know
based
on
the
the
results
you
know
once
we
demonstrate
that
we
can
scale
up
to
higher
number
of
nodes
in
a
cluster.
A
Okay,
great
yeah,
I
will
follow,
follow
up
with
you
offline,
to
see
how
yeah
yeah
we
remove
the
ripple
and
all
the
related
issue
of
pr
history:
okay,
okay,
thank
you,
deepak
and
thank
you
money
for
sending
out
the
survey,
and
next
we
have
the
the
second
remember.
We
mentioned
the
united
states
meeting.
We
will
have
a
rotated,
sig
review.
B: Thank you. So yeah, this part is a high-level review of our thinking and design for the edge. The goal is to extend the Centaurus functionality and infrastructure to be able to support workloads on the edge; we'll talk about what that means. Currently we have put a code name on this: it's called Fornax. If you wonder what that means, it's a constellation.
B
The
the
the
mental
image
that
we
can
have
is
think
of
centaurus
or
the
actos
as
the
cluster
of
stars,
and
the
edge
feature
would
be
as
another
cluster
or
another
few
clusters
of
stars
surrounding
actos
okay.
B
So
this
is
the
agenda.
Today
we
have
four
parts.
We
talk
about.
The
use
case
requirement
requirement
is
based
on
what
we
see
and
what
we
think
is
necessary
for
and
practical
for,
for
actors
to
support
based
on
the
requirements.
We
will
talk
about
the
how
how
we
think
the
the
add
feature
for
actors
should
look
like
the
other
modeling
and
design,
and
also
the
proof
of
concept,
is
ongoing
and
we'll
report
some
of
the
progress
and
the
goal.
B
Basically,
the
goal
of
the
talk
today
is
to
introduce
this
project
and
to
get
the
feedback
from
you
guys.
Okay,
so
let's
talk
about
the
use
case.
B: First, based on the research we have done so far, there are three kinds of use cases for edge. One is IoT: essentially you have some device, say an Alexa at your home, and you want to control it; you want to do some inference on the edge, in your house, and you want to do some data mining or some AI work in the cloud. That's the IoT kind of edge. The second kind is what we call the smart factory.
B: The third kind of scenario is where the user can be moving fast from a certain location to another location, so the user session and all the user data need to be accommodated accordingly. That's the difference between the third one and the first two: in the first two, the connection point usually does not change, and in the third one we assume the user can be moving around. So we have three kinds.
B: We have seen these three kinds of use cases for edge. For the first kind, just to show you some solutions from other sources: you can think of this as, essentially, some heavy workload running in the cloud and some devices running on the edge. These are two examples: one is from Azure, the other is from Google. The idea is we can run AI.
We can train the model in the cloud, with a lot of data from everywhere, then send the model to the edge and run the inference on the edge. So this is the IoT edge. The other kind is, as we said, you can have a small cluster running. This is an example from Chick-fil-A, where they have so many restaurants; you can see here the restaurants they have across the United States, and inside each restaurant,
they run a cluster. And then the problem comes: do we treat them as a cluster of clusters, a set of clusters? Then how do you do deployment? How do you do maintenance, all kinds of things? So this is another kind of edge. The bottom line is, when we talk about the edge, it's not just a device; it's not just a camera somewhere that we're controlling from the cloud. It could be a full-fledged cluster on the edge. Okay. And there are some other use cases.
B: I will not spend too much time on it here; I'm just showing some pictures of the infrastructure designs that they have. This one is from one of the Akraino projects; this one is from StarlingX. I'm going to show a few of them, but you don't have to look into the details too much. The idea, based on these pictures, is what we can see for edge design, or the next-generation edge design.
There's a common thing between all these different pictures, all these different projects, and that common thing is essentially where our key requirements come from. Here you can see that it's not just one single level; it has a few levels. The same story here: on the right we have what's called the regional data center, and we have the local data center, the edge, and all these things. So it's not just one cloud and one edge; there is something in between.
B: Okay, so we took that as the projected infrastructure for the future, and we wanted to ask: how do we accommodate this? What kind of features should our Centaurus edge support? We came up with two things. One is that we think the edge should be autonomous. The idea is that there are two things, or at least two things, that could go wrong. One is the network, because we cannot assume the edge has a very stable connection.
It's not like in, you know, CNCF projects, where everything is in a data center that's well maintained and has staff working on it. We cannot assume that; the network could come and go. And the second one is the edge node itself: if you have a node running in a restaurant, running in the desert, running in a car, there could be no staff working on it, and that node could go down.
B: So we have these two situations, and we still want to be able to support the workload. The idea for Centaurus edge is that we want to support the case where, if condition one or condition two happens, the workload keeps running as much as possible; and also, when one and two happen at the same time, we still want to be able to support that workload being part of a deployment.
We still want to be able to support that as much as possible. The reason I say "as much as possible" is that it's also different from the cloud, where we have an almost indefinite amount of resources: if a machine goes down, we can get a new one. On the edge there's usually a very limited amount of resources, so there is a case where, if the third condition happens, we don't have any more resources.
B: Okay, so the first kind of requirement we want to support is autonomy: we want the workload to be up and running whether condition one, condition two, or condition three, which is a combination of one and two, happens. Okay, the second one we call flexibility. We could probably use another word, but flexibility is how we think of it.
So the idea is, we want to separate the action, the action being the workload running, or that kind of workload, from the control, which could be another set of workloads. The idea is that there's local and there's global, and some actions will run on your local machine, on your local cluster.
Some of the workloads would act as the controllers and that kind of thing, and they will run at a higher-level place, where they take the input from all the local inputs and do something there. So, two possibilities. One is: we want to be able to support both local and global. For some cases we prefer the local, because for them,
if this kind of thing happens, we still want them to be able to run their workload. And also, in another case, the user might say: I don't want to send all my data to the cloud, but I can send some of the data, and you can do some analysis based on it and then send results back, to improve whatever is running on my local machine, on my car, on my cell phone, or on my camera. So we cannot say which one should be the case for the edge.
B: Okay, so based on the requirements, we originally thought: it's not just one cloud and one edge; there should be some kind of data center in between. So it could look like this: we have all the devices, and the devices connect to some machine here. I said "data center," but it could be just a small machine.
It could also be a very big cluster. So it's not necessarily a huge data center like Amazon, all those AWS things, but still, all the devices connect to the data centers, and the data centers connect to the cloud. This is the original image we had in mind, but after doing some research, and based on the pictures we saw earlier, it's actually not just one more level; it could be a few more levels.
So that's why we said: well, let's just extend this to allow this flexibility, meaning that if the user has more levels of data centers, more levels of sites, we still want to be able to support that. So, essentially, the first kind is the IoT edge. We said: okay, for IoT, it's actually not new; there are a bunch of solutions already out there.
For example, Kubernetes is very good at supporting this. But for the first kind, remember conditions one, two, and three: it can support only one of them. For example, if the connection goes away, the workload still runs; but if the workload fails and the connection goes away at the same time, then there's nothing you can do about it. So, the way to solve that: the reason KubeEdge, or that kind of edge solution, cannot do
that is that there's no control plane there. The control plane needs to be involved if we need to reschedule anything. So: let's put the control plane on the edge. This is where the edge cluster comes in. Instead of just having one node on the edge, we can have a control plane on the edge; we have a cluster, and all
these nodes will be in the same region, or the same site, very close to the control plane, so that the control plane can still work. And if you lose the connection, and you lose one node or two nodes, because we have a control plane there, we can still shift things around. So for this case the design is: if we have edge cluster support, we're able to support the autonomy idea. Remember we have two: one is autonomous, the other one is flexible.
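A toy illustration of that "shift things around" behavior: a local control plane on the edge notices a dead node and reassigns its pods to the surviving nodes, with no connection to the central cloud required. All names and types here are illustrative, not the actual Fornax design.

```go
package main

import "fmt"

type Node struct {
	Name    string
	Healthy bool
}

type Pod struct {
	Name string
	Node string
}

// reschedule moves pods off unhealthy nodes onto the surviving ones,
// round-robin; if no node is healthy (condition three), it gives up.
func reschedule(nodes []Node, pods []Pod) {
	var healthy []string
	dead := map[string]bool{}
	for _, n := range nodes {
		if n.Healthy {
			healthy = append(healthy, n.Name)
		} else {
			dead[n.Name] = true
		}
	}
	if len(healthy) == 0 {
		return // no spare resources on the edge, nothing we can do
	}
	next := 0
	for i := range pods {
		if dead[pods[i].Node] {
			pods[i].Node = healthy[next%len(healthy)]
			next++
		}
	}
}

func main() {
	nodes := []Node{{"edge-0", false}, {"edge-1", true}}
	pods := []Pod{{"cam-feed", "edge-0"}, {"inference", "edge-1"}}
	reschedule(nodes, pods)
	fmt.Println(pods) // cam-feed has been moved to edge-1
}
```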
B: If we put a cluster here, that should work. Then the other one is flexibility. If we just have this, well, here we actually have three layers: the cloud, the data center, and some device. Imagine you have a camera, and imagine this as a worker node: the device will connect to this, which is essentially the data center we talked about, so the device connects to the data center, and the data center connects to the cloud.
B: We want to say that if you have another cluster, or a few clusters, based on your requirements, based on your topology, and you want to connect them together instead of connecting everything to the cloud, we still want to be able to support that. The scenario for this is the user saying: I have all my data here; I don't want to send it to the cloud; I want to do some processing. But I could have a bunch of these clusters around here, and they can send some information to a higher level.
For example, Seattle can do some processing, and the whole Pacific Northwest could have its own data center that does some aggregation of all the data sent in from all the Pacific Northwest cities' clusters. So if the user decides that they have this kind of topology, we still want to be able to support that, and the way to support that is to provide this kind of capability, so that clusters can be connected together.
B: In this way, we call this a hierarchy, or cascading edge clusters; you can have a bunch of those. And again, the person, or the entity, the side that decides they want this kind of connection or this kind of topology: it's not us. We just provide the capability, and if the user wants to do that, they can.
It's not us making the decision. Then you may have this question, based on the cascading clusters here: do we want to connect them? For different kinds of scenarios you may want to do different things. There may be cases where I have all these clusters and I don't need that kind of hierarchy of clusters; also, for some reason, I may not be able to connect them: for example, some edge clusters on the edge do not expose anything.
So if that happens, all the other local clusters cannot connect to it. For this case, the user may say: it's fine for me to connect everything to the central cloud; I still want to control everything from the central cloud.
B: We want to support that. Actually, that's the first step we want to support in our proof of concept: that we are able to control and send a deployment, send the pods, to the remote edge cluster, and be able to see the status of it. So that's the first kind of scenario, and we will support it. The second one, of course, is the case where we have a set of clusters that can be connected together, and the user
can say they're able to control all of that from the cloud. And maybe they say: if something goes wrong, if the connection is down, I still want to be able to come to the cluster, for example come to my machine room, and access it; otherwise I don't have any visibility there. So, in our design,
we still want to provide this kind of thing, so that would be the second model. And again, the choice is on the user side; it's not that it's only the one on the left or the one on the right. People have different scenarios for edge; it's a very complicated space where people have different situations, so we cannot make any assumptions here. But the first kind is very good for clusters on the internet: you have a bunch of clusters and you want to control them.
The second one is where we have some kind of hierarchy and different kinds of processing on each cluster; it's not the same workload just distributed, it has this hierarchical topology. If that's the case, the user can still choose to do it like this.
B: Okay, then, following this, you may also have the question: okay, we have these clusters; how do they even connect to the cluster in the cloud? Where do all these clusters come from? I think... sorry.
E: Can I ask you a question about this control here? If I'm understanding this correctly, would this be like, in very technical terms, configuring my kubectl to connect to this particular cluster, let's say C11 or C1? Or are you talking about a different kind of control, like, you know, connecting with UART or some USB or whatever to the machine and then managing the machine?
B: The best case is that the user can sit in one place; they call that a single pane of glass, where you see everything. And they say: okay, I see this cluster C11; I want to see what's running on it, what the health condition is, and whether I can deploy something. So this is the kind of control; it doesn't have to go through... but yeah, that's the kind of control we're talking about.
Right, right. Yes, that's a very good question. The idea we have so far is, for example on the right: if we have many of these clusters, whether they're connected to the center or connected to each other,
if we just report everything all the way up, it's going to get more and more congested, as you may imagine. So the idea is that at a higher level, everything is just looking down, meaning that each level is just taking control of what's right beneath it, and if we want, we can dive down into it. For example, if you are here and you want to see the conditions here, by default you will not see the details; if you do want to see those details, we have a way to send down those requests.
For example, if you are at the higher level, you just see this C11 cluster: what kind of workloads are running on it, and what the condition of those workloads is. For example, we have a deployment running on C11. By default the user should not...
he will just see: okay, I have all my deployments, and they go to different clusters. And if they want to see a certain pod running in that deployment, for that case we probably have to send down a request, and the request will be passed through to here and reported back. By default, we don't report every single pod status all the way to the highest level; otherwise there could be a scalability problem.
A: No, no, let's say I only connect to the blue one. I know that internally you might send some requests to the lower level to retrieve the information, but from the user's perspective this is transparent. Is the user still able to see the pod-level information if he only connects to the top level?
B: That call is going to take a very long time, but yeah. Think of it as, we use the analogy of the army: the top level can still get the status of every single soldier, but normally they just talk to their lower level, their direct contacts, and if they want more information, that information will come up from the bottom.
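A small sketch of that reporting model: each level reports only an aggregate upward by default, and a pod-level query is passed down through the hierarchy on demand. Types and names here are illustrative only.

```go
package main

import "fmt"

type Cluster struct {
	Name     string
	Pods     map[string]string // pod name -> status, known only locally
	Children []*Cluster
}

// Summary is what flows upward by default: an aggregate, not individual pods.
func (c *Cluster) Summary() int {
	total := len(c.Pods)
	for _, ch := range c.Children {
		total += ch.Summary()
	}
	return total
}

// PodStatus is the on-demand drill-down: the request is forwarded down the
// hierarchy until some cluster actually owns the pod.
func (c *Cluster) PodStatus(pod string) (string, bool) {
	if s, ok := c.Pods[pod]; ok {
		return s, true
	}
	for _, ch := range c.Children {
		if s, ok := ch.PodStatus(pod); ok {
			return s, true
		}
	}
	return "", false
}

func main() {
	c11 := &Cluster{Name: "c11", Pods: map[string]string{"web-0": "Running"}}
	root := &Cluster{Name: "central", Children: []*Cluster{c11}}
	fmt.Println("total pods:", root.Summary()) // the default, aggregate view
	status, _ := root.PodStatus("web-0")       // the on-request drill-down
	fmt.Println("web-0:", status)
}
```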
B: Okay, back to the question: if we say we want to support both, where do these clusters come from? How do they connect to the central cloud? What's the process? There are different ways; we actually put this slide the other way around, so I'll do this one first. If you have a cluster already, and it's running in your data center, in your house, in, like, a university, you can run some of our components.
The component's job is to set up this control plane on the edge so that it will talk to the cloud: it will do the reporting, it will do the connection, and by having this connection, the user can see the status of this cluster from above. So that's the first kind; we call it attachment. Essentially you have a cluster, you connect it to the existing cloud, and now you're part of this bigger cluster.
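A minimal sketch of what such an attachment agent could look like: it registers the edge cluster with the central cloud and keeps heartbeating over the same connection, and it simply carries on when the cloud is unreachable. The endpoint, payload, and interval are all hypothetical.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

// Hypothetical registration endpoint on the central cloud.
const central = "https://central.example.com/api/edge-clusters"

func report(cluster string) error {
	body := []byte(fmt.Sprintf(`{"cluster":%q,"healthy":true}`, cluster))
	resp, err := http.Post(central, "application/json", bytes.NewReader(body))
	if err != nil {
		return err // network is gone; the edge keeps running autonomously
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	for {
		if err := report("restaurant-042"); err != nil {
			fmt.Println("cloud unreachable, staying autonomous:", err)
		}
		time.Sleep(30 * time.Second) // illustrative heartbeat interval
	}
}
```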
B: The other kind, which we're still working on: the idea, from the user's perspective, is that they want a cluster on the cloud and they have some nodes. We want to provide a feature where the user can say: I have five nodes here, and I want to run, like, a deployment there, and I need a control plane there.
In this case, I don't want to care about who is the master and who are the worker nodes; I don't want to care about that. So we want to provide the feature that the user, while still connected to the central cloud, can just select a bunch of nodes and say: put these together into a cluster, and send the information back to me.
B
We
call
this
self-organizing,
so
the
user
doesn't
have
to
worry
about
all
those
things
they
will
connect
to
each
other
being
able
to
do
that,
apparently,
is
not
going
to
be
like
straightforward.
We
need
to
put
some
software
on
it
and
they
are
able
they
should
be
able
to
switch
from
a
different
role
for
being
a
edge,
node
or
or
edge
cluster
master
or
cluster
workout.
So
we're
still
working
on
this
idea
so
so
far,
two
options.
One.
You
can
just
have
one
cluster,
you
can
connect
it
to
the
clouds.
B
Okay,
all
right!
This
is
just
a
bigger
picture.
Actually,
the
there's
another
thing
that
we're
still
working
on
is
we
call
that
the
east-west
connection,
the
inter-edge
communication.
B
This is very important for, like, MEC, the third scenario, where the user can move around. So far, a lot of the communication, for example if you have one session, or some application running here in Seattle and another one running in Salt Lake City, the connection has to go through the central cloud and all the way back down to Seattle. As you can see, that causes a latency increase, which could be very large. So the idea is:
we want to establish this kind of connection between them, so that if the user quickly shifts from one location to another, all his session data, all his things stored in this cluster, can be sent over, can be communicated over, to the other cluster, and the user can still enjoy the same kind of latency; he doesn't even have to know about it. And the idea is we don't have to go through the central cloud.
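A toy sketch of that east-west handoff: the source edge cluster hands the session state straight to a peer cluster instead of hairpinning through the central cloud. Peer discovery, authentication, and the session format are open design questions in the talk, so everything here is illustrative.

```go
package main

import "fmt"

type Session struct {
	User string
	Data string
}

type EdgeCluster struct {
	Name     string
	Sessions map[string]Session
	Peers    map[string]*EdgeCluster // direct east-west links
}

// Handoff moves a session straight to a peer cluster, with no hop through
// the central cloud.
func (e *EdgeCluster) Handoff(user, peer string) error {
	p, ok := e.Peers[peer]
	if !ok {
		return fmt.Errorf("no direct link to %s, would have to hairpin via the cloud", peer)
	}
	p.Sessions[user] = e.Sessions[user]
	delete(e.Sessions, user)
	return nil
}

func main() {
	seattle := &EdgeCluster{
		Name:     "seattle",
		Sessions: map[string]Session{"alice": {User: "alice", Data: "session state"}},
		Peers:    map[string]*EdgeCluster{},
	}
	slc := &EdgeCluster{Name: "salt-lake-city", Sessions: map[string]Session{}}
	seattle.Peers["salt-lake-city"] = slc
	if err := seattle.Handoff("alice", "salt-lake-city"); err == nil {
		fmt.Println("alice is now served from", slc.Name)
	}
}
```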
B: This is another part of the proposal that we're working on. Okay, so far, that's the high-level introduction and the ideas. As you can see, we have a lot of ideas; we're still working on some of them to get more details and to see where we can improve, and we will have some feature designs. So here is the schedule: we started in March.
We have had a bunch of reviews, and we're doing the proof of concept; we've made some progress and it's very promising, so we're happy with that. At the same time, as we do the PoC, the feature design, the actual official design, will be based on what we find in the PoC. The PoC also exposes a lot of work for us to do: something simple in idea may take time to actually get down to work.
B: So we found a lot of this, and we implemented a lot of the ideas, so we can see that they can be implemented and how much work they take. The PoC is going fine; we are here today. And here is the overall plan: we will have the final, official design by the end of May, and the first release will be by the end of July.
B: Basically, what we wanted to see is what the future data center will look like, and what other people are thinking of, or what other people are trying to solve.
B: We have connections with the KubeEdge MEC group; I think it's the KubeEdge SIG, the Kevin guy, right, I think.
C: Yeah, so we're collaborating with China Unicom and China Mobile as well; we're building a blueprint as part of Akraino, actually. So this whole MEC thing, the multi-access edge cloud. Just to give you a reference, there's a lot of information: all of the work the China Unicom folks are doing as part of GSMA, and there are three or four white papers.
D: ...the literature, okay. I mean, because these are important projects that are going on; currently StarlingX is a very important project, so I would like to understand how we are referring to those. But your presentation was very thorough and very helpful in terms of the design aspect of it, and I'd like to hear more in the next meeting, if possible.
B: Yeah, we wanted to find a place... I think everybody has their field that they're working on. We wanted to find some field that we think is ahead, if we look forward from what we have today. Those are the features: for example, the hierarchical design and the inter-edge connection and communication.
C: And actually, I'm not that aware of that. Where did you get it from? There's a talk here?
B: Yeah, you can see from the picture that it's trying to do something. From what I remember, they built on top of it: they built their own binary and run that on the worker node to support all the control they want. It's like compute and network.
B: For this part, I think, if we're interested, I can send some information after this.
C: So the GSMA documents that I mentioned: I'm pretty sure they will talk about this as well, because they pretty much capture all the work going on in the community, pretty much across the world, and they cover all the projects, including KubeEdge and the rest; I think they have EdgeX and a bunch of other projects.
So they kind of cover the landscape, and they describe what exactly the requirements are from the GSMA, the 5G standpoint, from a platform standpoint: all those three white papers. I'll send them to you in an email.
D
I
think
the
starlingx
project,
our
some
of
our
colleagues,
should
be
connected
because
it's
part
of
open
infrastructure
summit
it
was
announced
in
that
and
it's
more
related
to
the
infra
in
the
edge
computing.
So
I
believe
like
if
we
have.
D: I'll also send him a message and check what is going on, because it looks like they are practically doing something similar, like deploying an edge cloud, creating an edge cloud, when I look at their architecture and design. It's at starlingx.io; I'll paste the link into the chat so that you guys can also refer to that.
D: I think MTN is a major player, and then some of the companies like Deutsche Telekom are also there. starlingx.io.
D: Yes, I don't know whether I can paste it somewhere; I wanted to message it, I don't know.
D: Yeah, Open... this one, yeah. If you look at the design, it's pretty much more in line with what we are doing, or maybe in line with Akraino and Anuket and all those things that are there. So you can see the Open...
A: ...Stack, yeah; it looks like a complete infrastructure stack.
D: Yeah, the moment I saw it in Pengdu's presentation I was like: okay, where did he get all the StarlingX-related stuff?
B: Yeah, the information, the pictures: I put all the references here. Got it? Okay, I think we are almost done, so I'll just go through the rest of the slides, and we can answer questions after that. As I said, we're doing the proof of concept; it's going through. And these are the kinds of solutions we referenced.
One group is for the IoT edge. There are three of them here (there are more, but in the open-source world): we checked out KubeEdge and OpenYurt, mainly to see the differences, to compare, and to see why they're doing what they're doing; and this one is from Baidu, also from China, from the IoT world. The proof of concept that we're doing is based on KubeFed.
B: It's also, from my understanding, another version of federation, and there are other solutions in the Kubernetes community that they're also working on, like edge cluster, so we are also keeping an eye on those solutions, how they're going, and how they compare to ours.
B: Yeah, here is the feature list that we want to experiment with, that we want to put into the design. I will not go into the details; these are the things that we think could be valuable for our PoC and for our final design. The idea is that there are two streams. One is coming down: if you have a workload you want to deploy from the central cloud, how does this workload come down to a certain cluster based on the conditions, and how do we connect?
To answer all those questions. The second one is the upstream direction: we have the status, so how do we report those stats, and what do we report? Based on these two directions we have this whole list, and we are doing the PoC on some of it. And I think that's it.
C: I think, just before we wrap up: we've been exchanging, especially Prashanth and Rupal, the email thread that I started about a possible reference architecture for telecom. There was a lot of good information exchanged. So, Prashanth, do you think that somebody from the telecom ecosystem would be interested in participating in this discussion and kind of formalizing, maybe, the reference architecture?
D: Well, we have been working on some of those areas, and in this current release we are focusing on making sure that Centaurus as a piece is deployable and stable in nature. In the next meeting, maybe I can show you guys some of the stuff that we are doing in the industry-specific area, where we have put together a lot of different accelerators and PoCs, in partnership with Microsoft.
D: I can give you a preview; I don't know whether I can share my screen or not. Let me see... I guess I can, okay. So, do you see my screen here? Yeah, you'll see: we have built, in partnership with Microsoft, several industry-specific solutions. They are not edge-specific, but if we go and talk about, say, a demo site for manufacturing: this is capturing all the manufacturing data and having dashboards for manufacturing, like campaigns and real-time analytics coming from IoT devices.
D: We can go to healthcare, and it's pretty much the same kind of work again that we are doing there, and we are working on packaging these for the end customers, so that people can consume them as a solution on their cloud. So these are things like aged care, healthcare, financial analytics, and all those things.
Ultimately, at some point in time, all these solutions, like what I'm looking at, would be industry-specific solutions that fit the edge computing scenario. So I'm sort of thinking myself about how to get this together: here is the financial sector, and these are some of the examples, and most of my team members are very familiar with them, because they were involved in the development of these scenarios.
C: More like a position paper kind of thing, yeah. And then obviously it's going to take time for us to build it out.
D: Yeah, but the industry-specific solutions are there; we are already looking into it. I think we are slightly ahead in the process with that, which gives us good visibility into what we can bring to the table. Dr. Shang and I also discussed that, and I told him: let's get the first version out. We are both in agreement that we should just finish the first Centaurus release basics and test the environment.
F: Yes, appreciate this. Correct, the edge path that the blueprint is present in; that's part of what we can talk about later on, the SRWSI, yeah.
D
So
we
are
looking
into
all
those
things
solutions
and
we
are
trying
to
bring
this
together
also,
so
those
are
the
progress
happening.
I'm
very
happy
that
we
saw
bengu's
presentation
today
because
that
has
triggered
some
good
communication
pengdu.
If
you
can
share
that
information
with
us,
I
will
write
my
own
comments
on
certain
areas
and
share
it
back
to
you.
A
We
will
send
you
the
link
out.
Everything
shared
in
the
ts
meeting
will
be
public.
D
Perfect
all
right
deepak
will
continue
to
that
email
thread
and
we
will
try.