From YouTube: Centaurus Monthly TSC Meeting 6/28/2022
A
The agenda for today's meeting: the Click2Cloud team is going to give a presentation on the contributions they have made to our Fornax project. But before they do that, I just want to take a few minutes to briefly cover a housekeeping item. We had voting on two items. One was last month's presentation of the Quark project, to be incorporated as part of Centaurus, and we got approval for that.
A
Five out of seven TSC members replied, and they were in favor of it, so that's approved by majority vote. And then a couple of months ago we had the vote on allowing all community members to join the TSC meeting. That was approved unanimously, actually seven out of seven. So now anybody can join this meeting; whoever is on this call, feel free to invite people.
A
Whoever wants to participate can propose an agenda item, and if they want to bring something up, feel free to do that. It's not just a TSC members' meeting; it's an open community meeting. The intent is to expand community participation.
A
So I just wanted to mention that. Even though this voting happened a couple of months ago, I want to repeat it, so that the Click2Cloud folks, or anybody you folks want to bring in, can propose changes or anything else. Feel free to do that; anybody can chime in, okay.
A
I just wanted to cover the housekeeping items, and with that, I don't know who is going to take over. Roshan? Roshan, you want to... yeah. Hi Deepak, yeah.
C
Deepak and team, we have worked on Fornax projects, like on automation scripts, and recently we have worked on the Golang code in which, on a single edge, we will have multiple clusters. Nagaraj will explain the architecture and the development we have done so far in this project. We have also raised a PR, which Ping has reviewed and given some comments on, and we have resolved those as well. So, over to you, Nagaraj.
D
Thank you. So let us get directly into the details. We all know every edge core has its own clusterd module, and earlier this clusterd module was able to handle only a single Kubernetes cluster. Click2Cloud's task was to make it able to handle multiple clusters at the same level, so under a single edge core.
D
So the clusterd inside the edge core has to handle multiple Kubernetes clusters, which we were able to solve successfully, and we have raised the PR too. Later we checked this against the already existing environments, like the hierarchical environment and a single cloud core connected to multiple edge cores, and in those as well we were able to find success.
D
And earlier, according to the demo, the kubeconfig of the Kubernetes cluster that you are going to connect to the edge core had to be placed under the root directory. To make it a bit more distinct, we have shifted this path to /etc/fornax/configs; that was a change. Yeah, I think that's it. I can show you a quick demo.
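[Editor's note: the directory layout described above can be sketched roughly like this. The /etc/fornax/configs path is from the talk; the cluster names and the use of a temp directory are illustrative assumptions, not the project's actual files.]

```shell
# Sketch of the kubeconfig layout described in the talk. The real path is
# /etc/fornax/configs on the edge core node; a temp directory stands in
# for it here so the sketch runs anywhere. Cluster names are made up.
CONFIG_DIR=$(mktemp -d)

# One kubeconfig file per external Kubernetes cluster, named per cluster:
for cluster in controlplane1 controlplane2; do
  # In practice you would copy that cluster's real kubeconfig here.
  touch "$CONFIG_DIR/$cluster"
done

ls "$CONFIG_DIR"   # clusterd on the edge core would pick these files up
```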
D
Say now I'm going to deploy this Mission into a cluster named server one. This is my cloud core, and this is my edge core; and this is an external Kubernetes cluster that I have connected to this edge core.
D
This edge will already have its own Kubernetes cluster, so we haven't added the kubeconfig of that one; we have added this one.
F
Yeah, it is an external control plane which is connected to the edge core.
E
How is this name added, or mapped to that specific cluster?
D
Go ahead. Yeah, so in the code, basically, we are using the kubeconfig files of those Kubernetes clusters, and according to our document, we have to place the kubeconfig files of those particular Kubernetes clusters under the edge core's /etc/fornax/configs, named according to a convention.
F
Oh yeah, so I will showcase the demo on these cases. Click2Cloud was tasked to provide support for multiple clusters at the same edge core level. We have resolved this; we raised this PR and resolved this issue, and also tested it in three different environments. The first case was shown by Nagaraj.
F
I will showcase these two cases: one cloud core configured with multiple edge cores, where each edge core is configured with multiple Kubernetes control planes; and also a hierarchical setup of one cloud core with an edge core, where each edge core is configured with multiple control planes. So, let's get into the live demo.
F
Here I can bring up my edge core one; it is successfully connected with cloud core one. I will be setting this up as hierarchical, so let me bring up cloud core two as well.
F
So here we have a given YAML, and now this edge core one is connected with two control planes. I have put in the kubeconfig files of these two control planes, control plane one and control plane two, which are...
F
...connected with edge core one. Similarly, edge core two is also configured with two external control planes, control plane three and control plane four. So now, let's go to the deployment.
F
If I provide a name here, I can, if I wish, place the deployment on the cloud core's external control plane two, and also on the edge core...
F
Yeah, we can put a single one or multiple ones; we have that functionality as well. Okay, so if I apply this.
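[Editor's note: as a rough illustration of targeting a deployment at named clusters, the applied manifest might look something like the following. The apiVersion, the placement/clusters fields, and the cluster names are all assumptions for illustration, not Fornax's confirmed schema; consult the Fornax repo for the real Mission CRD.]

```shell
# Hypothetical Mission manifest naming its target clusters. The schema
# shown here (apiVersion, placement.clusters) is an assumption only.
MANIFEST=$(mktemp)
cat > "$MANIFEST" <<'EOF'
apiVersion: edgeclusters.kubeedge.io/v1
kind: Mission
metadata:
  name: nginx-demo
spec:
  placement:
    clusters:           # one or several target clusters by name
      - controlplane1
      - controlplane2
EOF

# Count how many clusters this Mission targets:
grep -c '^      - ' "$MANIFEST"   # prints 2
```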
F
So here we can see, in the logs of edge core one, the deployment is created inside edge core one and control plane one. We can cross-check that here if I check the Missions.
F
So here we can see the deployment; we have put in the deployment, and we are also checking edge core one, sorry, control plane one.
F
So here we can see the deployment in the given cluster is working fine in the hierarchy, and we can also perform the same deployment...
F
Okay, so we have created these Missions. The deployment will actually take a bit more time than on a single cluster.
F
Now I want to showcase the second case: one cloud core configured with multiple edge cores, where each edge core is configured with multiple Kubernetes control planes. So...
F
Edge core one; and on the same cloud core as edge core one, we have a cloud core two, and that cloud core two is connected with another child, edge core two. Yes, let me set up the second case.
F
So if I want to connect with cloud core one, I have to connect this second edge core with the same cloud core one. So I will provide the config file of that cloud core one.
F
So we are setting up cloud core one, and with that we are setting up edge core one.
F
Mission deployments can go, as in the earlier cases, to a given cluster or to multiple clusters. So...
F
If I wish to get the deployment in CP2 and CP3: CP2 is in edge core one, and CP3 is an external control plane in EC2.
F
So we can see here in the logs that our deployment is created. This is the log of edge core one, and here is the log of the other edge core.
F
And we are thankful that our PR has been merged into Fornax.
E
Yeah, I want to say a few words. This is a very good addition to the features for Fornax. In the meantime, we have already developed some networking features, so with this it doesn't really matter how the clusters are connected. With this PR we can connect them in a hierarchical way, or we can connect multiple clusters to the same edge core at different levels. With that, and with the network feature, we have very strong flexibility for the edge.
A
Now, a couple of things I wanted to bring up. Rupal and Manches, you were there at the Austin conference, at the Centaurus booth and all that. How was that? Was the community interested in finding out more about the project? Can you just...
H
Some of the contributors want to get that published in Kubernetes as well. Yeah, the fifty thousand nodes: like a two-by-two, a three-by-three, and then in the future a five-by-five as well. So people are expecting to get that contribution up to the Kubernetes community. Okay, that's one area; probably we need to check how we can get it there. We know what that process is like.
A
Also,
they
want
to
upstream
that
functionality
to.
H
Yeah. So that's one area. Second, internally we discussed, I mean, Manches can bring the other feedback, but there were a couple of people who were interested and came to the booth asking about different scenarios. So I think that's something that can give the overall understanding. Just one quick question on this specific area which the team has contributed to: I want to understand, are we planning to integrate it into the dashboard? Because I don't think that edge is yet integrated with the Centaurus dashboard.
E
I don't think we are there yet. We need to have this discussion somewhere, but this has not been discussed. The reason is just that the focus of several teams has moved on to some new areas, but I can bring this up for some conversation.
E
It would be nice to have these features on the dashboard.
A
Yeah, definitely. And as you folks know, I think we talked about that, but there's been a lot of focus on the regionless cloud project. So for the time being this is, not on the back burner exactly, but kind of a lower priority. But the work which you folks have done is very important, actually, so obviously this is going to come back, and having this support for hierarchical topology and all those things is very useful.
A
Obviously, this is good work. Any other things? Manches, do you want to add something based on what you noticed at the conference?
B
We actually had four talks, and we saw people joining, the same people joining each of the talks, because they're interested in the topic, yeah.
A
I'm pretty sure there was a lot of interest in the scalability aspect, you know, the 50,000 nodes and all that. I bet a lot of people must have asked about that.
A
Yeah, I think so. That's another thing I wanted to mention. I know most of you folks were there at the previous month's meeting as well, where Yulin presented the Quark container. That's another good project going on. So if you folks are interested in it, start going through it; there's a lot of documentation there. And then, as I mentioned, there are a few things.
A
So, just to give you some background, I think we mentioned this as well: the Quark container was forked from gVisor, the Google gVisor lightweight-hypervisor code base. One other thing is that we've done a lot of optimization. Obviously it's rewritten in Rust, and there have been a lot of optimizations done; for example, they use a queue mechanism to do the syscalls outside the qvisor.
A
So, all of these things: if you've been interested in it, start going through the documentation, and then maybe work with our team to enhance the documentation. That's one other thing I'd ask for. This optimization, using queues for doing syscalls outside the qvisor, is actually a very important feature, but it's not documented well, so it would be good for the community to understand how those queues are built out.
A
So these are the kinds of things you can start getting involved in; try to understand them if you're interested. This is a new, next-generation containerization beyond Docker. If you're interested in that, start going through it, and then maybe document it. As you document, you will understand more, and then work with our team. I just wanted to throw that out there.
G
Yeah, and I think the most important aspect of this overall thing was the performance capability of the scaled cluster, which we saw at the conference. Of course, end-to-end scenarios and use cases were something that people were looking for, if we can implement them on this.
A
Yeah, I think that's where I wanted to bring it up. The good thing is that you came to this meeting, so...
A
As you know, we got the approval for non-TSC members to participate, so I think it would be good for everybody to go out and reach out to folks who may be interested. So, for example, say somebody wants to implement a use case using Centaurus; if they can come to this meeting and propose something, that would be a good thing.
G
Yep, we are also planning CodeFest 3.2, like we do every year.
A
I know your community works with a lot of telecom customers, so this time around, if we do that meetup, we can talk about the Fornax project. Last time we did it, we just covered Mizar and Arktos, but this time around we can talk more about the Fornax project, the edge computing.
G
Correct, and I was planning to come to the Bay Area to meet Arpit, so if you are around, then we can have a group discussion with him. We can showcase whatever we have done, and how even the Linux Foundation and its ecosystem can leverage some of this.
A
Yeah, okay, good. So let's keep it going. I think the work which you folks have done is very good. The only thing is, at this time the emphasis is not on Fornax, at least on my side. So if you see any opportunity, keep working, keep enhancing Fornax. Eventually, obviously, edge computing is very important; all the cloud providers are moving in that direction.
A
As it is, all of these things are going to come back. Other than that, if anybody wants to chime in with anything, suggestions or proposals, feel free. Beyond that, we don't have any other agenda items; we pretty much covered the housekeeping and we covered the demo.
A
Yeah, and that is very important; none of the cloud providers do that. Actually, if you go to any cloud, Amazon, Google, Oracle, or whatever, you have to pretty much say which region you want to deploy your application in, or create a VM in, or whatever. But that model doesn't work; you don't really utilize all your regions efficiently.
A
What we are working on, which nobody else does, is a regionless cloud, basically. The developer doesn't care what region; depending on the requirements, the resource availability, and the SLA, it will deploy to whatever region behind the scenes. You may have 28 regions and 100 data centers; it shouldn't really matter to the developer.
A
Because it's not that easy. The problem is that all of these key-value stores, etcd and all of these, are very tied to the region level, basically, once you start. And Pong is actually the one who's spending a lot of time and effort to solve all those hard problems. So what happens when etcd, some kind of key-value store that stores the cluster information, has to go across regions? How do you do that, you see? These are the hard challenges Pong has been working on.
G
Yeah, that's good. I would be very interested in seeing how this shapes up in the near term and the longer term, and Pong is a really smart person, so I know that he can take those things in the right direction too, yeah.
E
Yeah, absolutely. I want to add a few things. One is that, as Deepak said, the storage we're working on is trying to offer storage at the global level, or the quote-unquote regionless level; it is not tied to a region. So yeah, we could prepare something for a demo next time. Actually, what's in my mind is that the solution we are developing is based on the fact that between regions, the network is all connected, right?
E
We assume we have a bunch of storage instances, and we provide a layer on top of them, so that users don't have to care where the data is stored. But the assumption for this whole thing to work is that all the storage instances in different regions can talk to each other. And if you put that picture into the Fornax picture, eventually there's a very good opportunity for them to work together.
E
So what I'm saying is that the regionless storage also has potential for the edge. The top layer can be deployed on some clusters in the cloud or somewhere on the edge, and the storage instances can be managed across the different edge clusters; and we already have the networking side.
E
We have the work from you guys to set up different ways to have this cluster hierarchy. Since we already have that, once the storage side is ready, then we can also try deploying the storage clusters on the edge, and that would be another possible usage for this storage project.
G
Is it possible to create, say, an agriculture kind of use-case scenario on the edge site? You collect IoT data, and then that gets back to Centaurus, or some way of connecting to the regional cloud, and then you keep collecting the edge-site data on the regional cloud itself and do BI on top of it.
E
Yes, yes; actually it goes beyond that. The potential is that, as the edge user, you can deploy your application here and there in different clusters, and then you can use this storage as a way to share data. So one application on one edge can collect some data from IoT devices and then store it in the storage. The application does not have to think, okay, I need to send this to another application on some other edge; it does not have to do that.
E
It just puts the data in the storage, and because the storage is shared across different clusters, some other instance, for example the same application on a different edge cluster, can pull that data from the storage. That application also does not have to know where the data comes from. It just sees: okay, we have this key, we have this value, it's in the storage, it showed up, and then it can start to use it.
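[Editor's note: the put/get flow just described can be sketched conceptually. Here a shared directory stands in for the cross-cluster storage layer, and the helper names and key are purely illustrative.]

```shell
# Conceptual sketch: a shared directory stands in for the regionless
# storage layer, so neither side knows (or cares) where the other runs.
STORE=$(mktemp -d)

put() { printf '%s' "$2" > "$STORE/$1"; }   # producer API (illustrative)
get() { cat "$STORE/$1"; }                  # consumer API (illustrative)

# App instance on edge cluster 1 stores an IoT reading under a key:
put temp-sensor-42 "21.5"

# The same app on edge cluster 2 later pulls it by key, without knowing
# which edge produced it:
get temp-sensor-42   # prints 21.5
```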
G
Can you help us create one architectural diagram, covering either a medical or healthcare-related scenario, or an agriculture scenario? I'll draw it on pen and paper and send it to you, and you can help us complete that scenario. I was thinking maybe ICU control units or some radiology unit, talking directly through HL7 or something with the edge site, and then the edge site transferring data to the central location. A very unique example for healthcare: maintaining the critical units through the portal.
A
Say you're moving while playing a game, in the car or whatever, or you're walking. As you move from one match to another, it should be a seamless experience, so the game state which you built on edge one should be automatically available on edge two, because you moved near to edge two. You see, so it should...
A
All similar, so all of these storage problems which we are trying to solve are going to address all of the mobility, the IoT...
A
Yeah, with this storage, the way the storage is shaping up, the whole edge thing is going to be very different. The way storage is currently designed at the central cloud level is going to give a very different experience, and the way to design it is going to be very different.
G
In fact, one of my friends, who is building some hardware for a public announcement system, especially for the railways, for the trains, the tram system in Nagpur city: he designs that, and he was asking, can he have a PA system on the edge site? Because all these trams are running.
G
Yeah, like an emergency system; all of those are really great examples. But let's start with one simple one. Maybe I can ask the team to put together a basic diagram, and then they can work with you to solidify it, and I...
A
I think that would be good. So essentially, what you're proposing is this: the work you just showed in the demo was more of a technical enhancement, to be able to support hierarchical topology. Now I think the good thing would be to build a use case, actually: a mobility use case, or an IoT use case, or whatever, the train and tram use case, and then leverage Fornax and the storage thing, which is...
A
Because that's what the end community of people is interested in, because, you know, for them, fifty thousand nodes in a cluster doesn't mean anything unless you weave a story around it.
A
Yeah, oh, that would be good; that would be really good, actually. If you can build a use case, have a story, a mobility story or an IoT story, to go along with Fornax and the storage, and how the storage is going to work out on the edge, yeah, that would be a good thing to do, definitely.
G
Yeah, we'll draw something, and we'll send it to Pong and the team, and then we will take their direction from there, like how we in fact integrate the pieces into the whole.
E
Incorporate the storage, some thinking from the storage side, so...
A
A consistent story, working towards one single goal: that would be good. That would be...
G
Very good, yeah. In fact, we'll do it under your lead only; I want you to basically drive and give us the technical guidance. I mean, we all know that you are a very smart guy, so you can help us with the right direction.
A
Prashant, feel free to reach out; send out all the information to everybody in the TSC, you know. Yep.
A
Actually, the TU Wien guys have done a lot of work on this, you know, Stefan and Professor Dustdar, so just send it out to everybody.