From YouTube: Meshery Development - Jul 31st, 2019 - DeathStarBench and the Social Network sample app
Description
A discussion with Prateek Sahu of @utaustin and @chris_deli of @cornell regarding DeathStarBench and the Social Network sample app. Preparing for at-scale performance benchmarking!
Lee: I don't know how concisely to put it other than to say: we're pushing hard. There are a number of people who are interested in the same common goals and have come to help contribute to this Meshery project, so it's very encouraging. I was able to get a couple of the folks that are excited about the project on this morning, and so both Girish and Prateek are on.
Lee: Christina, this is great. I'm excited that we're talking. We were excited to stumble across your project, hopefully for obvious reasons, but I'm going to end up saying it anyway. Before I do, I'll say that just the slightest bit about who Lee is, who Prateek is, and who Girish is will, I think, help give some general context. For both Lee and Girish, our full-time roles are at a company called SolarWinds.
So that's kind of the focus. We end up letting our inner geeks lead the way and letting our passions run loose, and so we end up doing a lot of things outside of SolarWinds and are very busy outside of SolarWinds. Then for Prateek: I'm physically based in Austin, and so is Prateek. Prateek is one of a few individuals, and really the hardest-working of these individuals, from the University of Texas at Austin. Prateek, currently...
Prateek: So the focus of my involvement with Meshery is to understand the service mesh and look into what kind of performance and systems research we can do on all these new technologies: how the cloud infrastructure should be developed, and specialized hardware versus system development around the same.
Prateek: So that's where we were discussing one of your papers, and we thought: okay, let's see if we can pull in a few of the applications that you have mentioned in DeathStarBench and see how they translate into a service mesh. What are the rolling goals of Seer? Because DeathStarBench, as a testbed, and Seer both propagate the common idea of having a tracing system which traces RPCs to evaluate the performance and the bottlenecks of the infrastructure and of the service meshes.
Christina: Sure. So when I was a PhD student at Stanford, I did a little work on cluster management, and that was all for applications that operate independently from each other. You might have something like Hadoop; you might have a front-end web server; but there's no interaction between them. Then when I moved to Cornell, I wanted to start looking at multi-tier applications, and from talking to people in industry it's clear that people don't build single-tier applications; they build multi-tier ones.
Christina: So that's how that came about. The project started in summer 2017, so it took about a year and a half to get everything done until the paper came out. The benchmark suite includes a few applications. We haven't open sourced all of them yet, but we are planning to do that within the next month. The suite includes the social network, which is already open source, and a media service, which you can think of as similar to Netflix.
So you can go and browse information about movies, review them, get recommendations about other movies, and so on. There is an e-commerce site which borrows some of the microservices structure but is extended to have a search engine, a recommender engine, more databases, and more information about them. There is a hotel reservation application. And, tied to a project that I have on coordination and control of drone swarms, we also have an application that does image recognition and coordinated routing.
I'm coming back at the end of August, so that's when we're trying to open source the rest. Then, with respect to the Seer project: the idea with Seer is that when you have dependencies between application tiers and you get a quality-of-service violation, it takes you a long time to figure out what caused it. And if you take that long to find it, then it also takes a long time to recover from it.
And furthermore, if you try to do that manually, it takes even longer. So the idea was: can we automate this process and use the large amount of traces that cloud systems collect (we have tracing systems for each of these applications) to recognize patterns that have a high probability of creating a QoS violation a little bit into the future? If you know that with some spare time, you can make a decision.
We assume that you have perfect tracing information, so you have something like Zipkin or Dapper or any of the systems that break down the latency per microservice tier. But you might have systems where that kind of tracing is not there, or you might have faulty traces, or missing traces because of some error. So we're trying to see whether you can do the same analysis with less information.
Christina: Yes. First, for the evaluation of Seer, we used the applications in DeathStarBench. We actually had a fairly large installation of the social network that we used internally at Cornell: students use it as an internal social network, and we collected a bunch of traces from that, and that's how we do the analysis.
They know that it's a research app and that we collect traces. We don't collect traces about their activities, just performance traces. I don't know whether they use it to the same extent that they use Facebook and Twitter, but they do use it for interactions. They also use it to share information about courses.
Lee: I suppose it's obvious now why that is. The set of problems Meshery aims to answer are questions that people have in and around what we consider cloud native. For many people in industry, distributed systems were really facilitated through the popularization of Docker. That was kind of a first wave of them coming into doing cloud native things, doing distributed systems, to the extent that they were doing it seriously. The second wave came around with a bunch of battles over container orchestrators.
Now that that war has completed, there's one around proxies, and inherent to proxies is their use through service meshes; that is a fairly hot battleground. Having spent a lot of time in both of those first two waves, containers and container orchestrators, we're...
...now seeing a lot of the same issues that are fairly common any time someone goes to adopt a new piece of technology, invests in it heavily, or has faith that the new technology will work as boring infrastructure. That's happening right now with service meshes. It takes the collective of the world's workloads a really, really long time to be modernized or to move forward.
We would expect the same for service meshes, and so it's based on that that we understand people have a hard time understanding what service meshes are, which one they should run, what the performance overhead of running a mesh is, and how to compare that to the value they're deriving from that infrastructure. Meshery is intended to do a few different things. One of those is a public service of helping people by handing them...
...a tool that they can more easily wield: to do things they can already do, like performance testing, but in a much easier way, and also to collect anonymous statistics and performance test results centrally. Those are being collected through Meshery, and it's our intention then to share those back with the public and let them know: here's how your infrastructure and your workloads are performing against others'.
The leading metrics are probably throughput and latency of user-facing endpoints. So for the Netflix-like media service app that you were talking about, maybe it's the catalog interface, maybe it's the interface used once someone is streaming video. But throughput and latency, while fantastic and interesting, really need to be understood in the context of the infrastructure that's running them, the cluster that's running those services, and so Meshery is also collecting node statistics.
Christina: The reason I'm asking is because of what we saw with Seer: we are collecting latencies and utilization metrics, and they don't necessarily match. You might have a microservice that has low utilization but is the culprit for a QoS violation, and the other way around: you might have high resource utilization, but that's because the service is waiting for something else to respond. So it's not necessarily the problematic microservice, but it looks like it is.
Girish: Christina, one thing we know, since the question came up: most of the sample apps we've been using, plus the service meshes we've been using for our testing, do have the capability of collecting traces. The applications themselves only support passing around the right headers; the collection of traces is actually performed by the sidecar, and they are shipped to a single Jaeger instance.
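As a concrete sketch of the "right headers" Girish mentions: Istio's Envoy sidecars generate the spans, but each service must copy the B3 trace headers from its incoming request onto its outgoing requests so Jaeger can stitch the spans into one trace. The filter below is a hypothetical illustration, not Meshery or DeathStarBench code.

```shell
# Standard B3/Envoy tracing headers that each service forwards so the
# sidecars can correlate spans across microservice hops.
TRACE_HEADERS="x-request-id x-b3-traceid x-b3-spanid x-b3-parentspanid x-b3-sampled x-b3-flags"

# Hypothetical helper: given "name: value" header lines on stdin, emit only
# the tracing headers worth propagating to downstream calls.
filter_trace_headers() {
  while IFS= read -r line; do
    name=${line%%:*}
    for h in $TRACE_HEADERS; do
      if [ "$name" = "$h" ]; then
        printf '%s\n' "$line"
      fi
    done
  done
}

# Only the trace header survives the filter; content-type is dropped.
printf 'x-b3-traceid: 463ac35c9f6413ad\ncontent-type: application/json\n' | filter_trace_headers
```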
So that's there. But from within Meshery, up until today, we're not storing it. Since we are actually deploying meshes, or helping people deploy meshes, in most cases we do also support deploying them with tracing enabled. So as long as the Jaeger system has persistence...
Lee: The thing that I wanted to do here is explain the project to you and its goals, and see if they resonated: if we were to go use the social network app, whether there are things that we're doing that could help advance your cause and your focus, things that would be helpful either to validating work you've done or to advancing what you're currently doing.
One of the things you just said was great: the finding that it's not necessarily the service where the issue is manifested that's the cause of the issue; it's actually a dependent service that's not responding quickly, which causes the issue to manifest elsewhere. Some of those findings and insights are what I thought we might collaborate on or share.
Christina: Yes. One of the main things that we're doing right now, like I was saying before, is cluster management, which sounds like a lot of what the service mesh is supposed to do as well, and that's something we do plan to open source once it's done. Seeing something like that working with Meshery, or just trying it to see how it works with your system, would be great.
Lee: It's kind of like I was saying before: folks can go grab wrk2 or another load generator, point it at endpoints, collect the metrics, and track them, and it kind of gets to be a mess and takes them longer. We're hoping that we bring some tooling within Meshery that facilitates that and the comparison of statistics.
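For reference, the manual workflow Lee describes looks roughly like the following wrk2 invocation (host and path are placeholders; wrk2 uses the `wrk` binary name and adds the fixed-rate `-R` flag):

```shell
# Hypothetical load test against a sample-app endpoint: 8 threads,
# 256 connections, 30 seconds, a constant arrival rate of 1000 req/s,
# with coordinated-omission-corrected latency percentiles reported.
wrk -t8 -c256 -d30s -R1000 --latency http://localhost:8080/api/home
```

Meshery's aim, per the discussion, is to wrap this kind of run and persist and compare the resulting statistics rather than leaving them in ad hoc terminal output.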
Actually, one thing, Christina: if it makes sense to do at the top of our 30 minutes, I could give you a demo of the project.
And then, to the extent that your cluster has Grafana running and you'd like to point Meshery at Grafana, it will pull in any of the dashboards you've already created, and therefore not just the dashboards but also the metrics behind them. So, to the point of the question you're asking about per-microservice statistics: Meshery is capable of pulling those statistics.
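The per-microservice statistics in question come from the mesh's telemetry. As a hedged sketch (the Prometheus address is a placeholder, and the exact histogram name varies across Istio versions), a query like this one returns p99 request latency broken down by destination workload:

```shell
# Hypothetical PromQL query against Istio's standard request-duration
# histogram: 99th-percentile latency per destination workload over 5 minutes.
curl -s http://prometheus:9090/api/v1/query --data-urlencode \
  'query=histogram_quantile(0.99,
     sum(rate(istio_request_duration_milliseconds_bucket[5m]))
     by (le, destination_workload))'
```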
The caveat that Girish was giving earlier was just that we're working on the persistence of those metrics over a long period of time, because there's a little bit to do with aggregation and summarization. Also, it's an open source project and we're trying to do everything free where possible, and so even with no statistics of its own, you can point it at an existing Grafana instance or at different Prometheus endpoints. Once you've done that... in this case, I think I've connected a few different ones.
In this case, the point being: I'm sending over the command to install Istio. Once Istio is either installed and running or not, I'll get either a confirmation or an error notification.
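What Meshery automates here is roughly the manual install, which with current istioctl syntax (an assumption; the CLI has changed across releases, and the transcript predates this form) looks like:

```shell
# Install Istio with the demo profile, which bundles the telemetry addons
# (Prometheus, Grafana, Jaeger) referenced elsewhere in this discussion.
istioctl install --set profile=demo -y

# Label a namespace so Envoy sidecars are injected into its workloads.
kubectl label namespace default istio-injection=enabled
```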
So in this case, since I just brought up a quick demo, I don't even know if I'm connected to the cluster that I'm supposed to be, but as you send these commands you'll be able to install apps like the social app. And by the way, if you have a certain manifest or set of YAML that you'd like to send to your cluster, you can use this interface to do that once your infrastructure is there.
Maybe you've grabbed the endpoint that you'd like to generate load against. You come here, maybe give a name to your test, identify what type of test it is you're running, and give it your HTTP endpoint; currently we're working on gRPC and TCP support as well. Meshery then persists those results so that you, as a user, can come back, look at those results historically, and reference what they were for that configuration at that time. If you want to take two of those results and compare them, sort of overlaying one result on the next so you can see the difference, it'll do that over as many results as you'd like.
Christina: We're still fixing a few things internally. We have a conference deadline in the next two weeks, so the next two weeks will be pretty busy, but after that we can upload the rest of the applications. What I would say is, I don't know if you've gotten a chance to try the social network yet; try that one first, because the others are mostly implemented in a similar way. One of them is using gRPC.
The social network and media services are using Thrift, and the e-commerce service is using HTTP; we just want to get some coverage of the different types of protocols that people use. But I would say give that one a try, and if you run into any issues, feel free to let us know. We are updating it constantly, so any bugs that we find, that's great. And then over the next three weeks we'll definitely share the other two applications.
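Trying the social network as Christina suggests can be done locally; the published DeathStarBench repository ships a Compose file for it (the exact service set it starts is an assumption here):

```shell
# Clone the benchmark suite and bring up the social network application
# with its dependent datastores and Jaeger tracing backend.
git clone https://github.com/delimitrou/DeathStarBench.git
cd DeathStarBench/socialNetwork
docker-compose up -d
```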
Christina: Our goal is getting as many people using it as possible, and then, from an academic perspective, highlighting the systems challenges of microservices. If you look at most of the architecture and systems conferences, people still use very simplistic applications, not even multi-tier, just very simple single-tier applications, which I think is leading to a lot of wrong conclusions about how systems are designed.
Prateek: Yeah, I'm good. I actually had a couple of questions regarding the paper. You mentioned that i-cache pressure is noticeable not because of branch misprediction but more because of fetch latency in the small microservices. What about data caches? Because most of those microservices are designed these days to be stateless, could there be a push for smaller data caches in server architectures?
Christina: That's a good question. I think when you're looking at the different cache levels, they're still small enough that even for a microservice you will get some contention, a bit less than what you would have in traditional applications, but you still get some. Same with i-caches: it's not that you don't get misses; it's that you get fewer misses than you would with a monolithic application, which had spawned this whole line of work on how to fix it.
Christina: The difference with serverless, I think, is that there you're absolutely right: you use something once and then the container doesn't exist anymore, and the function definitely doesn't exist anymore, so you probably don't get a lot of reuse. Microservices are not necessarily short-lived, so depending on what the microservice is doing, you might still get some reuse.
Prateek: Exactly. She did seem to be pushing for us to try out the application, but not really to collaborate on the instance running there, because that would be a huge instance that they would be running if they actually have it deployed with real people using it. So I'm not sure where that discussion was leading: if we could plug into or play with their instance, merging their instance and our Meshery project, that would have been ideal, but that did not seem to be the main talking point there.
But I think if we do end up successfully deploying the social application and iron out all our things from the Meshery point of view, she might be open to deployment and data collection and things like that. I'm not sure at this point; there did not seem to be a lot of contribution interest from her side.
Lee: It'll probably be hard for her to ignore if it's compelling enough. My sense is that if we're consistent, we're running the social app sometime within the next week, and we provide her some feedback, maybe it's just: hey, we got it running, great. Just kind of a touch point with her where we say: hey, look, there's a new release of Meshery and it's got the app in there; or: hey, a user gave feedback on it or had a question like this or that.
Lee: Yeah, okay. One thing that she talked about... I don't want to step on her toes. For some of the research that Prateek is doing, sorry, that Mohit is doing: he's looking to monetize it, which is awesome either way. But knowing that he's looking to monetize it, I'm not saying: hey, let's work on all these security use cases, specifically exactly what you've been researching, because that would be stepping a little on his next focus, and I don't want to do that.
I don't know if you've spoken with him on this recently. I know you speak with Prateek, but I don't know if you spoke with him on this recently. He's been excited by the notion, and I think as he enters into the world of cloud native startups and open-source-centric motions of doing that, he's going to come to realize the value of the currency that a project like Meshery is, and the value of the mindshare.
Those things are not going to take away from what he's monetizing. I'm only talking about this with regard to him for two reasons: one, we should talk about it; but two, I was bringing it up to say that I was trying to understand whether Christina felt like we might go steal some of her potential future paths to monetization, which is not what we're looking to do. But if she's not going to monetize it either, then...
If you wrote a research paper, published it, it gathered a lot of interest, and someone went and began writing software that leveraged those ideas and algorithms, and you're not going to do that yourself, you're going to continue to do research, and research, and research: is that a win to you, or is it someone stealing your idea?
Lee: Nice, cool. Well then, I really hope that Christina is not going to monetize what she's doing, because that's essentially nearly exactly what I've talked about for some things to come within Meshery: the type of insight that is intriguing to people and very valuable to them, and that's difficult for them to discern and figure out themselves, because they will go look at the red herring and say: wow, that service is consuming...
...four gigs of RAM, and that's just because it's been sitting there spooling and waiting forever on something else, and they wouldn't know that. There's a bunch of use cases like that. The auto-tuning of the mesh in general: that's something that would be fantastic, that people would be very interested in, as long as you get it right.
Prateek: So I am still currently working along these lines, trying to see what kind of work Christina and other peers are doing, and trying to figure out a good novel approach with service meshes in focus. So yeah, if we are able to deploy the toolchain that she has developed into Meshery, and we have larger deployments, then I think we can get some data for me to go into an academic publication as well.
Prateek: Not at the moment. After my internship I'll be working on figuring out what these toolchains that Christina mentioned are exactly, trying to apply them, and kind of finalizing my thesis direction per se: how to leverage these toolchains, and what's a novel idea there. Because, in parallel with Meshery, I do need to have academic publications as well.
The way I envisioned it, it definitely can be done in collaboration. But at the moment the current work going on in Meshery has been more development focused: trying to build the platform up, trying to incorporate a lot of the meshes, and making the interface smooth and everything, in that direction.
It's very good to get into and speak from podiums like DockerCon and KubeCon, and when you have a nice, smooth-running demo, it attracts a lot of attention, which is a big plus. But on the academic front there need to be real novel ideas and points of impact on the research community. I don't know, with Christina... I don't know if you had...
...the feeling that a lot of the underlying frameworks were what her interests aligned with. So I agree with that focus: if we come up with something, then definitely Meshery would be a coauthor. On academic papers we have the authors and the institutions they belong to, and Meshery would definitely be a part of it. But I'm not sure at what stage we are with the research.
Lee: There are enterprise-class use cases that Meshery might have; I don't know how much time we might want to give toward novelties in terms of research. If there might be collaboration there, and it sounds like there might be, that might be worth ideating with you on: some of what those novel areas of research offer. And yeah, I totally get that Meshery itself, and all of its focus, is excellent.
Prateek: Yeah, exactly. That's what I was going to say next: at the moment I think I need to figure out a decent research goal, given the infrastructure that we have in Meshery. When I am nearing that goal, then definitely I'll be collaborating with you, that is, with you guys, in the sense of the whole community, with installing a particular...