From YouTube: Centaurus Monthly TSC Meeting 5/31/2022
A
The first agenda item is Quark: one of our Futurewei architects is going to present a new container runtime. The second item is the Fornax edge development contribution, which the Click2Cloud team did, and they're going to present that. So I think let's do that. Would you be able to cover this in half an hour, or is that something? Okay? Okay, yeah, he's okay.
A
Good, so I'm going to give you... hi, Professor Dustdar, hi. How are you? Good, thanks for joining. Yeah, good. I think we just...
B
Yes, okay, yeah! Let me start. Quark container is a high-performance, secure container runtime. The goal of this container is to run containerized applications in the container directly on the bare-metal machine, instead of going through another layer, the virtual machine, which is the current cloud security environment.
B
Normally we have the container run inside a virtual machine: the container provides the resource isolation, but we still need the VM to provide the secure isolation. But this VM introduces some overhead, for example memory, file I/O and network I/O, and we hope we can run the containerized workload on the bare-metal machine directly so that we can save this virtual machine overhead.
B
For the Quark container there are three dimensions. First is security: Quark is just like a virtual machine, it is based on KVM virtual-machine isolation, and it's developed in the Rust language. Another important thing is the performance.
B
The Quark container is designed for cloud-native workload execution. (Oh yeah, keep going.) Yeah, it's designed dedicated for the cloud-native workload. A general container runtime running inside a Linux machine supports many features, for example multicast networking and different devices, but...
B
But a secure container scoped just to the cloud-native workload means we can do some optimization for that. Another thing is that it's optimized for the data-center environment: it has a design assumption that it will be running inside a data center, with a data-center network, on multi-core servers.
B
Yeah, let me give more information about the Quark container architecture. The Quark container runs over KVM, and for KVM, this is roughly a KVM introduction. In the QEMU/KVM architecture...
B
The guest application is running inside the host user space, and in this QEMU architecture the host application's memory becomes the guest physical memory, a host thread can be mapped to a guest virtual CPU, and we can switch between the guest and the host through KVM.
B
The switch goes through KVM: KVM provides some ioctl calls and we can do the switch between them. The guest kernel is just like a common OS and provides page tables, process management, system-call handling, etc. And in the host space there is a virtual machine monitor; for the Linux VM...
B
Mostly that is QEMU: it emulates block devices, network devices, etc. Quark is different from the Linux VM. It's like the same virtual-machine architecture, but there are some differences. On one side is the Linux VM architecture: running over the Linux kernel we have QEMU, a guest Linux kernel, and the Linux application. Quark can also run Linux container applications, but its VMM layer and its guest-kernel layer are bundled together; that's the Quark container.
B
It still splits them: QVisor, running in the host area, and QKernel, running in the guest area, but they're developed together so that we can do some optimization to improve the performance.
B
For this virtual machine, the switch between the VMM and the Linux kernel needs to go through the hypercall. The hypercall cost is very high: a hypercall is a switch between the guest space and the host space, so its overhead is high.
B
It's
because
for
the
possibility
from
the
gas
to
the
host,
it
need
to
save
all
the
other
register.
Cpu
register
in
the
memory
and
in
the
in
the
more
in
the
current
advanced
cpu,
the
the
cpu
registers
currently
much
cpu
resists
pressures
results,
for
example,
the
the
normal
normal
registers
and
the
and
also
other
things
like
the
float
pump
red
contacts,
the
cause
the
cartoon
system
is
high,
but
for
the
quark,
because
the
q,
riser
and
q
kernel
are
work
together.
B
So
we
can
do
some
authorization,
this
one
optimization
that
we
can
use
a
shared
shared
memory,
queue
between
the
queue
kernel
and
q
visor.
So
when,
when
the
queue
riser
to
kernel
need
to
send,
send
some
send
some
some
call
to
secure
to
the
kilovisor
to
run
inside
the
house.
Space,
for
example,
run
some
horses
in
the
car.
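The shared-memory queue described here can be sketched as a fixed-size single-producer/single-consumer ring. This is an illustrative Rust sketch, not Quark's actual code: the producer stands in for QKernel enqueueing host-call requests, the consumer for QVisor draining them, and the `usize` payloads are placeholders for real request descriptors.

```rust
use std::array;
use std::sync::atomic::{AtomicUsize, Ordering};

const CAP: usize = 8; // power of two, so index math is a cheap mask

/// Lock-free single-producer/single-consumer ring. In a real
/// VMM/guest-kernel split, `buf` would live in memory mapped into
/// both sides; here it is ordinary process memory.
pub struct RingQueue {
    buf: [AtomicUsize; CAP],
    head: AtomicUsize, // next slot the consumer reads
    tail: AtomicUsize, // next slot the producer writes
}

impl RingQueue {
    pub fn new() -> Self {
        RingQueue {
            buf: array::from_fn(|_| AtomicUsize::new(0)),
            head: AtomicUsize::new(0),
            tail: AtomicUsize::new(0),
        }
    }

    /// Producer side: enqueue a request; returns false when full.
    pub fn push(&self, v: usize) -> bool {
        let tail = self.tail.load(Ordering::Relaxed);
        let head = self.head.load(Ordering::Acquire);
        if tail.wrapping_sub(head) == CAP {
            return false; // full
        }
        self.buf[tail & (CAP - 1)].store(v, Ordering::Relaxed);
        self.tail.store(tail.wrapping_add(1), Ordering::Release);
        true
    }

    /// Consumer side: dequeue the oldest request, if any.
    pub fn pop(&self) -> Option<usize> {
        let head = self.head.load(Ordering::Relaxed);
        let tail = self.tail.load(Ordering::Acquire);
        if head == tail {
            return None; // empty
        }
        let v = self.buf[head & (CAP - 1)].load(Ordering::Relaxed);
        self.head.store(head.wrapping_add(1), Ordering::Release);
        Some(v)
    }
}

fn main() {
    let q = RingQueue::new();
    assert!(q.push(42));
    assert_eq!(q.pop(), Some(42));
    assert_eq!(q.pop(), None);
    println!("ring queue ok");
}
```

Because neither side blocks or traps, a request can cross from guest to host without a VM exit, provided a host-side thread polls the consumer end.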
B
Actually Quark does more optimization for these requests between the guest and the host. Another thing is that we use io_uring. io_uring is a new I/O infrastructure in Linux.
B
It is the most efficient I/O mechanism in the Linux kernel so far. The major concept of io_uring is that at setup it creates shared-memory queues between the application layer and the kernel, between user space and kernel space. So when a user sends a request, instead of making a system call into the kernel, it can put the request in the shared-memory queue, and the kernel can get the request.
B
No
a
linux
creator
can
get
a
request
and
process
it,
so
we
can
save
the
contact
switch
between
the
linux
user
space
and
the
kernel
space.
Similar
thing
we
we
can
leverage
this
this
one.
This
step
this
advantage
in
the
quark
actually
quark
map
the
maps.
Iou
ruin
share
memory,
queue
to
the
q
kernel
directly,
so
that
vancouver
kernel
needs
to
run
some
system
car
in
the
linux
kernel.
B
He
can
incur
the
include
the
request
to
the
elusion
general
queue
directly
so
that
it
can
bypass
the
q
current
q
visor
to
q
visor.
So
the
communication
between
the
q
kernel
and
the
lens
kernel
is
through
the
I
o
eleven
q,
so
it
can
make
the
performance
much
can
increase
the
performance,
yeah
actually
quark.
Actually
quark
process
is
the
normal
linux
linux
user
space
process.
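The io_uring-style flow just described, a submission queue in one direction and a completion queue back, can be modeled as below. This is a hedged sketch, not Quark's code: std channels stand in for the shared-memory rings, and the `HostCall`/`Completion` types and the doubling "work" are invented for the demo.

```rust
use std::sync::mpsc;
use std::thread;

// A request submitted by the "guest" side and its completion.
struct HostCall { id: u64, arg: u64 }
struct Completion { id: u64, result: u64 }

fn main() {
    let (sq_tx, sq_rx) = mpsc::channel::<HostCall>();   // submission queue
    let (cq_tx, cq_rx) = mpsc::channel::<Completion>(); // completion queue

    // Host-side worker: drains submissions, posts completions.
    let host = thread::spawn(move || {
        for call in sq_rx {
            let done = Completion { id: call.id, result: call.arg * 2 };
            cq_tx.send(done).unwrap();
        }
    });

    // Guest side: submit without blocking, reap completions later.
    for i in 0..4 {
        sq_tx.send(HostCall { id: i, arg: i }).unwrap();
    }
    drop(sq_tx); // no more submissions; lets the worker exit

    let mut results: Vec<u64> = cq_rx.iter().map(|c| c.result).collect();
    results.sort();
    assert_eq!(results, vec![0, 2, 4, 6]);
    host.join().unwrap();
    println!("completions ok");
}
```

The point of the two-queue shape is exactly what the answer below confirms: submissions and completions travel independently, so the submitter never waits on a per-call round trip.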
A
So this is very interesting actually, because I was going to ask you that; I was looking at GitHub as well. This reminds me of exactly what we were trying to do in the hardware enclave. Typically hardware enclaves, SGX and all that, don't allow you to do syscalls.
A
So,
in
order
for
you
to
do
the
syscalls,
we
have
the
similar
kind
of
architecture.
So
the
question
I
have
is
so:
is
this
all
asynchronous?
So
if
I
do
from
a
user
space
application,
I
do
a
cisco,
so
I'm
assuming
you're
going
to
trap
it
and
then
issue
a
function.
Call
is
that
function
called
synchronous
or
asynchronous,
because
you
mentioned
the
memory
queue.
I'm
assuming
it's
async
calls.
B
Go ahead. Oh, because QKernel also implements kernel threads, these kernel threads have a multiple-to-one mapping, multiple... yeah.
B
Actually, that's a good question, a good question. Yes, actually for the queue there is a queue pair; there's the submission queue, and they have only one, that's the submission queue.
B
Yeah, but when the requests finish, this corresponds...
B
Yeah, actually, normally there should be two queues: the submission queue and another completion queue.
A
Okay. Are you doing any shielding? So when the response comes back from the kernel, what if the kernel is malicious? I'm assuming you're not doing that currently, but you should be able to do it: you can have some kind of shielding logic. If the kernel gives you some kind of a bad pointer, you can detect that using the shielding layer and then reject that syscall response.
B
Yeah, we will check that. Okay, okay, good, thanks! Yeah, let's talk about the areas where we can get better performance than runc. One of the biggest possibilities is the network: for example, we can use an RDMA-based container network. I'll just give a very high-level introduction about RDMA.
B
RDMA is remote direct memory access. In a normal TCP network, when an application wants to send data to a remote machine, it sends the request to the socket, and the socket uses the protocol stack.
B
The TCP protocol stack sends it to the NIC driver, the driver sends it to the NIC, it goes out over the wire, and eventually it goes through the TCP/IP stack in the remote Linux kernel. But with RDMA, the application can send the buffer to the NIC directly through the RDMA interface.
B
With RDMA the data transfer between the machines can bypass the Linux kernel and its protocol-stack processing, so the performance increases dramatically: it both increases the throughput and decreases the latency. RDMA is very good and is maybe the most efficient...
B
...network infrastructure in the cloud environment, but in the cloud environment we don't use it much. The first reason is that in cloud development, applications use RPC, like HTTP or gRPC, to do the communication, and that's based on TCP, but RDMA's API is different from TCP/IP's. Another reason is how the network is designed in the cloud.
B
That's container network virtualization in cloud-native development: for example, Kubernetes has a container network, and that network is overlaid on the host network. It has another layer of virtualization and network isolation, so it's hard to map the RDMA API onto the container virtualization layer. But Quark can leverage RDMA here; we can keep the TCP socket. We have a design of TCP socket over RDMA.
B
The major concept is like this: when a client application wants to send some data to the server, the client application has a send buffer.
B
It uses a send system call to the kernel, sending data through the socket interface. When Quark processes the call, it copies the application send buffer data into the kernel send buffer. Normally Quark can use the host kernel's TCP stack to send to the remote, but in the RDMA context Quark can use RDMA write to write into the receive buffer of the remote Quark's kernel, and then it is copied to the application receive buffer.
B
Going through this path, Quark can bypass the host TCP/IP stack and also the guest TCP stack, so it can improve the performance. This is the major concept. Actually, because we have less time, I didn't go through the virtualization part.
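The socket-over-RDMA design keeps the familiar send/recv API and swaps the transport underneath it. The trait-based sketch below is an illustration of that idea, not Quark's code: `RdmaBackend` only simulates an RDMA write by copying straight into a "peer receive buffer" in memory, while `TcpBackend` models the copy through a kernel send buffer; all names are invented.

```rust
/// Transport-neutral socket surface: callers use send/recv and never
/// see which path the bytes take.
trait Transport {
    fn send(&mut self, buf: &[u8]);
    fn recv(&mut self) -> Vec<u8>;
}

/// Stand-in for the normal path through the host TCP stack: data is
/// first copied into a kernel-side buffer ("the wire").
struct TcpBackend { wire: Vec<u8> }

impl Transport for TcpBackend {
    fn send(&mut self, buf: &[u8]) { self.wire.extend_from_slice(buf); }
    fn recv(&mut self) -> Vec<u8> { std::mem::take(&mut self.wire) }
}

/// Stand-in for RDMA write: bytes land directly in the remote side's
/// receive buffer, bypassing both TCP stacks.
struct RdmaBackend { peer_recv_buf: Vec<u8> }

impl Transport for RdmaBackend {
    fn send(&mut self, buf: &[u8]) { self.peer_recv_buf.extend_from_slice(buf); }
    fn recv(&mut self) -> Vec<u8> { std::mem::take(&mut self.peer_recv_buf) }
}

/// Application code is written once against the trait.
fn echo_roundtrip(t: &mut dyn Transport, data: &[u8]) -> Vec<u8> {
    t.send(data);
    t.recv()
}

fn main() {
    let mut tcp = TcpBackend { wire: Vec::new() };
    let mut rdma = RdmaBackend { peer_recv_buf: Vec::new() };
    assert_eq!(echo_roundtrip(&mut tcp, b"hello"), b"hello");
    assert_eq!(echo_roundtrip(&mut rdma, b"hello"), b"hello");
    println!("both transports behave the same at the API");
}
```

The payoff is the one described in the talk: the application keeps its socket calls unchanged, and only the backend decides whether data goes through the TCP/IP stack or is RDMA-written into the remote receive buffer.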
A
Yulin, a question for you: I recently heard about gRPC. You mentioned that gRPC uses TCP underneath, but gRPC has added RDMA as a protocol as well now, so you can run gRPC over RDMA. Would that have any implications on your design, what you're doing?
B
Yes, gRPC actually is over TCP; it uses the socket layer, maybe a write call or something, yeah, but it's still over TCP. Because Quark just uses RDMA underneath... no.
B
Yeah, I heard that, but there's another challenge. Just like I mentioned, there are two challenges to using RDMA in cloud development: the first one is just gRPC over RDMA; another one is virtualization.
B
If we want to use that in the container environment, we still need to solve the container network virtualization. But with Quark's TCP socket over RDMA we solve both of them. So we can see the performance: so far we haven't finished all the implementation, but we have already finished a PoC test.
B
In the PoC test we tested TCP over RDMA with the Redis benchmark. For the Redis benchmark with one thread, Quark's throughput is 2.04 times runc's throughput, across the two operations we ran from the Redis benchmark.
B
Here we can see more than two times performance improvement. Another metric is the latency: P99, average, etc. With RDMA it is maybe one fourth, one fifth, or one third of the latency of the runc implementation. But in our PoC we found that if there are multiple TCP connections, multiple threads...
B
...our performance will not be as good. It will still be better than runc, but not as good. But we are still doing the improvement in our production implementation. We may release this implementation in a few months; in about two months we can release our first production version.
B
In TCP over RDMA, an RDMA connection already has a connection, called a queue pair, and they were mapped one to one. But in our production implementation we found that if there are many connections, this one-to-one mapping will decrease the performance. So in our production implementation we will use multiple TCP connections over a single, or a few, RDMA connections, so that we can improve the performance, and it can be finished in a few months. Yeah, that's TCP over RDMA.
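The multiplexing change, many TCP streams over one RDMA connection instead of one queue pair per connection, comes down to framing: tag each payload with its stream id, push everything through the single shared connection, and demultiplex on the far side. The sketch below is illustrative only; the `Frame` layout and names are invented for the demo.

```rust
use std::collections::HashMap;

/// One chunk of one logical TCP stream, carried over the single
/// shared "RDMA" connection.
struct Frame { stream: u32, payload: Vec<u8> }

/// Far side: reassemble each logical stream from interleaved frames.
/// Frames of one stream are assumed to arrive in order, as they
/// would on a single reliable connection.
fn demux(frames: Vec<Frame>) -> HashMap<u32, Vec<u8>> {
    let mut streams: HashMap<u32, Vec<u8>> = HashMap::new();
    for f in frames {
        streams.entry(f.stream).or_default().extend(f.payload);
    }
    streams
}

fn main() {
    // Two logical TCP streams interleaved on one connection.
    let frames = vec![
        Frame { stream: 1, payload: b"he".to_vec() },
        Frame { stream: 2, payload: b"wo".to_vec() },
        Frame { stream: 1, payload: b"llo".to_vec() },
        Frame { stream: 2, payload: b"rld".to_vec() },
    ];
    let streams = demux(frames);
    assert_eq!(streams[&1].as_slice(), b"hello");
    assert_eq!(streams[&2].as_slice(), b"world");
    println!("demux ok");
}
```

This is the usual trade-off behind such designs: one shared connection amortizes per-queue-pair cost across streams, at the price of a small per-frame header and a demux step on receive.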
B
Yeah, another thing: we have already tested Quark performance and compared it with Kata and gVisor, and we find its performance is better than them. For example, this is start time. In this start-time comparison we can see that for runc, if we run a simple application, the start time is about 600 milliseconds.
B
Ms
millisecond
and
quark
is
a
a
little
over
the
over
the
wrong
c.
Divisor
is
a
is
a
is
over
700
milliseconds,
but
the
qatar
is
much
much
higher.
Your
qatar
will
start
to
fully
the
full
lineage
kernel
yeah.
Another
is
the
memory
overhead.
All
these
comparisons,
based
on
the
results
say
for
the
quark.
It
will
spend
maybe
12
12
megabytes
more
memory
than
the
long
say.
Divisor
need
to
spend
28
my
best
and
qatar
is
spending
about
200,
almost
200
yeah
and
another
thing
is
for
ecd
benchmark
test.
B
This is throughput. This is an older test result, but in the new test result Quark is sometimes better than runc and much better than gVisor and Kata. For Redis it's a similar thing: Quark is much better than gVisor and Kata. Okay, the nginx numbers here are actually still from the older...
B
That's
a
test
in
the
new
test,
quark
quite
quite
easy,
a
little
smaller
that's
the
problem
is
a
little
less
than
the
wrong
c,
but
much
better
than
divisor
and
qatar
in
the
latest
test.
Quark
quake's
permanence
is
either
three
times
also
divisor
divisor
and
maybe
maybe
four
times
all
five
types
of
cartes
through
food
for
the
injects
yeah.
We
also
test
some
other
advertiser
for
the
modern
db
in
my
circle
and
yeah:
more
immortality,
the
environment,
that's
what
is
is
better
than
the
yeah
yeah.
That's
a
that's
all
any
question.
C
Thanks for the presentation. One question, just to kind of wrap my head around it: what are the similarities between gVisor and QVisor, apart from the name? How are they related? Did you guys fork it from gVisor, or how do they relate to each other?
B
Actually, Quark and gVisor share the same kind of...
B
...the same kind of concept: we both use a VM as the isolation and have our own, how to say, our own kernel, just like a simplified version of the Linux kernel. We implemented that, but gVisor implemented it in the Go language and we implemented it in the Rust language, and we have different internal architectures.
A
Quark does that as well, but yeah, the concept at a high level is the same; but Quark is much more, you know, performant, and you can see that with the RDMA and everything.
B
Yeah, for example, gVisor uses the Go language, which has GC, so... but...
C
Interesting slide: do you maybe have this similar comparison with Firecracker? Because I guess the concepts are not so similar there.
A
Okay, thank you. Thank you, good, thanks. Anybody else, any questions? So we can't really do voting for this because we don't have a quorum: I only see three out of seven TSC members on the call. So we can maybe do voting via email, but if anybody has any questions... otherwise we can move on with the next agenda item. Thanks, Yulin, I appreciate it. This really looks very good. Thank you.
A
Anybody else have any questions before we move on? So the next agenda item is the work our Click2Cloud team did for the edge cloud, as part of our edge cloud effort. So who wants to... how much time would you take? Because we have only like 20 minutes left. Would that be good enough, or should we punt it to the next TSC?
D
What do you think? I think, yeah, I think that's enough for us. Deepak, okay, so...
D
Yeah, go ahead, you guys take over then. Yeah sure, thanks. So from our side the team is there. So hey, hi Ashwin, are you there?
D
Ashwin
and
the
are
you.
D
So
guys,
please
share
your
screen
and
showcase
whatever
we
have
done
for
the
for
next.
Okay
till
now
the
progress
with
the
bug
and
team.
Okay,
please
go
ahead.
A
Kapang is not there actually; he's not on the call, that's another thing. You know what, I think that's not good, because Kapang is the one you work very closely with, yes.
A
I know the professor and Stefan have a pretty good edge background, you know; they've done very extensive work. But I want our team to be there as well, you know, especially the RH cloud and a bunch of other guys. I don't see them, and it's my fault actually; I should have invited them. Plus the fact that we may not have enough time to go over this.
A
Okay, great, thank you. I appreciate it, sorry about that. I know I invited a bunch of people and I forgot to invite Kapang, even though he was aware of it, because if the meeting is not on your calendar you don't really know, so he might have forgotten.
A
Yeah, thanks, I appreciate that, and then we will move this agenda item to next month. In the meanwhile, the Quark container work: so now we have some time, and if any of you have any questions about Quark, feel free, we can chat more if you want. Otherwise we can wrap this meeting up. Anybody?
A
Okay, thank you. Thank you, everybody, for joining. So I'll send out an email voting for this Quark container. The intent is, if the TSC committee approves it, we would like the Quark container, as a runtime, as a workload, to be part of the Centaurus project, just like the Docker container. So I will send out... well, I think before we do that, I know a couple of TSC members are already there.
A
What's your perspective on that, Professor Dustdar and Stefan, on the Quark container?
C
I think it seems like a nice technology. It seems similar to gVisor, basically, on the concepts etc., but the implementation seems to be much better. The question is, and I would need to look a little bit more into the performance side of things, whether it would potentially make sense to bring this to the edge as well. That would be, you know, one...
C
...if that's possible, obviously. But you know, for those things the resource constraints etc. are important. But yeah, definitely I think we should allocate some time to explore this further together. But, you know, one thing I do want to...
C
Yeah, yeah, it makes sense. I noticed that you guys are kind of trying to optimize the whole thing, to kind of make it faster and smaller. This is actually also something that we've been working on some years ago, but I think the technology is now much more ripe, so to say, to actually do this, and to do this in a really kind of production-ready way.
C
Obviously, another important thing is, you know, how does this fit with the general ecosystem? So how much of the tooling can we reuse for those things, for example for provisioning, for deployment, etc.? Or do we have to also, or should we...
C
Very important, yeah, definitely. I mean, I shared this already internally; as we were speaking, I was speaking with the team, and it sounds really interesting. I think we'll have a look at this a little bit more, and then maybe we can also have, let's say, an offline session where we do a little bit deeper technical dive, because I think there are some interesting questions which I still would like to have clarified.
A
But at the high level, though, you think it's good. I know, so yeah, the GitHub has a lot of information, but I think maybe we can expand more, Yulin. Actually, you know, the picture you have, maybe we can expand it, because that optimization piece is very important; actually that's the key of all of it. So if we can, maybe explore that more in the documentation. I think that would be really... just one comment from my side.
A
You
know
yeah
because
that's
the
most
important
thing
you
know
the
whole
I
optimization
and
how
this
whole
flow
works.
You
know
the
thread.
You
know
you
have
one
to
m.
You
mentioned
one
to
n
thread
and
all
of
these
things-
and
you
know,
then
they
request
you
response
queue
and
the
reason
you
didn't
have
a
response
queue
because
we
control
everything.
So
we
should
really
document
because
that's
the
very
core
of
all
that
actually
see.
I
think
we
should
document
that
much
better.
You
know.