From YouTube: UniMatrix video conference call 2020-09-23
Agenda: https://gitlab.com/unimatrix/meetings/-/blob/master/20200923_mom.md
B
No, I can't find it. I don't get the option of recording.
B
So the first point of the agenda is the Linux Foundation update. [inaudible] couldn't join these two days, so, yes, that was different.
A
Yeah, I think everything is ready for being listed on the LF site. It's just that they're waiting to see if we want to do anything.
A
Okay, so just some quick things. I have made sure all the Technical Board members are maintainers in the GitLab, but I'm missing Milan from Microsoft. I think you haven't registered in GitLab, so you don't have a GitLab handle. Is Milan there?
A
Okay, we'll bring it up at the GB meeting. And then another point: I advise everyone to turn on two-factor authentication in GitLab, to get some more security.
A
It's quite easy to do: you go to your GitLab profile, to Settings, Account, and then Two-factor Authentication, and then you can register. I used an app on my phone to provide a second authentication method; it's called FreeOTP. It's easy to set up.
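Apps like FreeOTP implement the standard TOTP algorithm (RFC 6238): an HMAC-SHA1 over the current 30-second time window, truncated to a short decimal code. A minimal sketch in Python's standard library, checked against the RFC's published test vector:

```python
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over the big-endian counter, dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP over the current 30-second time window."""
    t = int((time.time() if for_time is None else for_time) // step)
    return hotp(secret, t, digits)

# RFC 6238 Appendix B test vector: at t=59s, SHA-1, 8 digits -> 94287082.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

The server shares the secret with the phone app (the QR code GitLab shows during registration encodes it), and both sides recompute the same code each window.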
A
...to discuss what would be the best solution.
A
So I'm fine with that, like providing, or having, an AMD64-based platform as well.
A
And I think larod already supports running on CPU, so it should not be a problem there, and it should not be a problem with OpenCV either. So I don't really see an issue. I kind of started a specification for UniMatrix.
A
It's now checked into a new project called "spec".
A
I just used the standard RFC 2119 keywords for specifying the requirements, so just like "shall", "shall not", and so on.
A
...options enabled so that you can basically run containers on it. I've looked at a couple of these; there's a script for Docker, for example, that checks this. But it's quite big: it checks for a lot of options which are not really required, I would say. So we should make our own script, I think, which checks for the necessary ones, and also maybe checks for optional things whose absence could be a sign that you're maybe not running as effectively as you can.
A
So I'm going to have a look and see if I can write that script, since the Docker script is not really suitable. The idea here is that you will run it on your host, and it needs to have the kernel configuration available. Normally that is available through the proc file system. If your kernel doesn't support exposing its configuration over proc, you must provide it as an option to the script.
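The core of such a script is just parsing the kernel config and diffing it against a required-option list. A hypothetical sketch of that idea, where the option list is an illustrative subset, not the spec's actual list:

```python
import gzip

# Illustrative subset of container-related options; the real list would
# come from the UniMatrix spec.
REQUIRED = ["CONFIG_NAMESPACES", "CONFIG_CGROUPS", "CONFIG_PID_NS",
            "CONFIG_NET_NS", "CONFIG_UTS_NS", "CONFIG_OVERLAY_FS"]

def parse_config(text):
    """Parse 'CONFIG_FOO=y' style lines into an option -> value dict."""
    opts = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("CONFIG_") and "=" in line:
            key, _, value = line.partition("=")
            opts[key] = value
    return opts

def missing_options(opts, required=REQUIRED):
    """Return required options that are neither built in (y) nor modules (m)."""
    return [o for o in required if opts.get(o) not in ("y", "m")]

def check_host(path="/proc/config.gz"):
    """Check the running kernel; pass an explicit path when the kernel
    does not expose its configuration over proc."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as f:
        return missing_options(parse_config(f.read()))
```

A host would call `check_host()` directly, or `check_host("/boot/config-...")` when `/proc/config.gz` is not available.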
E
Just a quick one here: container access to D-Bus is a little unusual, no? Are we sure that we require that for the proposed capabilities?
A
Yeah, right now it's like that because the larod service is using it. The inference API is kind of split up: there's a library that will be in the container, and then there is a service that runs on D-Bus. The library will talk to the inference engine over D-Bus, but the D-Bus...
E
Well, yeah, I mean one real gotcha here is, for example, trying to get systemd to run inside Docker containers. It's kind of a real pain in the ass, and it's extremely delicate, so I just want to make sure we're not going down that rabbit hole.
A
There is a problem if you're using the user namespace. Is that what you're referring to?
E
Just in general. I mean, even the namespaces, yeah, it's quite touchy and basically not supported wholesale. Red Hat has done some work on it on the side, but yeah, we've kind of gone down this rabbit hole a bit and it's not pretty.
A
Yeah, maybe Yoon can comment on this. Right now, at least, the D-Bus connection is needed. I think it's mainly needed to negotiate a Unix domain socket, so that basically when you do inference you don't use D-Bus for that; you use a Unix domain socket for it. But I believe for the initial negotiation to get that socket, you need to go over D-Bus.
F
Yeah, that's true. The connection to the D-Bus broker is required for session initialization and, as Frederik described, to get one of the ends of a socket pair connection.
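The handshake described here, a broker handing the client one end of a socket pair, relies on file-descriptor passing over a Unix domain socket (SCM_RIGHTS ancillary data). A minimal sketch of just that mechanism, independent of D-Bus and larod; all names here are invented:

```python
import array
import socket

def send_fd(conn, fd):
    """Send one file descriptor over a Unix domain socket via SCM_RIGHTS."""
    conn.sendmsg([b"x"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                           array.array("i", [fd]).tobytes())])

def recv_fd(conn):
    """Receive one file descriptor sent with send_fd()."""
    _, ancdata, _, _ = conn.recvmsg(1, socket.CMSG_LEN(4))
    level, ctype, data = ancdata[0]
    assert level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS
    return array.array("i", bytes(data))[0]

# "Broker" side: create a socket pair, keep one end for the service,
# and hand the other end to the client over the control connection.
broker, client = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
service_end, app_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
send_fd(broker, app_end.fileno())

# "Client" side: receive the descriptor and talk to the service directly,
# bypassing the broker from here on.
app = socket.socket(fileno=recv_fd(client))
service_end.sendall(b"inference result")
print(app.recv(32))  # b'inference result'
```

After the descriptor is received, the broker is out of the data path entirely, which is what makes the actual inference traffic zero-overhead with respect to D-Bus.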
F
But the D-Bus library implementation is required as well, because we use peer-to-peer D-Bus to communicate over the socket.
F
So the primary issue here is not really the D-Bus library dependency itself; it's rather the communication dependency with...
A
It's basically a limitation of the D-Bus protocol, I think, because when you send a message from inside the container, there's no way to map the user to the host user namespace, really. I think Canonical has done a workaround for it: they have a way of translating the user between the container and the host.
F
I mean, I don't know the details about communication in and out of a container from the host system, but the most important thing here is the ability to send file descriptors and to be able...
F
The initial socket pair endpoint is communicated over D-Bus, but that could, of course, be communicated over a socket in the first place. So we could have a server socket open, for example; we could do a Unix domain server socket for session initialization.
F
That could be an option. Would that solve this issue of context translation?
F
I mean, the reason we use D-Bus is that the larod service was not designed to be part of UniMatrix in the first place, and we use a lot of D-Bus at Axis for applications to communicate with services.
F
So that is the reason, the background to it. I have thought about it myself before, and we could do some more thinking on whether we could replace the D-Bus based session initialization process with a Unix domain socket.
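The alternative being proposed, session initialization over a plain Unix domain server socket instead of the D-Bus broker, could look something like this sketch. The socket path and the wire format ("HELLO"/"SESSION") are invented for illustration; they are not larod's actual protocol:

```python
import os
import socket
import tempfile
import threading

# The service listens on a well-known path; the client connects and
# receives a session handle back. Everything below is hypothetical.
path = os.path.join(tempfile.mkdtemp(), "larod.sock")
ready = threading.Event()

def service():
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    ready.set()                      # socket is accepting connections now
    conn, _ = srv.accept()
    if conn.recv(64) == b"HELLO 1":  # trivial protocol-version check
        conn.sendall(b"SESSION 42")  # hand back a session identifier
    conn.close()
    srv.close()

t = threading.Thread(target=service)
t.start()
ready.wait()

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(path)
cli.sendall(b"HELLO 1")
resp = cli.recv(64)
t.join()
print(resp)  # b'SESSION 42'
```

One design consequence, picked up later in the discussion: with a plain socket, access control collapses to file permissions on the socket path (plus whatever an LSM adds), whereas the D-Bus broker gives per-method policy.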
A
Yeah, maybe we should think about it, since I think it's maybe not so common to use D-Bus in this way. I'm not sure; it is common for a lot of UI-based applications to use D-Bus. If you look at GNOME or things like that, they use D-Bus a lot.
A
But I know there are things in the works to kind of fix D-Bus for the future. I'm not sure if it's ready yet, though; it could be that it's one year away and so on. So using sockets is maybe a better option.
F
Because one reason behind the quite frequent use of D-Bus at Axis is that many open source components...
F
I mean, we have experienced different kinds of issues related to this D-Bus communication in and out of containers, so it's not the first time I hear about it.
A
Like access rights and things like that on D-Bus. If you took that away and just had a socket, then basically you have just file access privileges to play with.
F
Yeah, it might limit our abilities to handle different permissions.
A
Because one nice thing is, if you use an LSM, a Linux security module like AppArmor (which is developed by Canonical, I think), then you can basically do a lot of configuration of who can access what, and which applications are allowed to do things on different APIs.
F
Yeah, at least I consider D-Bus type IPC to be on a completely different level than a plain socket. I mean, it gives you a lot of additional features...
F
...that a plain socket cannot compete with, but it gives you some dependencies as well.
A
So how much work do you think it is to, you know, provide an alternative API?
F
I don't know, really. I think we need to look that over and try to estimate it. Just implementing a socket based session initialization, I don't think that is particularly advanced, but there are a lot of things around the authentication and so on that we need to think through. I think that is the most tricky part.
F
But I can for sure investigate the possibilities of setting up a different kind of session initialization. I can definitely do that.
F
Fine, yeah. And I don't remember now, Frederik, off the top of my head, but are we using systemd as the D-Bus broker, or do we use the separate, old one?
A
We're using systemd right now. We've been monitoring it; there's a new broker written by Red Hat, which is superior, I think, but the problem is that no distribution has adopted it yet. I think only maybe Red Hat themselves have started using it.
F
Could I ask you guys from Panasonic about systemd: are you not using systemd at all in your system, or do you use a different init program?
F
Because larod has some dependencies on systemd as well, but it should be possible to make...
F
I think it should be possible to make the systemd dependencies configurable, so that it's possible to build larod without systemd dependencies.
E
Yeah, these issues are pretty thorny. You know, the Docker way of doing things, the Docker best practice on initialization: they don't use systemd for anything, right? So you're basically forced into writing custom initialization scripts, usually in bash. It's actually quite painful, and there's no standard way of dealing with this; everything feels like a bit of a hack. And this all comes down to these issues around D-Bus.
E
I would love to see some stopgap here, and it looks like Red Hat is trying to provide one, but it's not ready yet. But yeah, if UniMatrix accepted systemd as an initialization standard, and we were able to put a stopgap in place, I think that would be great. It feels like the stopgaps that are under development right now are just not ready for prime time, though.
A
But for the future, this is how I see it now, because that's the easiest way to go forward. I think each host can then provide something that works with OpenCV. For the future I'd like to see a UniMatrix driver for OpenCV, but that's something for the future. Then, I have an idea of using Redis as a cache, so that you can read metadata information.
A
It could possibly be used as a video API too, to get the frame data you need to read a frame in a zero-copy way. Redis would then be like a common layer for getting different kinds of metadata to your application.
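The Redis-as-metadata-cache idea could be as simple as producers publishing per-frame metadata under predictable keys and applications reading it back. A hypothetical sketch: the key scheme and fields are invented, and a tiny dict-backed stand-in replaces a real Redis client here so the sketch is self-contained (a real client such as redis-py exposes the same `get`/`set` surface):

```python
import json

class FakeRedis:
    """Dict-backed stand-in for a Redis client: just string get/set."""
    def __init__(self):
        self._data = {}
    def set(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def publish_frame_metadata(r, stream, frame_no, meta):
    """Producer side: store JSON metadata under a per-frame key."""
    r.set("{}:frame:{}:meta".format(stream, frame_no), json.dumps(meta))

def read_frame_metadata(r, stream, frame_no):
    """Application side: fetch and decode metadata, or None if absent."""
    raw = r.get("{}:frame:{}:meta".format(stream, frame_no))
    return json.loads(raw) if raw is not None else None

r = FakeRedis()
publish_frame_metadata(r, "cam0", 42,
                       {"timestamp": 1600875000.0,
                        "detections": [{"label": "person", "score": 0.91}]})
print(read_frame_metadata(r, "cam0", 42)["detections"][0]["label"])  # person
```

The zero-copy frame-data part mentioned above is the harder half; plain string values like these only cover the metadata side of the idea.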
A
But that's for the future, because I think that will take longer, and we want to get something like a draft out in reasonable time. For now, I think we have to live with bullet 5.
A
Yeah, also, I haven't really started on the video API section, so I'll put more requirements there on which properties the driver should support for different OpenCV things, like getting the timestamps and things like that in your application.
A
Okay, the next section is about the base layers. My idea is to have basically two layers: one is "base" and the other one is "base-python". Both will contain a common subset of OpenCV; the base layer will contain just the C++ API, while base-python will also contain the Python bindings needed.
A
Also, there will be a corresponding development layer for your host, so you can cross-compile for your architecture. And for the base layer, I think the Ubuntu image is quite good.
A
All right, so my next task will be to define the video API: which subset of OpenCV it will support, which properties you can read for an image, for example, or which properties you can set, which video formats are supported, and so on. And also the inference API will...
A
Actually, then, if there are no other comments, maybe we can let Yoon talk about larod and the next release.
F
For the nearest future, we are working primarily on one thing, and that is to optimize the internals.
F
That is, the internal data flows in larod, to be able to really do true zero copy. As far as possible we want to avoid the need for memory mapping and cache management in the case where the CPU doesn't need to touch the data at any stage of an application pipeline. For example, in most cases the original YUV data is coming from a piece of hardware somewhere; it could be produced by an image scaler, for example. The next step would then be to do some color space conversion on it, to get an image in RGB format, possibly in a different resolution, to be able to feed that into the inference stage.
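For reference, the color space conversion step mentioned here (YUV to RGB before inference) is, when done on the CPU, just a 3x3 matrix applied per pixel. A minimal numpy sketch using the BT.601 studio-range coefficients, which is an assumption; the actual hardware path may use different coefficients and will typically not touch the CPU at all:

```python
import numpy as np

# BT.601 "studio range" YUV -> RGB conversion matrix (an assumed variant).
M = np.array([[1.164,  0.000,  1.596],
              [1.164, -0.392, -0.813],
              [1.164,  2.017,  0.000]])

def yuv_to_rgb(yuv):
    """yuv: float array of shape (..., 3), Y in [16, 235], U/V in [16, 240].
    Returns uint8 RGB in [0, 255]."""
    shifted = yuv - np.array([16.0, 128.0, 128.0])
    rgb = shifted @ M.T
    return np.clip(np.round(rgb), 0, 255).astype(np.uint8)

# Black and white probes: Y=16 -> (0, 0, 0), Y=235 -> (255, 255, 255).
print(yuv_to_rgb(np.array([[16.0, 128.0, 128.0],
                           [235.0, 128.0, 128.0]])))
```

The zero-copy goal described above is precisely about pushing this kind of per-pixel work (and the resolution change) onto dedicated hardware so no CPU copy or mapping is needed.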
F
Looking at that as an example, there is probably no real need for the CPU to touch that data, or even to map it. So we are trying to find a way to pass the data through this pipeline as efficiently as possible, with as little CPU overhead as possible, so that we avoid CPU copies primarily, but also memory mapping and cache management. That is one thing we are going to start up now.
F
In the meantime, we are planning to move our work upstream.
F
We are trying to settle the ways we work with larod upstream. Our plan is to start with doing more regular drops to GitLab; that will be the first change, and then we will look into the process of moving.
F
I mean, the larod master resides in a local repository at Axis today, and the code on GitLab is just a copy of that, so to speak.
F
We are planning to eventually move our active master branch to GitLab, so that we do all the development there, and all external merge requests must be able to go into that master. We also need to set up some kind of CI/CD pipeline, because we will probably end up in a situation where we don't have the hardware here at Axis to test all the hardware back-ends that could come in as external contributions.
F
So we are looking into that, investigating how we can do that on GitLab. That is also an ongoing process now.
A
On the device farm: the idea is to provide the device farm for UniMatrix, but I guess the actual dev boards, or whatever kind of hardware we are using, will be spread out. So I think they can provide a way for you to have a local device that is part of the device farm.
E
Yeah, that's right. We're basically at the point in our future releases where self-service should be ready to go in the next month to six weeks. We've consistently pushed that back over the last two quarters, because we've had feature requests on other fronts and we've just ended up adding features to farms that we're hosting and managing for the end customer. But yeah, self-service is coming back to the top here in Q4.
E
I would love to stay in touch with you, Ian, about hosting your own device and how to connect it to a larger collection of devices for UniMatrix use.
E
...of Axis cameras already here in one of the New York clusters that we got from ADP North America. Those are currently not online; I think we took them down after a refactor back in July. But if you need more to go up, I think we've got more cameras for you.
A
Is it the deep learning models?
E
I don't believe so. I'll have to double-check the docs, but yeah, I think they're sort of basic models.
A
All right. Yeah, we probably want to have some deep learning models to test the inference on.
E
For those, I can reach out to ADP North America and see if they have any that they'd be willing to send over.
A
Sure. Yoon, can you give some kind of timeline for when you expect the next release to be available, and what will be part of the release?
F
Yeah, we will upload what we have today on our local master. The things that will be part of that release are the things that I described on the call during the summer. I think the primary feature of that is our pre-processing support.
F
That is the color format conversion and the scaling and cropping support that will be part of that release. We also have new back-end implementations, for example for the Google Edge TPU and Ambarella CVflow.
F
Those are two accelerators. I think we will also include generic TensorFlow Lite GPU support in that release as well, so there are a few different new hardware back-end implementations.
F
If you read about the GPU support: I don't remember the exact name of it, but there is a GPU back-end to TensorFlow Lite, and we have enabled this through the TensorFlow Lite back-end in larod.
F
So you use it through TensorFlow Lite. By doing that, if you run larod on a system with a GPU that can do OpenCL and OpenGL, you can deploy a TensorFlow Lite model to larod, and larod can use the GPU to accelerate the inference.
A
That's good. We talked here about having a PC-based platform as well, so I guess that back-end would be good for that kind of platform, where you have maybe an NVIDIA or some other kind of GPU.
F
We have verified it on an NVIDIA GPU, so it should be the way to go for that kind of system.
F
If you have a larod with the Edge TPU TensorFlow Lite back-end enabled, it will run on a PC with an Edge TPU USB stick. We have done a lot of development using that setup, actually; we do a lot of our larod core framework development on our host computers using such a setup.
F
So that shouldn't be an issue at all. I don't expect, though, that what I mentioned before, about our work of optimizing the internal data flow in larod with regards to zero copy, zero mapping, zero cache management...
F
...that is not going to be part of the next release that we put on GitLab. We will put that release on GitLab before that work is ready to be launched.
A
So does that mean that if you want to do color conversion or something, you need to do that in OpenCV before you pass it to larod?
F
No, the pre-processing part will be part of the next release. That is on our master today, so that will be part of it.
A
We probably also need some updated examples, so I think at the next meeting we should talk about how to proceed with the next event: I mean a new hackathon, maybe a collaboration hackathon on doing some new examples for UniMatrix.
F
Yeah, just to comment on that: we still need a few weeks, I think, to be able to get up and running with more continuous releases to GitLab, but that is work in progress here at Axis now. So I expect that to happen within some weeks from now.
A
Okay, any updates from anyone else?
A
All right, so then we come to the next meeting: the GB wanted us to join their joint session on October 15th.
E
It's an online bunch of presentations. I haven't actually gone through the schedule yet to see what I'm going to be attending, but there are likely to be some conflicts, maybe for other people on the line too.
F
I would actually like to add one thing; it's a question. During the call that I participated in during the summer, there was a request, I think from the Hikvision team, to set up a call regarding adding a back-end for HiSilicon platforms to larod, but...
D
Yeah, hi. I think at the last meeting...
D
...you or another guy proposed to have a larod sharing session with HiSilicon, right? I don't remember what the exact time was.
F
Yeah, I'm very positive about that, so you're very welcome to invite me to such a meeting.

D
In fact, as UniMatrix grows larger and larger, I think more and more newcomers are very interested in joining.
And I don't know about you, but we got some contact with AWS, and maybe they would like to hear how they can do something with this.

A
So you mean from Amazon?

D
Yeah, we've talked to Amazon as well, but I think from different parts: we connected with Amazon Asia, and they have a specific department for new solution development. They say their goal is to seek out new technology and new possibilities and merge them into their current solution architecture. So maybe it's a different view from them compared to their headquarters; I don't know.
A
Yeah, they're welcome to join. We have also had some initial discussions.
D
I'll send an email to you guys to see if we can make time, like the middle of October or the end of October, something like that.

A
That would be perfect. Okay.
A
All right, so I think that's all for today. Thank you very much for joining, and then we'll reconvene on the 5th of October.