From YouTube: UniMatrix TB call 2021-09-30
A: For the format, I propose we use JSON for the contents of the app manifest. I propose these basic things: there should be a signature, so you can sign your application and verify it when you install it. You should be able to set which user and group the application should run as. Also, we could have security profiles set for the application in the manifest, so it points out, like a seccomp profile, which system calls it needs to run, and also other security profiles, for AppArmor for example, and then licensing. So that's my idea: I would like to create a schema for this in JSON to define these things. The resources part is probably the most difficult.
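The manifest fields proposed here (signature, user/group, security profiles, licensing, resources) could be sketched in JSON roughly as follows; all field names are illustrative assumptions, not an agreed schema:

```json
{
  "name": "object-detector",
  "version": "1.0.0",
  "signature": {
    "algorithm": "sha256",
    "value": "<hex digest of the OCI image>"
  },
  "runtime": {
    "user": "appuser",
    "group": "appgroup"
  },
  "security": {
    "seccompProfile": "profiles/seccomp.json",
    "apparmorProfile": "profiles/apparmor.profile"
  },
  "license": "Apache-2.0",
  "resources": {
    "memoryMb": 256,
    "flashMb": 64
  }
}
```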
B: I would like to confirm one thing about the signature: the UniMatrix official signature will be attached to the OCI image itself. Is my understanding correct?

A: Yeah, it can do that, I mean.

B: The first time, I suggested...
B: ...a system, for example, where the AI application developer submits the OCI image to the UniMatrix software website; the OCI image is tested automatically, for example for API conformance and vulnerabilities and things like that, and if the testing passes, the UniMatrix signature is attached to the OCI image.

B: I think such a structure is very useful for people. But is it difficult to realize such a thing?
A: ...and run the image, the app, and make sure it works and conforms to some security requirements. You need all this information, so I would say everything in this app manifest has to be uploaded when you are publishing your application. It's not enough with just an OCI image. If the OCI image contains this information, the signature can be added by the server side once it has been tested and so on.
B: Who is the server side, for example? Is it Axis, or i-PRO, or our partner integrators?

A: ...GitLab actions and so on. You can automatically run certain things, you know, all the containers and stuff, in GitLab when certain things happen.
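A is referring to GitLab CI automation; a minimal pipeline sketch might look like this (the stage and job names, and the two scripts, are assumptions, not an existing UniMatrix pipeline):

```yaml
# .gitlab-ci.yml — runs on each pushed application image (illustrative)
stages:
  - test
  - sign

test-image:
  stage: test
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    # hypothetical conformance/vulnerability test entry point
    - docker run --rm "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" ./run-conformance-tests.sh

sign-image:
  stage: sign
  script:
    # hypothetical step that attaches the project signature after tests pass
    - ./attach-signature.sh "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
```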
B: So did you read my email?

A: About signing? Yeah, I read it.

B: No, no, about the contents of the call yesterday.
B: According to the JP members, a marketplace seems to be planned, to be created on the UniMatrix official website. I don't know, maybe on AWS or a Microsoft-related cloud service, and if we do so, the automatically executed program would be installed on an AWS or Azure back end. Therefore we need to consider the whole ecosystem to realize UniMatrix.
B
Maybe
so
or
maybe
it,
though
you
don't
understand
what
I
say.
B
I
don't
have
any
note
so
maybe
person,
or
they
only
donate
some-
will
issue
the
meeting
minutes
later.
B
Anyway,
so
I
wish
we
should
consider
about
such
a
thing
to
discuss
technology
matters.
A: ...can find applications, because I don't think UniMatrix should have a marketplace, I think.

B: Yeah, according to our discussion until now, I completely agree with you. However, yesterday [unclear] and Hikvision's advisory suggested a new scope for UniMatrix.

B: You could separate it, I think. Yeah, I also think so, and we need to launch UniMatrix version 1.0 as soon as possible.
B: Therefore, [unclear] is about to expand the scope, not only to the security and safety industry, but to all industries.

A: Yes, okay, I understand: they want to widen the scope, right? So it's more IoT types of applications.

B: Yeah, yesterday I didn't understand the whole discussion, but the main point is these two items: UniMatrix is needed not only for the security industry but also for other industries, and it's desirable to collaborate with a marketplace.
A: If we go back to the signing: that part, as I said, can be automated, but then the resulting binaries, or the resulting application images...

A: They should not be hosted on our own container registry; for example, we could host something like that on Docker Hub. I mean, if we have our own account, Docker Hub can do additional scanning of containers and so on, but the type of account we would need for that costs money, I think, so we would have to look at the financing of that.
A: Okay, anything else? Otherwise, I'd like to discuss a little bit about resource requests. Do you guys have any ideas on how to do it? If we do an app manifest: okay, this application needs some specific amount of...

A: I mean, when it comes to RAM and flash and so on, it's not so difficult. With the CPU we have some different options: should it request, you know, a percentage of the CPU, or is it more about how many FLOPS and so on you need to run?
F: On the other hand, can we go back to the signature? I have a question.

F: It can be run or loaded correctly by containerd or Docker, yeah, but how can we... the signature: will the signature be attached to the manifest?

A: So then, when the system is supposed to install this application, it unpacks the tar and finds the manifest, and there it gets the signature. It can calculate the signature of the OCI image, compare it to the manifest, and then it can also check... Sorry.
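The install-time check A describes (unpack the archive, read the manifest, recompute the image digest, compare) can be sketched in Python; the manifest layout and field names here are assumptions:

```python
import hashlib
import hmac


def file_sha256(path: str) -> str:
    """Stream a file and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_image(image_path: str, manifest: dict) -> bool:
    """Compare the recomputed digest of the OCI image archive with the
    digest recorded in the (hypothetical) manifest 'signature' field."""
    expected = manifest.get("signature", {}).get("value", "")
    actual = file_sha256(image_path)
    # constant-time comparison to avoid leaking digest prefixes
    return hmac.compare_digest(actual, expected)
```

A real deployment would verify a cryptographic signature over the digest with the publisher's public key, not merely compare hashes; this sketch only shows the digest-comparison step.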
A: That's how we do it at Axis, I mean, when you compare; I'm talking about Axis. We have our own application platform, with a tar archive containing a configuration file, which gives you the configuration for the application, and then the application itself is packaged in another binary.
A: I'm not so familiar with cgroups, but in cgroups I think you can set percentages: you have different slices, so I guess each application would have its own slice, and then you can set hard and soft limits for the slice, as a percentage. I don't know if you can set a specific amount of cycles that the application is supposed to get, or more hard, definable resources.
A: And when it comes to the GPU, it's even more difficult to divide; some GPUs are not possible to divide at all, and some GPUs you can divide by a percentage or a certain amount, and so on.

F: GPU and NPU are more difficult, because each vendor will have a different controller for the GPU and NPU. So maybe there is no standardized solution to control that.
A: Yes, but I think the difficult thing here is the CPU, and even more difficult the GPU: how to express it.

F: Two seconds out of every 10 seconds: it means 20 percent.
A
Yeah,
I'm
sorry
we
are
talking
about
different,
so
so
I
I
guess
let
me
summarize
so
one
idea
is
that
we
make
it
quite
simple.
With
the
cpu,
we
only
say:
okay,
this
application.
It
needs
to
run
on
a
gigahertz,
cpu
or
something,
then
I
think
that's
quite
good
at
it.
It
makes
it
quite
simple,
but
because,
let's
say
the
applications
say:
okay,
I
want
25
of
the
cpu.
A
Then
it
might
not
run
well
if
the
cpu
is
500
megahertz
instead
of
the
gigahertz,
so
it's
then
it
it
might
so
I
I
agree
with
your
proposal
about
maybe
make
it
on
the
cpu
just
check
basic
hardware.
If
the
cpu
is
fast
enough.
E: Yeah, I mean, technically it's very hard to control the application, right? If it describes that it needs, like, 25% of the CPU, when the computation resource in the camera is [unclear], we cannot control very precisely how many resources the application uses. For RAM or flash, maybe it's easier, but yeah, as you said, the CPU and the GPU are not easy to divide, right? Yeah.
A: ...the JSON schema; then next time maybe we can go into more details. If we go to the GPU, maybe something along the same lines would be the size of the GPU, like in megabytes or whatever. Based on that, you know roughly the size of the model that you can load into the GPU. Would that make sense?
A: Yeah, when it comes to the GPU, let's say: okay, my application has a 500-megabyte deep-learning model, so it needs to load its model into the GPU, or something like that.

A: And that could work for other deep-learning types of chips as well, I mean the size. Basically, if you have some other type of accelerator, you need some internal RAM, and that basically limits the size of the model that you can run.
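The size-based check discussed here (does the declared model fit in the accelerator's memory?) is simple to express; the manifest field names are hypothetical:

```python
def model_fits_accelerator(manifest: dict, accel_memory_mb: int) -> bool:
    """Return True if the model size declared in the (hypothetical)
    manifest field resources.gpu.modelSizeMb fits in the accelerator's
    usable memory, given in megabytes."""
    need_mb = manifest.get("resources", {}).get("gpu", {}).get("modelSizeMb", 0)
    return need_mb <= accel_memory_mb


# e.g. a 500 MB deep-learning model on a device with 1024 MB of GPU memory
manifest = {"resources": {"gpu": {"modelSizeMb": 500}}}
```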
D: I have one comment: if you look at Azure and AWS, they have some vCore definition. Maybe we should have a look at how they define CPU resources and see if there is something that could be useful for us. Where do you find it? I asked Google quickly, you know.
D: While I listened to you, I remembered that Microsoft Azure has a virtual core. I didn't go into all the details, but maybe by the next meeting we could have a look, to see if there is something we could use. It's probably for running a virtual host, so maybe it maps to a CPU model, I don't know, but as I said, we could have a look before deciding, to see if there is something usable.
A: Yes, I mean, Kubernetes has something for CPU and RAM and so on; I don't think it covers GPU, but Docker has something for GPU, though it's NVIDIA-specific.
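The Kubernetes convention A refers to looks like this; the NVIDIA GPU line uses the device-plugin resource name and only applies where that plugin is installed:

```yaml
# Pod spec fragment: Kubernetes resource requests/limits (illustrative)
resources:
  requests:
    cpu: "500m"        # 0.5 of a vCPU, in millicores
    memory: "256Mi"
  limits:
    cpu: "1"
    memory: "512Mi"
    nvidia.com/gpu: 1  # whole GPUs only; requires the NVIDIA device plugin
```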
D: Well, maybe you require that you have two CPUs at 50%, but you need to know the generation of the hardware. So maybe you need a generation: you could say this is generation 5 or generation 6, to have some sense of what kind of capacity the processor has. Yes, megahertz, or gigahertz, is not a good idea, I think.
A: I'm also happy if someone else can do a proposal about it; I don't have to do everything.

A: Okay, anyway, I'll do a presentation next time around the JSON schema.
A: So my idea is to keep the governance and meetings spec, and then we create an SDK.

A: This currently resides under containers, and under there we put different parts of the SDK; right now it's OpenCV and TensorFlow Serving. Also, I think maybe we should put some models in the SDK, just some open-source models that are available; there are a few that are very commonly used, like...
A: Right now I have made three examples: an object detector in Python, then a C++ version, and then a native larod version that uses the larod APIs; and then the containers that already exist, and under there we put the different components that are needed to run: D-Bus, for one, which is needed by larod, and then larod and the larod inference server.

A: ...if you want to use this TensorFlow Serving API. There is larod, which is the inference server, and then the inference engine; this larod inference server runs on top of larod, if you want to have this TensorFlow Serving API, which is a network API.
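The SDK layout A walks through could be sketched as a tree; the directory names are approximations of what is described, not a final structure:

```
sdk/
├── containers/
│   ├── opencv/
│   ├── tensorflow-serving/
│   ├── dbus/                    # needed by larod
│   ├── larod/                   # inference server / engine
│   └── larod-inference-server/  # TensorFlow Serving network API on top of larod
├── examples/
│   ├── object-detector-python/
│   ├── object-detector-cpp/
│   └── object-detector-native/  # uses the larod APIs directly
└── models/                      # proposed: common open-source models
```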
B: So, Fredriksson, first: the purpose of the AI developers accessing the Git is to get the software modules, I think. But in your list, governance and meetings are also included; therefore the members' internal information and the software modules should be separated, I think.
B: Could you please display the list again? If I remember correctly, you will also release the back ends of larod. Where are the larod back ends placed?

B: This is what you mentioned, right?
A: Yeah, so here, under chips, are the different chips that are supported: here is the TFLite TPU, and also OpenCL, and CVFlow for Ambarella.
B: Okay, and in addition, in the future i-PRO will contribute and upload the reference implementation related to Kubernetes/K3s; so which part is...

A: Placement: that would go in this structure. I mean, this is not finalized; this is a proposal from me, so that would go...
A: I mean, here is the Raspberry Pi using, yeah, containerd.

A: It could maybe also be just an x86 host with Linux; that would be another interesting reference, I guess.

A: So, should we do this reorganization, and...
B: Maybe the Git structure is okay; I like this. So we should start writing the concrete description for each category.

A: Yeah, I think that's all I had for today, then. Is there any comment or question? Do you have any suggestions for the Git structure?
A: The example applications, yeah, we have them running on our camera, but the examples that are here are not exactly that, because we are using our own back end to OpenCV. So that part is not published yet.
A: Which day is best for you guys? Should we continue on Thursdays, or, like before, when we had it, I think, on Tuesday or Wednesday?

A: Okay, so I'll send out an invitation for the same time slot on Wednesday at 13:00. Thank you.
B: So, Fredriksson, by the way: this is a communication-board topic. Have you already received the invitation for the Linux Foundation meeting from...

B: Maybe tomorrow Jensen will go back to our office, so I hope he will send an invitation to us.
E: Another question: on the Axis camera, you can already run this Docker/OCI-compatible container?

A: I don't have a fresh number, but I can probably gather some statistics for you, if you're interested. Okay.