From YouTube: .NET Design Review: Tensor
Description
01:28:59 - Rejected: Add RemoveRange to SortedList https://github.com/dotnet/corefx/issues/32987#issuecomment-472132435
A
Three concrete classes. The problem with this approach was that it required users to create these wrapper types, and to instantiate these wrapper types any time they wanted to do the interchange. It also held potential for problems with the additional allocations that were being made, and they may have been exposing more operations than the native tensor types were actually supporting. — But why?
A
But the point of the interchange types is that at the interchange boundaries you would be able to take the interchange type instead. — Why was it rejected, then? — No, it was an interchange type. The problem was that, in order to instantiate the wrapper type, you had to create a new instance of DenseTensor and then pass the underlying native buffer into it.
C
You didn't have to, but there are valuable abstractions in these concrete types as well. The base type was just about multi-dimensional access, and DenseTensor was a concrete type, but it also exposed the concept of a dense memory layout, and that in particular is interesting to some consumers. I think when we were in the experiment, the whole concrete-type-versus-interface question was really still up in the air.
A
So a lot of the time, the tensor types in these libraries are struct types, so you can't just inherit from the abstract base class. You have to actually create one of the concrete implementations that inherit from the abstract class, wrap the memory in it, and then potentially add additional operations and support for the native library to get everything working. So you end up duplicating a lot of the logic that's already on the concrete tensor type in that library.
A
You'll still end up with a potential box in some scenarios. You can mitigate some of that with generic specialization and other runtime tricks, but one of the biggest issues is that many of these tensor types have very specific behaviors and optimizations that can be done for the underlying layout of the tensor type in that library, and the abstract class mechanism requires wrapping in order to get those optimizations.
E
Maybe I'm slow, but I don't understand how this has anything to do with interface versus abstract class, because somebody still has to map whatever is on the interface to whatever the other library is exposing. So whether you inherit from a base type and override methods, or whether you go through an interface and implement the methods, somebody has to do the plumbing, right?
A
But when you have the interface, you can minimize the amount of duplication you're having to do. You could just forward to the existing method, because you don't have any existing definition or behaviors that an abstract class may already have. The abstract class had some minimal behavior, and then the DenseTensor that we shipped, which inherited from it, also had its own behaviors that were not always applicable.
B
And so, if it's just behavior, can't we make the methods virtual so that people can change them? — Yeah, they were virtual. But I have to say I'm still not understanding: we have an abstraction, and the only thing I understand is that people want to implement it on structs and cannot. That's the only thing I see.
E
That's the same problem that WinML had with a similar API. You basically have multiple hierarchies — the trick we're doing with a hierarchy of different shapes is just what other libraries already do as well, so they have whatever hierarchy they have right there. So it's pretty hard, in the hierarchy situation, to say: here's our hierarchy, here's your hierarchy, now reconcile that.
F
TorchSharp has, or had, this problem, right — and maybe Matteo can jump in on this. But when we tried to put the Tensor base class into TorchSharp, there was pushback on putting it into the Torch base library, and instead it got added to a separate library that wrapped the underlying tensor class.
C
The initial idea with the types was to make them directly usable as an exchange type to wrap the native memory. My initial thought was that people wouldn't even need to define their own types — they could write an interop layer that would directly get the data from the native layer and wrap it. But it hasn't panned out that way, because folks want to do more with these objects than just represent the data behind the tensor.
C
They also want to expose operations and methods that are natively provided by the tensor library, like torch has. So they want rich objects, and our DenseTensor is never going to give them that. Basically, there's a decision of whether an access is a read or an invoke, right — and some people might actually want to make that decision themselves, as opposed to having us make it for them.
A
You also lose the type information when we use a class. If you have a struct and you have to wrap it into a class, you have lost the type information of the underlying type, so you can no longer say "if this tensor is a TensorFlow tensor", then cast it to a TensorFlow tensor and operate on it as a TensorFlow tensor. An interface provides the same abstraction a class does, but it maintains the underlying type information of the original type.
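The type-identity point can be sketched in C#. Everything here is illustrative — `ITensor<T>` and `TensorFlowTensor<T>` are assumed names, not the API under review — but it shows why an interface preserves the concrete type where a class wrapper would hide it.

```csharp
using System;

// Illustrative only: neither the interface nor the struct is the proposed API.
public interface ITensor<T>
{
    ReadOnlySpan<int> Dimensions { get; }
    T this[ReadOnlySpan<int> indices] { get; }
}

// A library's own struct tensor implementing the interface directly.
public readonly struct TensorFlowTensor<T> : ITensor<T>
{
    private readonly T[] _data;          // stand-in for a native buffer
    private readonly int[] _dimensions;

    public TensorFlowTensor(T[] data, int[] dimensions)
    {
        _data = data;
        _dimensions = dimensions;
    }

    public ReadOnlySpan<int> Dimensions => _dimensions;

    public T this[ReadOnlySpan<int> indices]
    {
        get
        {
            // Row-major flattening of the multi-dimensional index.
            int flat = 0;
            for (int axis = 0; axis < indices.Length; axis++)
                flat = flat * _dimensions[axis] + indices[axis];
            return _data[flat];
        }
    }
}

public static class Consumer
{
    public static void Consume<T>(ITensor<T> tensor)
    {
        // The original type survives the abstraction; a class wrapper
        // would only ever report the wrapper's own type here.
        if (tensor is TensorFlowTensor<T> tf)
        {
            // Fast path: hand tf to TensorFlow-specific operations.
        }
    }
}
```

The trade-off discussed in the review still applies: passing the struct through the interface boxes it unless the runtime can devirtualize, which is exactly the tension between the two camps here.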
E
My concern is just: if you look at these APIs, they are not what I would call simple, right? I mean, they are simple in the sense that we tried to get them down to some small number, but compare that to IComparable or IDisposable, which are fundamentally simple and very unlikely to change, right?
E
I don't doubt that this is what we think is the primitive, right — that's true for any abstraction. But, you know, we looked at Stream, we wanted it to look perfect, and eventually you add a new cross-cutting concept — and then, I don't know, maybe it's like Span, right? I mean, originally we may have exposed an array.
E
So every time you introduce a new exchange type, you will have this split with existing libraries that didn't take a dependency on the exchange type and that are harder to evolve: they either take a breaking change to their library, or they can just add new overloads that take the new type. But I think fundamentally it's very hard to change the exchange currency without it being a breaking change. I think that's fundamentally a problem.
A
One of the other problems with interface versus class is that none of these libraries are writing these tensor types as C#-owned memory; they're all writing them as native-owned memory that interops with an underlying native library, which is fine. So you're going to have an additional cost of abstraction whether we do class or interface, and none of these libraries — because they're native — necessarily agree, other than a few basic methods, on what a tensor is and what's available on it.
E
You're basically saying most of them exist in native land, so the question is how much do we want to create on the managed side to point to it, right? And I think you could argue that interfaces — with the devirtualization and boxing improvements — will basically become a zero-overhead abstraction. But that's also, well...
A
Also, even if we were to expose classes, we would end up trimming them down to about this much anyway. And if we expose new methods, the default implementation for those methods is basically to throw NotImplementedException, because not every library guarantees that a given operation is available. And since the data exists in native memory, you can't definitively say whether something is readable or writable, or even that you can create a new tensor. — Well, I mean...
C
We could implement a very bad version. If these are on the base abstraction, then we could always envision an implementation that uses these methods to copy everything to managed memory, and then we could do whatever we wanted. In fact, if you look at some of the stuff that I was putting in the base DenseTensor, it was like: sure, we can limp along and do these arithmetic operations.
A
That basically drives the user into a pit of failure, because if you're in TorchSharp, for example, you're going to want to have a TorchSharp tensor basically the entire way through, and users are going to be able to accidentally do operations that materialize a managed tensor, which you then need to re-materialize as a native tensor — and you're just killing performance.
E
But I think my point is more about — if you look at Stream and the additions over time: Stream v1 basically had both synchronous and asynchronous APIs, and that's still true today; it's just that the way we've modeled asynchronous APIs has changed. Or take the indexers, right — adding a Range- or Index-based indexer fundamentally doesn't change the type; all it does is make it additive.
A
So if they have a tensor type, and they have a dense tensor type which inherits from their tensor type, they're already broken, because now they can't inherit from our dense tensor type. But is that the case? I'm not sure of the particular cases in the libraries today — I've not looked super closely at ONNX — but the places that are using structs already can't inherit, so they're also broken. But...
H
Could I ask a question about this? Sorry to break in. Wouldn't a more natural way be — if this is an exchange type, couldn't libraries continue using their own types, and if they want this sort of thing, have an implementation that wraps around it, rather than injecting it into their own type system? Why are we talking about that? Why do they have to descend from this thing to use tensors?
A
Right — this isn't supposed to be a drop-in replacement for their type. Most of the time a library will be operating exclusively on its own type. But once you get to higher-level libraries that want to support both TorchSharp and ONNX, for example, they either have to implement against both, or they need an interchange type that allows them to determine what the original type is and what to do to support both libraries. — Okay.
E
All right. I think the general problem with reference types is also that, unless you are very careful, it can easily result in allocating a ton of objects that basically become garbage — you have, you know, your library exchanging with some other library, and you're passing tensors back and forth. You basically have to find creative ways around that, which we have done in the past too, but it's not easy to do.
B
Well, the problem is that if we implement the interfaces on structs, unless we get these hypothetical devirtualization and optimization features... — It's not hypothetical; they're already implemented and continuing to be implemented and worked on. — What do you mean? Like no boxing? So do we still have boxing?
E
It's not a great experience, because you cannot really add the conversions in all cases unless you're willing to take the dependency. So I think, in general, wrapping works to an extent, but I think everybody prefers the solution where, whatever your type is, it also is whatever the exchange type is, right — so that you're basically working with the type itself. That's almost always the better situation.
B
With most of the tensor-style classes, one thing is implementing them, as we discussed. Another thing is: do we actually have implementations of methods that would consume these interfaces? Because, as we just observed, the majority of interesting operations kind of need to be implemented natively to be performant, right — it's not just "give me the raw memory and I can add an indexer." So are these interfaces useful on the consumption side?
C
There are two scenarios — there are probably more than two, but contrast the scenario of actually implementing a tensor DNN library, where you want to perform operations, versus operating on the edge and wanting to flow data in, get data out, and then potentially flow it into another library.
F
ML.NET would really want to use something like this, right — we do talk to both TensorFlow and ONNX today. You could write an ML.NET transform that just operated over ITensor or IDenseTensor, and you didn't know or care whether it came from ONNX or TensorFlow, and you could plug that transform into your pipeline.
H
I think Steven's point is that, if we cared about performance, we would ultimately just work directly against that type anyway. The fact that there's some sort of convenience interface doesn't seem to buy very much, because the first thing I'd have to do, if I'm interfacing with TensorFlow, is figure out how to plug it into TensorFlow anyway — by allocating their version of tensors so that their library can work.
C
Not at all — consider that you don't actually have to learn TensorFlow's tensor. TensorFlow's tensor would have a constructor that takes one of these, and then we would have a library ecosystem of things — loaders and whatnot — to produce these. So it provides the common currency to talk between these libraries, which at the end of the day all have different preferences for where memory lives and how they can best interact with that memory; but we provide the common currency for those things to work together.
A
The rough overview here is really that some library would take these interchange types as their inputs. They wouldn't worry, at the public API surface level, about the differences between ONNX tensors or TensorFlow tensors or any other tensor; they would just take the interchange type, whatever it is. Then, using these interchange types, they would be able to have specialized fast paths: if the interchange tensor is a TorchSharp tensor, cast it to the concrete TorchSharp type and call directly into the operations it exposes, and do the same with an ONNX tensor or any other tensor type. And as a fallback, these interfaces expose enough to be able to view and operate on these tensor types and perform primitive operations, if required, as a software fallback when they don't recognize and explicitly support the concrete tensor type. — Okay.
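That fast-path-plus-fallback pattern might look roughly like this in C#. The names (`ITensor<float>`, `TorchSharpTensor`, `NativeSum`) are assumed for illustration; only the shape of the dispatch reflects what's being described.

```csharp
using System;

public static class TensorOps
{
    // Hypothetical consumer of the interchange interface.
    // Assumes rank >= 1 and non-empty dimensions for the fallback path.
    public static float Sum(ITensor<float> tensor)
    {
        // Fast path: a recognized concrete type is handed to its own
        // native library, which can use the best implementation it has.
        if (tensor is TorchSharpTensor torch)
            return torch.NativeSum();

        // Software fallback: dimensions plus element access are enough
        // to compute the result, slowly, in managed code.
        ReadOnlySpan<int> dims = tensor.Dimensions;
        Span<int> index = stackalloc int[dims.Length];
        float sum = 0f;
        bool more = dims.Length > 0;
        while (more)
        {
            sum += tensor[index];
            // Odometer-style increment over the multi-dimensional index.
            int axis = dims.Length - 1;
            while (axis >= 0 && ++index[axis] == dims[axis])
                index[axis--] = 0;
            more = axis >= 0;
        }
        return sum;
    }
}
```

The point of the sketch is that the interface only needs to expose enough to limp along; anything performance-critical is expected to hit the concrete-type check first.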
A
Under the current proposal, the library that created the tensor would be expected to manage its lifetime and memory. There's been some discussion about us also providing a simple implementation over these interfaces as part of the framework, but that's not part of this proposal. If we were to expose that, we would likely expose some mechanism to allow cleanup and other primitive operations as needed in the future.
E
Okay, I still don't quite get it. You said the purpose of this is mostly for libraries that don't care about the actual types, but I thought one of the major premises was that we model, basically, what the common shapes of tensors are and give you raw access to the underlying memory. So what could you do better if you had the actual type in your hands?
A
You would be able to — so, for example, .NET doesn't really have GPU support today, but various tensor libraries might have operations that are implemented on the GPU, given various sizes of inputs and things like that. Math.NET, for example, supports a concept of a tensor type, and it supports performing execution on the GPU if you tell it to. So you would be able to go ahead and do whatever is considered best by the native library for an operation, if you cast to the concrete type.
A
It's backed by some memory somewhere — it could be the GPU, it could be a disk, it could be RAM or a RAM disk, it could be some other computer on the network that's holding the actual data for the tensor type. But you have memory somewhere, you perform operations on it, and you get the result back.
E
Then each of these would need to tell you, you know, the order in which things are laid out — otherwise you cannot reason about these things, right? So basically, I guess, if there's a new format, it would show up as a new interface — like an IFancyTensor<T> — and then you would describe what that shape would be, so that you can actually access it directly.
F
One real case here is the dense stuff, right. As I copied into the chat window, WinML has basically this exact same concept: they have an ITensor, and then they have TensorInt32Bit, TensorDouble, etc., and the way you get at its memory is you call GetAsVectorView, and that returns an IList/IReadOnlyList of whatever the element type is.
F
If I want to write a library that gets at the memory that's underlying the tensor, without an exchange type you just can't do it, right? You have to write: this is how I do it for ONNX, this is how I do it for TorchSharp, this is how I do it for WinML, this is how I do it for TensorFlow.
B
You know, skip the sparse one for a moment. If we wanted to design a dense tensor that is actually efficient for arbitrary memory, I would say: I would remove the indexers, and I would change Dimensions from returning a span to basically a GetDimension method — you pass an axis — and a dimensions length that gives you the number of dimensions.
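Sketched as an interface, that suggestion amounts to something like the following. The names are assumed (this is not the reviewed API); the point is per-axis dimension access, so an implementation isn't forced to keep its dimensions in contiguous managed memory, and no element indexers.

```csharp
using System;

// Illustrative shape only: a transport-efficiency-oriented dense tensor.
public interface IDenseTensor<T>
{
    // Number of dimensions, instead of handing out a span of them.
    int Rank { get; }

    // Length of a single dimension, fetched on demand.
    int GetDimension(int axis);

    // The dense backing memory itself; element access goes
    // through the buffer rather than through indexers.
    Span<T> Buffer { get; }
}
```

As the discussion that follows notes, this optimizes the transport scenario at the expense of casual usability.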
C
So it does its job, but it's only doing its job in the sense that it's exposing enough information to provide an efficient transport. It's not exceptionally usable — you wouldn't want to write the code that actually goes in and interacts with that interface from an application. But an application could pass an object around between two libraries, and those two libraries could have relatively efficiently maintained the sparseness of the data.
C
No, they don't even need to understand the other concrete subtype. So imagine TorchSharp outputs a sparse tensor that's represented as, like, a dictionary of indices and values, and a TensorFlow tensor represents a sparse tensor as a series of tensors, where one is the indices and another is the values. TensorFlow could get this sparse tensor as a torch tensor, and then it could call GetNonZeroValues and GetNonZeroIndices.
C
And it can construct its own concrete value, so the TensorFlow side could construct its own sparse tensor knowing only about ISparseTensor and its own concrete implementation. So in that sense it is satisfying the goal of minimal exchange without having to know about TorchSharp's concrete implementation. But yeah, it's not pretty. I've thought about this problem a lot, and I can't come up with anything greater as far as the shape goes — except, for example, for how to return the indices.
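The sparse exchange just described could look roughly like this (all names assumed): the producer exposes only the non-zero values and their coordinates, and the consumer rebuilds whatever concrete sparse representation it prefers.

```csharp
using System;

// Illustrative shapes only, not the reviewed API.
public interface ITensor<T>
{
    ReadOnlySpan<int> Dimensions { get; }
    T this[ReadOnlySpan<int> indices] { get; }
}

public interface ISparseTensor<T> : ITensor<T>
{
    // 1-D tensor holding just the non-zero values.
    ITensor<T> GetNonZeroValues();

    // For each non-zero value, its coordinates in the full tensor.
    ITensor<long> GetNonZeroIndices();
}

// A consumer (say, a TensorFlow binding) could rebuild its own
// representation from any producer's ISparseTensor<T>:
//
//   var values  = sparse.GetNonZeroValues();
//   var indices = sparse.GetNonZeroIndices();
//   // construct the library-native sparse tensor from the pair,
//   // never touching the producer's concrete type
```

This matches the dictionary-vs-parallel-tensors example above: both representations can produce and consume the values/indices pair.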
A
I don't think that's really a good idea. We shouldn't remove the indexer from ISparseTensor just because it could potentially be slow for lookup — on an IDenseTensor it's still a primitive operation. Being able to get a value out of a tensor is primitive, and you should be able to do that. Why would you not? — Why would it be slow to look at the buffer? — I mean, what if you just want to say, in the debugger, "give me the value at this index"? — Then we should add another viewer.
C
Getting a value out of it — the output of these things could be very, very small, either a single cell or a small table, so it's not always a bad thing. To me, the indexers are primarily useful for fetching the output, right. That's when people will fall back to the indexers on ITensor: when they get the output of their DNN and they want to find out how it scored their stuff, and they need something that's easy to use in that case.
C
We want them to have something that's as accessible as a list, because if it's not, they're going to do really bad stuff. They're going to say: okay, what concrete collection do I know of that I can pass this weird thing to, that gives me normal access? And then they're going to do that, and that's going to result in a copy. They'll wrap it in a MemoryStream, then put it in something else, and then do whatever weird transformation they can figure out in order to get something usable.
H
...if you're in the business of having the linearized single index as your return type — which seems to be the approach here — because with a cube of even relatively small dimensions, the linear index can get huge, right? I'd make the same point about — what's the word, it's not stride — what was the other word that we had instead of stride? Slice?
A
So, going back to the question of returning a span for things like getting non-zero values: what TensorFlow does is just return a tensor that contains the non-zero values, or the indices, or the dimensions. Since a tensor is just a multi-dimensional array, you could have a one-dimensional tensor. So maybe we could just return a tensor of long here — I'd check.
A
If it's backed by native memory, then your index can go beyond int, and you'd index with a 64-bit long. We'd have to change this to be long — or do overloads over long, right — or maybe we could see what the current plans for nint are and conceptually make it use that instead. But you either have to do int or long, yeah, if you want the full range of values.
J
You know, if this is an interop type, that maybe makes it a little more valid, right? What I'm saying is: if ITensor is something that you use when you pass values between different frameworks — oh yeah, then that should definitely be read-only, I would assume. — Mm-hmm. — I mean, it's an array, right — we usually expect to modify them directly.
A
There was an open question on whether we should call Dimensions "Shape" instead, and whether, if we were to call it Shape, we should have an IShape interface or type. That way users can carry additional shape metadata, if we needed or desired it — some libraries, for example, carry name information on a given dimension.
C
Part of this was that the methods on Array are not very usable as they are today: to get even the limited type of information that you might want about the shape, you have to do a lot of stuff and call methods multiple times, whereas it's rather convenient to have direct access to the dimensions as a single object. So there's the debugging scenario, right, and there's also the workbook-type scenario, where you're trying to just throw together sample code that uses this and interacts with it.
B
We're now at an hour and 15 minutes, so my overall feedback would be the following. I would try to use abstract classes instead of interfaces. We don't have very strong evidence that it doesn't work — it seems one library tried to move to structs, but I kind of cannot understand how a struct here would even matter.
B
The fact that it's a bit more lightweight — I mean, the data is potentially very large. The struct will be boxed unless the devirtualization feature is there, and the feature probably won't work in a lot of cases. So if we had it as abstract classes, the abstraction would be easier to evolve in the future. And the second piece of feedback is: I want implementations of the abstractions, to make sure that you can actually get performance that is acceptable, because I can totally understand how it comes down to that.
B
We just discussed whether there's a way to design the interfaces such that they get faster — for example, if the dimensions are not in consecutive memory locations, wouldn't it be better to have, you know, a GetDimension method and a dimensions length, and then maybe something for the debugging problem. The overall point is: I feel very uncomfortable with introducing new abstractions to the BCL without having a vetted implementation, and I don't know how much we have.
B
...interfaces, and the reason for it is: we would have the top type — you know, not ITensor, but Tensor<T>. The libraries would take it, and then every other type is just a downcast or a potential performance optimization. Now, if we ever need to evolve the interface, and we have a new, faster and better dense tensor — it's just an interface, it's a peer, and then you downcast to it.
F
All I want off of it is the buffer and the strides, and whether it's column-major. That's all I really want. I want to say: are you a dense one? Oh, you are — let me get directly to the buffer. And the advantage you get there is that you could have an IDenseTensor and an IReadOnlyDenseTensor, right, and if you just wanted to read them, you cast to the read-only one.
A
You still have the broken problem that, if you want to accept a tensor but determine whether it's a dense tensor, you're doing an "is" check. That exists if you use the abstract class Tensor<T>, right — but as soon as you have something which is not a class, you can't have the proper hierarchy. — Yes, but why do we need to have something which is a struct?
C
So Matteo said that about TensorFlowSharp; I've also seen folks who want their own type hierarchy for their tensor types, because they treat them like handles, and they have base handle types that deal with managing the memory backing these things. So they want the type hierarchy to represent their handle hierarchy, with all their common memory-management and ref-counting methods. So that was the moment...
B
The thing is, I'm absolutely sure these interfaces will not last longer than two years. The space is evolving, and things keep changing. I think I've seen this so many times: we keep adding interfaces, and then after a few years they are basically obsolete.
A
There are other things that we were trying to build off of, and there are definitely things that we messed up even recently. Like Span — it probably should have supported long lengths; that way it would just work with native memory without all the sequence types and other extensions we've had to have. But we messed it up. — That's your opinion on it. — Myself and many others would say we messed it up, and we have to live with it and the consequences of it.
E
I think one thing we already do know is that when we look at the indexers in particular, we want to think about how people would support Index and Range in the future, right. Or, you know, C# 9 — one thing that Mads is looking into is numeric types and roles, right. So the question is, with those things in mind, how long will these abstractions last, versus using an abstract base?
B
An example: I can imagine a property that — okay, yes, there are three libraries that we know about, TensorFlow and PyTorch and whatnot — and we have a property that tells you which tensor it is. Just an idea; those are the kinds of things that, whenever we decide we want to add them, on an abstract class are simple to add, because you'd have an enum with the value.
G
I can give you my use case. As far as I know, TensorFlowSharp does not provide training, so I tried to do training over TorchSharp, and when you start to have several transformations over the tensor, and you loop them over thousands of tensors — because that is how the data flows in — you basically generate a lot of objects on the heap, and that kind of crashes the performance.
G
So that is my use case, but I don't know if other systems have the same use case. But of course that doesn't mean that we need to have ITensor in the training path, right, because in theory you should only use ITensor when, once the model is trained, you want to move your data to some other framework. So it's kind of a specific use case; there isn't a template for it right now, but...
A
You have to create the object, which gets tracked by the GC, and a lot of the time these objects for slicing and stuff are short-lived. It's the same problem we had with Span: you end up creating a lot of short-lived GC-tracked objects that are just wrapping memory. That is the perfect use case for value types.
G
The thousand tensors are the input data, and then on top of that I have to do transforms. So out of one thousand tensors, if I do twenty transformations, now you have, yeah, twenty thousand objects for the GC that live in memory, and one epoch takes like a few seconds just to dispose of all of them. — So the transforms are in managed code, then, right?
A
So no matter what you have — whether you're doing a value type or a class on the C# side — on the native side you have all the memory that's backing it. The problem then becomes: once you move to the managed side, do you have a zero-cost abstraction over that, or are you incurring a heap allocation to wrap every single one of those tensors...
A
...that you have on the native side, and then more objects created every single time you want to slice or reshape that tensor — in which case the underlying view of the memory remains the same; you're just creating new heap allocations to track the different slices of the memory that you're looking at. — Exactly. — And the average GC object, even just an empty object, is something like 34 bytes, so you're incurring that overhead every single time you slice, and if they're short-lived, you have the GC thrashing every single time.
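The zero-cost alternative being argued for is a struct view, along the lines of this sketch (type and member names assumed): slicing returns another struct over the same native buffer, so no GC object is created per slice.

```csharp
using System;

// Illustrative sketch: a non-owning struct view over native memory.
// Lifetime of the native buffer is managed elsewhere, as in the proposal.
public readonly struct NativeTensorView<T> where T : unmanaged
{
    private readonly IntPtr _buffer;  // native allocation, owned elsewhere
    private readonly long _offset;    // element offset into the buffer
    private readonly long _length;    // element count of this view

    public NativeTensorView(IntPtr buffer, long offset, long length)
    {
        _buffer = buffer;
        _offset = offset;
        _length = length;
    }

    public long Length => _length;

    // Re-slicing just produces another small struct on the stack;
    // the native memory is never copied and nothing hits the GC heap.
    public NativeTensorView<T> Slice(long start, long length)
        => new NativeTensorView<T>(_buffer, _offset + start, length);
}
```

This is the same design move Span<T> and Memory<T> make, which is why they come up repeatedly in this part of the discussion.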
B
What I think is that there should be more validation and experimentation than we are doing here. It seems like we just created a bunch of abstractions that can be implemented on types, but we don't have a very crisp picture of how they will be consumed.
G
The test case is fixed, in that I just implemented a neural-network model on MNIST, so the algorithm is fixed, and by just changing class to struct in the wrapper around the native tensor, the performance improved by about 2x. That is what I observed. I have the test, so I can tell you how to run it — the test set is open source.
B
E
B
E
I buy it, because with a struct — the struct means no allocation, right? You can have your own Slice method that gives you back a struct, and then you don't have an allocation to slice. And I think, like Memory&lt;T&gt;, the argument is that you have to do it that way. But it's harder from an ITensor&lt;T&gt;: your Slice there has to return an ITensor, and — yeah, but initially they don't care about that, basically.
B
C
C
library. So the way that you use it — it's less like TensorFlow, where you set up this big graph and then just feed a bunch of data in and get data out; Torch is more of an interactive framework. You're calling the operators directly, as opposed to a pre-canned graph. Think back to our ML.NET discussions, with respect to building up a functional graph and then calling one method that executes it, versus...
C
C
M
C
B
Complicated collections of things for what amounts to a single invoke. That said, that's a much bigger problem than a single allocation anyway. We're now getting drawn into the weeds, and as I said, I think it would be good to illustrate the problem and do some implementation somewhere — maybe in corefxlab — to make sure that this is the right shape.
E
A
So there's a proposal that if we expose these interfaces, we should also potentially provide a primitive implementation over them, to assist with software fallback cases, or for cases where users just want to play around with a tensor without being dependent on a particular library.
E
The thinking, when you ship abstractions, is that you really want to make sure you have at least two implementations, because otherwise you have no backing for the abstraction. But what I'm saying is: you would have, let's say, TensorFlow implementing those, and then maybe we have a set of implementations that we provide which is good enough for a subset of customers. By that we can say: do the abstractions actually hold up well enough? Can we model both sides? Okay.
J
J
F
H
F
F
Of course it would. If we have an ITensor type, and TensorFlow returns ML.NET an ITensor and says "here's the data," then ML.NET doesn't need to copy it into a VBuffer. All it does is say: okay, here's the ITensor, let me keep it in the pipeline and go with it. No copies need to happen. Of course that solves the problem.
H
J
Yeah, but if you do this with IDataView, which has columns, then we have to have operations on those, right? Every transform is designed to operate on certain kinds of columns, and it doesn't know anything about tensors. How hard would it be to rewrite them, basically — or at least augment them to support that type as well?
H
H
H
H
B
A
B
B
H
A
C
F
That is the case today. ONNX Runtime exposes Tensor&lt;T&gt; — what we have in 0.1 today — and that's what ONNX Runtime gives back to its managed consumers. ML.NET is one of those managed consumers, and it uses Tensor&lt;T&gt;; ML.NET is dependent on Tensor&lt;T&gt; today because it depends on ONNX Runtime. We have this scenario today.
E
Well, how does the interchange work if your producers and consumers don't know about each other? That is basically the value of the abstraction: to say there's an exchange type for both sides, which don't know about each other. Because in those scenarios today, everyone basically goes down the path of saying either "I know about the actual tensor type" or, "if I don't, I copy it into my own representation."
E
That is at least valid, right? At least now you simplify the API surface; everything would just convert through a single type. Basically, we'd make that part of IDenseTensor the exchange type — which is basically the "give me a dense tensor representation for copying purposes." Yeah.
A
B
That's the point of the exercise — we keep saying it. We should have some code where the converters all work together, because maybe the only useful abstractions are the dense tensor and the sparse tensor, and you cannot interchange them. I don't know what the answer is. It would be good to implement it end to end, and that...
A
Well, we can't exactly implement it end to end until we have an API surface, which can be checked into corefx and then given out as an experimental package. It doesn't need to be perfect, because we're looking at shipping it as an experiment — people are needing this interchange type — and iterating on it after we get it approved. So the original plan was, after this gets reviewed and potentially approved...
A
E
A
...to push this through, make sure that everyone's okay with the shape — and Krzysztof had a lot of negative feedback about the shape — and then go and work with the teams that I've already been working with (Eric Erhardt has been doing some of that as well) to make sure that they are picking it up, implementing it, and giving feedback. We've been iterating on this for well over a couple of months now.
E
As I said before, I can see the general principle: we basically extend the Memory-like structure so that you can extract the data in as efficient a way as possible. And right now we only have the two shapes, dense and sparse. But it would be useful, once you do the experimentation, to see what happens at the edges of these systems — whether they effectively all end up just getting the memory and copying it over.
E
...or, more usefully, something that you expect consumers to actually consume as an API surface. And that kind of opens things up for saying: hopefully we can evolve the mechanics, like how the indexer works. Or let's say we add a new IFancyTensor: you'd basically want a ToFancyTensor method on both the dense tensor and the sparse tensor, and now you're stuck, because you cannot add a new virtual method to the interface, right? So that's why I think the...
E
A
Well, I think, based on the meeting so far, we've said we would remove ToSparseTensor and ToDenseTensor, because they're not always valid, right? We would expose the read-only concept. And then, if you want to expose a new thing that's not dense or sparse — which I'm not sure I could conceptualize right now, because you either have something that has all its values or something that doesn't...
F
H
I mean, look, let's imagine just the most basic thing: you have a batch of text and you tokenize it. Then you have one sentence with seven words, another with ten, another with, I don't know, five. How do I represent that? That's one of the most basic things you can do in text processing, and I'm not sure I can represent it here. And each one could be sparse, by the way, as well.
J
C
H
H
H
H
B
A
I mean, once you move to tensors, everything's a tensor. An array is just a one-dimensional tensor, a scalar value is a zero-dimensional tensor, a matrix is a two-dimensional tensor. Everything can be represented as tensors. If you have something which is, you know, a...
A
H
A
A
H
J
J
J
Exactly. So imagine you start with, say, TensorFlow and go to ONNX and go to ML.NET, or in some random order — what is that actually going to look like? I mean, what is the usage? Is there usefulness in this concept in between, or is it going to go between the native types anyway? Or perhaps the other thing is: if we have this type, will there ever be a math library that's based on it? Can we do math operations on tensors with that?
A
J
E
Right, but I think that means, in a sense, somebody has to do that homework. Fundamentally, if you ship an exchange type, you need to be able to say what the expectation is for a producer and for a consumer, because the whole thing is scenario-driven. So once we know what the requirements are...
E
...then we can judge the API. Otherwise it's hard to say who's using, for example, the params-based indexer — is anybody ever using that? Because if the guidance is to never use it, then maybe we shouldn't have the API; or maybe we say you should never use it, but the API is useful in debugging — whatever the case might be.
B
Even I can imagine that the dense tensor is an abstraction that actually is useful — which kind of matches my concern about the sparse tensor. If at the moment we say it's only the dense tensor that is useful, then maybe we only need one abstraction, and in the other cases you fall back to throwing an exception, much as IList does. But you often don't have a dense tensor, because a lot of the time it's sparse.
B
C
C
J
H
B
J
H
We tried to sort of pull that apart — I think Eric tried to pull that apart, and it wound up being more confusing than helpful, so we backed off of it. But I think that is so: if we were to do this conversion into and out of a VBuffer, we would ultimately be implementing wrapper types that implement one or the other, without having VBuffer implement ITensor itself, because I don't think it could.
H
I
G
G
C
A
C
C
C
B
E
In general, it seems to me that it should not be the case that we need to ship a new NuGet package to do an experiment. We should be able to do the experiment purely by merging to master, building a NuGet package, and getting the package out to a feed that anyone can reference. That should unblock at least the people we're partnering with. I understand that it will not work for some third party...
E
...you know, some team that may have an interest — because that's a pretty high bar. But I think for us, we should be able to make progress on whether the shape is sufficient. To me it's really about defining what the consumer would do and what the producer would do, and whether we have confidence that, if they don't know each other, they have a sensible API to talk through. That's really what it is.
E
In my opinion, as long as we have that, we can argue: okay, do we have enough consumers? Do we think one is enough? Do we need another one? Otherwise it's a bit like reading tea leaves: you say, well, I think this is what the tensor looks like — okay, but is that true for the next three versions, or is it true today and won't hold two months from now?
A
E
Well, I think I said it earlier, but there are two sides to it. There is the concept, and then there are the mechanics. If you design a type to be directly consumable, like with the indexers, you also have mechanics on that type — but those do change all the time, whether or not your concept changes.
A
The concept hasn't changed. I think there are implementation details of how things change, but the base principle is that a tensor is effectively a multi-dimensional array, and therefore you are able to index into it; and if it is row-major order, the indices are interpreted one way, and if it's column-major, they're interpreted the other way.
E
All I'm saying is that Stream is in the same bucket. Stream hasn't changed conceptually: we want readable, writable, random access; it's sync or async; and you pass in the buffer you want to read into. That concept is immutable, right? But still, we have added a ton of APIs to do what you want, and that's the mechanics changing: the way you do async has changed, the way that you...
B
But again, going back to implementations: in my opinion that's close to irrelevant. And the same argument about finding a common subset of APIs across several implementations is also not super useful, because I can say: well, I took several random types from the framework, the common set of APIs happened to be these two random APIs, so let's add an abstraction for those two APIs — right?
A
B
A
Consumers will be able to bind against it. Almost everyone who has a producer/consumer pattern — where you don't know who the producer is, but you do know who the consumer is — most modern libraries use some form of interface and dependency injection to solve that issue. They define some kind of marker type that says: I am this.
A
These marker types sometimes expose some core operations that allow you to interact with the type even if you don't know the concrete type, and then they use some mechanism to resolve the concrete type at runtime. When you don't want to take a dependency on everything, you use dependency injection or something similar to do that. So we should implement the consuming code that...
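A hedged sketch of the pattern being described: consumers bind against a marker-style interface, and dependency injection supplies the concrete tensor implementation at runtime. Every name here (`ITensor<T>`, `ITensorFactory`, `MklTensorFactory`, `Trainer`) is illustrative, not part of the proposal:

```csharp
// Marker-style interchange interface: empty here, purely a promise of
// "I am a tensor" that consumers can bind against.
public interface ITensor<T> { }

// Abstraction the consumer depends on instead of a concrete library.
public interface ITensorFactory
{
    ITensor<float> CreateDense(params int[] dimensions);
}

// One possible concrete provider, registered by the application.
public sealed class MklTensorFactory : ITensorFactory
{
    public ITensor<float> CreateDense(params int[] dimensions)
        => throw new NotImplementedException("would wrap an MKL-backed buffer");
}

public sealed class Trainer
{
    private readonly ITensorFactory _factory;

    // The consumer never names the concrete library; the DI container
    // resolves ITensorFactory to whatever the application registered.
    public Trainer(ITensorFactory factory) => _factory = factory;

    public void Run()
    {
        ITensor<float> t = _factory.CreateDense(28, 28);
        // ... operate on t through the abstraction ...
    }
}
```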
B
...does exactly what you said, because there is a possibility that the fallback will be "I just want to copy into my VBuffer" — because I'm not going to either implement my algorithm five times over, or down-cast and, when the cast fails, fall back to some APIs that are so inefficient that consumers would basically be disappointed; so let us just copy into the buffer. But then maybe we'd discover that the actual abstraction we should have is the dense tensor and "give me the memory so I can copy it."
A
I think one of the problems is that when you have libraries that have concrete types themselves — so they are the ones implementing the tensor; they're both the consumer and the producer of ITensor — they are going to keep producing as they do today, so they're always going to be operating on their own type. The point where the interchange becomes useful is when you have a third-party library that is supporting one or more of these external libraries. For example, someone like Math.NET, who wants to expose primitive tensor operations that are abstracted from any particular library.
A
They end up wrapping the other libraries, and then the application developer can say: oh, I want to support Intel MKL, or I want to support ONNX, or I want to support this. That gives them the ability to provide the primitive support and still exchange and plug in with everything else, because they're not a producer; they are the consumer of the type, providing an abstraction over it. It's the same problem we've had with hardware intrinsics, where they're platform-specific and people have to write "if platform X is supported..."
A
A
E
I think the scenario you're sketching makes sense to me. I think all we are saying is: can we validate that that's the case, and that they would actually go through this API — is this viable? Because hardware intrinsics are a good example: the base intrinsics themselves don't have software fallbacks — they just blow up if you use them on the incorrect platform — whereas the belief was that that's the wrong layer; at a higher layer, we said, these higher-level operations may have a software fallback...
E
...that's good enough, and then the code looks reasonable, and we validated all the things in the scenario — I think it's validated. And here the only thing we're saying is: do you have confidence in this API shape versus the right one, and can we be sure that once we start adding concepts to it, like Index and Range, we can evolve them — or maybe even get away with less API surface, because it turns out that the way consumers work is that they either downcast to their own stuff or copy, right?
A
I would almost say the only argument I would have is: if we were to say that the concept here is less than that, and all you want is something like ICopyTo, then we shouldn't be exposing an ITensor type at all. We should be exposing an ICopyable interface that allows you to take an arbitrary object and copy it to some other storage that you can interact with. Well, but then you don't know that you...
B
...can deal with it. So I even said at some point: maybe it gets to a point where it's so simple that I would actually stop worrying about this. The extreme is a marker. So imagine that we just have a tensor interface — it's not even generic, it's just a tensor, and it's empty. You take it, and it's basically a promise that it is a tensor; you keep downcasting. And maybe it's not a full marker — it has one method.
A
But the problem then is the same thing we've gone back and forth about: getting the memory is not always applicable. If you have a jagged array, then getting the memory is not applicable because you have multiple memories. If you have a sparse tensor, then it's not applicable because there isn't one memory per se; you've got some abstraction.
B
H
A
H
A
Right, I mean, as soon as you want to debug something and you're stepping through your code, you have a local and you want to say: I just want to see one element of this without expanding the IEnumerable view. This is how you do it: you have an indexer and you say, I want the value at index [1, 2, 3], and the debugger prints it back out.
C
Another thing to think about: contrast this to Python, where ndarray will take the raw data and let you interact with it. You can deal with the native multi-dimensional array type and have it wrap the memory that came from the library. But we don't have that, right? We don't even have an abstraction for a multi-dimensional array type.
A
The indexers probably won't be used from a performance-oriented library perspective. They will be used from an interactive user perspective, where users are opening up a console window and just playing around with tensor types: "oh, what happens if I add these two things together, or what happens if I do this primitive operation?" — because they're learning.
A
They might be using something like Math.NET, which could conceivably take an ITensor and return an ITensor, and internally make the differentiation of: oh, I want to run this against the GPU, so I'm going to use the GPU; or I'm going to use Intel MKL because I'm on an Intel CPU with AVX-512 intrinsics available; or hey, I'm on some low-power ARM machine and I just want to do the simple, least efficient thing.