From YouTube: .NET Design Review: GitHub Quick Reviews
A
You know, I don't know what happened or why it all just keeps working. So now we have audio again — congratulations.
B
So the data has a bunch of columns; the important one for us is that it combines both PRs and issues into one set of data. So the first thing we're doing is filtering it by that column. Line 98 says df["is_pr"] equals 1 — ones are all PRs, zeros are all issues — so that's what lines 98 and 99 do. The aim in this demo is to train an issue labeler, similar to what Miriam is doing; so, for the issues.
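A minimal sketch of that filtering step, assuming the Microsoft.Data.Analysis API shape discussed in this review; the file and column names here are stand-ins, not the demo's actual identifiers:

    using Microsoft.Data.Analysis;

    // Load the combined PR/issue data ("issues_and_prs.csv" and "is_pr" are assumed names).
    DataFrame df = DataFrame.LoadCsv("issues_and_prs.csv");

    // 1 = pull request, 0 = issue.
    PrimitiveDataFrameColumn<bool> isPr = df["is_pr"].ElementwiseEquals(1);
    PrimitiveDataFrameColumn<bool> isIssue = df["is_pr"].ElementwiseEquals(0);

    DataFrame prs = df.Filter(isPr);       // rows that are PRs
    DataFrame issues = df.Filter(isIssue); // rows that are issues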
B
The other thing we're doing: it has a description column, and usually what happens in issues is someone writes up an issue and then mentions one of us on the CoreFX team. That correlates well with what the label for the issue should be, which is just what the model is supposed to predict. So what I'm doing on lines 112 to 117 is going through the description column, and I have a regex to figure out the user mentions.
B
So if there's an @, like @Eric Erhardt, that would come out as data from the description column, and it goes into a new column called "user mentions"; then I'm adding the user-mentions column to the issues data set — to the issues DataFrame itself. So then, if you scroll a little bit, Immo — sorry.
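A sketch of that regex pass — the column names "Description" and "UserMentions" and the mention pattern are assumptions, since the demo's exact identifiers weren't readable here:

    using System.Linq;
    using System.Text.RegularExpressions;
    using Microsoft.Data.Analysis;

    var mentionRegex = new Regex(@"@[\w-]+");  // matches @-mentions like @eerhardt

    var description = (StringDataFrameColumn)issues["Description"];
    var userMentions = new StringDataFrameColumn("UserMentions", description.Length);

    for (long i = 0; i < description.Length; i++)
    {
        string text = description[i] ?? string.Empty;
        // Join all mentions found in one cell into a single string.
        userMentions[i] = string.Join(",",
            mentionRegex.Matches(text).Cast<Match>().Select(m => m.Value));
    }

    issues.Columns.Add(userMentions);  // add the new column to the issues DataFrame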
D
Yeah, I could do that too. So some things that we do on other collection types — Dictionary, for instance — is we have some extra properties hanging off of them. So, for example, a string DataFrame column could have a property like Pairs off of it, where Pairs is an IEnumerable of KeyValuePair of long and string. That way you could say foreach (var pair in description.Pairs) and then use pair.Key and pair.Value or whatever. That way you save some typing.
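That suggested Pairs helper doesn't exist today; a sketch of what it could look like as an extension, with the row index as the key — entirely hypothetical:

    using System.Collections.Generic;
    using Microsoft.Data.Analysis;

    public static class ColumnExtensions
    {
        // Hypothetical helper: enumerate (rowIndex, value) pairs over a string column.
        public static IEnumerable<KeyValuePair<long, string>> Pairs(this StringDataFrameColumn column)
        {
            for (long i = 0; i < column.Length; i++)
                yield return new KeyValuePair<long, string>(i, column[i]);
        }
    }

    // usage: foreach (var pair in description.Pairs()) { /* pair.Key is the row index */ }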
D
Well, Select is actually returning a new IEnumerable, whereas ForEach is just operating on the contents, like you said. Presumably you would want that not just on IList<T> — on any random list, generally, you can just say for (int i = 0; ...), whereas it sounds like here the row index might actually be an interesting scenario, I guess.
B
Is using a regular for loop acceptable? Not always, because of the backing store. So it holds a Memory of bytes, and then, when you have, say, a column of integers, you go from a Memory of bytes to a Span of int; and so if you write a for loop that does that for every index, you keep converting from Memory<byte> to Span<int> over and over as it goes.
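The reinterpret being described is the MemoryMarshal pattern; a minimal standalone sketch:

    using System;
    using System.Runtime.InteropServices;

    Memory<byte> backing = new byte[16];  // stand-in for a column's backing store

    // Reinterpret the raw bytes as ints -- a view over the same memory, not a copy.
    Span<int> ints = MemoryMarshal.Cast<byte, int>(backing.Span);
    ints[0] = 42;  // writes through to the underlying bytes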
B
In this case — on the column that we're operating on, you apply things element-wise; if you only modify that column, then you won't be paying that cost. Here you're working on a different column's index, right; so there you hit the Memory, and then to get the Span you do the MemoryMarshal thing. If you're working on the same column, then you don't do that. But as Steve says, that's right — yes, compared to delegate invocation, that's right.
C
Kind of my point is: there are APIs where one designs them for the absolute minimum overhead possible, and there are APIs one designs for usability, and thus far this seems like one that's intended to be designed for usability, yeah. Yes — the core work that's done is either very expensive and done in user code, so the cost of an inefficient API call doesn't matter, or it's performed by the API itself, and the API itself has an optimized implementation.
E
I'm getting you a Span, yeah — it is not just a simple... What I'm saying is it's more expensive than just getting a Span from a Memory, because the backing memory is a Memory of bytes; and when you want to get the Span — when the column is a column of int, right, the backing data is still a Memory of bytes — so in order to get a Span of int out of it, you have to do a reinterpret. Yes, but that's cheap. It's still cheap.
B
Okay, in this example it goes from zero to whatever the length of the column is, but in theory it could be different row indices. There's some API where you can go on a column and get — not random indices, but the indices you desire. So there's a Filter call that will give you indices: whatever it gives you will be a column of indices into your original column, where the element at position i is the row index. So.
C
When you're rewriting everything in the column, that makes total sense to me — it's a convenience thing, yeah, like the sort of LINQ-y syntax. If the description were being pulled from somewhere else — like, I get it. The thing that sort of raises my spidey sense is using an API for something that it's not really intended for, sure; and in this case it's being used in a sort of unorthodox fashion to rewrite itself to itself, and to take, as a side effect of that invocation, storing it into something else. But.
C
No — I mean, yeah. When I first saw this, my initial thought was: this is Select, and it's returning a new column. Then I realized, no, it wasn't storing it into anything else. So then I thought, oh, it's mutating itself — okay, but it's returning the current one. So it's mutating itself, and that's where I started getting confused. The ability to basically have a Select operation — or Apply, or whatever you want to call it — that returns a column seems fine.
B
And so we're almost at the end of this example. So here what I'm doing is — on these lines I'm adding the user-mentions column to the issues DataFrame, so whatever ML we want to run on it actually has more data on it. On lines 120 and 121 I'm splitting the issues DataFrame into three data sets: one training data set, one validation data set, and one test data set. Then, after that, it's the AutoML stuff, which we don't care about here.
E
That's where that one comes in, right. Do you know what I'm talking about? Like, if I have a DataFrame of a thousand rows and I want to get a new DataFrame picking out half of those rows — but they're going to be shuffled, right; so I want row 1, then I want row 10, then row 5, and then 15.
B
Okay, so for the next one — Immo, could you scroll just a little bit to the top, just before the... oh yeah, right there. So this one is a Spark.NET-like use case. This is the initial data that we're looking at: Uber data, I believe from 2015, for the months of Jan and Feb. The columns we have are: a city ID, which is kind of an ID for the city; a date; the number of active vehicles on that day; and the number of trips on that day.
B
The task here is to find the number of trips on each day of the week for Jan and Feb, so it's kind of a Spark-like task. The first thing that Spark does in its flow is it splits this and gives us the data such that, for each city, you have all the columns for it. So for city 2512 this would give me the data for the first, second, third, fourth — through the 31st — and then the corresponding active vehicles and trips.
B
So if you scroll further down, Immo, you can see what that data would look like. So for 454 you have those columns and those numbers, and Spark would give us the data: all of the 2512 rows as one DataFrame, then all of the 2765 rows as its own DataFrame, and so on. And then, Immo, go further down to the end — yeah.
B
So you have everything for Monday, then everything for Tuesday, Wednesday, Thursday, and so on; and then the Sum sums up all of the active vehicles and whatever the last column was, for all of the Mondays, all of the Tuesdays, and so on. The only thing I did between those two steps: there's a date column, and it didn't make sense to sum a date, so I just removed it from the data; and then you just return the DataFrame — that's it.
B
This is an example that shows where you can use, like, a GroupBy. So this DataFrame has seven rows — one for each day of the week — and then this would be called in a loop: Spark would call this for one city, then the next city, and on and on. So this shows one part of the pipeline. So now we can look at the — yes.
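A sketch of that per-city step, assuming column names like "Date", "DayOfWeek" and "Trips" and a GroupBy/Sum shape like the one discussed — none of these identifiers are confirmed by the meeting:

    using Microsoft.Data.Analysis;

    // Called once per city by the Spark-like driver loop.
    static DataFrame TripsPerWeekday(DataFrame perCity)
    {
        // A date can't be summed, so drop it before aggregating.
        perCity.Columns.Remove(perCity["Date"]);

        // Seven rows come back: one per day of the week.
        return perCity.GroupBy("DayOfWeek").Sum("Trips");
    }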
B
Yeah — zoom it as you want. Okay: Columns gives you back the column collection — that's one of those things from last time. RowCount just gives the length of the DataFrame. The next two are indexers: the first one gives you an element at that row-comma-column position; the next one — the one taking a long row index — gives you the entire row as an IList of object.
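Putting the surface just described together in code form — shapes as discussed in the review, not final, and "Trips" is a stand-in column name:

    using System.Collections.Generic;
    using Microsoft.Data.Analysis;

    DataFrame df = DataFrame.LoadCsv("trips.csv");

    long rows = df.RowCount;             // length of the DataFrame
    var columns = df.Columns;            // the column collection

    object cell = df[5, 2];              // one element, at row 5, column 2
    IList<object> row = df[5];           // the entire row, boxed as objects
    DataFrameColumn trips = df["Trips"]; // a column, by name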
B
But with all of these weakly-typed APIs, there's no way to know what you have.
A
If you have a very small number of data types, then it might be worthwhile not even to do the generic — just have, you know, GetPrimitiveColumn, GetStringColumn or whatever, because then you have even less typing. But if you make it generic, whether I put the type name in angle brackets or put the type name in round parentheses doesn't make any difference in terms of typing, right. So.
C
I have another indexer question. There are two indexers here: one that takes a long and one that takes a string. You know, in other collections that I'm familiar with, where there are named columns or something, both of those indexers would allow you to grab a column, right: the long indexer is "which column, by number, do you want" and the string one is "which column, by name, do you want?"
J
The way Where works with IEnumerable is: it doesn't really take another IEnumerable; it takes a predicate that you can then use to filter, true or false, for every element. So it iterates through all the elements of the IEnumerable — or the DataFrame here — and then you just filter using your predicate, saying "yes, take this one" or "no, don't take this one". But.
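The contrast in code form — LINQ's predicate-based Where versus the DataFrame's mask-column Filter. A sketch; file and column names are assumed:

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.Data.Analysis;

    // LINQ: the predicate is evaluated per element during iteration.
    int[] numbers = { 1, 2, 3, 4, 5 };
    IEnumerable<int> evens = numbers.Where(n => n % 2 == 0);

    // DataFrame: the bool mask column is computed up front, then applied.
    DataFrame df = DataFrame.LoadCsv("issues_and_prs.csv");
    DataFrame prsOnly = df.Filter(df["is_pr"].ElementwiseEquals(1));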
F
The first time I tried it — I don't feel super strongly about it, but I feel like it's better to just make it explicit. I get the two-dimensional kind of indexer, and this one I totally get; I don't have any issues with this one. It's just that I wouldn't know — unless IntelliSense told me — what I'd get back if I was reading the code.
B
Which one uses the filter? I think — if you go back to the example, Immo... the primitive DataFrame... no, the IEnumerable one... no, the primitive kind of column, I think. So if you scroll up a little more — yes, stop there, where it says .ElementwiseEquals(1). So on line 98: df.Filter — the .ElementwiseEquals(1) first creates a DataFrame column of bools, and then it's a filter over that column of bools. That's where you're using the filter.
B
No — I mean, in this case I was showing code, right. But if I was writing it in a notebook, I would just do df, open square brackets, and then write that stuff. It's really a question of whether I want to use the method or the indexer; in a notebook I would tend to write code as short as I can.
C
Well, that's what I guess I was — what I'm saying, yeah. My initial comment was: it was weird to me that there were two different kinds of things being returned. Based on this sample I would get rid of all the ones that return rows and just keep the ones that return columns — but you're telling me as well that this isn't representative.
C
Based on this sample, the only indexer that should exist is the one that takes a string column name — and get rid of all the ones that return rows. The disconnect for me is having an indexer that sometimes returns rows and sometimes returns columns, yeah. This sample shows indexers being used only where they return columns — so that's based purely on the "what code does someone write" perspective, from this sample.
E
So housingData in this example is a DataFrame, and the first chunk — maybe we can delete it, but it's just basically shuffling an array of indices, right. So line 13 is getting, you know, zero to the row count and then shuffling; and then 14 and 15: 14 is taking a subset of a given size.
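A sketch of that shuffle-and-take chunk — the file name, the 80/20 ratio, and the use of Rows.Count for the row count are all assumptions (the exact name of the row-count member was itself under discussion):

    using System;
    using System.Linq;
    using Microsoft.Data.Analysis;

    DataFrame housingData = DataFrame.LoadCsv("housing.csv");

    // Line 13's idea: indices 0..rowCount, shuffled (Fisher-Yates).
    var rng = new Random();
    int[] indices = Enumerable.Range(0, (int)housingData.Rows.Count).ToArray();
    for (int i = indices.Length - 1; i > 0; i--)
    {
        int j = rng.Next(i + 1);
        (indices[i], indices[j]) = (indices[j], indices[i]);
    }

    // Lines 14-15's idea: take a sized subset for the test set.
    int testSize = indices.Length / 5;  // assumed 20%
    int[] testIndices = indices.Take(testSize).ToArray();
    int[] trainIndices = indices.Skip(testSize).ToArray();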
A
I mean, the other question I have is: this selecting of rows by passing in the column, or whatever — it reminds me a lot of the way we do masking in vectors as well; but usually we have methods that would tell you that this is what they're actually doing, rather than an indexer where you kind of have to apply some imagination to know what's going on.
A
To me the key question is — and I think that is something where the room doesn't have a uniform opinion — how discoverable and surprising are these behaviors, right? Or, I guess, the other question is how intuitive are these behaviors. I mean, it's a bit hard to say when you just stare at an API surface; you kind of have to actually write some code and see what you can reason about when you actually call the APIs and see what you're getting back.
A
I'm not suggesting that particular method name. All I'm saying is that sometimes, when you have a method that has a verb in it, it tells you more about what the API is doing. I'm not convinced it's necessarily better; it's just that it avoids the problem that you have just an indexer, and then, if you don't know the types, you have no idea what this thing does. Yeah.
J
Maybe as a compromise we should have both: we should have the friendly named methods for people who are familiar with .NET code, and for people coming from Python and pandas and other things we should offer the indexers. There's no more implementation cost, but there is potential for a little bit of confusability when reading code.
A
To tell you the truth, I'm not convinced that makes the world a better place. I mean, the canonical example of this is PowerShell, where you are allowed to use the short aliases and the long forms, and everybody uses everything all the time, and so you have this super inconsistent world that you cannot reason about anymore. I'd rather we pick one convention and just use it consistently, rather than having two different conventions where you can use both of them at the same time. That, I think — that's.
A
But that is not so much to enable a different coding style; that is more to accommodate different programming languages that don't support operator overloading. But I would not expect somebody who does, say, a vector multiplication in C# to actually call the methods — I mean, you would use the operators, right.
A
By that I mean — that's why I'm saying: to me the problem is I have not used pandas, right, so I don't have a feel for the intuition. That's why I said earlier, if you just let me look at an API surface, I'm more with Stephen: oh my god, I don't think I can reason about what the indexer does, because it's like the Swiss Army knife of things.
D
There is definitely something to be said for somebody who's steeped in this world finding these APIs familiar, because they're coming from pandas — or, you know, even if they don't use it as their primary thing, maybe they have to interop with it at some point. Yeah.
A
Giving it some more thought, I think I retract my earlier statement that having both conventions is bad. I think "both conventions is bad" is most applicable to naming conventions; for API shape I think it makes more sense, because we have that in other places. The most recent one that I can think of is the collection initializers from System.Text.Json, where, of the people who use JSON, almost nobody that we saw in the usability study actually assumed them to be...
A
...you know, consumable collection initializers; but once you know you can use them, it makes your code really nice to read. And I think this might be the same kind of area, where it's like: well, okay, when I'm new to the API, having named methods means I can reason about what things do; but once I'm actually proficient with it, the fact that it's super concise is what you really want. Alright, maybe that's okay. If.
A
I think what Steven is asking is more like: what's the equivalent operation for columns? Where I pass in a column with a bunch of bools, I get basically a new DataFrame with a set of rows selected — what would I do for columns? It seems odd that you would do a different thing for that. But.
A
I mean, to be honest, the more general way of doing this would be for you to pass in a delegate, right; and whether the delegate runs over the rows or the columns is pretty much irrelevant — it is the same thing. You effectively have a "give me a new DataFrame where you evaluate this delegate over all the rows, and the rows I'm getting back are the ones where you said yes"; and the same for "here's the thing that runs over all the columns, and that's what you get back", right. So.
D
We had talked about this at the last meeting, because one of the things that I suggested was you could avoid that and just, you know, say there's an 80% chance I choose any given row; but Eric had brought up a good point that you want the rows to be randomized as well — not just random; you want the order to be randomized. Yes — but this is basically the... oh, you're — sorry.
E
We have, like, a Sort API, right. The way we implement it underneath is: get the indices in sorted order and then select out those — this is itself a sort of shuffle, or whatever, yeah. So yeah, it's not necessarily just shuffle, right; it's any time I want rows in a particular order.
A
I don't think we made a decision. I think, most of the time — the problem of API review is that we can point to a thing that looks smelly, but that doesn't mean we have, you know, the solution to the problem. But I think the smell that Steven raises, which I buy, is that if you look at an indexer — especially when you look at the original code, where you only have it on the right-hand side — it gets tricky to reason about what the thing does. But I think.
A
If you look at — like, one of the things we talked about a few weeks ago was when operators are okay, right; it's all about the intuition. I mean, if you do a times b and it multiplies, okay, that's great. But if it's like "oh, I'm doing this fancy every-other-row masking behavior", it's like: what? All you wrote was a star b, right.
A
I
mean
like
how
do
I
reason
about
things
like
that,
if
I,
if
I
index
into
something
people,
can
reason
about
the
B
that
gives
you
a
basic,
gives
you
logically
an
element
right
and
then
sure,
depending
on
what
the
type
is,
it's
a
robe
or
it's
a
column
or
it's
an
element.
But
if
you
have
all
combinations
and
then
slices
they're
off
like
it
gets,
it
gets
harder
to
reason
about
this
right.
A
If
you,
because
now
you
have
to
really
understand
every
data
type
that
you
pass
to
this,
because,
depending
on
the
data
type,
did
you
dress
into
different
operations?
That
seems
it
seems
smelly.
But
at
the
same
time
again,
like
I
would
like
to
see
someone
usability
study,
doing
things
like
that
and
then
just
see
how
they're
fair
right,
because
it's
possible
that
you
know
when
you
actually,
when
you're,
actually
in
the
trenches
and
you
actually
can
pull
stuff.
D
I don't think the existing indexers smell too bad, because even with the indexer that returns a single element — the [a, b] indexer — you would still get to the same thing by doing bracket-a-close-bracket, bracket-b-close-bracket. So you can kind of reason about it; that's basically what you would write. That doesn't strike me as terrible, I would...
J
I think the one benefit to moving to having the indexers on the DataFrame only pick an exact cell — and then having the row-based ones moved to a Rows collection, to match the Columns collection — would be that it would allow the internal implementation to change, if that was ever deemed desirable in the future, to wherever a row-major or a column-major implementation was more efficient.
F
The only thing I didn't think was an either/or question is exactly this one — you're like, "well, it very much depends on the person and which philosophy they're working from." Yeah.
D
You
know
Jen
I
was
off
the
cold
surface,
so
one
one
interesting
thing
about
that
is:
there
is
a
disconnect
in
the
API
surface
right
now,
because
I
instantiate,
the
data
frame
with
an
ID
list
of
column
and
then
I
access,
the
data
frame
indexer
and
what
I
get
back
is
not
at
home.
What
I
get
back
is
around.
D
That's kind of the mental model you have from other collections. I just threw it out there — I don't feel strongly about this. It's just: if you think of this as a collection of columns, because the constructor takes in a list of columns, it would be natural to think of the indexer as returning a column, maybe.
A
Well, you could imagine that it just returns a struct, right, and in this struct basically the only thing it holds is a reference to the DataFrame. So basically you can say Rows.Count, and you can say Rows-bracket and then actually index into that, right. You wouldn't actually store any data; you would just make it an API-shape thing, basically.
A
And
then,
basically,
basically,
you
end
up
in
a
world
where
the
indexes
on
data
frame
always
give
you
back
a
data
frame,
and
then
everything
I
mean
the
only
exception
might
be
the
the
the
one
that
takes
both
a
row
index
and
the
column
index
and
everything
else
either
of
the
columns
or
off
the
road.
I.
A
You either do that, or you literally just have a super primitive, very cheap wrapper around something — because internally you probably don't store anything on that collection; you just have it elsewhere. Just from an API standpoint it makes more sense to say: you say Columns.Count, you say Rows.Count, right; and then the indexer for Rows and the indexer for Columns gives you either a row or a column, right; and then everything else just deals with element values.
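A sketch of that proposed shape — a cheap struct view that stores nothing but a DataFrame reference. GetRow here is a hypothetical internal accessor, and the whole type is illustrative, not the reviewed surface:

    using System.Collections.Generic;
    using Microsoft.Data.Analysis;

    public readonly struct DataFrameRowCollection
    {
        private readonly DataFrame _df;
        public DataFrameRowCollection(DataFrame df) => _df = df;

        // No data lives here; everything delegates to the DataFrame.
        public long Count => _df.RowCount;
        public IList<object> this[long rowIndex] => _df.GetRow(rowIndex); // hypothetical
    }

    // usage: df.Rows.Count, df.Rows[5] -- while df[row, col] stays cell-only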
E
One thing this does — if you look at the numberOfRowsToRead... the guessRows, yeah; well, maybe not numberOfRowsToRead, but guessRows — it actually goes over your data twice: once to do inference on what the types are, and then another time to actually load the data. And so the CSV stream actually has to be a seekable stream, to go back to the beginning.
J
But you could also easily hit a case where it would infer the type wrong — it uses the head rows to infer the type, correct? Yes, right. But you could have ten rows which are integers, and then the eleventh row has a fractional portion, in which case I guess it would throw. And then it's on you to pass the types.
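In code, the two knobs being discussed look roughly like this — parameter names per the surface under review, file name and column types assumed:

    using System;
    using Microsoft.Data.Analysis;

    // Let inference look at more rows before committing to a type...
    DataFrame guessed = DataFrame.LoadCsv("trips.csv", guessRows: 100);

    // ...or skip inference entirely and pin the column types yourself.
    DataFrame pinned = DataFrame.LoadCsv("trips.csv",
        dataTypes: new[] { typeof(int), typeof(string), typeof(float), typeof(float) });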
A
It almost seems like what you want, instead of this FromArrowRecordBatch API, is basically an API that — I don't know, it's debatable — either a constructor overload, or just make a custom method on the RecordBatch itself. Yeah, I mean, I would try it — I mean, RecordBatch is an API from Arrow, right? Yes.
A
That's probably what I would do: wherever the integration lives, you would define an extension method on RecordBatch. I would probably call it As... rather than To..., because To... usually implies a conversion that is non-cheap, whereas As... is usually done for things that are supposedly O(1).
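The suggested shape, sketched — an extension wherever the Arrow integration lives, named As... to signal an O(1) wrap. FromArrowRecordBatch is the API being reviewed; the extension itself is hypothetical:

    using Apache.Arrow;
    using Microsoft.Data.Analysis;

    public static class RecordBatchExtensions
    {
        // As..., not To...: this wraps the Arrow memory rather than copying it.
        public static DataFrame AsDataFrame(this RecordBatch batch) =>
            DataFrame.FromArrowRecordBatch(batch);
    }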
A
I think we have relaxed our stance on optionals in the framework design guidelines, right. But the problem is people don't understand optionals. Like, we investigated this thing where they had — I forgot what it was — one of the client API SDKs, an API you have to instantiate to talk to a service; it took like ten optionals or whatever, for the various things you can configure. Yeah.
A
I think eight out of ten people tried to pass every single value — they did not understand the optionals. And there's a problem in the IDE, probably — like, I already started that discussion with the team — that the brackets thing is very subtle: you have to understand what the bracket syntax means to even see that they're optional. People don't understand it, and they try to construct the whole thing, right, and then...
A
What
I
would
probably
do
is
I
would
probably
try
to
have
an
overload
that
takes.
You
know
as
few
arguments
as
you
cannot
get
away
with
and
then
I
think.
If
you
have
two
optional
arguments,
that's
fine,
okay,
I
like
try
to
get
to
something,
really
minimalistic
and
then,
if
you
have
one
longer
one
that
that
did
allows
you
to
do
everything
you
want,
it
has
some
more
things
that
are
optional.
A
That's also fine — that's more for the advanced user. But basically what people will do is try to find the shortest overload, yeah; and if that takes two optionals, people look it up and they still pass them in — that's okay. But if the shortest one they can get takes, like, ten optionals, then if they try to pass them all in, it will be a long time before they get to something they can actually call, regardless.
A
It kind of comes back to: what is this API for, right? I mean, you could say it's similar to an encoding, right — "you should always specify the encoding; it should never pick one for me". But in reality people write throwaway code, right, in a scripting environment: what are the chances the type inference will just work for me? And I would argue, if you look at the first ten rows — yeah... I mean, you know, Excel does the same thing: it looks at the first...
A
What
is
it
n
rows
and
I
almost
never
have
to
tweak
like
the
data
types
ever
annex
our
and
like
it
if
it
works,
often
enough
forcing
people
to
avoid
like
ten
lines
of
code
to
get
sometimes
defeats
the
point
of
the
API
I
did.
The
whole
point
is
that
you're
in
some
interactive
experience
you
just
want
to,
you
know,
load
the
data
as
fast
as
you
can
like
inspect
it.
A
You
know
slightly
shuffle
it
around
and
then
like
do
something
with
it,
and
maybe
you
throw
this
thing
away
like
five
minutes
later
right,
yeah
I
mean
that
seems
like
a
good
candidate
for
let's
try
to
opt
for
something
simple
and
then
you
have
to
write
production
code.
You
probably
should
not
use
time
and
friends
right
but
like
that
honestly
I
think
it's
the
same
as
any
other
API
array.
If
you,
if
you
let
the
system,
guess
a
production
code,
that's
probably
not,
but
we
provide
a
lot
of
experimental
core
as
well.
A
To be fair, if you load a CSV file — if the data source wrote floating point, even if the numbers have integral values, they'll usually have a .0 suffix. So I don't think you can mess this up too badly in practice. I mean, yeah, if you have a handwritten CSV, maybe; but if you get the CSV from any sane data source, you know what the type will be.
J
Or — I mean, I think the only sane thing is: you start off by saying everything is int by default; if you ever come across something that's long in that data set, then you extend the data type; if you ever come across something that's float, then you cast the data. That's how you'd do the inference.
A
Sure — I mean, honestly, I think it's completely okay to say inference is not the fastest way to get the data. At the end of the day, I don't think we would do this in production code, as they're saying, right. So that means: if you care more about performance, if you care more about, you know, reliability, then you pass in the types, and then things are faster.
A
Yeah,
it's
probably
all
I,
don't
know
I
I'm
inclined
to
believe
it.
I
don't
know,
I,
don't
know
how
many
Bulls
you
look
at.
You
said
10
like
it's
all
from
how
often
do
you
get
this
wrong?
My
like
I
mean
if
you
have
to
continuously
be
I
locate,
maybe
look
at
the
first
hundred
rolls
right
and
then
around
them
fellows.
Well,
that
might
be
harder
in
this
forward.
Only
reader
idea,
oh
no.
D
Yeah — I get it; that gets us back to throwing on that one. I left a comment on the API review website as well.
F
Like — because I did look at the operator which works on bools — I'm happy with what they do for the bool one, yes. So, like, if my array is only trues and falses, right, and I do AND with true, it will produce the same result, right; and if I do AND with false, it will always produce all-false content, right? Yes.
A
For that, I can imagine — if this is in the middle of a sequence of operations, yeah — maybe you just allow this to go through, and then at the very, very end you check to see if the data has been zeroed out, effectively. You don't expect it to be a common case, so you don't want to start putting if-statements all throughout your code; but you do want to check once, at the very end. That makes — I can.
J
See
that's
at
the
same
time,
having
and
and
or
for
all
overloads
is
useful
because
it
allows
you
to
trivially
mask
out
a
data
frame.
For
example,
you
want
to
mask
all
not
a
number
values
out
of
the
data
frame,
because
you
don't
care
about
them.
You
can
trivially
mask
them
to
say,
treat
them
zero.
Instead,
okay,.
A
So the question becomes: if we have more stuff that ships in the Analysis namespace, right, what package would that stuff go into? For example, when you factor out the Arrow dependency, you kind of have to ship this in a secondary package, right — otherwise it doesn't make much of a difference. What would you name the other one, right? I mean.
A
Well, I think that was one of the things we talked about at some point, and one of the concerns was that the API is not baked enough to be in the System namespace. But at this point, for the data piece, I don't think it matters one way or the other, because we have both names in use today, and to users they're basically interchangeable. So I don't have strong opinions on whether we put it in System or in Microsoft; it's just a toss-up.
A
You say that, but I think that's actually not true for people who use Microsoft.* packages. I mean, there's a long history for ASP.NET, for EF, for stuff that we have shipped — it should be clear that they use that naming convention; I think in the ASP.NET world people actually feel used to them.
A
There's a difference between the term we use to describe the thing and whether people understand the name that we use. Like, Microsoft.Data versus System.Data is in that vicinity, right — nobody uses the term "out-of-band", right; but it's basically the difference between "do you have to reference something externally, or does it auto-reference?"
A
Yeah, no — I mean, I'm not saying it's an exact thing, right. All I'm saying is that when you have both a System.X and a Microsoft.X, the Microsoft.X was the one that wasn't shipping in-box. Like, keep in mind: the original naming convention for System was literally "what is in-box", and there are some exceptions to the rule, like the Microsoft.Win32 registry stuff, but it's a very small set, and they did not make it consistent.
A
I think in-box makes zero difference moving forward anyway — I mean, the way Core works, in-box or out-of-box is basically... if it's a package, the package boundary makes no difference. That also said, I don't have strong opinions one way or the other. I think there is some minor preference for Microsoft.Data, because it sounds more modern than System.Data; but practically speaking, I think either one will be fine. That's why I'd basically leave it to the team to decide whether they want to ship in Microsoft.Data or in System.Data.
A
Please remember that the question was just: should it just be Microsoft.Data.Analysis, or should we have something like Microsoft.Data.Analysis.DataFrame? And I think the conclusion was that we don't do that, because whatever we consider core-ish will go into Microsoft.Data.Analysis, and then everything that is on top — like, for example, the Arrow support — would just go into something else, right: so Microsoft.Data.Analysis.Arrow, for now.
B
So then we can just scroll to the bottom — all the operators are the same thing, just different operations. So Add: it's the same — the operator call actually calls the Add method. The only thing is, the first overload takes a read-only list of values. Again, I doubt many people would use this, but if you had a DataFrame only of integers, then you could pass in values which would be different integers for different columns — so the first column would get values[0] added to it, and so on.
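The two spellings being reviewed, plus the per-column list overload — a sketch against the surface as described, with a stand-in file name:

    using System.Collections.Generic;
    using Microsoft.Data.Analysis;

    DataFrame df = DataFrame.LoadCsv("data.csv");

    DataFrame viaOperator = df + 1;    // op_Addition forwards to Add underneath
    DataFrame viaMethod = df.Add(1);   // same behavior, friendly name

    // Per-column values: column 0 gets 0 added, column 1 gets 10, column 2 gets 100.
    DataFrame perColumn = df.Add(new List<int> { 0, 10, 100 });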
E
One scenario that I think about in my head where I would use this: if I do have a bunch of floating points — all columns are floating points — and I want to scale the whole thing, like, hey, they're all too big and I want to scale them by something, I would use, like, multiply, right. But potentially, I might know that I want to scale this column by 5.1 and then this column by 0.2.
J
But so my question is: for the overload that only takes a T, you have the same problem — all entries in all columns have to be of type T, and that's probably rare. So why not just expose this on the column type, because there you know that everything in the column is of that type T; and then users who happen to be in a scenario where they have everything as T can just write the foreach: for every column, do this single operation, right. So I'd also remove.
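What that per-column foreach might look like — scaling each column by its own factor instead of one DataFrame-wide value. The column names and factors are assumed:

    using System.Collections.Generic;
    using Microsoft.Data.Analysis;

    DataFrame df = DataFrame.LoadCsv("trips.csv");

    var factors = new Dictionary<string, float>
    {
        ["ActiveVehicles"] = 5.1f,
        ["Trips"] = 0.2f,
    };

    foreach (DataFrameColumn column in df.Columns)
    {
        if (factors.TryGetValue(column.Name, out float factor))
        {
            // Multiply returns a new, strongly typed column.
            DataFrameColumn scaled = column.Multiply(factor);
            // use 'scaled' downstream; replacing it in the DataFrame is a separate step
        }
    }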
J
All of the primitive types are convertible to each other — I mean, you can convert an int to a float; C# does that for you implicitly, so I don't see the problem. Can all of them, though — can you go from a decimal to...? Well, no; but you can explicitly convert, and then you know that you're getting a possible precision loss.
J
...if that's the only column in the data frame — well, I think I also see a reason why you might want to scale everything by something, yes, or even add or anything else; but I don't think they're common enough that users are going to necessarily need these in v1, versus just iterating the columns themselves. And if we do get enough feedback saying "hey, this is a common thing", then maybe we can go and look at it.
J
But even if it's element-wise, you're saying: if one of your columns is of type string, this is always going to fail; and if one of your columns is of type int and you're multiplying it by 0.1, now you're converting it to float, right — and you're getting hidden semantics, because one of your columns happened to be int and the other ones were float, and now you've messed things up. But the compiler does the same thing, though — if you multiply an int with a float, you get a float back, right.
J
If you scroll to that — I don't think there's a good way to handle that one. So for that one in particular, you could constrain that the thing you're multiplying by matches, by doing a type check; but that's the problem you have with non-generic types in general. That's no different from, for example, a non-generic list: because it's non-generic, it throws if you try and add something that isn't the actual base type; or, for the generic constrained version, I will...
J
Oh yeah, I was just going to say — I was just saying that it should be explicit. That way users know they're doing a mass conversion of data — and they should, because you're not just converting one piece of data; you're converting all the data in that column. Right — but none of them are in-place, though; for example, you can't convert an int array to a float array implicitly.
B
But — okay, so there are at least two different things there. In C#, if I say int plus long, I get a long back, right, but there's no conversion... like, if I said var something = int + long, I know that the var is actually a long, correct? I'm not explicitly casting it, and this is really the same implicit conversion.
J
But then there's also the case where you're taking int or long and going to float, which — well, it's implicit, but unlike other implicit conversions it's lossy, which means it's not a safe conversion to do by default. And I would say, if C# were to do it again, it should not have copied the C behavior for that, because it causes loss of data; it's, I think, the only case where C#...
J
...requires a conversion — which is probably the safer option, and the saner option when you're working with user-defined types like this: just say the user has to explicitly insert the cast here, so they know that they're transforming the data type from T to U. — Okay, I understand what you're saying, and I'm just weighing that trade-off against not having the functionality at all.
B
There's a bigger difference, though. Like, if I say DataFrame-of-ints plus 5.5, I get a complete copy — the initial DataFrame is still there. The way you want it, you would have to go to this DataFrame, take the column, somehow convert the column of ints, let's say, to a column of floats, and then add 5.5 to it.
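The explicit path being described, sketched out — convert the int column to float by hand, then add. The column and file names are assumed:

    using Microsoft.Data.Analysis;

    DataFrame df = DataFrame.LoadCsv("data.csv");
    var ints = (PrimitiveDataFrameColumn<int>)df["Count"];

    // Explicit, visible conversion: the potential precision change lives in your code.
    var floats = new PrimitiveDataFrameColumn<float>("Count", ints.Length);
    for (long i = 0; i < ints.Length; i++)
        floats[i] = ints[i];  // int? -> float? widens implicitly per element

    PrimitiveDataFrameColumn<float> result = floats.Add(5.5f);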
J
That's also the difference between a high-level and a low-level API. A lower-level API — like what's exposed in a framework — should generally be explicit; and then you should have higher-level things, like an Excel spreadsheet, which use it internally and do implicit conversions to make it easy to use. So.
B
Eventually, yes. But, like, if I do the Add that takes a T value — in this case it would throw; but in the operator case I could convert to float if it's an int column. Good — and you were...
J
...which the operators return, right — which is also confusing, because operators and the friendly overloads are supposed to have the same behavior; that's why they're the friendly overloads, for languages that can't support operator overloading. — Maybe I don't understand; you're saying the operator...? — So the Framework Design Guidelines say that if you expose an operator, you should also expose a friendly version of that operator; for example, if you expose op_Addition, you should also expose a method named...
J
...Add. Those two methods should have the same behavior, because the friendly overload is for languages that don't support operator overloading; and so if these two don't have the same behavior, you're also adding another logical step of "this doesn't make sense" for other languages that would expect them to behave the same. Okay.
J
Yeah — and, to reiterate, I don't like the operators because of the implicit conversion, the loss of data, and the confusing semantics. I think that should be very explicit to the user, because they should be aware that they're changing the behavior, the semantics, and the typing. And for the same reason — like, C# allows you to convert scalar operations, but it doesn't allow you to convert arrays of data between two types; there's no implicit conversion. So this...
J
Column, sure — but why does the notebook have to be a one-to-one mapping over this? Why can't the notebook have an expression evaluator that does provide friendly, notebook-like syntax, right? There's no reason why the notebook can't be a high-level API that provides a friendlier interactive environment. Okay.
B
Maybe I should have said: the people who'd use this — well, I assume; well, this is my perspective — the people who'd use this, I don't think they would want that low-level control over the types that they were adding, and the return type, and things like that. That was my perspective of what the API looks like; that's why it doesn't have the stuff that you said.
J
You can always build a loosely-typed API over a strongly-typed API; it's very hard to build a strongly-typed API over a loosely-typed one. And I'm tired of people filing bugs of "I have an integer that's over two to the power of 23, and I've converted it to float — why isn't it equal to the same value?" We just had a TimeSpan bug because of that, with a long.
D
The other thing is: if I change my data — like, if I go back to the CSV file and just put ".0" at the end of all of my entries in a given column — my code might blow up now, because everything is now a primitive data column of float instead of a primitive data column of int, right.
D
Not necessarily — because if I have to get the column, cast it to a primitive data column of int, and then call Add(1) — now I need to go find that call site and change it to a primitive data column of float and then call Add(1.0), right. It's something that I won't actually notice until runtime, when my code blows up, right.
J
You could just have a Convert method which says: whatever the data type is, convert to this new data type — and if it's already that data type, it's a no-op. It's no different from, for example, all of our vector APIs, where we can say: you have a Vector of T and I want to treat it as a new Vector of U, even if it's already U. So Primitive...
J
Would
be
convert
to
new
primitive
data,
column
of
type
T,
and
so
it
would
convert
from
the
existing
T
to
the
new
you
type.
And
if
it's
already,
that
type
of
to
know,
which
is
how
vector
of
T,
for
example,
works
and
makes
it.
It
makes
it
very
clear
and
straightforward
what's
happening,
and
it
means
that
you
can
always
make
sure
that
it
will
work
regardless
of
what
type
herbs
interpret
it.
As.
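A sketch of that Convert idea, as a hypothetical extension — a no-op when the column already has the target type, an explicit per-element conversion otherwise; nothing here is a shipped API:

    using Microsoft.Data.Analysis;

    public static class ConvertExtensions
    {
        // Hypothetical: convert a primitive column from T to U, visibly and explicitly.
        public static PrimitiveDataFrameColumn<U> Convert<T, U>(this PrimitiveDataFrameColumn<T> column)
            where T : unmanaged
            where U : unmanaged
        {
            // Already the target type: hand the same column back, no work done.
            if (column is PrimitiveDataFrameColumn<U> already)
                return already;

            var result = new PrimitiveDataFrameColumn<U>(column.Name, column.Length);
            for (long i = 0; i < column.Length; i++)
            {
                T? value = column[i];
                result[i] = value is null
                    ? (U?)null
                    : (U)System.Convert.ChangeType(value.Value, typeof(U));
            }
            return result;
        }
    }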
J
And you're going to hit problems with people with custom types that don't want implicit conversions, or can't have implicit conversions. It seems like a good idea for primitives, maybe; but once you look at the big picture — custom data types and everything else — I just don't think it fleshes out.