From YouTube: GitHub Quick Reviews
B
Yeah, so Stephen's probably also the right one to be on for this one, since he owns it. But basically, Fred opened an issue asking for Interlocked.CompareExchange to be expanded to cover the primitive types, as there are various usages needed for that in the compiler and various other locations. It just naturally simplifies things when you actually have a byte that you're working with.
B
I think this is one of the many sets of parameter names where we'd go and fix them up, if we were willing to take that break.
D
A quick question: I'm on mobile right now, so I can't really see the screen, but do we have an overload that takes enum?
C
This at least lets you use Unsafe.As regardless of what the underlying type of your enum is, because right now you're kind of out of luck if your enum is based on a byte. But does this fully round it out? I don't remember: we currently have int, uint, long, and ulong, we have at least IntPtr, and there's another issue for UIntPtr and nuint. Is that right?
B
I don't think Fred had a need for that, but Add and And don't have those. I'm not a hundred percent positive that all platforms support Add and And for the small types, though I know CompareExchange they do.
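The concern above, that Add and And may not exist natively for the small types while CompareExchange does, is less of a blocker than it sounds, because an atomic add can be synthesized from a compare-exchange retry loop. A minimal Python model of that reduction (the Cell type and the names are hypothetical, not the .NET API):

```python
class Cell:
    """Minimal stand-in for a memory location supporting compare-exchange."""
    def __init__(self, value):
        self.value = value

    def compare_exchange(self, expected, new):
        """Atomically: if value == expected, store new. Returns the old value."""
        old = self.value
        if old == expected:
            self.value = new
        return old

def interlocked_add(cell, amount):
    """Build an atomic add from compare-exchange: retry until no other
    writer raced in between the read and the swap."""
    while True:
        old = cell.value
        new = (old + amount) & 0xFF  # model byte-width wraparound
        if cell.compare_exchange(old, new) == old:
            return new

c = Cell(250)
assert interlocked_add(c, 10) == 4  # wraps around like a byte
```

A real implementation relies on the hardware making the compare-exchange genuinely atomic; the loop simply retries whenever another writer wins the race.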
C
Logically, I group CompareExchange and Exchange together in my mind, and then everything else (the Increment, Decrement, And, Or) is sort of off in a separate grouping, at least in my mind. I'd be inclined to ensure that we have parity between CompareExchange and Exchange, even if we don't for the rest, assuming it's supportable, and supportable without doing our own custom loops, separate from what we decide to do for the other helpers.
B
Yeah, CompareExchange and Exchange are generally paired: when one's supported in hardware, both are. And Exchange and CompareExchange are the only two that support double and single, and arbitrary T constrained to class, today, so it would make sense to just fully flesh them out. Yeah.
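The parity between Exchange and CompareExchange described here also follows from the fact that an unconditional exchange is expressible as a compare-exchange loop. A sketch in the same hypothetical Python model (names illustrative, not the BCL surface):

```python
class Cell:
    """Minimal stand-in for a memory location with compare-exchange."""
    def __init__(self, value):
        self.value = value

    def compare_exchange(self, expected, new):
        """Atomically: if value == expected, store new. Returns the old value."""
        old = self.value
        if old == expected:
            self.value = new
        return old

def exchange(cell, new):
    """Unconditional swap expressed as a compare-exchange retry loop,
    which is why hardware that supports one can express the other."""
    while True:
        old = cell.value
        if cell.compare_exchange(old, new) == old:
            return old

c = Cell("old")
assert exchange(c, "new") == "old"
assert c.value == "new"
```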
C
I think parity between those two is a good thing. So if we add these here, which makes sense, we should add the corresponding byte, sbyte, short, and ushort overloads to Exchange, and, for the next issue I guess, nint and nuint to Exchange as well.
C
There was a question about which compiler this is referring to. I think, from Fred's request, we're talking about Roslyn; Fred works on Roslyn.
B
So I guess, if we have these, we have the ability for full parity across the primitive types, except for bool and char. But those could fall in the same bucket as enum, where we could say: if you actually have a bool or char case, just use Unsafe.As.
B
Right, Tanner, yeah: if we want them accelerated, they will require JIT work. Although, since these ones are already accelerated for int and the other types, it should, knock on wood, be relatively simple for us to go and extend it to the other types.
C
And so in some situations I'm fine with us adding APIs with sort of the naive implementation and later optimizing them. But in the case where someone might start switching from one of the existing ones to the new ones because it's quote-unquote better, I want to make sure that when we add them, we add them the right way.
B
We don't have hardware intrinsics for these ones, because this is considered baseline functionality in the CPU. There's no ISA check or ready-to-run flags that need to be set, or anything like that. This is just: CPUs provide the support because it's considered functionally required. Yeah, fair enough. And then the next one is the same thing, except covering nint and nuint.
B
Right, and ideally, once the language support goes in for nint versus IntPtr, we would go and switch to using the keywords in various places, just to simplify things.
B
So the basic premise is: we introduced 64-bit computers over 20 years ago, and a lot of other languages, from C++ to Rust, even Java and Go, have support for 128-bit integer types. These can often be partially accelerated by the underlying hardware, and, most importantly, this is a type that the community cannot provide themselves.
B
The reasoning being that Int128 is considered special by the ABI, the application binary interface, which defines how two methods communicate with each other. And so an end user, while they could provide their own type named Int128, and they could provide all the operations and all the features, they couldn't make it match that ABI.
C
Which ABI are you talking about? This is just the same one that C and C++ use, is that what you mean? Like the platform spec? Okay, right.
B
The underlying calling convention used by each platform. So normally there's a Windows ABI for 32-bit, 64-bit, and Arm64, and then there's a Unix ABI which covers the same platforms but is used across both Linux and macOS. Sometimes macOS differs slightly, but they're generally in line, and that's the System V specification.
B
But previously we weren't able to expose this type in the BCL, even though users had been repeatedly asking for it over the years, and that's because we weren't able to define checked operators. And so if we had exposed it, and the language had come along later and said "hey, we want to support this,"
B
it would have been a breaking change, in the same way that switching IntPtr to nint was a breaking change, and we weren't really willing to put users through that pain.
B
Now we can expose Int128 and all the support, except for literals and constant folding, largely speaking, and it will be basically on the same level as when we exposed Half, which is that
B
we can expose it in the BCL, and the language can support it at any time in the future, when they deem it appropriate, with only minimal breaking changes, largely around edge cases and operator resolution for casting and things like that, since built-in language operators have a slightly different precedence for conversions.
B
But the premise is basically: this entire API surface of Int128 mirrors what Int64, Int32, etc. do, and the same for UInt128 with UInt64, UInt32, etc. The only notable differences are that there is a constructor that takes the two halves of the value, and that there is a set of implicit and explicit conversion operators, which match the normal safety rules: implicit where it's lossless, and disallowed otherwise.
B
The only notable difference in that definition is that the conversions to the floating-point types I marked as explicit. That's inconsistent with how Int64 works, but that operation is not safe, it is not lossless, and particularly with Int128 you're going to see more and more values where the truncation becomes extreme, since it's anything over 2 to the power of 53.
B
So anything that uses more than 53 bits is potentially going to result in loss of data, and so I've marked them explicit, but that's not a strong preference. Otherwise, everything's identical to Int64, just taking the Int128 type.
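The 2^53 boundary mentioned above is just the width of a double's significand; integers beyond it can no longer round-trip. A quick numeric check:

```python
import struct

# A double has a 53-bit significand, so every integer up to 2**53 is exactly
# representable, but at 2**53 + 1 the round-trip through float starts losing data.
exact = 2**53
assert int(float(exact)) == exact          # still exact
assert int(float(exact + 1)) != exact + 1  # rounds back down: lossy

def to_float32(x):
    """Round-trip through an IEEE 754 binary32 to model a C# float."""
    return struct.unpack('f', struct.pack('f', float(x)))[0]

# For 32-bit float the same boundary is the 24-bit significand (2**24).
assert int(to_float32(2**24)) == 2**24
assert int(to_float32(2**24 + 1)) != 2**24 + 1
```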
D
When you say identical in terms of surface area, are you also accounting for methods like ToString, for instance, where I can have similar behaviors with, say, hex formatting, things like that?
D
Okay, because I'm also wondering: did we ever end up enlightening APIs like System.Convert with respect to Half?
B
I don't believe so. Okay.
B
It's likely recommended that users just do T.CreateChecked, CreateSaturating, or CreateTruncating instead, because that's simpler, it's going to already work, and then we don't have to touch Convert moving forward. Okay.
D
Yeah, because I think what you have here as far as API surface makes sense, especially since you're basing it on something that's already approved, right? What I'm more interested in is all of the ecosystem impact from this. Do we have scenarios, for instance, where people want to convert between, say, Guid and Int128? Do we have scenarios where people need to pass Int128 across a P/Invoke boundary, stuff like that?
B
So Int128 across the P/Invoke boundary does exist; that would implicitly work by the runtime correctly handling the ABI for the type. For things like Guid and such, I'm not convinced it's worth adding dedicated APIs.
B
I think users explicitly using things like Unsafe.As to do the bitwise cast, or even exposing an API on BitConverter called GuidToInt128Bits, mirroring SingleToInt32Bits for example, would make more sense. But I think those are also questions that we can answer and address over time as the need comes up.
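The bitwise cast being discussed is a pure reinterpretation of the same 16 bytes. A Python model of the hypothetical GuidToInt128Bits helper (the function names are illustrative, not a shipped API):

```python
import uuid

def guid_to_uint128_bits(g):
    """Reinterpret a GUID's 16 bytes as an unsigned 128-bit integer.
    Little-endian byte order is chosen here for illustration; a real
    API would pin down the layout it mirrors."""
    return int.from_bytes(g.bytes_le, 'little')

def uint128_bits_to_guid(n):
    """Inverse reinterpretation: 128 bits back to the same GUID."""
    return uuid.UUID(bytes_le=n.to_bytes(16, 'little'))

g = uuid.uuid4()
assert uint128_bits_to_guid(guid_to_uint128_bits(g)) == g  # lossless round-trip
```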
D
Yeah, I agree with that. I only raised it because I don't know if people were using Guid today as a surrogate for the lack of Int128. It was brainstorming more than anything else.
C
What about the relationship with the previous issues we were looking at? Can we rely on any or all of the platforms we target having the necessary primitives to do atomic CompareExchange and Exchange for
B
64-bit? Yes, at least on Windows: Windows requires 16-byte compare-exchange support, more accurately speaking. If we were to expose a centralized API, it would effectively be a compare-exchange of two pointers, or two native integers, and so on a 32-bit system you can only do atomic 8-byte exchanges, and on 64-bit you can do atomic 16-byte exchanges. So it's really a nuint upper/lower pair rather than an Int128.
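The "upper/lower pair of native integers" framing amounts to decomposing the 128-bit value into machine words before handing it to the 16-byte compare-exchange. A sketch of that decomposition:

```python
MASK64 = (1 << 64) - 1

def split128(value):
    """Decompose an unsigned 128-bit value into the (upper, lower) 64-bit
    halves that a 16-byte compare-exchange would actually operate on."""
    return (value >> 64) & MASK64, value & MASK64

def join128(upper, lower):
    """Recombine the two 64-bit halves into the 128-bit value."""
    return (upper << 64) | lower

v = 0x0123456789ABCDEF_FEDCBA9876543210
upper, lower = split128(v)
assert upper == 0x0123456789ABCDEF
assert lower == 0xFEDCBA9876543210
assert join128(upper, lower) == v
```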
D
According to MSDN, the alignment for a 128-bit compare-exchange has to be 16 bytes. Does that mean that the alignment here will be guaranteed to be 16 bytes for this type?
B
So that is an interesting thing where, due to the way the GC is set up today, the GC will not guarantee 16-byte alignment.
B
The packing, however, would correctly be 16 bytes, and the stack alignment would be 16 bytes, because we can respect it in both of those locations. It's only for the GC, where the GC does not allow or guarantee above 8-byte alignment today, that we wouldn't be able to have that guarantee. There's a couple of issues tracking a request around that for Maoni, and of course you can manually allocate an array using GC.AllocateUninitializedArray and pass in an alignment.
D
Sure, and I guess what I was getting at primarily is: if we do decide to expose, say, CompareExchange of Int128, will people be able to just new up a regular array, or use a regular field inside a type, or do they have to call NativeMemory first in order to do that correctly?
B
They would have to call NativeMemory, or the GC alloc with the alignment, to guarantee the 16-byte alignment required by a two-pointer compare-exchange.
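Allocators that only guarantee weaker alignment are typically handled by over-allocating and rounding the starting address up; that is the arithmetic hiding behind the "pass in an alignment" option. A sketch:

```python
def aligned_offset(address, alignment=16):
    """Round an address up to the next multiple of `alignment`.
    `alignment` must be a power of two for the mask trick to work."""
    return (address + alignment - 1) & ~(alignment - 1)

assert aligned_offset(0x1000) == 0x1000   # already 16-byte aligned
assert aligned_offset(0x1001) == 0x1010   # bumped to the next boundary
assert aligned_offset(0x100F) == 0x1010
```

An allocator wrapper would request `size + alignment - 1` bytes and start the usable region at the rounded-up address.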
B
I see. The same would be true for 8-byte on 32-bit, because not all platforms guarantee 8-byte alignment; there are some platforms where long, double, and others are actually 4-byte aligned. I believe Arm32 is one of them, and in that scenario you can't actually even rely on 8-byte alignment for some operations.
D
Yeah, but I thought that those processors already handled things like multi-threading correctly when things weren't 8-byte aligned.
B
It depends on the processor. Arm32 is actually pretty famous for not allowing unaligned reads, and that's one of the reasons why it's so critical that we use Unsafe.ReadUnaligned and WriteUnaligned when we know we have unaligned data.
D
Okay, yeah. So I don't want to rathole on the Interlocked stuff here, but I'm wondering if maybe we would be in a situation then where we would say: okay, we would never add an overload of CompareExchange that takes Int128, and if people want atomicity with these, then maybe that's better handled by the Atomic&lt;T&gt; which people have theorized.
B
Right, right, and the thing I was trying to get at is also that we would never expose it for Int128 anyways. We would at best expose an interlocked exchange over, say, a tuple pair of nuints. That way it is correct on 32-bit as well, because 32-bit can never do a 16-byte exchange.
A
That was said while I was trying to reboot, but this is the first time a primitive would effectively have custom operators. Are we saying that the compiler will never have them?
B
Even if it was a runtime primitive, it would end up being a lot like this. Let's say we went and said we're going to extend the IL specification such that add works for Int128. There'd be open questions there, like how you encode that type, because things like Int32 are specially encoded. But outside of that, it would be no worse than where we already are with Int32, because generic math is exposing all these operators on the types anyways.
B
They're just explicitly implemented, but they're still there, and the language just knows about them. IntPtr is both a runtime primitive and a language primitive, but it exposes user-defined operators, and the language knows to ignore those and just emit the more efficient IL opcode instead.
B
So if we ever added that support, the language could decide to say: hey, we know these are compatible and identical, so just give them the more efficient encoding.
A
Yeah, I'm just asking because I remember, from the nint discussion, that this was one of those sticky points where the language was not super happy with having both, right? As long as we are saying there's nothing wrong with having them, that seems fine. I just wonder whether we'd corner ourselves by saying: oh, we do them here now because the language doesn't have them yet, or something like that.
B
And I can always have the conversation with David Wrighton and Jan on whether extending the IL opcodes is appropriate. But even then, C# wouldn't be able to use it without simultaneous support, and they're not planning on adding support for Int128 anytime soon. Like Half, they want to wait until they see enough adoption in user libraries before they decide it's worth adding the additional language support.
D
So, along those lines: the constructors on Int128, should those take long instead of ulong, just because it is a two's complement representation? Or are we good with ulong?
B
You don't want to suddenly have to conceptualize that the lower bits, even though they don't have any sign impact, are somehow carrying sign information. Nor do you want to think: oh, I want to write 0x8000_0000_0000_0000, but now I have to insert an unchecked cast to long, because C# considers that a too-large unsigned value.
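The case for ulong halves is that sign lives only in the top bit of the upper half, so hex patterns stay literal. A Python model of composing a signed two's complement 128-bit value from unsigned halves (illustrative, not the proposed constructor itself):

```python
def int128_from_halves(upper, lower):
    """Combine two unsigned 64-bit halves into a signed (two's complement)
    128-bit value; only the top bit of `upper` carries sign information."""
    raw = (upper << 64) | lower
    return raw - (1 << 128) if raw >= (1 << 127) else raw

# Int128.MinValue written the natural hex way, no unchecked casts needed:
assert int128_from_halves(0x8000_0000_0000_0000, 0) == -(2**127)
# The lower half never contributes sign:
assert int128_from_halves(0, 0xFFFF_FFFF_FFFF_FFFF) == 2**64 - 1
```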
B
Yes. And in particular, I imagine that most users doing this will end up using hex; they likely won't be doing something like (0, negative 2.14 billion) or anything like that. They'd likely just use the hex pattern to improve readability and make sure you can actually understand what the bits are. To that extent, it might actually be good to order upper first.
B
That way you can read it from left to right, with just a comma separating the two values, as if you had written out a single 128-bit hex literal.
A
Right now the compiler has no support for this at all, right? If you want to have literals in code, you would have to cast them to Int128 or basically have a very long... yeah, I guess you would have to either cast or effectively call the constructor explicitly. Right, right, yeah. But as an example:
B
I was saying one, two, three.
B
If you were to write a single 128-bit integer that represents MinValue, you would write it like this. If you have the constructor and you want to write the same thing, and we order it upper first and then lower, then you would write it as follows.
A
Okay, I think the question is: the literal won't work in C#, right? Unless the compiler does work and says...
B
There won't be actual literal support. It will effectively be: the user has to write a constructor call, and then the JIT will turn it into a constant. Gotcha.
D
Okay, so it sounds like we're probably set on flipping those two parameters and doing upper before lower, because it reads better from the call site.
B
Yeah, I think that'd be the preference, and then otherwise everything's identical to Int64.
B
I have a callout that anything that differs between this proposal and the generic math proposal that's shipping is unintentional, and just a side effect of when this was written versus updates in between. So really the only new things are the constructors and the conversion operators, and, like I mentioned, the only difference on the conversion operators is that the conversions to the floating-point types are explicit, because it is technically lossy. That will be visibly lossy for anything above 2 to the power of 53 for double, and anything above 2 to the power of 24 for float.
D
Sure. I guess my final questions, then, are: how do you envision people getting the upper and lower components, by shifting, or by using accessors like .Upper or .Lower?
B
Realistically, they shouldn't need to access those, and if they do need to, then they can get the underlying bit representation and access it. Okay.
B
You should treat this exactly like a regular integer and just use the built-in operators for everything you need. Sounds good.
B
And then, covering the earlier ask of changing Convert or BitConverter or anything like that: that kind of falls into "we can look at BinaryReader and BitConverter, etc., as appropriate, as user requests come in." The most important thing is just getting the base type and support approved and out there, and then we can always version it over time as appropriate, because I doubt it needs to touch the entire BCL at once.
B
Right. So Int128 has hardware acceleration, and one of the most common things is that the underlying hardware, for things like 64-bit multiplication, actually has an option to return a 128-bit result.
B
Likewise, when you're doing things like big-integer arithmetic, one of the most common ways of detecting overflow is actually to do, for example, int plus int into a long, and then check whether the long result is greater than int.MaxValue. In native compilers, that pattern will actually be recognized and converted into an add and then an add-with-carry, rather than an actual compare-and-branch. In the same vein, Int128 allows the same hardware-accelerated scenarios to be supported for 64-bit values.
B
It allows you to correctly expose and support the larger multiplications needed. We actually emulate 128-bit types for various floating-point parsing algorithms, and Math exposes some operations that allow you to do big multiplication and get the upper and lower halves.
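The "get the upper and lower halves of a big multiplication" operation described above (what Math.BigMul does for 64-bit operands) can be emulated from narrower partial products, which is roughly what such emulation looks like. A sketch:

```python
MASK32 = (1 << 32) - 1

def bigmul64(a, b):
    """64x64 -> 128-bit unsigned multiply, returning the (high, low) 64-bit
    halves built only from 32-bit partial products, the way an emulation
    without a widening multiply instruction would compute it."""
    a_lo, a_hi = a & MASK32, a >> 32
    b_lo, b_hi = b & MASK32, b >> 32
    ll = a_lo * b_lo
    lh = a_lo * b_hi
    hl = a_hi * b_lo
    hh = a_hi * b_hi
    mid = (ll >> 32) + (lh & MASK32) + (hl & MASK32)
    low = (ll & MASK32) | ((mid & MASK32) << 32)
    high = hh + (lh >> 32) + (hl >> 32) + (mid >> 32)
    return high, low

a = b = 0xFFFFFFFFFFFFFFFF
high, low = bigmul64(a, b)
assert (high << 64) | low == a * b  # matches the exact 128-bit product
```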
D
Will this work with Vector&lt;T&gt;? Does Half work with Vector&lt;T&gt;? I don't remember right now.
B
For Half, there are vectorized instructions, but we don't expose them today; that's a broader topic. There is an API proposal, I don't think it's actually marked ready-for-review yet, to expose the appropriate Half support for Vector128.
B
There isn't actually hardware acceleration in the vectors for these. There isn't, for example, a Vector128 that just holds one Int128, nor a Vector256 that holds two of them.
D
That makes sense. I guess one scenario that I could imagine people having is wanting to take an Int128, turn it into a Vector128 of Int128, and then immediately turn that into a Vector128 of, say, byte or int or something. You had mentioned using Unsafe.As to do that in the past, but would that be appropriately accelerated?
B
Yeah, Unsafe.As could be used there. There isn't really an accelerated form for going between the vector registers and the integer registers, so it would go through memory, but that would still be basically as efficient as it could be.
D
I see, so there wouldn't be, like, an unsafe Create equivalent.
D
Yeah. I'm asking because I know that you've been paying a lot of attention over the past few years to the coding patterns that we expose to our customers in order to make this as efficient as possible, and I would just want to make sure that the API patterns we're telling them to use, Unsafe.As if you need it, would be as efficient as we could realistically make them.
B
Yeah. And so they would be responsible for getting the underlying bytes, or shifting out the underlying bytes, as appropriate. If you want to do reverse endianness, that's pretty simple: you just extract the two halves and then decide which one to write first. Even internally, the implementation is going to have "if big-endian, then Upper is the first field and Lower is the second; if it's little-endian, then Lower is the first field and Upper is the second." Right.
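The field-ordering rule described here, lower half first on little-endian and upper half first on big-endian, also makes reverse-endianness a matter of swapping and byte-reversing the halves. A quick model:

```python
def to_bytes128(value, byteorder):
    """Lay out a 128-bit value as its two 64-bit halves in memory order:
    little-endian stores the lower half first, big-endian the upper half."""
    lower, upper = value & ((1 << 64) - 1), value >> 64
    first, second = (lower, upper) if byteorder == 'little' else (upper, lower)
    return first.to_bytes(8, byteorder) + second.to_bytes(8, byteorder)

v = 0x0102030405060708_090A0B0C0D0E0F10
le = to_bytes128(v, 'little')
be = to_bytes128(v, 'big')
assert le == v.to_bytes(16, 'little')  # half ordering matches a flat layout
assert be == v.to_bytes(16, 'big')
assert bytes(reversed(le)) == be       # reversing endianness = swap + reverse halves
```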
D
Yeah. Because we didn't mark System.Half serializable, for instance, and we didn't build any special knowledge of it into older serializers like BinaryFormatter, but System.Text.Json natively understands it, I believe, because they special-case it as a primitive, and we would expect them to do the same here. Right?
B
We can definitely file tracking bugs for any of this stuff as appropriate, and we can also put up tiny API proposals for things like: do we want to support a direct reverse-endianness API, or anything like that. Though that one belongs in BinaryPrimitives rather than BitConverter.
B
Right. With the advent of user-defined checked operators, we can provide the same level of support that Half has, which means we are in the same zone of minimal breaking changes if and when the compiler decides to add support in the future.
B
And with the callout that any changes we make to generic math will be implicitly reflected in these interfaces and this API surface. I think one example: in the last API review we said that Sign should return int rather than the same type, and so this one here should actually just be int.
B
That's the callout I listed here: the generic INumber interface support should match any decisions made for the actual generic math API proposal and be consistent with the other types. Any inconsistencies are unintentional and just due to timing differences between when one was written and the other was updated or reviewed.
D
I mean, we're going to have to come back, probably in a few weeks or a few months, and maybe add some supporting APIs around it. We had talked about BinaryPrimitives, we had talked about Interlocked, System.Text.Json, and so on. But I don't think it affects the type shape itself as proposed here.
D
That's because you have your fan club.
B
It's just been a long, long requested thing: we've had various API issues tracking this for over six years now, and we've just had to keep saying "we can't do this yet."
D
And now you can, because of the features that you've been championing, so hooray. Is this something that ships in preview, or would this be a proper supported thing?
B
The discussion I had with Jeff was that it would be great for us to ship this alongside generic math in .NET 7, but it largely depends on timing and how much feedback comes in after Build for the generic math stuff. If not a lot of feedback comes back, then this is actually relatively simple to implement.
B
Overall it's fairly straightforward. Even JIT acceleration support is not that complicated, because we just extend what is currently the long-to-int decomposition to be, effectively, an Int128-to-long decomposition on 64-bit.
B
So it's not overly complex, and I don't expect tons of work there. What I foresee as the worst-case scenario is that we get it all implemented but not one hundred percent accelerated in the JIT, and then in .NET 8 we go and finish the acceleration, finish adding things like BinaryPrimitives as appropriate, etc. But the API surface wouldn't be impacted or anything; it's just perf improvements release to release.
D
Yeah, that's awesome. Remind me, Tanner, are there checked conversion operators as well? I don't see them on the screen right now.
B
That's a good point; I don't think I listed them here. There would be checked versions of the explicit operators. It is impossible to define them for implicit, because that's a misnomer: an implicit conversion should never overflow.
B
Okay, and so implicitly there would be checked versions for all of the explicit ones where it's possible to overflow.
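The distinction between the plain explicit operators and their checked counterparts is truncate versus trap. A Python model of the two behaviors for narrowing to a signed 64-bit value (function names are illustrative, not the operator syntax):

```python
INT64_MIN, INT64_MAX = -(2**63), 2**63 - 1

def unchecked_to_int64(value):
    """Plain explicit conversion: keep the low 64 bits, reinterpret as signed."""
    raw = value & ((1 << 64) - 1)
    return raw - (1 << 64) if raw > INT64_MAX else raw

def checked_to_int64(value):
    """Checked explicit conversion: raise on overflow instead of truncating."""
    if not (INT64_MIN <= value <= INT64_MAX):
        raise OverflowError(f"{value} does not fit in Int64")
    return value

assert unchecked_to_int64(2**64 + 5) == 5  # silently truncates
assert checked_to_int64(123) == 123
try:
    checked_to_int64(2**100)
except OverflowError:
    pass  # expected: a checked cast surfaces the overflow
```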
B
Yeah, the Buffer.ZeroMemory one Levi and I could cover, and then there are two of them that are language related that would be great to get off the backlog today: the ref-based Span constructor, and then the runtime feature flag for IntPtr.
C
So C# 11 is very likely going to be adding better analysis around passing refs around, better support for ref structs, and as part of that we can safely expose a new constructor on Span such that the compiler, sorry, actually it's ref fields in ref structs, such that the compiler can basically taint the span.
C
If you pass in a ref to something that is only on the stack, then the compiler will be able to say: okay, this span's lifetime is tied to that ref, so it can't allow it to escape. And in doing so, we can enable something that's actually pretty common, which is letting you create spans that represent a single item. If you look at our use of MemoryMarshal.CreateSpan inside the .NET runtime, about half of the call sites were passing in "comma one" along with the ref,
C
to basically say: I want to create a span over just this one item. This basically formalizes that. We now have these constructors internally, and this would be about exposing them publicly once that compiler feature is available, because the compiler feature involves a breaking change to the meaning of ref, essentially that it should be tracked, and so basically we will be adding this once that meaning is in place.
C
It would basically be like saying MemoryMarshal.CreateSpan(ref item, 1) today, okay, but with the additional tracking that prevents you from returning the span out of the method while it points to something that was stack-allocated.
C
Because there's no such thing as a ref field today, and you can't store a ref onto the heap, you can't store one of these things, at least not in safe code. So there's no tracking to be done. The moment you're allowed to store them, the compiler has to start caring about safety.
C
Right, and that's why we had pushed all that support off to MemoryMarshal: then it's on you.
C
I think the only other thing here that might change, potentially (I don't know if it would be in C#, or whether it would be in .NET 7 or beyond), is that there's a proposal that Jared had made, and then somebody else opened in csharplang, to allow using readonly ref instead of in in signatures for parameters, and we would choose to use readonly ref here instead of in,
C
if and when we could. The distinction is basically whether it would require you to write ref at the call site, which then prohibits accidentally passing in an rvalue, which using in won't prevent.
B
That is fine and good for many cases. For COM interop, even C and C++ use "GUID&amp;", which is an implicit byref of sorts; it works just like in does. But that's undesirable for some of our APIs where we are directly dealing with memory. So, for example, in Unsafe we explicitly decided not to use in, because users passing in rvalues, like the literal 5, is problematic and likely a bug in what they're doing.
B
A new ReadOnlySpan over an rvalue isn't technically incorrect; there's nothing fundamentally problematic about it, and it's potentially what the user wants. So for ReadOnlySpan it just comes down to whether we want users to be explicit there or not, and it's probably better to lean towards "users should be explicit" for this case, to make sure they're doing exactly what they want.
B
Right, there's nothing theoretically wrong; it's just that the user is potentially not doing what they wanted, and there's the consistency that Stephen mentioned, where you have to specify ref in both locations.
A
Okay, because my question was: if you declare it as readonly ref, is that different from writing in, because the metadata encoding is different? I was under the impression that the metadata encoding is exactly the same in both cases. No?
B
Right. And there's a proposal being championed by Jared to also add ref readonly, so that you can disallow rvalues where that's important, and also to cover the missing compat story, because changing from ref to in is currently source breaking, and binary breaking for virtuals, though that's not largely important for the BCL APIs.
C
Maybe this is because I know too much about how Span works, that it is a wrapper around a reference to something and a length, but I see that and I think it's passing in a copy, and I'm like: what is this doing? And you can still write in at the call site, which we do, but without the in it just looks confusing.
C
Anyhow, this is the best we can do right now, and if we could get a readonly ref, we could change the in to ref readonly. Rather, we can change it, and it should be non-breaking.
B
Yeah, so this one is as per the API review and LDM offline discussion, and then the summary that was given in the last generic math API review: the language opted to change over to make IntPtr and nint, as well as UIntPtr and nuint, consistent.
B
They're going to do that via a runtime feature flag, and so if you are, for example, using C# 5 and targeting .NET 7 or newer, you will see the change provided that the feature flag exists. So this is just suggesting we add a feature flag to cover that, to cover the language's need, and it's proposed to be "NumericIntPtr".
B
And as long as it's clear enough for the compiler and not conflicting, I think that's fine. We tossed a few ideas back and forth in Outlook, but there weren't any candidates that were a clear winner.
D
Yeah. We have APIs hanging off of Buffer right now that allow you to copy native memory; these APIs take pointers, source and destination, and the amount of memory that you want to move, specified in bytes. We don't have an API that lets you clear memory. The way you would normally do this is: new up a Span of byte, pass in the pointer and the length, and then call Clear. But Span right now is limited to int.MaxValue length, and there are scenarios, especially when you're working with large data sets, where, well,
D
You might actually have more than two gigs of data that you want to clear. Sorry, you have more than two gigs of data that you're handling. So this API would allow people to write their own memory management functions, their own, like, "my span type which is appropriate for my scenario", for instance, which might also become more commonplace as people introduce ref fields into their own types. This would be a helper method for that, basically for them to implement Clear.
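The workaround being described can be sketched like this. The chunked helper is hypothetical, not a BCL API; it only illustrates why the int.MaxValue limit on Span forces extra work today:

```csharp
using System;

unsafe class LargeClear
{
    // Hypothetical helper: zero byteLength bytes starting at ptr, chunked
    // so that each Span<byte> stays under the int.MaxValue element limit.
    static void ClearChunked(void* ptr, ulong byteLength)
    {
        byte* p = (byte*)ptr;
        while (byteLength > 0)
        {
            int chunk = (int)Math.Min(byteLength, (ulong)int.MaxValue);
            new Span<byte>(p, chunk).Clear();
            p += chunk;
            byteLength -= (ulong)chunk;
        }
    }
}
```

The proposed API would replace this loop with a single call that takes a native-sized length.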
D
B
Right, so the question that I had on this was mainly: should we be providing basically a ref T overload, because you can zero any memory safely by just, you know, assigning it default.
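The "assign default" point can be sketched in one line; this is just the safe, single-element form of zeroing being contrasted with the bulk API:

```csharp
// Assigning default zeroes any T through a ref, managed or unmanaged,
// with no unsafe code, one element at a time.
static void Zero<T>(ref T value) => value = default!;
```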
D
I don't think we tend to include ref T overloads outside of things like MemoryMarshal. If we did want to add one as a helper, that's probably where I would recommend putting it.
B
Would it be better to have this on Unsafe, you know, because we already have an InitBlock there? But that doesn't cover sizes up to nuint; it just covers sizes up to uint.
D
A
I think it would largely depend on whether we want to have logical overloads that wouldn't be requiring unsafe, because they're not dealing with pointers, but would still be unsafe. Because, I mean, I think putting them on Buffer would be fine if they already require people to use unsafe to call them. But if the APIs themselves don't require the unsafe keyword but are unsafe, then I don't think we should put it on Buffer. We should put it somewhere in, you know, the typical unsafe places, like MemoryMarshal.
D
Yeah, the thing about ref T is, you would probably still want to limit it to unmanaged types T, because you can't ever really have, again, like an array of more than two billion elements; that's just not something that makes sense from the runtime's perspective.
D
That's fine, it may have just been a false hit, yeah. So the only reason I could think of for maybe wanting to take a T is to say, like, well, I have my custom native buffer of 8 billion shorts, and the runtime can do the math to say, oh, that's actually 16 billion bytes. But you could always just do, you know, times sizeof(short) yourself.
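The caller-side math being described is trivial, which is the argument against the generic overload; a sketch:

```csharp
// Scale an element count to a byte count yourself, rather than having
// the clearing API take a T and do the multiplication for you.
ulong elementCount = 8_000_000_000;                      // 8 billion shorts
ulong byteLength = elementCount * (ulong)sizeof(short);  // 16 billion bytes
```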
B
A
The conclusion is, basically, we don't want to do it for managed types, because the arrays are not larger than Span anyway. Is that fair, or is it...
D
It's because Span of T already covers every scenario that you might have with managed types T. The only scenario this actually enables that would be difficult to do today is unmanaged types T in memory that is already pinned, because it's not being tracked by the GC.
D
B
It's just an efficient implementation. It's kind of like, you know, we put the NativeMemory APIs there for Alloc and such, because we wanted to have the guarantee that users could call Free accordingly. But that then gives a centralized place for saying: I'm working with native memory, and therefore I want to conceptualize this as basically the very few primitive memory operations that are exposed in C. I don't have a strong preference there; I was just suggesting it as an alternative.
A
D
B
Yeah, and Buffer is somewhat of an odd API today already, because, you know, it exposes BlockCopy, ByteLength, GetByte and SetByte, which only work on Array, and actually the only unsafe API it exposes is MemoryCopy. And that API isn't even an efficient memory copy; it forces you to take long rather than native integers, and it's got a weird API shape compared to what you traditionally want for a memcpy-like API.
A
Other question: I mean, the other overloads on Buffer don't take nuint today; they basically take Int64 variants, yeah. And everything on NativeMemory already takes a nuint, so it makes a bit more sense there.
A
B
A
Looks like the only thing we now need is support for the delete keyword, and then we have full parity with C++ as well.
B
D
A
D
A
B
That's at least, I think, all the API proposals I can think of that were impacting the language or that were in my specific area. And, like I said earlier, if anyone else has any others, feel free to call them out. I think Eric...
D
F
So in DI we have a ServiceCollection, which is the thing that you add your implementations to, right. You say, I want this one transient, this one singleton, whatever, and then you build your ServiceProvider from that ServiceCollection. And as part of, like, the new minimal APIs in ASP.NET, and then also kind of the clone of them in MAUI...
F
What we had to do is make this kind of wrapper adapter IServiceCollection thing that has a ServiceCollection inside of it that we add the stuff to, and then we make it read-only, and we only give out the wrapper collection. And so, anyway, the idea was: well, if we could just have a feature on the base ServiceCollection, MakeReadOnly, such that it would throw if you try to add more things to it...
F
After MakeReadOnly is called, we wouldn't have to do all this extra adapter stuff in all these other app models. So the ask here is to make a new method on ServiceCollection. I don't know if there's, like, a normal way of doing this, but, called MakeReadOnly, such that once you call it you can still enumerate the collection and things like that, but you can't add things to it anymore, or add or remove.
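The ask can be sketched roughly like this. MakeReadOnly was only a proposal at the time of this review, and IGreeter/ConsoleGreeter are illustrative stand-ins:

```csharp
// Sketch of the proposed behavior, not a settled API.
var services = new ServiceCollection();
services.AddSingleton<IGreeter, ConsoleGreeter>();

services.MakeReadOnly();

foreach (var descriptor in services) { /* enumeration still works */ }

// Any further mutation would throw:
// services.AddTransient<IGreeter, OtherGreeter>(); // InvalidOperationException
```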
A
A
F
A
F
It's like an extension method, depending on which DI you're actually using, and so, yeah, there is no "all right, I'm done adding, now I'm going to do things with it" moment. So if the Build was there, then yeah, you could just say: well, once you build it, it's read-only. But we don't have a method on it yet.
F
A
F
Right, exactly. Like, in ASP.NET today, you already get this behavior, because the ServiceCollection that they give you from their builder is one of these wrappers that, once you've called Build on your app builder, starts throwing if you start adding things to it. This would just move it lower down into dependency injection. Yeah, makes sense. So yeah, it really wouldn't break anybody, right; it wouldn't be a breaking change. It's just...
A
F
G
Yeah, so it's not exactly like a new API proposal, but...
G
So it's requesting allowing open generics when querying for generic attributes, from my earlier review, and there are some interesting scenarios; a community member asked about how it should work, whether we want to cover derived attributes, etc.
A
Is that the only API that would have to honor this? I mean, there's various ways you can ask for custom attributes: we have it on Attribute, there's some extension methods we have, I believe. I'm not sure how they're calling into each other, but, logically, you probably want the behavior that wherever you can ask for custom attributes of a type, you would support open generics, right?
G
Yeah, it applies to all the overloads where the attribute type is the parameter.
A
G
Here we go. So, like, in this case, if we expect it to try to throw...
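The scenario under discussion can be sketched like this. It assumes C# 11 generic attributes; whether GetCustomAttributes accepts the open generic definition is exactly what is being requested here, so treat the last line as the proposal, not current behavior:

```csharp
[AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
class TagAttribute<T> : Attribute { }

[Tag<int>]
[Tag<string>]
class Widget { }

// The ask: let the open generic TagAttribute<> match every constructed use.
object[] attrs = typeof(Widget).GetCustomAttributes(typeof(TagAttribute<>), inherit: false);
```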
A
Did everybody drop off? No, okay, there's more people on the call now. For a moment it looked like it was just me; I'm like, hold on, hold on. Anybody have context on this?
B
I think Jared wanted to be here for this one. I just pinged him, since he's talking in chat every now and then, but he's yet to respond to say whether he does or does not.
C
My only question is, I mean, I know it's covered in the alternate designs you were just talking about, but is that statement different for everything else that's exposed on GCMemoryInfo? Like, is pause time special because we have a faster way to get it, unlike all the 17,000 properties that are currently there? Okay, I'm just wondering why this one's special.
A
D
Well, I think what they're saying is that you would have to get the percentage of time spent in GC, and then call, like, System.Diagnostics.Process.GetCurrentProcess or something, get the total wall time of the process execution, and then multiply them out.
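The indirect computation being described can be sketched with real BCL APIs; the formula itself is an approximation for illustration, not an official recipe:

```csharp
using System;
using System.Diagnostics;

// Percentage of pause time reported by the GC, scaled against process wall time.
double pauseFraction = GC.GetGCMemoryInfo().PauseTimePercentage / 100.0;
TimeSpan wallTime = DateTime.UtcNow -
                    Process.GetCurrentProcess().StartTime.ToUniversalTime();
TimeSpan approxTimeInGC = wallTime * pauseFraction;
```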
A
B
D
B
I think it also comes down to how much of the data you want. If you just want one property, then it's going to be more expensive to use the memory info API; but if you need most of it, like if you're generally tracking all the telemetry, then it's going to be faster to use the memory info API.
A
Well, it's not just faster, right. I think it's also the fact that these things are related to each other and snapshotted, right. If you just have individual methods, you always get them individually snapshotted, but then you can't relate them in any meaningful way, right.
A
D
Diagnostics: would we want folks like Jeremy to be here for this? Do we have representatives from diagnostics here right now?
D
B
About parameter names and other types of forwarding.
A
Well, this wouldn't be forwarding, right. Forwarding we already know how to do. This is more, I think, without reading it, it seems like this is the general problem where they can't replace the type in place, so they emit a new type, and then they just want to indicate that it's basically, you know, a new version of the other one. Which I think wouldn't be actual forwarding at runtime; it would just be for framework code, I think, to find the latest version of that type. But there will be, I think, some questions we should ask.
B
So this one is basically that we've got various safe handles that exist in reflection, and it's basically just asking for a way to be able to get a raw IntPtr to and from those. That way you can more easily do various types of interop or performance-related work.
B
And it's calling out that this is, you know, an already internal API we use in various places, but it makes sense to expose it publicly too.
A
B
A
B
E
So, yeah, there's one thing that we keep coming up against in language design, which is: how do we add new features to the language in such a way that they can be consumed by the BCL, ASP.NET Core, and a lot of our customers, which have strong binary compatibility guarantees but have APIs that kind of work against the feature we're trying to add? Like, one of the most common patterns we have is this idea of...
E
We tend to want to add these new caller info items. So we recently added, like, CallerArgumentExpression, where, if you add an optional parameter, we will fill it in with the textual representation of the expression that went into another one. It's perfect for APIs like Debug.Assert, where we could essentially say, like...
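The Debug.Assert shape being described looks roughly like this. The attribute is the real C# 10 / .NET 6 CallerArgumentExpressionAttribute; MyDebug is an illustrative stand-in, not the actual Debug.Assert source:

```csharp
using System;
using System.Runtime.CompilerServices;

static class MyDebug
{
    public static void Assert(
        bool condition,
        [CallerArgumentExpression("condition")] string? expression = null)
    {
        if (!condition)
            throw new InvalidOperationException($"Assertion failed: {expression}");
    }
}

// MyDebug.Assert(items.Count > 0) would report "items.Count > 0" on failure.
```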
E
The problem, though, is we have this weird thing called compatibility in C#, and the way our compat rules work out is that we will always end up preferring the older version of the API, the one without the optional parameters. Like, you can add this CallerArgumentExpression all you want...
E
To Debug.Assert, and we're literally never going to call it, because we will always call the one without the optional parameter first. Because that's how the compatibility shakes out, and that was how optional parameters were designed. And there are a number of other features like this. Like, one thing that came up a lot when we were doing all of the new lambda inference rules and overload tweaks that we did in C# 10: we found a whole bunch of APIs in ASP.NET Core where they're like, man...
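The compat trap being described can be shown in miniature. Both overloads exist for binary compatibility, and C# overload resolution prefers the candidate that needs no default-argument fill-in, so the new overload is never chosen:

```csharp
static class Api
{
    public static void Assert(bool condition) { }                      // legacy

    public static void Assert(bool condition, string? expr = null) { } // new

    static void Caller() => Assert(1 > 0); // binds to Assert(bool): legacy wins
}
```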
E
We wish we could use the new rules here, but we can't, because you're always going to fall back into this old legacy trap that we have. And we've been discussing this for a long time: how do we rationalize the fact that we have a lot of these APIs out there that we need to keep for compatibility reasons, but they're making it very hard for us to both design and consume new language features? In terms of the design phase, we are constantly faced in language design with, okay...
E
We could take this new little rule and make overload resolution work like this, and then we go and look at ASP.NET Core or the runtime and we're like: oh dear lord, they have an API pattern like that; that will defeat everything we're trying to do here. We have to make a little tweak to overload resolution so that we don't fall into some compat trap with, like, the runtime or ASP.NET Core. And on kind of the opposite end...
A
E
We added this feature that we can literally never use in all the places we want to use it. And so we've been struggling with this for a long time: how do we cross this boundary? And this is not, like, a one-time thing. This was brought up when we did CallerArgumentExpression, and we were like, well, that's really unfortunate, oh well, and we moved on. And then we have a new one, a new caller info: CallerIdentity.
E
Where, in order to make our AOT scenarios better, we really need to be able to know what type or member was the thing that called this method. It's very important, like, the DI scenario, oh god, I'm forgetting it, how does ASP.NET Core build all their dependencies, dependency injection.
E
It's important for dependency injection APIs and things like that, but it keeps falling into this trap of: sure, go add the API, we're literally never going to call it. So one thing we've talked about, I kind of had this idea and it's been circulating around, is adding a way to mark APIs as binary-compat-only. Basically saying that we can really make progress in this area if we just said: hey, we know we have this API that we used to call that we just don't want...
E
That is interfering with our ability to make forward progress, and we would like it to exist just for binary upgrade scenarios. Like, when you compile from source, we just don't want this to appear at all. We would rather just take this API and exclude it, essentially make it unusable in source code. But the compiler will still process it and put it in the binary, so that anyone who is in an upgrade scenario, where you, say, compiled for .NET 5 and then started running on .NET 6, your code continues to function just fine.
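A hypothetical sketch of the idea. BinaryCompatOnlyAttribute does not exist; the name and semantics here are illustrative only, taken from the discussion:

```csharp
using System;
using System.Runtime.CompilerServices;

static class Assertions
{
    [Obsolete("Use the CallerArgumentExpression overload.")]
    [BinaryCompatOnly] // hypothetical: hidden from source, kept in metadata
    public static void IsTrue(bool condition) { }

    public static void IsTrue(
        bool condition,
        [CallerArgumentExpression("condition")] string? expr = null) { }
}

// New compilations could only see and bind to the second overload, while
// already-compiled callers of the first keep resolving at runtime.
```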
E
The idea is that it doesn't exist. So it's not only overload resolution; it's any kind of member lookup. That means you hide it from overload resolution and you hide it from method group conversion: if you attempted to take a method and put it into, say, a delegate, we would take that out of the candidate set that we are looking at. Now, in terms of how this actually works under the hood...
E
We don't fully know yet, because we haven't fully designed this. So this is still more in the idea phase, and I think a lot of what we want to talk about here is at a high level. We kind of understand how this is going to work: in source, it just kind of won't exist. There's a lot of different tweaks we can make on how we pull it out of source, but at a high level it just won't exist.
E
No, not at this point. We would probably start with just methods at the moment, because those are the ones that we most frequently run up against. I think it could someday apply to types, like, to use dirty words, like SecureString. It is an approach to solving that problem, where we say that there is, like, one more...
E
You know, if we think that part of the obsoletion story is that you can upgrade, but we will kill your code when it tries to rebuild on that target platform, and then in the next one we delete the type. If that is something that you all thought was valuable as part of an obsoletion-then-deletion...
E
A
E
Would do that, that is effectively what this is doing from the perspective of C#. Now, if you think about other languages, this is the other thing: with other languages, you could do that today, and that would be a completely acceptable way of fixing your thing. However, that causes problems with, say, F#; F# doesn't necessarily recognize all of the features that we add to C# immediately as we do it.
E
The suggestion is reusing the Obsolete API with a binary-compat-only marker; that is something F# could then say, like, hey, that's valuable, C# got a lot of miles out of that, let's start using it too. VB, what have you. So that's kind of the key difference between doing this as a trick with reference assemblies and a trick with marking the API in source: it's better supported across a bunch of languages.
A
E
The compiler would, well, probably the simple way is we would just kill it. Because the problem is, once you get into things like "oh, use this if that doesn't apply", we have a whole lot of "if this doesn't apply, try something else" logic in the compiler. It's called overload resolution. We do some really fun stuff.
E
The power is yours. Now, what I really want to talk about in this review, because, like I said, we don't have the language feature implemented yet, is really two things: understanding how the shape of this change would work if we did it. Like here, you'll see that I had the Obsolete API and I put, like, a tick for binary-compat-only. In an earlier version of this proposal, I believe, I put as an alternate that you just have a completely different attribute.
E
It's like, this is only worth doing if you all think it's going to enable you, if you all would be comfortable using it to move to new language features, to make your APIs better, and to have a better upgrade experience. Because if you all are not comfortable with it, then there's no point in doing the work.
E
I think that'd be one we could, well, I'd have to think about that, because that wasn't in the idea I originally had. But I think we could; if you all said this is only valuable if you can change return types, we could sit down and say: okay, let's think about how we could change return types as a part of the design.
A
But the problem is still, you would have to be able to define them, right. You have to define basically both members, and then one of them is marked with the attribute, and the compiler would have to also support that. It's not just the consumption; it's also the definition that would have to be supported, right.
E
That's correct, but, I mean, if you think about today, we have to support that. Like, you can define two members with different types; we have to parse it, give you guys decent errors, and still produce a semantic model and give symbol information. It's more about just saying in which cases it would not be an error. Now, I'm...
E
A
I mean, my personal preference would be that it's a separate attribute, purely because we may want to have different targets: Obsolete applies to, I think, everything effectively, and for this one, as you said, you may want to start with, you can't put it on types because it would be weird, or you would just say you can only do it on members, or whatever, right.
A
Yeah, seems like overhead. Like your particular example here, where it's just, you know, pretty obvious: oh yeah, we have these two things, the other one just gives us more information, so please call the other one instead. There really isn't a reason why we need to write, you know, a page on why this is obsoleted, right. It seems self-explanatory for the most part, and I don't think we would use it in cases where the new behavior wouldn't be a superset, right.
A
B
Yeah, and while I think there are some interesting cases where we could make changes that are currently exclusively binary breaking today, like changing return types, and make it so that they become only a source breaking change, we could effectively make the decision: this is better for the ecosystem, so we want to break users who recompile. That's one interesting thing. But I think even beyond that, just like we have APIs in String...
B
We have APIs throughout the BCL where, like, even when we eventually get params of span, for example, even if the language didn't go and say, hey, the span version is better than the array version, we would be able to make that decision ourselves at that point. And so it ties things less to the language in terms of betterness; the framework is now able to make that decision itself when we think it's better.
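The params example can be sketched like this. Note that params ReadOnlySpan&lt;T&gt; was still a future language feature at the time of this discussion, so the pairing is illustrative:

```csharp
using System;

static class Text
{
    public static string Join(params string[] values) =>
        string.Join(",", values);

    public static string Join(params ReadOnlySpan<string> values) =>
        string.Join(",", values.ToArray());
}

// With a binary-compat-only style marker on the array overload, the library
// could steer all new compilations to the span overload without breaking
// already-compiled callers.
```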
A
Right, I think the one thing that Jared said, which I'm not sure will actually fly in practice, is the idea that it's less impact for other languages, right. Because I think the problem for F# will be, if they want to support one of those scenarios, let's say the CallerIdentityAttribute case, for example, then they will start processing the binary-compat-only thing. Effectively, if they do the same semantics...
A
That C# does, which is, you know, it's always excluded from name lookups, then, well, now they're basically putting themselves on the hook to support all the features where we did this particular trick for C#, right. So I think it's going to be a binary thing, right: once you've adopted it, you have to adopt all cases of it. There is no piecemealing.
E
Yes, you're correct in that respect: they would have to make the decision to say, any of these features which start to impact it, we have to do it now. One thing to keep in mind: if you do start changing return types, if you want the one where you're like, oh, we want to obsolete on return types, that will screw with other languages, period. Because, while at an IL level, yes, you can differ signatures by return type only, I would not...
E
I do not believe other languages have fantastic support for that, in terms of which one they would pick. How do they deal with that? Do they error? Do they pick one? I honestly don't even know what C# does offhand; I don't know if we mark it as an error or pick the first, it's probably one of those two. But that'd be one to keep in mind. Like I said, though, I'm happy to sit down with people who are interested in this and say, you know...
E
Maybe this would be valuable. I'm happy to sit in a room a few times and walk through what the options would be, and maybe try to find a few. Like, I put the Debug.Assert one down here; there's a couple more. If you click through on the linked proposal for CallerIdentity, it's the one that's highlighted right there.
E
There are a few more APIs you can see where it's really hard to see what a path forward is unless we have a feature like this. Like I said, I'm happy to sit down with a few people on the BCL side, walk them through which directions we could take the feature in and what the implications would be, and then we could make a decision: this is valuable, we should go try to pursue it; or, we can't see ourselves using this...
E
You know, punt, move on. One thing I want to be clear on: no matter what we do, if we add this, there's always going to be an edge. You can do a whole lot of things to say that source which reasonably called this method before will call that method after we apply this attribute and you upgrade, but there will always be some weird edge case that comes along. Like, someone could do...
E
Some really weird overload resolution trick. Like, if you all want to, for instance, convert some span API's return from int to uint, I'm sure I can sit down and pretty quickly design some cases, you know, just find the intersection of types, the differences between what each can convert to, and then just have someone pass a lambda to an overload with that, and you have a break. So there will always be a small friction area that you will be creating.
E
If you did this, I think you can make it extremely manageable, particularly on method parameters, but it won't be perfect. You would be saying: okay, there is a small subset of customers that could get broken by this. And again, I don't have a great intuition whether that is within your comfort zone or not, whether you think the wins are worth it or not. I mean, I understand you guys do take deliberate breaks every now and then, and I just don't know, I'm just...
A
I mean, it usually is a function of friction versus cost, right. If the positive for us is mostly in aesthetics, because we don't like how things were named, we just basically get over ourselves and don't cause the friction. If it performs much better, or most users get just fundamentally better behavior, then I think that's something where we would generally at least entertain it. But then it also becomes a function of how much friction it causes, right?
A
I think I'm personally more on the side of: if it's source breaking only, I'm much more comfortable with it. If it's binary breaking, then it's usually like, we can't reason about it, because the universe of API consumption is too large. But source breaking, you know, is manageable. And the way I think about this feature, maybe that's wrong, is that you basically say there's a way, an attribute, to effectively state: hide it from the public API. I mean, the compiler still consumes it.
A
D
Yeah, I think we're actually in a situation within the libraries team where we want to do a little bit more experimentation. I mean, look at what we're doing with generic math, right; we're reliant on new compiler features in order to give us the ability to do that. I think this attribute is another good example of something that would help.
A
D
Like, putting this on ObsoleteAttribute in particular makes me think that the justification string, sorry, the message inside the ObsoleteAttribute, will play into this somehow. I would expect that most people who would use this would expect it to tie into EditorBrowsable somehow. But, you know, I'll leave that to other, smarter people to discuss.
E
D
Yeah, and in this case, though, if I put binary-compat-only, I think, if I understood you correctly, the compiler doesn't even see it, correct? Or it just ignores it, basically.
E
Well, we have to actually compile it and put it in the assembly. But where it becomes interesting is, for instance, if it were a member that was part of an interface, like an interface implementation, we'd have to talk about: well, do we see it for that or do we not see it for that? Like I said, it's not 100% sketched out.
E
This is much more of an idea with rough corners right now, yeah. We'd definitely have to sit down and walk through some of that. But the counter to this, like I said, this is an idea I've had circulating around, and it seems to have some lukewarm support in various places, but the counter to this is...
E
If we don't do something like this, then we have to look at the CallerIdentityAttribute proposal and say: well, we've got to do something else. Like, the AOT folks really feel that this is something that's going to help them with AOT and how they're going to move AOT forward, but the places we want to use it, we can't, because of this compatibility problem. And so we would have to make some decisions on how we approach that, and it could be...
E
It could really be as simple as: we think this is too heavy, we don't think it's generally useful enough, so we're going to do the ref assembly trick, just go to those few ref assemblies and nuke those members. We're going to add an attribute like "don't put this in the ref assembly", and tell F# and other languages: sorry, you've got to learn about this new feature, and move on. That's another way to do it, with different friction points, but yeah.
D
B
Yeah, it just impacts source breaking changes. So if we don't expose a member in the ref assembly, then as far as the compilers, like C# and F#, are concerned, it doesn't exist.
D
Well, I guess what I'm trying to figure out is: where does it actually cause, like, a MissingMethodException at runtime? Because I don't remember if the assembly, sorry, I don't remember if the app is trying to find the System.Runtime API versus the System.Private.CoreLib API, where it would still exist, obviously.
D
B
A
I mean, basically the biggest friction for us is, effectively, because we auto-generate the reference assemblies, we would probably cook up some attribute on our end and then allow it to be excluded from regeneration, so we don't have to keep remembering to remove the one line that always gets put back when you regenerate, right. But I think that's, I mean, personally...
A
I think that would be fine if it's isolated to things like CallerIdentityAttribute, where we have like two or three things in the framework that need to do that. But it would basically mean, yeah, the F# guys better support CallerIdentity or they're basically screwed, right. Yeah, but that seems not entirely insane, because I think the feature would be useful enough that it would be like, yeah, it would be nice if they would do that.
E
Right, I mean, honestly, that is something we can legitimately discuss. An outcome of this may be that: hey, Jared, this was a good generalization, we're not going to do it, but CallerIdentity, damn, that's important enough that we need to go and make sure that all of the, I mean, it might be a position that we take, that AOT is important enough to the platform that we're going to make a couple of changes in .NET, you know, 8 or whatever.
E
A
E
Up to this point, we've at least been able to, one of our mantras was, like: okay, every time you have a span overload, also have, like, a byte array overload; there's a way for this to work. And we might have to, we would potentially have to invent something else similar to that for things like CallerIdentity, right.
A
But we basically said, at minimum, we need to add knowledge to the compilers to bail if they don't understand things, right. So that's the tax we pay, basically, right: if we add something into F# that C# doesn't support, or the other way around, we always add the code on the other end to say, yeah, if you see this and you don't know what it is, you fail in a meaningful way.
A
You don't just do stupid stuff, right. And I think it's the same here: we would say, in order for us to support this, we have to remove these members; therefore, you know, F# needs to support this attribute so that code keeps compiling. Which seems reasonable, I mean, that's not horrible, I guess.
E
Well, maybe then it seems like the conclusion is: we find a couple of interested parties on the runtime team side, and we sit down and map out where we think this would or would not be useful, and what the other options would be for things like CallerIdentityAttribute. Cool. I don't know how you guys mark issues that way, but that's how I think it should be marked.
A
Basically, I guess, needs-work. Needs-work sounds great, and then we can decide where we want to take this. But, I mean, I like the way you think, because I think trying to make it additive only would be difficult for us, right.
E
Awesome, great, I'm glad to hear that. So, yeah, I feel like there might be something to do here, and I'm glad that you all think that too. I'm happy to sit down and keep hacking on the ideas with someone and see where we can take it.