From YouTube: GitHub Quick Reviews
A
Hello internet, it's API review time and — surprise, surprise — we have red. We always have red near the end of a release, but since we have red I won't talk long and did not attempt any German today. Sad. All right: add Metrics overloads taking more tags, 56936. Tarek?
B
Yes, so in Preview 5 we exposed the Metrics APIs, and part of those APIs is what we call the instrument classes, like Counter and Histogram. These classes have methods that report the measurements — for example, up on the screen, the Counter class has an Add method — and while reporting the measurements there is the option to pass what we call tags. These are key-value pairs that you pass along with a measurement.
B
A tag is considered a dimension for the metric, so in these classes we exposed three overloads: one taking one tag, another taking two tags, and a third taking three tags. The main reason for that is that we are trying to avoid any memory allocations while calling these APIs, because performance is very critical for metrics in general.
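The surface being described can be sketched like this — a minimal example against the System.Diagnostics.Metrics API, where the meter and counter names are placeholders of my own choosing:

```csharp
using System.Collections.Generic;
using System.Diagnostics.Metrics;

var meter = new Meter("MyCompany.MyService");
Counter<long> requests = meter.CreateCounter<long>("requests");

// The one-, two-, and three-tag overloads report a measurement with
// dimensions without allocating a params array:
requests.Add(1, new KeyValuePair<string, object?>("verb", "GET"));
requests.Add(1,
    new KeyValuePair<string, object?>("verb", "GET"),
    new KeyValuePair<string, object?>("status", 200));
```

The proposal below extends this same pattern to more tags per call.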
So the proposal here is that we are adding some more overloads with exactly the same signature, but with more tags — up to eight tags.
B
We are trying to avoid having users of the API do the workaround themselves — every user going off and trying to work around the memory allocation by allocating static arrays and passing them as a span or something like that. That is the main reason for this proposal.
A
KeyValuePair is a struct, so you'd need to make it nullable — but that's just my initial reaction. The second one is: I feel like it's String.Concat, where we decided that six is as far as we ever really felt we needed to go.
D
We think that, instead of just focusing on Microsoft, we should also reach out to the community and see how the other players in the ecosystem are going to use this API and what the typical usage is. According to the feedback from New Relic and other companies, it seems very consistent: six to eight. That's why we ask for eight dimensions; we believe eight dimensions covers it.
F
Is it really worth having up to six overloads rather than waiting for, say, .NET 7 / C# 11, where we might get params of span? These overloads will end up being preferred over a params-span overload, and they're going to be less efficient than params of span when that comes out: every single one of these is going to be a shadow copy to pass the value.
F
So it's going to be implicitly passed by reference, and it's going to invoke a copy for every single value type. That's potentially cheaper than the allocation, but it's not going to be cheaper than passing it by array, if you had an array, or by span, if you had a span.
B
Yeah — .NET 6, and also .NET Standard and .NET Framework 4.
F
Officially speaking, it will only be supported in the C# language version it releases with, which will only be supported on the corresponding .NET version it releases with. If it's just a language feature you can probably force it to work down-level, but officially that will not be supported.
E
The problem here is not so much the library author, right — I don't care about the OpenTelemetry people; they can force the LangVersion. That's what library authors do. The problem is that every single consumer of that library now also has to upgrade their language version, and that's not going to fly, in my opinion.
E
So I think, practically speaking, if you need the reach of .NET Standard 2.0 and you want the feature, then waiting for a hypothetical language feature is not gonna fly. I understand what Tanner is saying, but we have this problem literally freaking everywhere in the BCL, all right? So it's not like this is a new problem by any stretch.
E
Yeah, I talked to Jared about this before — apparently it's not that easy, because there are other problems with that. But my point is more that, to me, waiting is not actionable; we need the feature now. Waiting on a hypothetical language feature is always a bad idea unless it's very clearly on the books.
E
I would say no, for the same reason: it's not fine even for Console.WriteLine — these things are in relatively hot loops. It's telemetry; it's the thing you want to use everywhere. So I would venture, given how we treat allocations in general in our ecosystem, I'd be shocked if that weren't the case — and if that's the case, then these should all be passed by `in`.
B
Yeah, this would not be that much different than if you just had a static array, filled it, and passed it here — but you would run into some other issues: concurrency, for one. It would have to be managed by the user to make sure they're doing the right things to get what they need.
E
Well, but you could have a struct that basically holds up to, let's say, eight key-value pairs as fields, and then a fallback that is effectively an array. So you can imagine you have that builder: logically it's just Add, Add, Add — the first tag goes into the first field, the second into the second, and so on — and eventually you spill over into the array. You only allocate the array if somebody has more than eight, and then that entire struct is passed by `in`.
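A rough sketch of the builder struct being described — inline fields for the first few tags, with an array only for overflow. Every name here is hypothetical; this illustrates the shape, not an actual proposed API:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical tag builder: the first three tags live in inline fields,
// so the common case never touches the heap; larger sets spill into an
// array that is grown on demand.
public struct TagBuilder
{
    private KeyValuePair<string, object?> _tag1, _tag2, _tag3;
    private KeyValuePair<string, object?>[]? _overflow;
    private int _count;

    public void Add(string key, object? value)
    {
        var tag = new KeyValuePair<string, object?>(key, value);
        switch (_count)
        {
            case 0: _tag1 = tag; break;
            case 1: _tag2 = tag; break;
            case 2: _tag3 = tag; break;
            default:
                _overflow ??= new KeyValuePair<string, object?>[8];
                if (_count - 3 == _overflow.Length)
                    Array.Resize(ref _overflow, _overflow.Length * 2);
                _overflow[_count - 3] = tag;
                break;
        }
        _count++;
    }
}

// The instrument method would then take the struct by 'in' to avoid the
// shadow copy discussed earlier, e.g. (hypothetical):
//     public void Add(long value, in TagBuilder tags);
```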
E
It's not exactly what we do for passing params to the other overloads, but it's kind of similar to where we have a special internal struct in which we hold on to these things and only spill over if we have more than, I don't know, three or four elements — whatever the cutoff is in the BCL.
E
Sure, but if .NET Standard 2.0 support is your goal, asking people to write three normal lines of code doesn't strike me as super unnatural, especially because it's pretty obvious what code to write. You basically just take an `in` of some struct; the person news up that struct, calls Add, Add, and then passes the struct in. IntelliSense pretty much tells you what to do.
E
I'll grant you that it's not quite as convenient as calling an overload, but I would not call these overloads convenient either. Imagine you're calling this API for the very first time: you go to the counter, you type Add, you open the paren — and your IntelliSense window explodes, because the signatures are, you know, 500 million characters long thanks to generics.
E
And if you have six or seven or eight — yeah. The nice thing is — the problem with overloads is that effectively, every time we add something, we have to add new APIs for it. If it's just in the struct, we can grow the struct: we can say, oh, we optimized for six; oh, it turns out eight is pretty common.
E
Yeah, sorry — that's the wrong terminology. What I mean by "explosion" is more that you open up the signature help for Add, and because it takes up to six arguments and each of those parameter types is fairly long, the pop-up window that appears is super messy. It's kind of like when you look at LINQ.
E
Well, the nice thing is you wouldn't have to. One thing you can do with the struct is basically just have an Add method that takes key comma value, and then implement IEnumerable — because then you can say new builder, open brace, and just do curly-brace pairs. It's basically how you would initialize a dictionary with the object-initializer syntax in C#. You know what I mean? Yeah, and then—
A
With KeyValuePair you're forcing your key-value pairs to be string and object — so where's the generic benefit?
A
And then I assume, Tanner, that this is a massive struct as far as the 24-byte rule is concerned, so the call on the counter should take it by `in`.
A
"Bag" is our usual phrase for a jumble.
F
But the concern is that "bag" means the order doesn't matter — so if you do Add, Add, Add, a user could potentially interpret it as "one of the two conflicting keys wins", but not necessarily "last one wins".
A
So basically we're saying: we think something like this would be better — flesh it out and then bring it back.
E
Yeah, the only thing is: for what you have on screen right now, we probably want an overload of this Add that doesn't take a tag struct — one that just takes a ReadOnlySpan of KeyValuePair&lt;string, object&gt; — or do we say "screw you, you have to go through the builder", even though it will be more expensive to dehydrate the one thing into the other?
F
The allocation would probably be the worst part, but for everything else it's basically two copies: you do a block copy from a ref to the first key-value pair for the first up-to-eight elements of the span, and then, if there are more than that, you allocate the list and pass in the slice for the rest of it, and that's also a single copy.
A
Yes — really, the much better thing to talk about here would be Histogram, where the method is Record; then we can talk about Record versus Add.
E
I think it's fine — we can add it later, too; it's not something we have to do now. The most common case, as you pointed out, is that you already have something and you want to modify it — presumably to add a few more tags or change one. So Add, or the normal setters, are fine. AddRange seems maybe a bit over-engineered right now, but I don't think it would be bad to have.
E
We generally have Range methods for when you have more than one value — Add is usually for one value, and the Range variants are for multiple values. It also avoids a bunch of overload-resolution issues where things are implicitly convertible to both. But that's just how all the collections work; we don't have to have them. We can probably get away with just having Add — the constructor plus Add seems fine.
A
Yeah — if you end up with basically this, then we can probably try to be good about answering the follow-up over email. Okay, I think, yeah: it's "try it and see if the thing we doodled from nothing works".
G
Hey — so basically this is continuing the discussion we had last week. This effort has effectively been split into two different pieces of work. One is reverting the breaking change we were discussing last week: when users register custom converters for primitive types and also happen to serialize dictionaries whose keys are of the same type, we will no longer be throwing NotSupportedException.
G
Instead we'll fall back to the default converter, so for the reflection-based serializer we'll effectively be preserving the same behavior as .NET 5. However, for the source-generator case we would still throw NotSupportedException, and that is in line with our attempts to avoid rooting the default converters and to optimize trimming.
G
So the feeling is that we still need to get these APIs included in .NET 6 to unblock users coming from the source-generation perspective. The APIs are more or less the same as the ones we discussed last week, modulo a few renamings. The proposal here is to call the two methods ReadFromPropertyName and WriteToPropertyName.
G
The signatures are identical, and the other thing we discussed last week was whether we wanted methods that either accept a string argument or return a string value. We decided to avoid that approach because it would result in needless copying and allocations; just doing it the simple way seemed like the preferred approach.
G
Yeah, I started with WriteAs, but it didn't mix well with ReadFrom, so I used To — but As should work as well, I think.
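To make the shape concrete, here is a sketch of a converter using the names as proposed in this review (ReadFromPropertyName / WriteToPropertyName; the final names may differ). The override points for dictionary keys are the part under discussion, so treat them as proposal-shaped rather than a shipped API:

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public class GuidKeyConverter : JsonConverter<Guid>
{
    // Normal value (de)serialization, unchanged:
    public override Guid Read(ref Utf8JsonReader reader, Type typeToConvert,
                              JsonSerializerOptions options)
        => reader.GetGuid();

    public override void Write(Utf8JsonWriter writer, Guid value,
                               JsonSerializerOptions options)
        => writer.WriteStringValue(value);

    // Proposed: invoked only when the value is used as a dictionary key.
    // The signatures mirror Read/Write rather than taking or returning a
    // string, to avoid needless copying and allocations.
    public Guid ReadFromPropertyName(ref Utf8JsonReader reader, Type typeToConvert,
                                     JsonSerializerOptions options)
        => Guid.Parse(reader.GetString()!);

    public void WriteToPropertyName(Utf8JsonWriter writer, Guid value,
                                    JsonSerializerOptions options)
        => writer.WritePropertyName(value.ToString());
}
```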
A
All right. I remember before we talked about changing it — instead of taking the writer, returning a string — and then there was the "but if you already have the pre-encoded value, you'd lose the perf benefit of that", etc. So I assume all of that was talked through, and this is where you ended up.
G
In this case we acknowledge that these methods would specifically be used for rendering property names, rather than being a general-purpose mechanism for serializing arbitrary values as JSON string values. But I think we can discuss adding more methods should a need for a general-purpose mechanism arise; I don't see that happening at this point, so this is just unblocking the dictionary-key scenario.
G
So the regression will get addressed independently, but it would only affect the reflection-based serializer. Users would still receive NotSupportedException in the source-generator case, but we don't care as much there because it's a new thing, and they would just be able to use this workaround to fix the issue.
G
Yeah — so the base implementation throws if the default converters are excluded; otherwise it falls back to the rooted converters, and then the individual—
A
Yeah, all right: MemoryCache — add the possibility to disable linked cache entries, 45592. Anyone know anything about this?
H
Yeah, I can speak to it; I don't think Adam's here. In Extensions caching we have a memory cache, and when you have dependent entries in the cache — say you get something from a web server and cache that, then you do some sort of processing after that and cache that too, and set expiration tokens on these things—
H
If
you're
doing
that
inside
the
same,
like
chunk
of
of
of
work,
the
two
entries
become
linked
and
the
way
we
do
that
is
using
like
an
async
local.
So
that
way,
when
one
expires
like
it's
dependent,
expire
expires
as
well,
and
a
couple
things
here,
one
is,
it
is
expensive
and
adds
a
trivial
non-trivial
amount
of
overhead
and
then
two
it's
kind
of
surprising
as
well
and
so
the
proposal.
H
The
initial
proposal
here
is
to
add
an
option
to
to
shut
this,
to
shut
this
behavior
off
and
then,
if
you
scroll
even
further,
you
can
see
that
follower
actually
even
asked
to
flip
it
and
not
have
this
behavior
and
opt-in
to
this
behavior
in
the
future
as
well.
H
So the breaking change is — the proposal is to add an option for this, but then default that option to false. It says true here, but later the conversation turned to: we should disable this by default, and anybody who wants this behavior needs to opt into it going forward.
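The cache-level switch under discussion would be consumed roughly like this — a sketch assuming the option name shown in the proposal, read once when the cache is constructed:

```csharp
using Microsoft.Extensions.Caching.Memory;

// Option name as proposed in the issue; whether it defaults to true or
// false is exactly what this discussion is about.
var cache = new MemoryCache(new MemoryCacheOptions
{
    TrackLinkedCacheEntries = false // skip the AsyncLocal-based entry linking
});

cache.Set("parent", "value-from-server");
cache.Set("child", "derived-value"); // no longer implicitly linked to "parent"
```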
E
And where does this sit? Is this something that is used by default, or something people basically have to reference the memory cache themselves to get? Because if this is part of ASP.NET, then if we do this, random apps now have different caching behavior. Or does it basically only affect users that use caching explicitly themselves?
H
I'm
pretty
sure
asp.net
uses
caching
inside
of
it
itself,
but
I
I
don't
know
of
anywhere
where
they
have
like
this.
This
behavior
of
linked
entries
right
where
you
get
one
thing
and
then
in
the
process
of
getting
that
one
thing
you
have
to
go,
get
another
thing.
E
I mean, my question is: if you turn it off at the cache layer rather than the entry level — presumably it doesn't matter whether one of your entries is linked or all of them are. Either you track linked entries at all, or you have to track them for one or more. So I guess the entry level only makes sense if that allows you to reduce the cost, right?
E
It's really an easier design, right? I think the entry level makes more sense if — well, it's still pay-for-play. If you have to opt in individual entries, then if you have a million entries and only one of them is linked — or only one of them needs the tracking for linking — then maybe that's the better approach. But I assume the cost makes no difference, because once you have even one, there's some overhead at the cache layer that you have to pay for.
E
Interesting — yeah, that's another good point. Effectively that would mean — the nice thing about having it at the cache layer is that the party that turns it on and off may be a different party than the one that actually adds the entries. In the alternative design, basically everybody who calls CreateEntry would have to make a decision: do you call CreateEntry, or do you call CreateUnlinkedEntry?
E
It might be easier for people to get the new behavior — or get back the old behavior — without parties in the middle having to compile against different APIs.
A
Right — so you make a decision once, and I think this is on the options that's passed into the constructor of the cache, so it's read once when the cache is built. And in fact they don't even seem to be proposing being able to ask the memory cache the question.
A
Yeah — so I guess that's the only question I have: assuming we're ignoring the alternative design and just looking at the top proposal, from the memory cache, does it give you back the options? Is there a way to ask this question?
A
Then again, I don't know what else is in the options, because all we have here is the diff. So — does the cache generally tell you things about how it was created?
H
So the options type has things like the size limit, how much it should compact, etc.
E
The question is: with the alternative design, the party that gets to control whether entries are linked or not is the library, because I basically get the cache, I fill my entries in, and I can decide whether I want tracking or not, because I call different methods. In the current design proposal that's not what's happening: the cache decides whether it's doing that or not, and the library doesn't necessarily get to control it.
E
So the question is: are there cases where — I don't know, for correctness or whatever — you need to decide? In that case you don't want random behavior; you probably want something like: you get a cache and you ask, "Is tracking enabled? No? Okay, then throw," because you gave me the wrong cache and I can't work in this environment.
A
And in the case here — where I don't quite understand why the child is a child — if the original code had applied the same expiration token to both parent and child, then presumably you'd write: if not cache.TrackLinkedCacheEntries, then parent.AddExpirationToken(token), to get the same behavior back. So it's a—
H
Yeah, it feels like if we want to support that, I would actually go with the alternative design. The person who's doing it is not the person who's configuring or creating the cache; it's the person who's trying to shove something into it who says whether they want this behavior or not.
H
And then the breaking-change-ness is easier to talk about, I think, because if you're a library that needs this behavior, we say: you update to the latest memory cache — you require the 7.0 memory-cache package — and then you call that API where you need it.
A
Yeah — the question I would have at that point: with the option, if you decide the breaking change was too disruptive, you can switch it back to true being the default, and the people who want it off can set it to false. With this one, you'd now have to rename — reacting to the breaking change by changing what CreateEntry does is itself a breaking change to these people.
E
That's kind of the question: do we believe that this doesn't matter for correctness in 99.99% of cases? Because in that case it's like: yeah, it's a breaking change, but practically nobody is impacted by it — you just get better perf — in which case maybe it's okay to say: sorry, there's a new API; in the remote case where you are broken by this you have to do some work, but we believe that's one out of a thousand libraries, so nobody really cares.
E
I mean, that would work. What I like about what you just described — I think at this point I would like to have a global hammer, because basically what I would do is: if my app behaves weirdly, I turn off tracking globally, run my app again, and see whether that fixes the problem. If the answer is yes, then I know where to look.
A
Based on a comment that somebody had about adding a new method: there's probably a bunch of extension methods that need to be modified — presumably all these CacheExtensions.Set methods then need to gain this boolean as well.
E
My only concern with this kind of break is that it's the kind you'd only actually observe in a production-load situation, so it's not something we would really get feedback on from previews. That's, I think, the only downside.
A
All right: provide an API to get the native module handle of the native process entry-point module, 56331 — and Mr. Korczynski is, I think, in the YouTube chat rather than the Teams channel right now.
F
There is a workaround today, in that you can explicitly set up a hook and handle it all yourself, but that's not as nice as having a direct API to get the handle.
F
Basically, the HINSTANCE of the assembly in question — because otherwise you kind of have to go through System.Reflection.Assembly, get the module, then Marshal.GetHINSTANCE from the module; you have to jump through all kinds of hoops. This just simplifies it to a single direct API call that gives you exactly what you want.
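The existing hoops being described look roughly like this — real APIs, sketched under the assumption that the entry assembly is non-null:

```csharp
using System;
using System.Reflection;
using System.Runtime.InteropServices;

// Today: reflect to the entry assembly's manifest module, then ask
// Marshal for its HINSTANCE.
Module module = Assembly.GetEntryAssembly()!.GetManifestModule();
IntPtr hInstance = Marshal.GetHINSTANCE(module);

// The proposal would collapse this into a single direct call that returns
// the native handle of the process entry-point module.
```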
F
It's whatever the operating system requires to resolve native exports for that assembly.
E
I don't see a problem with that, and the alternative seems way worse — I would agree with that.
F
I think my only question — and I think Jeremy would be the one who needs to comment — is: should there be an equivalent that's not Marshal.GetHINSTANCE, i.e. a GetModuleHandle that takes a Module, just so you can do it for other assemblies as well?
F
It's an additional API — a GetModuleHandle that takes a Module as its first parameter — but Jeremy would be the one to comment there. There is an API that does exactly that today, which is on Marshal, but it's called GetHINSTANCE; not exactly a cross-platform-friendly name, among other considerations.
A
Lag plus typing — he hasn't answered it yet. Well, while we're on this one—
A
I have a special note for "login" in the third edition, which is: it's now a compound noun, but it is not a compound verb phrase. So if it's a button, it's written with a capital I, and if it's describing the text box, then it's allowed to have a lowercase i.
A
All right — Jeremy K says: "I don't think we need that API today, and even if we did, I'd suggest we put it on NativeLibrary. It's a workaround."
F
My assumption was you'd have GetModuleHandle taking a Module, and GetEntryPointModule would effectively be shorthand for getting the entry-point assembly's first module.
F
So, as it says, Vector&lt;T&gt; exists to expose a bunch of common cross-platform SIMD helpers and supports a lot of operations, but it does not support shifting today. The proposal is that we expose some explicit methods to support shifting — namely ShiftLeft, ShiftRightLogical, and ShiftRightArithmetic, matching the names we used in the actual hardware intrinsics.
F
And only taking an int today. There are some notes slightly below saying it would be nice to expose operators, but there are considerations with float and double where that may not fit in well. There is consideration for doing per-element variable shifts, but that's not supported on all platforms, so it's not included in this proposal today. And there's also a note that it would be nice for the int to actually be the corresponding T, but that's not what C# allows today.
F
So we shouldn't do it either — and also, taking byte or uint or ulong isn't natural for shifting in C#, and we've gotten the feedback that on the actual exposed public APIs we should be using int instead. So this matches all of that, based on previous feedback and discussions we've had.
F
We've not done one generic method for many cases in the past, because it makes it hard to know whether we might ever expose a type in the future that's not supported.
E
Yeah, I don't feel strongly — it makes sense as proposed.
F
Right — and statics on Vector is how we do most of the helper methods, both for perf reasons and to avoid generic-explosion issues when you're using a concrete type.
F
If we were going to end up supporting something like generic math, we'd probably end up exposing some kind of helper struct — one that dealt with integers, one that dealt with floating point — and then you would pick whichever one was appropriate to access things like shift generically.
E
Only question is: is x the best name? It should probably just be value.
F
I think we used value everywhere else; I just copied x from the top post.
A
I mean, we could be even worse: we could also have the other Jeremy K, where we're all afraid to pronounce people's last names because getting it wrong is hard.
A
What's weird about having the throw on this, I guess? As long as we put DoesNotReturn on it — modern compilers understand that — but doing throw new UnreachableException, or UnreachableCodeException—
E
Yeah — to me, I think we can approve this one API, but if we say we want to add the throw helper, then yes, you should coordinate that with the compiler team, which you can probably do as a separate item. To me the number-one value of this is just, you know, a well-defined exception name and a default message — because right now, in almost all code bases, you do something like throw new Exception...
A
Yeah, I can certainly see the perspective of them not wanting to just trust it on blind faith, because that would mean all you need to do is paint something as DoesNotReturn and — congratulations — it lied, it did return, and now you have access to uninitialized values, which we don't like in C#.
A
An attribute that says DoesNotReturn, and they just follow it with throw null — as long as the JIT doesn't see "oh god, there's a throw null here, I can't inline this." Because that's the real concern with the throw helpers, right — the inlining. So teach the JIT that throw null is, like, whatever — fine, inline the throw null, who cares.
E
I think that's the question now — how we feel about this. So if it's a regular attribute, then even in C# you can't really enforce it, because it's only the newer compiler versions that can enforce it; the old one wouldn't know about it, so it would happily accept the code. That's the general problem with using custom attributes for that sort of stuff, and other languages have the same problem.
E
Anybody can apply the attribute, and there's zero enforcement around that, and we can't really modreq the type itself.
E
So we could introduce a modreq and just say the modreq is applied to the return type; that way at least people have a hard time uttering it in their language, because there's no language syntax for it — they would have to write their own compiler, at which point we're just like, okay, screw you, can't help you anymore.
A
I mean, if they wrote a compiler that doesn't understand what a modreq is and just skips over them, like the VB compiler, then they can do whatever they want.
E
To be honest, I think F# is the one that generally ignores modreqs it doesn't recognize, even though they're not supposed to. But yeah, it's the same thing — I think we would probably make it a modreq, not a—
E
—custom attribute, for that reason. But yeah, I agree with Tanner that if the compiler enforces it — you put a keyword on the method, and the compiler says, okay, now you've promised this thing will never return, and then maybe your body is Environment.FailFast, in which case the compiler knows this is going to blow up every time, or you have a throw statement in it, or whatever validation the compiler does — then that seems good enough for the 99th percentile.
F
Well, it's just the general point: we're adding this exception, which represents a kind of well-defined "this should be unreachable" case. So if the compiler correctly understands "this should be unreachable" or "this will always throw," then it can include that in nullability analysis and other analyses, such as definite assignment, or whether it can actually return.
F
Like — in the JIT, for example, we have a concept of asserts versus unreached. An assert will fire, and there's an option to ignore it and continue execution — it's basically like an exception: you can catch the exception and continue. Whereas unreached is more like stack overflow: if it gets thrown, you cannot catch it.
F
It represents a fundamental failure in the program's execution and it must fail fast. So likewise, the runtime could — and probably should — handle unreachable like it does stack overflow: can't catch it. And so it's something we probably need to, and should, talk with the other teams about, to coordinate: is there anything we want to do here? If so, coordinate it all; and if not, then we don't — but at least we had the conversation and we didn't get to .NET 8 and go, "Oh, I wish we had done this."
F
I think it's a case of: we should approve it, because the shape is fine, with the annotation that whoever's driving this — it looks like probably me, because it's in System.Runtime — needs to sync with John and Jared and potentially others before implementing, to see if there's anything we want or should do here.
E
I mean, it's almost irrelevant from the compiler standpoint. We could treat UnreachableException just like the other corrupted-process-state exceptions and prevent it from being catchable, which seems like a runtime feature. And then sure, somebody could ship that API down-level and the runtime wouldn't make it uncatchable or something — but I agree with Tanner that it's an example of something where, if we could make it fail fast and prevent people from catching it by accident—
E
—that seems, generally speaking, like a good idea, because for unreachable it's probably a good idea to fail fast: something is really fundamentally wrong in your logic. Well—
A
Right, so there are two things I think we want to talk about; that's one of them. The rule, if it were a .NET Standard 2.0 type, is yes — but I assume we're going to introduce this as an in-box-only type, which means I think no is our answer now.
E
I mean — yeah, you would know. I don't think this is something I would actively encourage people to ship down-level. I would just say: if you have a .NET Standard 2.0 library, define this type yourself — and by all means please put it in your namespace, don't put it in the System namespace — and by all means make it an internal type, because there's no point having a public API that people can catch anyway.
E
So realistically, for .NET Standard 2.0, the value-add of shipping this down-level is not very high.
A
Okay, so: removing the serialization constructor. And then — do we like the name UnreachableException, or do we think it needs another?
A
So: UnreachableException, UnreachableCodeException, UnreachableStatementException — or do we think just "unreachable" is fine? To me it feels like "unreachable code" is what I'd call the block it lives in, but I'm fine with UnreachableException; I'm just making sure we think about it instead of just hitting approve.
E
Okay, I think I knew that — I mean, that's what I said: it's kind of a flawed hierarchy. Originally the idea was that SystemException is for the system, and then you either derive from SystemException or ApplicationException — but that hasn't really happened, so it's all messed up now; either one would work. Strictly speaking, unreachable is a contract — it's an internal error — so it doesn't really matter what it extends; it's logically equivalent to fail-fast.
A
Well — I think we're not doing the throw helper at this time, because it won't help, at least in C#: if you declared a variable before a block and you have an unreachable in it, it'll still say, "I'm sorry, you could have gotten to a point where this variable was not initialized." Yeah — and UnreachableException.Throw isn't much shorter than throw new UnreachableException anyway.
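The usage shape being approved is just the plain throw — a sketch assuming the type ends up reachable via System.Diagnostics usings (the exact namespace was not settled in this discussion):

```csharp
using System;
using System.Diagnostics;

// A well-defined name plus a default message replaces today's
// 'throw new Exception("unreachable")' pattern in exhaustive switches:
static int SignOf(int x) => Math.Sign(x) switch
{
    -1 => -1,
     0 =>  0,
     1 =>  1,
    _ => throw new UnreachableException(),
};
```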
A
Argument expression, yeah — that one also pulled in the condition, making sure to try to normalize the parameter name and a bunch of things. Here it's just: do you want the word "throw" on the left or on the right?
F
Jeremy suggested that it could inherit from InvalidOperationException.