From YouTube: GitHub Quick Reviews
A: Good morning, or whatever time it is, everyone. Welcome to another Thursday session of API review. One of these days we'll get rid of all these red things and go back to only once a week, but we have plenty of red things. So, let's get started.
B: All right. So, for streams that are duplex or connected, such as a TCP socket...
B: This is important for some protocols and not for others. It is something that we wanted to add for QUIC, or rather that we need for QUIC, but it's also applicable to things like NetworkStream, SslStream, and possibly others. You need to be able to send a shutdown to make certain protocols work with it.
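The shutdown being discussed maps onto the classic BSD-socket half-close. As a rough sketch (using Python's standard socket module, not the .NET API under discussion), the writer signals "no more data" without tearing the connection down, and the reader sees it as a zero-byte EOF read while the other direction keeps working:

```python
import socket

# A connected pair of stream sockets, standing in for a duplex stream.
a, b = socket.socketpair()

a.sendall(b"last bytes")
a.shutdown(socket.SHUT_WR)   # half-close: done writing, can still read

data = b.recv(1024)          # the payload arrives as usual
eof = b.recv(1024)           # then a zero-byte read: the peer's EOF

b.sendall(b"reply")          # the other direction is still open
reply = a.recv(1024)

a.close()
b.close()
```

Here `data` is the payload, `eof` is `b""`, and `reply` still flows back, which is exactly the "shutdown writes, keep reading" behavior the proposal wants to surface on Stream.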
B: We considered having some sort of new base class, called something like ConnectedStream or DuplexStream. We considered having an interface, something like IDuplexStream. We found that both of these options break composability.
B
It
would
be
really
nice
for
something
like
gzipstream
to
be
able
to
you
know
if
it
doesn't
care
about
shutdown,
it
can
at
least
forward
it
to
its
base
stream,
and
if
we
have
a
new
base
stream
type
or
a
new
interface,
it
kind
of
breaks
that,
because
none
of
these
streams
will
have
a
shutdown
to
forward
to-
and
it
probably
wouldn't
be
appropriate
for
gzip
stream
to
inherit
from
a
connected
stream
type.
B: So the alternative, the only thing left, is to add this on Stream. Unfortunately, not many users of Stream actually need this, so it would be additional API burden that most people will hopefully ignore; but they'll definitely see it, and it'll make Stream more complicated for them.
C: Corey, I also had a question that I'd asked of Jeff, and I don't remember what the answer was. You mentioned SslStream. It's not clear to me what this does on SslStream. It can't just call shutdown on the underlying stream, because subsequent reads might result in, say, a renegotiation or something that still needs to send data on the underlying stream, right?
B: What does SslStream do here? SslStream does have its own shutdown packet that's specific to TLS. I think SslStream would just not call shutdown on its underlying stream, the actual socket, until it was completely finished with its own stream. It would probably not call shutdown until disposed; so, probably, it would never call shutdown.
B: Yeah, so the expectation is that if you don't care, you should forward it; and as part of a PR for this, we would be looking at and updating any applicable stream classes that we have in the BCL.
B: That's why it's really important to have this CanShutdown property here. Essentially, if you need shutdown behavior, your app will hang if the stream doesn't actually implement it. So we need some way to know. HTTP/1.0 is the easy example here: it requires this shutdown, so anything taking a stream would check CanShutdown and throw if the stream was not shutdownable.
B: Yes. Today we would call Dispose, and it would hang our app, because in HTTP/1.0 the server needs to have the full request before it can send its response, and it has no way of knowing that your request is finished if you don't send shutdown.
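A rough Python sketch of that HTTP/1.0-style interaction, with a toy in-process "server" that reads the request until EOF before responding. Without the write-side shutdown, the final receive would block forever, which is the hang being described:

```python
import socket
import threading

a, b = socket.socketpair()

def server(sock):
    # Toy server: the request is delimited by EOF, so it must see the
    # client's half-close before it can respond.
    request = b""
    while chunk := sock.recv(1024):
        request += chunk
    sock.sendall(b"response to: " + request)

t = threading.Thread(target=server, args=(b,))
t.start()

a.sendall(b"the whole request")
a.shutdown(socket.SHUT_WR)      # signals "request is complete"
reply = a.recv(1024)            # would block forever without the shutdown
t.join()

a.close()
b.close()
```

If the client had called `a.close()` instead, both directions would be gone and the response could never be read at all, which is why Dispose is not a substitute here.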
B: So essentially this would manifest as a hang whenever someone calls ReadAsync expecting the remote side to have some sort of message for them.
B: Yeah, so we started with Shutdown because that's the socket terminology. Another proposal is Complete, because PipeWriter has a Complete method on it, so we would at least be consistent with the other stream type that we have. Finish was another alternative, right, something like that, just to indicate that you are finished writing and the other side can start sending you whatever it needs.
A: The comment says you can't use EndWrite for obvious reasons. I'm not coming up with any. What are the obvious reasons?
A: It's already there; that would be an obvious reason. I apparently have purged IAsyncResult entirely from my brain.
A: We can make this new product called .NET Core and get rid of all the methods we don't want. Oh, wait.
A: So I know that in the context of SslStream, which, since that was mentioned, is what I have loaded: there's the question of whether, when you're closing the stream, you send the nice "hey, by the way, I'm going away"...
A: ...so the peer can throw away its encryption resources, as opposed to something noticing that the socket died and then going on a resource cleanup. So I thought it was "I have a message that I would like to write as part of shutting down; please go write that message", which would be write-shutdown rather than shutdown-write. And now I've learned that that's not what it does at all, and I need to think again.
B: So Socket does have a way to shut down reads. It's actually very, very rare to call that; we have no code that calls it today. QUIC does; QUIC also has kind of a similar thing for shutting down reads. But shutdown-of-writes seems to be the least common denominator between all of these things; shutting down reads is a little more esoteric and would not apply to something like SslStream.
F: The thing about the reads is that it's not really a graceful thing, right? When you're talking about writing, the way this works in sockets and in QUIC and other places is typically: you write a bunch of data and then you shut down writes, and that will actually send an EOF across, so that as the peer is reading, they will get a zero from their bytes-read, which indicates that EOF has been hit. For reads there's no real equivalent.
F: So that's graceful in the sense that the person who's writing controls when they're done writing. The person who's reading doesn't control when they're done reading; I mean, they can stop reading, of course, but it's the peer who's deciding to send data to them. So you can abort the read, and in fact, if you call shutdown-read on a socket, what it effectively does is abort the read. In other words, it basically says to the socket stack: I'm not expecting any more data to come.
F: If any more data does come, then send a reset to the peer, because that's a protocol error. Not a TCP-level protocol error, but: I've declared that I'm not expecting any more, and if they send more, I want them to know that their data was not read.
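A minimal sketch of the shutdown-reads behavior, again with BSD sockets: after `SHUT_RD`, reads complete immediately with EOF. (The reset-on-further-data behavior described above is TCP-stack and platform specific, so it isn't demonstrated here.)

```python
import socket

a, b = socket.socketpair()
a.shutdown(socket.SHUT_RD)   # "I'm not expecting any more data"
result = a.recv(1024)        # completes immediately with EOF (b"")
a.close()
b.close()
```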
F: QUIC has something similar for aborting the read side as well, and in QUIC, in fact, we are going to expose a method that will allow you to do that, and it also lets you specify an error code.
A: Yeah, and they generally can't say "by the way, I'm done listening"; I mean, they can just...
A: I have to talk in terms of implementation instead of the abstract, to make sure I understood what you were saying, Levi. You've called Write, Write, Write, Write, and now you call ShutdownWrite, or whatever we rename it to, and that's expected to mean: oh, I've hit the end of the data, so go finish with the final transform and all the padding and the nonsense.
A: Yeah, this really only makes sense on BiDi streams ("by-die" or "bitty" or however anybody feels like pronouncing b-i-d-i). And, most importantly, it only makes sense when you're really on a pipe, as opposed to something like FileStream, where you can be in both read and write mode but it doesn't matter, because you're both parts of the universe.
A: Yeah. If we added a bidirectional stream as an intermediate base type... again, I know why FileStream doesn't need it: it's not really a bidirectional stream. It can be in both read and write simultaneously, but it's not really bidirectional. But is gzip really bidirectional?
B: If you have something that transforms a stream, a wrapping stream, and you don't personally care about shutdown, it makes sense to just forward it to the next stream. And if we have something like a BiDiStream or an IBiDiStream, then it breaks any opportunity for composition, essentially.
A: Well, but that would be true anyway, right? GZipStream either does special work in its Dispose to say: if I'm wrapping, if I'm in write mode and I have a bidirectional stream, then instead of disposing the stream I should call shutdown-write. The way it would do that is it has to override ShutdownWrite to forward that call into the stream it was wrapping. So no matter what, once we add this, there's no free ride; you have to do the work.
B: It's not free, and free isn't the goal here. It's really about making it so that if you have some sort of filtering stream that is not...
A: Yeah, I mean, just thinking out loud: if we added a bidirectional stream, and we added a write-wrapper stream over a bidirectional stream, or even just over Stream, if it's just common to have a write wrapper...
A: ...then we could make the write wrapper, in its Dispose, say: if it's wrapping a bidirectional stream, call shutdown and then don't dispose; versus calling Dispose if it wasn't a bidirectional stream. And now you have encoded the semantic, and now we just need to say that GZipStream and CryptoStream and the other simply-wrapping streams we have switch to using that, and everything's happy.
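That write-wrapper idea can be sketched like this. All names are invented for illustration (this is neither the .NET proposal nor a real library API); the point is that the close-time decision, half-close versus dispose, is encoded once in the wrapper:

```python
class WriteHalf:
    """Hypothetical write-side wrapper over some inner stream."""
    def __init__(self, inner):
        self.inner = inner
    def write(self, data):
        self.inner.write(data)
    def close(self):
        # Encode the semantic once, here, instead of in every wrapper:
        if getattr(self.inner, "bidirectional", False):
            self.inner.complete_writes()   # half-close; reads stay usable
        else:
            self.inner.close()             # plain stream: dispose as usual

class FakeDuplex:
    """Test double standing in for a connected, bidirectional stream."""
    bidirectional = True
    def __init__(self):
        self.writes_completed = False
        self.closed = False
    def write(self, data):
        pass
    def complete_writes(self):
        self.writes_completed = True
    def close(self):
        self.closed = True

duplex = FakeDuplex()
WriteHalf(duplex).close()
assert duplex.writes_completed and not duplex.closed
```

A wrapping stream like gzip would then be constructed over the `WriteHalf`, and its ordinary cascading dispose would do the right thing without knowing about shutdown at all.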
D: Well, that's the general problem: that only works for one layer, right? If you have, you know, a bidirectional stream, and then on the write side you wrap it in a GZipStream, and then on top of that you wrap it in, I don't know, a hashing stream or whatever, then that doesn't compose anymore, right?
A: Well, but the hashing stream will call Dispose on the GZipStream, and the GZipStream, you know, is using the write wrapper. So it calls Dispose, which says "I didn't have bidirectional", so it hits the GZipStream, which hits its write wrapper, which says it was bidirectional; so it calls shutdown-write and then doesn't dispose the underlying stream, because it's somebody else's job to do that, because it was bidirectional. I see.
B: Yes. So the thought is that anyone who needs this and is currently using something like NetworkStream today would now have to keep track of both streams, a reading stream and a writing stream, and just dispose the writing stream.
C: From a composition and kind of cleanness perspective, I like that. There will probably be some objections from folks who are super-focused on allocation, that we're creating another 24-byte object; but I don't know that it actually matters.
D: I generally like that more, because the problem is really: if you add virtuals to Stream that you expect people to override, it's basically a bug farm, because there's always one party in the middle that forgot to override it, and then this doesn't compose. And that will probably take forever to actually get across the finish line, versus the other one...
B: Right. So for our current APIs that take a stream, we would have to make new APIs that take two streams, because we would need to be able to shut down the write stream.
F: Or, since we already... I think what we would do is this. We have APIs that already work in terms of Stream, right? Like, ConnectCallback works in terms of Stream, which means that we are never doing a shutdown-write today when you use an arbitrary stream. What we would do, since we already have those APIs, is something like: if it is a bidirectional stream, then we can be smarter and actually do the shutdown-write; and if it's not, then we just do what we do today, which is punt and hope.
C: So was the suggestion, then, that we introduce a new BidirectionalStream class, and you can call AsBidirectionalStream on NetworkStream to get one of those? Or is NetworkStream a BidirectionalStream?
F: There may be some small value in being able to disallow somebody from doing reads on the stream. If you're handing off, presumably to somebody who only needs to write, and you are preventing them from accidentally calling Read and messing up the state of the stream, that might have some small value.
C: Introducing BidirectionalStream as a base class might also solve, or at least help with, some other issues we have. Right now the Stream base class assumes you don't know what you're doing, and it serializes all asynchronous operations by default. That causes problems for people who derive from Stream, don't override everything they really should, and end up getting serialized behavior even though they're a bidirectional stream.
C: If they were instead to derive from BidirectionalStream (it still requires work on their part), we could make BidirectionalStream's methods not do that serialization by default. It's a minor thing, but it's kind of a nicety.
F: Also, I kind of like introducing another class here specifically for bidirectional streams, because it also potentially opens up, in the future, the ability to add other methods on BidirectionalStream that don't really make sense on Stream.
C: So the streams that we have, that we own: it seems like NetworkStream, QuicStream, the System.IO.Pipes PipeStream, and System.IO.Pipelines. Yes.
B: For pipelines, the pipelines wrapper around Stream would probably implement this. Don't you get a...
C: ...one for each side, yeah. If pipelines introduced a bidirectional pipe, or a bidirectional whatever-it-is, maybe we would create something from that. But everything else we have is really intended for one operation at a time. Even if you can switch directions, like FileStream: you can read or you can write, but you would never want to be doing both at the same time, I mean.
F: Yeah, so just to back up for a second here, I think it's worth making the philosophical point that there are a couple of different types of streams in practice. One is the seekable stream, which is FileStream and MemoryStream: it's conceptually random access, even if we don't actually allow random access, and you can read it, you can write it, you can seek it.
F
Then
there
are
the
read,
only
streams
and
the
re
and
the
write,
only
streams
which
were
pretty
self-explanatory
and
then
there's
this
sort
of
bi-directional
stream.
That
corresponds
to
loosely
to
like
a
network
connection,
whether
it's
actually
across
a
network
or
just
in
memory,
and
it's
and
perhaps
teasing
out
you
know
some
of
the
differences
in
semantics
between
those
different
types
of
streams
is
goodness.
C: No, I wasn't suggesting we could do that; okay, I don't know that we need to. I was just highlighting that the desire to potentially get a read-only thing to protect you from being written to, or a write-only thing to protect you from being read from, has been expressed elsewhere. Gotcha.
A: And we could always make it virtual later if we desired, right? And if we think we would call the feature on Stream AsReadOnly and AsWriteOnly, then we can use the same names here, and then it becomes an override; unless we decide it's not virtual on Stream, because we can make our own wrappers. But yeah, okay. So yes, this is...
F: If we're going to add GetWriteOnlyStream, do we want to add GetReadOnlyStream also? I realize that it's really just a convenience at this point; we're not actually, you know, doing the delegate-to-close-writes-on-dispose thing. But it seems like it would look odd if we only added one, even though it would just return this.
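The "protect the consumer from the wrong operations" value of AsReadOnly/AsWriteOnly-style wrappers can be sketched in Python terms; the wrapper name is invented, and only the guard idea comes from the discussion:

```python
import io

class ReadOnlyView(io.RawIOBase):
    """Hypothetical read-only facade over an inner stream: reads are
    forwarded, writes are rejected before they can corrupt state."""
    def __init__(self, inner):
        self.inner = inner
    def readable(self):
        return True
    def read(self, size=-1):
        return self.inner.read(size)
    def writable(self):
        return False
    def write(self, data):
        raise io.UnsupportedOperation("stream handed out as read-only")

view = ReadOnlyView(io.BytesIO(b"payload"))
assert view.read() == b"payload"
try:
    view.write(b"nope")
    raise AssertionError("write should have been rejected")
except io.UnsupportedOperation:
    pass
```

A write-only counterpart would mirror this, and additionally carries the half-close question discussed above: closing the write view is a shutdown signal, not a disposal of the whole connection.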
C: Even if you're virtual... sorry, even if you're non-virtual, the compiler will still emit a callvirt. Yeah, but there's, like, the one percent of certain situations where it doesn't, that have crept in, and there's a debate about whether that's valid. Gotcha. The one I think I can remember... what is it, a null...?
C: Null-coalescing invoke, yeah. I think there we may be... because the original reason for introducing callvirt was to handle null, and I think there, because the compiler can prove that the receiver is non-null, since it does the check itself, it omits the null check and might emit a call instead of a callvirt. We'd have to double-check, yeah.
A: ...optimizing how... like, if they're not conceptually bidirectional, then, I mean, yes, you could say "I want a FileStream, I don't care if it's in both read and write mode, I want writes to not happen", but that feels rare. So the seekable dual-operation streams, a.k.a. FileStream, would be the only ones I can think of that would want this that are not BiDi.
A
I
mean
we
could
believe
it
go.
It
belongs
on
stream
instead
of
here,
but
it
feels
weird-
and
I
think,
we've
convinced
ourselves
that
we
can
move
these
down
later,
because
moving
a
method
to
a
base
class
is
handled
by
the
runtime
and
if
it,
even
if
it
was
doing
call
instead
of
call
vert,
it
will
call
the
override
slot,
which
should
just
work.
A: I'll say yes, but there is probably someone who gets asked fewer questions that could also answer it authoritatively. Sorry, about the changing of the virtualness: if the compiler is emitting call instead of callvirt, and in a later release we move AsReadOnly and AsWriteOnly onto Stream as virtual, will the combination of the runtime handling methods moving down, and there being a new override slot in the same spot as the call...
A
Instead
of
the
culvert
mean
that
we
ended
up
exactly
in
the
same
override
we
probably
wouldn't
forward
to
a
later
override,
because
they
called
call
instead
of
call
vert.
But
as
long
as
none
of
the
bi-directional
streams
re-override
as
read
only
as
right,
only
you
wouldn't
have
a
breaking
change
from
that
version
to
the
next.
There.
C: We can check it; but, as Levi points out, we also have other options available to us. We could leave this non-virtual and add a virtual CreateWriteOnly or CreateReadOnly protected member on the base class, and then change...
B: Yeah, so I feel like we're still consistent with how Stream works; but it's just a thought, if there's some clarification we can make there.
F: I feel a little queasy about using the term Close, for the reason that Steven just brought up, which is that Close itself implies that this thing is done. I guess I lean a little bit towards something that doesn't associate with the term Close, which is strongly associated with Dispose. And, yeah, I mean, we can come up with other terrible names.
A: That would be true, but then...
F
Like
there
may
not
be
rights
that
actually
involved,
there
may
not
be
any
data
associated
with
it
in
the
sense
of
like
you
know,
causing
you
know,
you
know
actual
bits
to
be
read
on
the
other
side,
but
there's
certainly
something
that's
going
in
if
you
want
to
think
of
it,
as
in
terms
of
on
the
wire
there's
something
that's
going
on
the
wire
as
a
result
of
this.
B: I think I understand where Jeremy's coming from. Currently, when you call Write, it creates bytes on the other side. So having something else called Write that doesn't result in bytes on the other side might be confusing, especially as we have various other classes with tons of Write overloads, and they are all expected to write bytes to the stream.
F: Yeah. On the other hand, let me put it this way: if I'm the peer and I've issued a read call and I'm blocked waiting for data, then if I, as the writer of the data, call Write and send actual bits, that read will complete with those bits; and if I call CloseWrites, then that read will complete with a zero-byte read, with bytesRead equal to zero.
A: Yeah, the problem with Complete as the verb, to me, is that it sort of sounds like "flush and block": don't move to the next line of code until we've gotten all of the pending buffered writes confirmed as written. And what we're really saying is: the stream promises it won't write anything else, and now gets to deal with that.
A: Yeah, I mean, if we changed it from Write or Writes to Writing, and we called it CompleteWriting, then maybe; or CloseWriting. Because what we're really indicating is that you can't call Write anymore. Whatever the stream does with that information is up to the stream, but that's really all the caller is signaling: I'm done writing.
A: Right, which, yeah: there will at least be a state change, because with something like a pipe you would close your half of the pipe, and then they'd get the "oh, you're done now" because we hit EOF. But you didn't necessarily write anything...
A
Why
right
being
the
verb,
feels
weird
and
goes
back
to
the
beginning,
when
I
thought
that
this
was
indicating
go
issue,
all
your
shutdown
verbage,
when
what
it's
from
the
api
purpose,
all
it's
really
signaling
is.
I
promise
I
won't
or
not
only
do
I
promise
I
won't
write
after
this
rights
after
this
will
probably
throw.
E: ...like the behavior you want. Especially if the default implementation is basically to call Flush, assuming this is virtual instead of abstract: one of the things I had previously mentioned is FinalFlush, which is "by the way, please flush whatever data you have, and there's going to be no more data after this; do with that what you will". Flushing isn't guaranteed here.
A: The only problem I had with Complete is that it implied flush, and then the question of whether this is a blocking operation or a non-blocking operation is, I guess, left to the implementation. But yeah.
A: But again, does it block until... for NetworkStream, does it block until it's all been sent out of the NIC? Right? No.
F: Yeah, NetworkStream is a non-flushing stream. We're still debating whether QUIC is a flushing stream or not. But even if QUIC requires flush, what that would mean, similar to what it means on, say, an HTTP request body stream, is that the data has been handed off to the kernel.
F
We're
not
even
handed
off
the
colonel,
but
the
data
has
been
queued
for
sending
and
you
there's
no
further
action
required
by
you
in
order
to
guarantee
that
it
will
be
sent
at
some
point,
barring
catastrophic
failure.
A: Yeah. So is there a guarantee, in any of these operations, that the kernel pushes it to the NIC, that the NIC has pushed it to the wire, and, in the case of TCP, that an ACK has been received, before the process terminates? No? So there's literally nothing you can write which says: as soon as this has been successfully pushed to the peer, terminate my process.
F: Well, okay, that's a great question. In general, whether you are writing data or writing end-of-stream (those are the options), whichever of those you're doing, there's no way for you to be notified that the peer has received and acknowledged that data, other than something at the application level; in other words, them explicitly sending you something back. The other way...
F: There is a thing called linger that no one ever uses. It basically says: when you close the socket, don't complete the close immediately; linger until you get the final ACK, and if you don't get it after a certain amount of time, then basically abort, kill the connection, and send a reset on the wire. No one actually ever uses this, because, first of all, it's a pain in the ass to use; second of all, it doesn't actually work asynchronously, and so that's really bad.
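The linger option described here is the real BSD-socket `SO_LINGER` knob; as a small sketch, in Python it is set by packing the C `struct linger` fields (on/off flag, then the timeout in seconds before `close()` gives up and resets the connection):

```python
import socket
import struct

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# struct linger: l_onoff=1 (enabled), l_linger=5 (seconds to wait for
# the final ACK before close() aborts with a reset).
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 5))

onoff, seconds = struct.unpack(
    "ii", s.getsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 8))
s.close()
```

Note that the blocking nature of a lingering `close()` is exactly the "doesn't work asynchronously" complaint above.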
F
Third
of
all.
In
practice,
protocols,
including
http,
are
written
so
that
they
don't
require
protocols
that
are
built
on
top
of
tcp
don't
depend
on
receiving
the
fin
from
the
pier
they
have
app
level
acknowledgements
that
allow
them
to
unders
to
you
know,
know
whether
their
the
operation
they
were
trying
to
complete
was
completed
or
not.
Okay,
now
quick
is
a
little
different
and
and
in
quick.
F
First
of
all,
one
of
the
different
differences
is
that
all
the
protocol
handling
happens
in
user
mode
as
opposed
to
kernel
mode,
which
has
a
couple
of
implications,
one
of
which
is,
we
don't
want
you
to
shut
down
your
process
until
the
all
the
tear
down
of
the
of
the
connection
state
has
the
quick
connection
state
has
completed,
whereas
the
kernel
takes
care
of
that
for
you
for
tcp
it
doesn't
now.
If
that
has
to
be
taken
care
of
in
in
in
user
mode.
F
Additionally,
quick
seems
to
rely
more
on
where
it
seems
to
intend
for
folks
to
be
able
to
reliably
get
notifications
that
the
stream
has
completed
in
in
part,
perhaps
because
of
this
exact
problem
I'm
talking
about,
which
is
that
you
need
to
know
when
that
happens,
in
order
to
make
sure
you
don't
tear
down
your
user
mode
state
and
so
as
part
of
quick
stream,
we
will,
you
will
be
able
to
well.
Let
me
put
it
this
way.
F
So
that
means
that
if
you
want
to
handle
any
errors
that
come
from,
you
can't
wait
on
any
particular
acts.
You
can't
say
tell
me
when
the
pier
has
act,
this
many
bites,
but
you
can
wait
on
effectively
the
entire
stream
being
successfully
processed
and
gracefully
closed
down,
and
you
can
be
notified
of
errors
during
that
process
by
throwing
from
dispose
async.
F
Exactly
all
the
flesh
will
guarantee
really
is
that
it
has
been
handed
off
to
whoever
is
responsible
to
all
the
plush.
Let
me
even
back
up
if
you
don't
call
flush.
This
is
assuming
we
require
flush
in
quick
stream,
but
if
you
don't
call
flesh
and
the
data
may
not
be
sent,
it
may
be
sent,
but
it
may
not
be
set
right.
If
you
call
flush,
then
we
guarantee
that
the
data
will
be
sent
eventually
modulo.
The
entire
thing
blowing
up,
in
which
case
you'll,
be
notified
about
that
in
a
different
way.
A: ...write when needed by the protocol, and they flush when appropriate. And again, in the case of something like a pipe, they'll simply... I mean, they would call Flush if they had pending data, I guess, and then they just call Close on the write half of the pipe, and then everything else works. So, yeah, we're really down to... yeah.
A: A buffer between, or a dynamic buffer between, two components: it's still duplex, and the notions here make sense. Really, you should be using whichever of channels or pipelines is right for that, but...
B: ...bidirectional. This actually works, because pipelines has IDuplexPipe, so yeah, that's true.
B
Okay,
I
guess
we
also
want
to
add
this
to
ssl
stream
and
do
we
need
api
review
approval
to
add
it
on
any
other
streams,
we.
C: ...find? No, I think we just do that as part of the PR, yeah. But I do still have some questions about behavior and a few things. So let's say I have a NetworkStream, and I give someone AsReadOnly, and I give someone else AsWriteOnly, and they both Close or Dispose their streams.
C: I feel like we probably need to get rid of... well, either we need to get rid of AsReadOnly, or we need to add CompleteReads, and we don't really know what that does in a variety of places. And then my question still stands, I think, presumably just in a different form: presumably, if I give someone AsWriteOnly, I still need to eventually call Dispose on my stream? Yes? Yeah. And do I know when I can do that?
A: Right, so now the native resource gets queued for finalization, right? Yeah. I mean, thinking of it in terms of NetworkStream or SslStream or etc.: you have, you know, "using (sslStream)", do some stuff...
A: ..."oh, I'm writing compressed data", using a GZipStream over stream.AsWriteOnly; and now I've given it all of my writing and I'm done. Because, assuming GZipStream does cascade its Dispose, once you close that GZipStream, the write channel is done; and now you still own the read half, but when you close it, you're done. So...
A: It makes sense in my head. I can also see where there's some room for more finalization than we would really like.
B: It's not fully clear what we'd do with CompleteReads in QUIC and elsewhere, like SslStream. I would prefer not to add those.
A: Right. So I think my understanding of Steve's comment is: if we want to add AsReadOnly, we probably have to add CompleteReads as a virtual that generally does nothing; and if we don't want to add CompleteReads yet, then we don't add AsReadOnly. So I like "less is more", even though it violates symmetry.
A: I look forward to the PR. All right: do we agree that DuplexStream is general I/O, and not, like, you know, net...
F: One other issue that occurred to me as we were talking: a lot of wrapping streams have a concept of whether they own the inner stream, and therefore should dispose it when they are disposed. Do we need that on AsWriteOnly?
F
It
was
also
brought
up
that
some
people
just
want
to
write
only
a
read-only
wrapper
stream
in
general,
and
in
that
case
I
think
you'd
want
to
be
able
to
control
whether
you,
whether
you
dispose
the
inner
stream
or
not,
but
maybe
we're
just
saying
that
scenario
scoped
out
here
and
we'll
deal
with
it
later.
If
we
deal
with
it.
A: Yeah, that's sort of how I feel: you can achieve that with everyone's most favorite non-cascading-dispose stream, which probably everyone has written at one point or another in their .NET lifetime.
B: The boolean would apply to the duplex stream: if you're using AsWriteOnly on a duplex stream, you want CompleteWrites.
A: "I'm going to write a content marker, then write some gzip stuff, and then end the content marker", or whatever: then you would just open the GZipStream with your stream instead of with the AsWriteOnly. And if we think that's complicated, now we want to rename AsWriteOnly to be, like, CompletableWritingStream. Then, like, we can do...
A
That,
if
we
feel
that
that,
as
because
it's
not
really,
as
which
I
think
is
why
I
started
with
get
write,
only
stream
because
disposing
the
thing
that
is
returned
from
this
is
different
than
disposing
us
and
has
a
different
context
of
state
management.
E: Generally you use the "To" prefix when you're creating a new object with a standalone lifetime; but this doesn't have a standalone lifetime, right?
A: Well, yeah. I think Levi brought up our high-level heuristic of As versus To: "As" means the GC was not involved, and "To" means the GC is involved. So, like, array.AsSpan makes a new thing, but it makes a struct, so it didn't allocate anything; so it's "As", because the GC didn't care.
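A rough Python analogy for that As-versus-To heuristic (not the .NET naming guideline itself): `memoryview` behaves like "As", a view over existing storage with no new buffer, while `bytes(...)` behaves like "To", a fresh allocation with its own lifetime.

```python
data = bytearray(b"abc")

view = memoryview(data)   # "As": no copy; mutations show through
data[0] = ord(b"x")
assert view.tobytes() == b"xbc"

copy = bytes(data)        # "To": an independent allocation
data[0] = ord(b"y")
assert copy == b"xbc"     # the copy did not follow the later change
```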
C: This will likely take up the next 45 minutes. I know there were some other things that people wanted that had been on your agenda. How do you want to handle that?
D: Yeah, so we can do one of two things. We can either just declare bankruptcy and say we will not finish interpolated strings anyway, because we only have 45 minutes left, and actually just try to get the other quick things approved; or we can just say "screw it" and go over into interpolated strings. It kind of depends on how strongly you feel about it, Steven.
C: I would like to get the support for this merged by the end of next week. So if we can... if you want to punt this, we can, you know, do it first thing on Tuesday; I'm okay with that.
A: Okay. The itinerary I saw was: the seal-internal-and-private-types analyzer, GC usage, then calling convention, then SIGTERM. But okay, then let's do that; we can do whatever order. I think Steve's proposal is easy; we'll just say yes. Yeah, that's what I was thinking.
C
C
Our public API guidelines, to my dismay, say don't seal by default, but there are many good reasons to do so. And so for internal/private stuff, I would like people to be able to turn on an analyzer that says: hey, you'll get free perf if you just seal this thing. Yeah, so that's what it is — basically an analyzer, and we can make it configurable.
C
So you can make it configurable based on visibility, but by default it would be: if you have an internal or private type and no one is deriving from it in the assembly — and maybe some other rules we choose to throw in — then suggest that it be sealed, and a fixer that would do so.
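The rule being proposed would fire on something like the following — a hypothetical example, since the diagnostic's exact wording and ID were not settled here:

```csharp
// Before: internal type with no derived types in the assembly.
// The proposed analyzer would suggest sealing it, since sealed
// types let the JIT devirtualize calls ("free perf").
internal class JsonCache { }

// After applying the proposed fixer:
internal sealed class JsonCache { }
```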
C
D
A
Right — I mean, if we super-duper absolutely care: this last point only applies to actually internal types. If it's a private type, IVT (InternalsVisibleTo) shouldn't shut off the thing. But, cool.
F
A
C
I
C
C
I don't know what kind of arbitrary configuration we support, but we could certainly say, like, you know, "I care about IVT" or "I don't", defaulting to yes, and then you turn it off. Okay.
I
D
That seems like a reasonable tradeoff, because it's easier to configure this way, right? The other thing is, now you have to find out what you have to do to turn it on for IVTs, and it just seems — if the whole analyzer is opt-in, that's your one switch, and then the other one is just normal suppression. Yeah — and my proposal—
C
—was that by default this is at best info, and probably hidden by default, which just means it shows up only if you, you know, put your cursor on the relevant line, and then it shows up as a little light bulb. So, yeah.
I
A
K
G
G
This feature request came from the Exchange team. Their situation is they have like 50 managed processes on any one box, and what made things hard to manage for them is that some processes had much larger heaps than were actually required. So they would like a knob to tell the GC to conserve memory to some extent. And when Maoni and I talked, you know, the idea was to introduce a dial that would basically tell the GC: try this hard to conserve memory. And the knob we came up with—
G
—basically is a knob that tells the GC how much memory should be useful memory, as opposed to fragmentation. And so we propose to introduce an API that lets the user program adjust this at run time, because the idea is also that this may change depending on what kind of situation the app is in. There may be apps where, at some points, you really want to handle, say, net traffic as efficiently as possible; there may be other times where it's more important to conserve memory, and so on. So that's why it's a setting.
G
I think what we can certainly discuss is the naming — whether the naming is good. And I think it's probably useful to have a read/write property — a static read/write property — because that also enables code to save and restore the setting and, you know, change it for some intermediate activity where, say, we don't care about memory and just want to get things done as quickly as possible. So that's sort of the summary of why we want this.
C
K
Yeah, right. So this, as the name suggests, is an optimization goal. This is a way for the user to express to us, basically, their desire for how they want memory usage to be optimized. And, you know, eventually we want to add the other two aspects as well. Generally there are three aspects when you talk about memory usage optimization goals, right: there's obviously the heap size; there's the throughput — whether you want to optimize for throughput or not; and there's pause time.
K
So it's the first step. By the way, we've already tried this with Exchange on .NET Framework. So we wanted to provide the memory dial for that goal first.
L
C
I guess — me not being steeped in the innards of the GC — from a user perspective, this seems like another setting for the GC, so I'm just wondering why we wouldn't put it on GCSettings.
G
Well, it's in the alternative designs. If you could scroll up — I'm at the top.
G
Scroll down. So — that would be an alternative. I think, as Maoni explained, we want to have more than this one eventually, so it may make sense to make a separate class, but we could also put it under GCSettings; it just means the name has to be somewhat different. I put OptimizeMemory for now, but we may be able to come up with a better name.
C
Well, let me ask this question: you talk about adding two more knobs in the future. What is the expected interaction between those knobs? So, like, if we had this dedicated GC goal class, does that in any way impact the relationship between — like, I set the GC optimization goal for memory to six, and then we add a pause time overload?
K
K
Yeah, they definitely would interact with each other. So what we would do, you know, in internal ways, is we just calculate a weight based on what you specify for each value, and then we do the best we can to achieve that. So let's say you say "I want to optimize for everything" — then, you know, we'll just optimize equal amounts for each goal, right? But if you say, like — or you just don't specify the other two goals—
K
C
K
The way the latency modes — the current settings that we expose — I think in the long run we probably want to, not obsolete them, but kind of phase them out in the background, because we really want the user to not have to care so intimately about GC behavior. Like, for example, with SustainedLowLatency the user has to understand: oh, I don't want the GC to be doing these blocking GCs often; I want it to be doing background GCs.
K
A
M
Too many buttons. Sorry — thanks. I think the question — I see the example up here where it seems like this is something that we would expect users to be able to change on the fly.
M
That seems like a very dangerous operation. Because, basically, what I'm seeing from what I'm reading here is: the user sets the goal to zero, and that reduces pause times, and so the assumption is it was doing more work in this case. And so as soon as they reset it, then it could go in the opposite direction. So instead of saying zero, let's say they put nine there, and then they revert it to a lower number — wouldn't, as soon as they revert it, the GC have a spike of work to do, and—
M
A
I think if you went from nine to zero, you would say: run the GC all the time, so we don't grow our working set; and then, when you revert it back, now you're saying: be lax and do what you want. It's when you go seven to zero back to seven that it goes: oh crap, I have too much slack memory, I need to run a full GC now.
A
K
We don't really — like, I wouldn't expect a user to change this all the time. I would expect them to really just change it when they, sort of, switch phases — meaning that, you know, let's say I'm on my startup path, and now I know I'm going to go into steady state, so I kind of have different expectations for my usage. Yeah, or—
A
My concern with the usage example here — which is similar to Aaron's, but I'll say it — is: if we really think that we want a save/restore pattern like this, and it's a global static, it sounds like we need some kind of guarantee of non-race-conditions. Because if I snap what the state was, then in my parallel operation, right as someone sets the optimization goal for memory to lax, I'm going to end up pushing it back into the lax state.
G
Well, certainly you wouldn't want every Tom, Dick and Harry to mess with it in their little methods, right? This is really an app-wide setting, and it only makes sense to use it in this way if the app changes behavior — say it goes quiescent and doesn't do any work; then you want to shrink your working set, probably.
G
What — what specifically would you object to, I mean, in the example? Is it the try/finally, or is it the fact that it's using—
A
B
K
A
Okay, but again, that's more an app state, and they probably don't care what mode they were in before. They're just saying: oh, I seem to have gone idle; go into this other mode. Not: remember what I'm doing, remember what I decided, do some work in this other mode, and then pop back to what I was doing. The push/pop is where I really feel that — if we think that's going to be common, and operation-scoped or thread-scoped or something — then we want something more complicated.
A
A
L
L
Libraries will, and so I think it's fine that power users and power libraries have to explicitly think about synchronization and how it may interact if another library, for example, tries to read or modify the settings. Because it's not like a core scenario that every app is going to do; it's going to be people like Unity or Exchange — who are writing very specific and targeted scenarios — that will even think of touching these things.
A
Yeah — mine really is: the sample here suggests to me that we think that this is a scoped operation, like logging is. And if we think it's really macro, I have no problem with it being get/set. My point is: if this example represents how we think people should use it, I think we have the wrong API.
G
A
E
Oh, I think actually Aaron and Jeremy asked one of the questions I wanted to ask, about race conditions. I guess the other question I had was: in the issue, they had talked about having an enum for this instead of just a regular int.
E
Would that allow us to set something like Lax to have a zero value for the enum, and MostAggressive to have, like, int.MaxValue? That way, if we ever wanted to add stuff in the future, we just had intermediate values. Or even just saying, around, like: what is the range on this thing? That's not clear to me from looking at the API.
G
G
A
—than zero, we don't tell. Okay, so zero is "do whatever you think is best", and one is "I would like to express an opinion, and that opinion is: memory is cheap". Yes.
G
And now — just to let you know what it actually is: for server GC, one is more aggressive than zero, but for workstation GC it's the other way around.
E
Sure. I think this could actually benefit from an enum, then, because then — instead of making people memorize magic numbers, like, is one less or more aggressive than nine, for instance; like, is this a measure of aggression or a measure of relaxation — an enum is able to explicitly say Default, LeastAggressive, MostAggressive.
G
I think having multiple values — I mean, more than, say, three — is good, because in some cases you really want to be more precise than what you get with three levels or so, and beyond three levels the naming gets confusing. I think then you have things like MediumLow or the like.
C
For compression, though — well, for compression we don't provide a whole lot of choice. It's like: optimize for optimal size, or optimal throughput.
C
I was going to suggest — I wonder: there are other places, and I think maybe even some with the GC, where we use a double, like zero to one; and then it could be like a nullable double, to say: is it set or not?
L
Yeah, that's what I was kind of going to suggest as well for my question, which was: what happens if a user passes in more than 10, or a negative value? And in particular — if zero is default, and it's zero to one, zero isn't strictly speaking lower than one. I would almost think that if we were to do an int, then 0 through 10 being ordered least to most, and negative 1 being default, would make more sense, because then you have something where negative 1 just means default.
L
A
K
I don't think it's practical to make it that fine-grained. Because the reason why we have it as integers is we think, you know, it's kind of a percentage, and one through nine kind of corresponds to ten percent through ninety percent — that's fine-grained enough. But if the user really wants to be that — like, "no, I don't want ninety percent, I want 92 percent"—
K
—you know, it's up to the GC to decide, and in the future, when we can do a better job, we'll do a better job just automatically, right? So having a zero-to-one double — I don't know, it feels to me a little—
C
C
So then what is GCMemoryInfo.PauseTimePercentage?
K
Right, it's a percentage that we give, because we can calculate this very precisely, right? That's — you know, if we tell you it's 0.23, it is 0.23. But when you set something — if you tell us "no, I don't want 0.23, I want 0.21" — you're probably not going to see any difference whatsoever.
C
A
Yeah, because — well, the gzip compression level: while gzip internally has a number, which I think is one through nine, we expose Optimal, Fastest, NoCompression and SmallestSize, and they're just words. Fastest being one — is it greater than zero in any meaningful way? And NoCompression is bigger than Fastest — does that mean anything?
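The CompressionLevel enum referenced here does carry numeric values, but as the speaker notes, they are just labels, not an ordering of aggressiveness:

```csharp
using System.IO.Compression;

// The underlying values are arbitrary identifiers, not a dial:
// Optimal = 0, Fastest = 1, NoCompression = 2, SmallestSize = 3.
// NoCompression (2) being "bigger" than Fastest (1) means nothing.
var level = CompressionLevel.Fastest;
```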
A
No, they're just words. If we want an enum, having Default, LeastAggressive, Aggressive2 up through Aggressive8, followed by MostAggressive — which we would probably just make max int — seems okay. And it also says: these are your options, instead of "oh, it's an int, but don't give me a number that's 10 or bigger". Because that feels — I can't think of places where we have that, like one through — or zero through nine — being a valid range, unless it's, you know, tied to a length setting next to it.
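The enum shape being sketched in this exchange would look something like the following — hypothetical names, since nothing here was approved as-is:

```csharp
// Hypothetical shape of the discussed goal enum. MostAggressive is
// pinned to int.MaxValue so intermediate values can be added later
// without renumbering anything already baked into callers' IL.
public enum MemoryOptimizationGoal
{
    Default = 0,
    LeastAggressive = 1,
    Aggressive2 = 2,
    // ... Aggressive3 through Aggressive8 ...
    MostAggressive = int.MaxValue,
}
```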
L
Maybe a good comparison is sliders — slider controls in WPF and WinForms. Those all use double, and most of the time you don't care if it's, like, 0.213 versus 0.2235; most of the time you generally have 0.1, 0.2, 0.3, and if you set it to one of the intermediates, the UI rounds it to the closest actual value. Yeah — there are, of course, maybe the concerns that Maoni and Peter have raised on—
A
I mean, if they set it to 0.21, and in 2021 that means it's what would have been two from this proposal — or, unless, it's three — and then in 2027 that means it's, well, slightly more than two and slightly less than three: like, that's totally fine.
K
So, to tell you a bit about our experiments with Exchange: Exchange actually did better with, like, seven than with five. Oh — that's what they're using now.
K
A
K
D
A
Well, right — but that's actually the problem, right? If five is documented as "be as aggressive as you possibly can and keep your working set from ever potentially growing at all", and we change it to "sorry, we were kidding, 50 is now the maximum", that feels really bad, and the app has no way of reacting to this.
E
A
G
Now, what are our versioning rules for enums? I thought that enums basically don't really version — but that's from a long time ago, I mean.
B
L
C
So yeah — I mean, you know, and Peter, to your point: our rules about what we consider breaking have definitely changed over time. Obviously we could break an app just by introducing a new enum value, just like we could break an app by introducing a new method on a type. So we've cataloged it — we've classified it as non-breaking — and we're happy to do it, because otherwise we couldn't evolve. Yeah.
A
Yeah, so MostAggressive — you know, what Levi said — we would call that int.MaxValue, even if the API, when you set it to MaxValue, turned it into nine. Because you can't renumber something once it has a number — the number is what gets baked into the IL, not the name. And that goes back to the previous point: when you built, Maximum was five, and now you're running on something later, and it turns out what you really said was "be extremely loosey-goosey".
E
A
A
L
Yeah — I think, in particular, users are used to having a zero-to-one double, even though double can go up to, you know, 1e308. Users aren't very used to having an int that's only zero through ten, where everything above it is invalid.
L
B
A
A
A
I mean — when you start up the app and you read it the first time, it has to have a value. We can say that's negative one; we can say that's NaN — NaN is "there's really no number here". We could also make it nullable; it doesn't really matter. Again, the GC can use a double with NaN, and the public API uses a nullable that's bounded between zero and one. Like, we can do whatever we want.
E
Okay. It's just — I'm not used to APIs intentionally returning NaN. I guess I would defer to Tanner on that, though, and — let me know if it's actually not that uncommon for APIs to intentionally—
L
It's fairly common in the case where you want something that is effectively a placeholder value.
L
It's just like where we use negative 1 to represent a buffer length, because we don't want to hardcode that as a constant in the default parameters; or WPF, for example, uses NaN to represent that a length was not specified in control sizing. Rust and other languages use it to do niche filling on nullables — that way you don't have to carry around a pointless bool and slow down your code.
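The sentinel pattern being described looks like this — WPF really does use double.NaN this way (for example, FrameworkElement.Width defaults to NaN, meaning "auto"):

```csharp
using System;

// A placeholder "not set" value, as discussed for the proposed goal.
double goal = double.NaN;

// NaN never compares equal to anything, including itself,
// so the check must go through double.IsNaN rather than ==.
if (double.IsNaN(goal))
{
    Console.WriteLine("No goal specified; use the runtime default.");
}
```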
A
I think that the new type would make sense if, instead of it being setters on this type, it was getters — and when you wanted to build a new goal, you got to pass the values, and then it normalized them for you to give you a hint. But that sounds way too complicated. I would just put it on GCSettings; it's already a bunch of "you need to be really smart to know how to use this correctly".
A
G
A
So — yeah, my quick stab, combining the names of the type here and what it is: MemoryOptimizationGoal.
A
K
A
K
G
A
That we gave up on, yeah. We added a bunch of them for TLS cipher suite names, because without the underscores — which are their actual spec names — there are ambiguities in interpretation, but—
C
G
A
Yeah — I see zero as "be as lax as you're willing to be" — not necessarily "never run garbage collection", but as lax as you're willing to be — and one is "be as aggressive as you're willing to be". It's not literally "my working set must not increase more than if I was writing perfect code in C", because if you want that — you see.
C
I'm just curious what, in fact — so I set this to "keep my working set as small as you're willing to" — what does that mean in terms of how frequently the GC runs, how much time it takes to run, and all that kind of stuff? That doesn't impact the API; I'm just curious for my own business.
G
Well, if you set it to something like 0.9, what it would mean is you're aiming for 90% useful stuff, and whenever that increases to 100%, you run the GC. So, in other words, 10% is sort of our wiggle room where we can allocate without running the GC, and then we run the GC. In terms of pause time, etc., I cannot tell you, because that will depend very much on your heap size.
C
A
D
L
A
All right — and because we are finishing a thing at exactly the right time, we didn't run over by 15 to 45 minutes today.
K
M
E
Seal the attribute; mark it AllowMultiple = false.
M
E
Yeah — so why not use one of the existing attributes? That part's not immediately clear to me from looking at the issue. Can you — sorry — can you elaborate on that? Because wouldn't this actually go in the DllImport attribute itself, normally?
M
L
L
And then the only other question was: I wasn't a super big fan of UnmanagedCallee, and I think I suggested an alternative — like just UnmanagedCallConv — which might be a little bit more straightforward for the average user, and fits in with the existing CallConv types that you're expected to use with it.
E
Yeah, I agree with that. Like — Tanner, while you were talking, I looked through our public API surface for the term "callee", and I just don't see anything relevant. There's some stuff in some UWP packages, but nothing that's actually in, like, the SDK proper.
M
L
Yeah, I'm not strictly against UnmanagedCallee myself either; I think it does provide good symmetry with UnmanagedCallersOnly. I was just trying to indicate I'm not a huge fan of the name "callee", and so I suggested—
B
L
L
Yeah, it's not a callback. Many of these are going to be used on DllImport — in favor of, like — for example, if we expose vectorcall in the future, you'll—
M
A
M
L
D
M
E
E
So, Aaron, quick question: do you imagine this being actually honored by the runtime, or is this only for the source generator stuff?
M
Oh no, the runtime will honor this, ideally. The reason — so, how this whole thing works is: right now, if someone said "I want to P/Invoke into a native function that is vectorcall", we're broken. It's never going to happen; we can't do it. This will allow users to say "this is a C-style function that supports vectorcall", and now we can at least have a way of doing that.
E
And do you imagine other stuff being added to this in the future? So, say, for instance, we didn't have SuppressGCTransitionAttribute: would that have nominally gone on this type, or—
L
I don't think we'd have been in that situation, because we required SuppressGCTransition — CallConvSuppressGCTransition — for function pointers. UnmanagedCallersOnly, this attribute, and function pointers: the types they all take are the System.Runtime.CompilerServices.CallConv* types, and so they're all sharing the same, effectively, lookup mechanism to determine what the actual calling convention is. Now—
L
E
M
E
M
H
H
M
L
Right — and CallConvMemberFunction, or whatever we called it, as well: the new one that's going to be used for COM — standard call plus this call, basically.
A
Aaron, your comment here is actually imprecise, right? The semantics are not identical to Winapi; they're identical to platform default. When Winapi is platform default, we can — yes, you're right, you're right. Oh, right: the thing we call CallingConvention.Winapi — when it's not Win API, it's actually platform default. Yes, yes. Okay.
A
A
L
E
L
E
Just out of curiosity: what's the behavior if you put UnmanagedCallConv — which implies platform default — and have a DllImport that has an incompatible convention? You imagine that's just a runtime failure?
E
Sorry, say that again. So you have DllImport with CallingConvention.Cdecl or something, and then you have the UnmanagedCallConv attribute with either an incompatible or no calling convention at all specified. So—
M
E
M
E
L
L
E
A
L
E
Yeah, I don't necessarily want to create the work for people to write an analyzer, just because I don't imagine a whole lot of people are going to be bitten by this, and maybe documentation is sufficient. But, you know, throwing it out there: is this value-add? Who knows?
E
Maybe — you know, that's a very fair observation.
A
A
M
M
L
A
E
L
E
A
Okay, I think we're good on this one, and I need to terminate — by, apparently, two minutes ago. So I will hit "approved", and be nice and do the completion paperwork, and then I'm out of here.
L
A
L
And Emma — we still — I need you to schedule the preview API review for the generic math work. We have to do an early preview of that, so we know that all the work I'm going to do over the next three months isn't for naught.
D
A
Okay — internet, yeah, I have to terminate the stream. They may keep chattering, but I, unfortunately, am necessary for you to keep seeing the chatter. So, see you later.