From YouTube: GitHub Quick Reviews
A: Hello, everyone. So, Fowler, do you want to give us the introduction to this, since you filed it?
B: Good question. So what came up for QUIC is that QUIC can change the ID throughout the connection lifetime. I don't want to say that we have tried that; currently, what we do is we actually cast the ID, but it's not guaranteed as part of the API. I don't know if this ID might map one-to-one to the QUIC ID; that might require more design.

B: So I'm not sure if that needs to be a requirement — that it can't change. Yeah.
D: I think that — and, plus, QUIC can actually have multiple connection IDs at the same time. So my guess is that this connection ID is completely different than the QUIC connection ID, and therefore we should probably say yes, it cannot change on a given connection instance; it's expected to remain the same. Yep.
C: So one way to enforce that would be to have a non-virtual property whose getter calls a virtual member once, and once the result is not null it won't ever ask again. That would let the base class enforce the contract.
C: Yeah — or a protected virtual string GetConnectionId, or whatever, that gets called by this property's getter once. Yeah, memoization, yeah.
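The memoization pattern C describes might look like the following sketch. The type and member names here (ConnectionBase, CreateConnectionId) are illustrative assumptions, not the reviewed surface:

```csharp
using System;

// Sketch only: a non-virtual ConnectionId property that calls a protected
// virtual factory at most once, so the base class can guarantee the id
// does not change for a given connection instance.
public abstract class ConnectionBase
{
    private string? _connectionId;

    // Non-virtual: derived types cannot change the value once produced.
    public string ConnectionId => _connectionId ??= CreateConnectionId();

    // Called at most once per instance (modulo a benign first-use race);
    // derived transports can return a transport-specific id here.
    protected virtual string CreateConnectionId() => Guid.NewGuid().ToString();
}
```

Derived types override only the protected factory; consumers always read the stable, memoized property.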
F: It's not necessarily — like, the inbox versions might have some uniqueness. For instance, if we're using QUIC, we might put in a connection ID, but I don't know if there's a guarantee of uniqueness. This is mainly intended — it will probably be unique for the user, at least unique enough to use to correlate logs.
D: Okay, yeah — there might even be some transports that intentionally reuse IDs; not at the same time, but, you know, if you think about sockets. Well, one thing — we're not going to do this, but we could say, hey, we'll just expose the file descriptor here, and that of course can be reused. We could also expose the socket pair — source and destination IP and port — which gets a little wordy, but would actually be useful in some ways. But —
D: Can it be reused over time? I don't think this is a strict uniqueness guarantee. It's really for tracking purposes, and I think we should make it as friendly as possible, in terms of trying to avoid reusing these — but it's not the worst thing in the world if that occasionally happens.
H: So we had this discussion yesterday, because Kestrel does something exactly like that, but we don't use the connection ID. We use an internal long that we create, just because we don't want to rely on that guarantee. The connection ID is used mostly for logging.
C: So the socket connections would all be distinct for distinct sockets, but potentially the TLS stream connection may use an overlapping ID inadvertently — say, if they just did a static++ to figure out what their ID was.
C: Yeah, so TLS on top of a socket would be two different things, and I —
C: Passive FTP, where you are reusing a thing for temporary sub-connections — would that be the same ID for all sub-connections, or a concatenation? I'm just trying to figure it out. I thought I understood it: basically, everybody could use an implementation of Guid.NewGuid().ToString() as their default and be happy — not necessarily useful. But if they're supposed to be correlatable, then it sounds like you really just want the full tracing notion of who's my parent and where did I come from?
B: Yeah, so I think the issue with trying to enforce that hierarchy is that you may want to represent an ID that you already have from the underlying transport, so you want to surface it somehow, right? It does feel wrong to me — I haven't thought through this part — but having a different ID per middleware as it wraps connections feels wrong to me; that's actually the opposite of what I think you want.
H: Yeah, most middleware forwards all connection properties unless it has a reason to do otherwise. Obviously, if you're doing TLS, you wrap the pipe or the stream; but you generally forward all the connection properties, and in this case, the connection ID you could probably forward.
F: I mean, we could just as easily use an incremented number, right?
F: If we try to guarantee some uniqueness within the process, I think we'll be okay.
E: Yeah — generally, if this could end up being exposed to the user, you don't want to use incrementing numbers, because it discloses information about how active your website is. But yeah, interesting.
D: It's a lot easier to see "connection 00000003", or whatever it comes up as, if you're doing testing, as opposed to "fa-69-whatever". Maybe we don't care about that, but, you know.
B: It's nice, yeah, because we basically have an ID for connections that is increasing, and then we have the request ID be a number off of that. So we have the ID for the connection, plus the number of requests over that connection, as the ID by default. Sorry — it's leaking a ton of information to people, but it's super nice.
C: That's a double-edged sword, which was, I think, Levi's point: if it's predictable, and it's being reported to you, then maybe there's a way to use that to your advantage; if it's not, then you can't. But it really depends on the scenario. If the scenario is a logging and diagnostics flow — and it seems unlikely that this is what someone would choose to return on an error page or whatever, without having to find the ID on their own — then locking while bumping a static seems fine.
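The "bumping a static" default can be done lock-free with Interlocked. A hypothetical sketch (the class and member names are invented for illustration):

```csharp
using System.Threading;

// Sketch of a process-wide incrementing connection id source.
// Interlocked.Increment avoids taking a lock while keeping the ids
// unique and ordered within the process.
public static class ConnectionIdSource
{
    private static long s_nextId;

    // Returns "1", "2", ... — easy to read in test logs, but predictable,
    // which is exactly the information-disclosure trade-off raised above.
    public static string Next() =>
        Interlocked.Increment(ref s_nextId).ToString();
}
```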
C: Indeed. From an API perspective, it doesn't really matter. From the security perspective — since many of us are in the same room and can just change hats — we've stated the concerns, and I don't think we have a block on anything. Being virtual and having a default implementation is the API concern; after that, do the memoization thing.
D: On generating the default value — the base implementation here — one issue that came up previously is, if we're auto-generating stuff for people: if it's GUIDs, it doesn't matter, because GUIDs are always unique; but if it's an incrementing number, then the question is, what's the scope of that number? Is it per transport, or is it global?
F: I don't know that this even matters for this group; it seems like implementation, not API. But yeah.
D: There's, I think, one other API issue here that's related to this. The typical way you get a connection is either from a connection factory or a connection listener, but you can also create a connection directly: there are statics — Connection.FromStream and Connection.FromPipe — that allow you to create a connection without going through one of those things, in which case you don't have your own custom connection implementation.
D: You have just a simple default implementation that we give you. So what's the behavior there, and do we need an API? Do we need overloads on FromStream and FromPipe to allow you to customize the connection ID in that case? And if not — I mean, if we have globally unique, or at least process-wide unique, default behavior here, then it's easy to auto-generate one for those — but I think the question becomes: do we want to allow you to customize it in that case, right?
F: I think we just add a nullable optional parameter to the existing method, and if it's null, we generate one for them.
H: So right now it has, what, three parameters? It would be the stream (or pipe), the properties — which I assume is an optional parameter — and the endpoints.
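A hypothetical shape for the overload F suggests — the existing factory parameters plus a nullable id that falls back to a generated value. The parameter names and types here are assumptions based on the discussion, not the approved signature:

```csharp
// Sketch of the proposed surface: passing null for connectionId means
// "generate one for me" (e.g. Guid.NewGuid().ToString() or a counter).
public static Connection FromStream(
    Stream stream,
    IConnectionProperties? properties = null,
    EndPoint? localEndPoint = null,
    EndPoint? remoteEndPoint = null,
    string? connectionId = null);

public static Connection FromPipe(
    IDuplexPipe pipe,
    IConnectionProperties? properties = null,
    EndPoint? localEndPoint = null,
    EndPoint? remoteEndPoint = null,
    string? connectionId = null);
```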
B: So here's a question — Jeff, that's a really good observation, because I think the one thing that is broken today (we thought about this when we were thinking about this whole thing) is: if the transport has an ID, and then you wrap it in some other stream and you call FromStream, how do you preserve the actual transport ID as you're flowing state up the channel of the pipeline?
B: We struggled to figure out which things should be hoisted top-level versus which things should be left as entries in the property bag, and we basically do it based on what we think is commonly used. So, for example, in Stefan's PR he made the ID an extension method, because it's used all over the code base, and as a convenience to avoid having to do TryGet over and over — which is where the property bags are useful.
B: It's kind of like how LocalEndPoint and RemoteEndPoint are on the base: because they're so commonly used, logged, and referenced, you want to hoist them top-level; but for things that are lesser used, like TLS options, you want to keep those in the property bag. And then we happen to be using the property bag as a way to copy and chain properties without having to write a bunch of code. So I think having it in both isn't wrong per se, nor is it inconsistent.
B: What would definitely be a nice API to add is something like Connection.FromConnection, which basically lets you pass in the base connection as the thing to base off of, and then you can change some properties on it. Or, if we don't think we should add that, we can just have it take a string. The only issue with passing the string explicitly is that you're now going to allocate the string eagerly when you have to wrap, right?
C: Yeah, I don't think I would do it as FromConnection, because there's the ambiguity of whether it's better for you to do the pipe wrapping or the stream wrapping. So I would overload FromStream and FromPipe to just take a connection; then you've picked which mode you want to be using, and anything that gets added later gets the "I can defer to this other object" behavior.
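The wrapping overloads C sketches here could look something like the following. Again, this is a hypothetical shape based on the discussion — unset values (id, endpoints, properties) would defer to the wrapped inner connection:

```csharp
// Sketch: take the inner connection as the first parameter so the wrapper
// can flow its ConnectionId, endpoints, and property bag forward. The
// caller has already picked stream-mode or pipe-mode by choosing the method.
public static Connection FromStream(
    ConnectionBase innerConnection,
    Stream stream,
    IConnectionProperties? properties = null);

public static Connection FromPipe(
    ConnectionBase innerConnection,
    IDuplexPipe pipe,
    IConnectionProperties? properties = null);
```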
C: So we would overload both of them on the first parameter to take a ConnectionBase; and then, on the existing methods, I think, for this, the suggestion was adding the nullable string connectionId.
B: Yeah, and I'm fine punting it for now. I mean, Stefan basically has to do that in his code in Kestrel; as far as I'm aware, it's very common. I hate the same thing, but I'm fine punting it until 5.0, because you can easily work around it.
C: Having the one that takes the connection to wrap means that callers who use that one are already set up for success as this evolves in the future. But if there's some other property, and they're asking, "why can't I forward the blah property, what code change do I have to do?" —
C: Oh — instead of calling FromStream(connection.GetStream()) et cetera, I just change it to Connection.FromStream(theConnection), and, great, everything's just magic now. So, right, that can be done later, yep — especially if it's complicated — but I think from an API-conceptual perspective, adding the overload there is reasonable. Is there a chance it can introduce ambiguity?
C: So there's no ambiguity. I mean, somebody could have their own connection type on which they write an implicit conversion to Stream — we can't stop them — but as long as we don't provide it in-box, it's not likely. So I think it can be added later without problem. It'll be a problem for reflection, because everything's a problem for reflection, but as a static it won't even risk a clash with extension methods that would be likely to have been written. So, right.
F: I think at least within ASP.NET and within HttpClient we would end up using this, so the expectation is that the allocation is going to happen, at least for the two major use cases.
D: I don't think it matters — I don't think one string is going to kill us; there's certainly plenty of other overhead to a connection. But yep.
D: I hate to reopen anything here, but if the connection IDs are generated on demand, then you can potentially get into, like, on-first-use —
F: The default implementation would be Guid.NewGuid(), and the ones that we overload would probably be based on some sort of existing counter that we have — so there would be a long field already with it. It's just the string allocation that would be late-bound, probably not the ID generation.
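F's point — a numeric id fixed at construction, with only the string form late-bound — can be sketched as below. The class and field names are invented for illustration:

```csharp
using System.Threading;

// Sketch: the long id is assigned eagerly in the field initializer, so
// ordering is fixed even if tracing (and therefore the first ConnectionId
// read) starts late. Only the string is allocated on first use.
public class CountedConnection
{
    private static long s_counter;

    private readonly long _id = Interlocked.Increment(ref s_counter);
    private string? _idString;

    public string ConnectionId => _idString ??= _id.ToString();
}
```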
C: Yeah — my feeling is that all the real connections that are going to be provided are going to have semantic IDs. I could be wrong, but it seems like, if Socket uses something like the source port, destination port, and destination IP as the way it builds an ID, then it wouldn't matter if tracing was turned on late: things built based off that notion are still going to have implemented it. Plus, the moment you wrap something, that would have called the generator, and it would have gotten its ID.
D: It's interesting to say that we think most providers will have semantic connection IDs, because we don't do that today, right? I mean, the way logging in ASP.NET works is just incrementing integers, right? Yeah. So — well, it's interesting to explore, but to say that we think all of them are going to have that seems a little weird, since they don't today.
B: They kind of do, right? Well — this isn't an issue in ASP.NET, because we actually generate the ID and use it super early. We don't wait until tracing is turned on, because that would have the weird effect that you said. All right.
D: So it could be that we change that to something that's semantically meaningful in the future — like, you know, IP/port pairs — but it's not obvious to me that we'd do that.
C: I mean, but that restarts — assuming it's zero-initialized memory, it restarts when the process starts, and it's shared: if you look at the logs of two processes interleaved, then their IDs are interleaved. Versus a source port/destination port combination, which at any relative time range is unique. Wow — so they're all trade-offs. That's the implementation detail.
B: Remove the "From" overloads that take ConnectionBase — that's it; those are for later, I think.
C: Right here, right. So Connection is what has FromPipe and FromStream; ConnectionBase is what gets the ID. And we still want the ConnectionId property to be non-virtual, so that it can guarantee one-shottedness of calling a protected member.
A: Well, sorry, I'm still posting — I've not done anything. So: API approved. All right.
J: Yeah, so on this one there was some follow-up discussion about the approved shape of the API. I think there were two issues raised. One issue is that the process path really has nothing to do with the app or the application, so it kind of doesn't make sense for the "App" prefix to be in the name. And I think the second issue is that, for the other API — the one that's equivalent to AppContext.BaseDirectory — it's unclear what it should return.
J: It's kind of poorly specified, and there are basically a lot of policy questions about what it should return in different situations.
J: My take on this would be to just add Environment.ProcessPath, and don't add Environment.AppBaseDirectory. There is AppContext.BaseDirectory, and the same property on AppDomain, and WinForms has the same property; all of them return the same path. So I'm not sure whether we are doing any good by adding a fourth API that returns the exact same value, right? So.
A: You can look at them, and depending on what your scenario is, you have to choose among them to get the correct answer, right? So if we say there's a location where you probe for DLLs, and there's a location where you should probe for, say, your database file, or some other image file that you ship with your application —
A: — then that's a different directory, right? And so that's my concern when we say: well, some things are on AppContext and we just call it BaseDirectory, and then there's another one — whatever it's called here, AppBaseDirectory — on Environment that returns a different thing. Then we're not really helping the customer make a determination; we're basically just muddying the waters by sprinkling APIs over various areas.
J: But, you know, I think it's incredibly hard to come up with APIs that match the scenario. So I kind of like the place we are today, where we have BaseDirectory — something that works for, like, 90% of the cases — and if the policy it implements doesn't work for the given case, just let people assemble it from the policy-free APIs. You know, they can combine the entry-point assembly, the exe path, whatever else they want.
K: But in order to do that, we have to give them other APIs to know what kind of environment they're running in, right? There's no way for me — even if we have ProcessPath. Say we have ProcessPath: I can't use ProcessPath all the time to say "this is where my app is", because my app might have been invoked as dotnet.exe from Program Files with a path to my app — and now the process path is the dotnet.exe under Program Files. That's not where I'm going to be looking for image files, right? So I can't use that one. And I can't use AppContext.BaseDirectory, because in single-file we open a new temp directory, put a bunch of DLLs there, and say, "well, that's the base directory" — and it's like, well, I don't want that.
J: My recommendation would be: ship the app as an exe, and use GetProcessPath to handle this case. You know, running things with "dotnet" doesn't work today already in a number of cases. For example, you cannot run a WinForms app using "dotnet foo.dll", and there are other cases where it doesn't work, right? So, when you are building a concrete app —
A: So let me just try to get your position, then. Is it fair to say that you don't believe in having an API that works regardless of how the app was executed? You're basically saying: for a particular app — whether it's AOT or not, whether it's trimmed or not, whether it's run with "dotnet" or not — the developer is on the hook to find the appropriate API to find their files, but there's no single API that will work for all the scenarios.
A: I mean, honestly, for library developers I kind of agree with you, because it is virtually impossible to write a library and make assumptions about how the application is being deployed — that virtually never works. But I think, if you are the app developer and you want to load resources, you're in control of where they are, right? The library cannot assume that the folder it depends on is always relative to the exe, right?
B: It does — so we have one piece of code in the host that does the current directory, and then everything will use that resolved path that's configured in the host up front. So it's actually in one place in the entire application.
B: So, just so I understand: you can make it work if you don't own the host per se. But in the case where it's a single application running — I guess it isn't really about your host. You have an app running where there's this library that wants to know where the entry point is, to find configuration files, and it depends on how you deploy that same application — the same application itself. So I have one app running either single-file, or self-contained, or whatever.
J: Well, you know, there are two different cases of configuration files. It might be configuration files that you want to get bundled, so that they don't actually live as separate files on disk; or you can have configuration files that you actually want to live on disk as separate things, right?
K: ...a variable to define this. I think what we've defined as AppContext.BaseDirectory is this value, and then the host starts setting it; and if you're in a hosted scenario where it's some other host, then they should pass that in, if you need it. And 90-some percent of the time, yes, it's going to be the same as AppContext.BaseDirectory.
A: I mean, I don't think it's wrong; it's just that you're chasing the long tail of policy-heavy APIs, right? You add more and more values because you have more and more nuance. That's the downside to it. But I agree with you that it seems common enough that we should think about it — and I also think we make our own lives more miserable.
A: I mean, to a certain extent I hear what you're saying, but I also strongly believe in the blue-pill/red-pill APIs, right? This is a red-pill API: it tells you accurately which process is running your application. And I think it's important that we also have those APIs, because we need plumbing and implementation infrastructure that can actually reliably tell: okay, what is the process? And then there are scenario-specific, virtualized APIs that are more about giving you the blue-pill environment, where it's like —
A: Well, if you want to load content files, I give you the content-file directory, and that might be a different directory based on how you're hosted, and that makes more scenarios work, right? But I wouldn't necessarily block red-pill APIs just because we haven't designed the blue pill yet — so in that sense, I would agree with that. I mean, my only concern with splitting this design is that, you know —
A: As I said, I don't want to sprinkle these APIs over different types, but I think Environment seems like the right location for ProcessPath — or, I should say, "ProcessPath" rather than "AppProcessPath" in this case — and then, if we decide later that we want, I don't know, ContentPath or something, it seems reasonable to put that on Environment as well.
A: Right — this is what I meant earlier when I said we need to have good docs. You basically have to have triple-slash comments, or summaries, that IntelliSense shows, to differentiate these. And that's also why I think naming is important, so that they're tied together — not far apart in IntelliSense, right? That's why I originally had the "App" prefix, so that they're all tied together. But I think, in general, Environment is not that large.
C: So, just for the things that do the temp-directory extraction model: that means the executable that is extracting into a temp directory doesn't launch a child process — it then becomes the host after doing the extraction. Yeah, that's good. Okay.
C: This is the path to the thing that shows up if you're looking in Task Manager, or tlist, or top. Yeah.
J: Okay. You know, it's mentioned at the bottom of the issue: this model — where the exe and the DLL of the entry point are in different directories — is not unique to this self-extracting case. Xamarin on Android has a similar setup, where the exe lives in one directory and the entry-point DLL lives in a different directory. So that's how the things have to be laid out on disk.
K: Yeah, the community filed it, but then we changed it a little bit — Fowler also knows a lot about this as well. So, in hosting we have an abstract class called BackgroundService, and BackgroundService has two methods on it: a StartAsync and a StopAsync, and StartAsync returns a task for the starting of the background service.
K: So if you go into asynchronous mode, the task that comes back from StartAsync is complete — because "I started, and my service is now running" — so there's no way for somebody outside the class to understand whether the service is done or not, because you can't get access to the actual task that's executing the service. And so, and then, so —
K: There are other reasons as well why it's important for hosting. Right now, if the background service starts doing something asynchronously — returning that completed task during start — and then an exception happens, nobody is logging that exception. The way we need to be able to do that is by getting the executing task and being able to log when it fails, basically. And so the idea here is to expose the executing task — the task that's running the background service — for these kinds of scenarios.
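A minimal sketch of the shape under discussion — a BackgroundService that captures the task returned by ExecuteAsync and exposes it so the host (or a dashboard) can observe completion and log failures. This is a simplified illustration of the pattern, not the exact shipped implementation:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public abstract class BackgroundService : IDisposable
{
    private Task? _executeTask;
    private readonly CancellationTokenSource _stoppingCts = new CancellationTokenSource();

    // The task running the background operation; null until StartAsync runs.
    // Exposing it lets callers observe status and faults.
    public virtual Task? ExecuteTask => _executeTask;

    protected abstract Task ExecuteAsync(CancellationToken stoppingToken);

    public virtual Task StartAsync(CancellationToken cancellationToken)
    {
        _executeTask = ExecuteAsync(_stoppingCts.Token);
        // If the work already completed (e.g. threw synchronously),
        // surface that to the caller; otherwise report "started".
        return _executeTask.IsCompleted ? _executeTask : Task.CompletedTask;
    }

    public virtual async Task StopAsync(CancellationToken cancellationToken)
    {
        if (_executeTask == null) return;
        try
        {
            _stoppingCts.Cancel();   // signal the long-running loop to exit
        }
        finally
        {
            // Wait until the work finishes, or the caller's token fires.
            await Task.WhenAny(
                _executeTask,
                Task.Delay(Timeout.Infinite, cancellationToken));
        }
    }

    public virtual void Dispose() => _stoppingCts.Cancel();
}
```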
B: Right. So — just some background — the way this works is that the start/stop model was more of an explicit on/off kind of pattern, and then we offered an alternate model, built on top of it, for long-running tasks.
B: So you can imagine the start/stop model is good for things like making a timer and firing things until stop is called, versus where you want to run code on a background thread to dequeue something in a long-running loop. So you have two different patterns for the same long-running task model, and the second pattern is an implementation detail of the IHostedService.
H: Is the host going to cast to BackgroundService, or whatever, and look for this? Or does it already have a special concept of background services versus other IHostedServices?
H: I guess the default would be weird, and we're not using default interface methods yet, or anything like that. Oh yeah.
B: Exactly — so that's a good question. You may not even have a task in that case. For example, in the case of a timer, you don't have a task: the timer event runs arbitrarily on some background thread, on occasion, and when Stop is called you stop it, and in those cases you log your own errors.
A: So basically, in other words: if you implement IHostedService, you own the actual execution, and you're on the hook for logging any errors or making them observable, right? And if you don't, then that's a failure of the implementation of IHostedService. Versus, if you derive from BackgroundService, the only job you have is to override ExecuteAsync, and that should be sufficient.
B: Right. And there are alternate suggestions for this model where you don't actually expose a task, but you expose some method that basically lets you signal failure or completion in some form — which basically is a task — to let the host do stuff for you on your behalf. So if we wanted this behavior on IHostedService, you could imagine having a different interface that exposed the task, which you could implement optionally and the host could check for, instead of it being on the actual BackgroundService.
B: The one returned to the host — the way it works is, in StopAsync of the background service, it does a Task.WhenAny, waiting for the timeout to fire — for the token that was passed in to StopAsync to fire. So you call Stop passing in a token, and that token represents how long Stop will wait on the actual task.
E: — which itself can fail. That makes sense. What I'm asking about is ExecuteAsync itself: what does the cancellation token in there do? That thing signals to the long-running loop — that is, your code — to shut down, and you expect that when the cancellation token fires, the returned task transitions to successfully completed, rather than canceled.
H: It is interesting that StopAsync can't exit before ExecuteAsync does, if you cancel the token.
B: Yeah, exactly, yeah. But we could argue and say: if you want to avoid that, catch it yourself. That's, like — that's not our problem.
C: I guess the question is: if that task is the background service, then I think the name is weird. If it's something the background service is doing — like it represents a piece of enqueued work or something, where it's again effectively a task scheduler — then the name makes sense. But if it's —
B: Well, it represents the entire background service operation. So imagine you want to build a dashboard that looped over all these services in the process and showed a status for them — how would you implement that? I think exposing the task is a clean way to do it, because you would just loop over all of them and check the status, and then, if one was not running, you would print the actual error. Sure.
C: And even the summary statement with it — this, to me, sounds like a piece of work that it is doing, and not the piece of work that represents its entirety.
H: So I'm going to read the current doc comment for BackgroundService.ExecuteAsync: "This method is called when the hosted service starts. The implementation should return a task that represents the lifetime of the long-running operation(s) being performed." You could choose not to do that today, and there would be no problems, because it's not really observed — like, if you just decided to use a timer to do your background stuff and have ExecuteAsync exit early, things would work today; but you'd be going against the doc comments.
C: — making me feel like it is describing transient data.
K: Right — I mean, that's really all you're getting here, yes. And something for DI: you know, instead of putting BackgroundService as the service type, you're putting IHostedService.
A: I mean, I'm just curious: what does an IHostedService task look like that would not effectively replicate what BackgroundService does? Because even in my own one, when I implement it, I basically just have Start kick off the work, and I think in my case I don't even complete Start at all — I just run forever in Start, I think.
B: The only thing I can —
B: So when you start your new timer, you pass in the callback, and Stop basically stops the timer; and the task represents the timer execution. That isn't necessarily a task, but you could implement the interface to make it work that way.
C: That's my vote. I mean, that feels a little weird — ExecutionTask is better than ExecutingTask, but Execute —
C: — could change over time. So, being not super familiar with the type — in fact, being completely unfamiliar with the type — the name ExecutingTask to me sounds like individual pieces of work, and like it lets you find out things like: hey, am I currently being run by this background service? As if it represents a task it's currently working on, which will complete, and then it will do something else. But Execute-like —
B: That works — hold on, okay. You know, or would making it virtual be better instead?
A: It's marked approved on my side; you may have to refresh your page. All right. So then 40936, which is the other Eric's proposal for char.IsAscii — is Eric on the call?
A
E
E
E
IsAscii, yeah, that was actually discussed in the issue. The idea being that for simple and common accelerators like this, it's not the end of the world to actually duplicate them throughout the framework, because that way they're discoverable on whatever types you happen to be using. Okay.
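The accelerator under discussion is a simple range check. A hand-rolled equivalent, for illustration (assuming the proposal ships as char.IsAscii; the class name here is just a placeholder):

```csharp
public static class AsciiSketch
{
    // The proposed char.IsAscii(c) is, in effect, exactly this range
    // check: ASCII is the code points 0x00 through 0x7F.
    public static bool IsAscii(char c) => c <= '\x7F';
}

// With the proposed API the call sites read as:
// char.IsAscii('A')  // ASCII
// char.IsAscii('é')  // not ASCII
```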
E
K
E
F
E
A
All right then, we're all in violent agreement, which should make me almost concerned. I'll be okay with the parameter name "ch". I guess that's all it is on char today, right?
A
A
A
K
A
I
E
Yeah, that's, and Steve Toub commented to that effect too. It's like, what you're going to write is: if 32-bit process, else if 64-bit process, else fail fast.
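The idiom being weighed can be sketched like this. The point made above is that with only Environment.Is64BitProcess, the "else" branch can only mean 32-bit today; adding a second property (a hypothetical Is32BitProcess) would create a second idiom whose "else" means 64-bit instead.

```csharp
using System;

public static class BitnessSketch
{
    // Today there are only two supported cases, so "else" is
    // unambiguously the 32-bit case.
    public static int PointerWidthInBits() =>
        Environment.Is64BitProcess ? 64 : 32;

    // The defensive variant discussed above, for code that wants to
    // fail fast if a hypothetical third bitness ever appears:
    // if (Environment.Is64BitProcess) { /* 64-bit */ }
    // else { /* can only be 32-bit today */ }
}
```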
C
Yeah, because, as far as I know, and, oh, John's not still on the call. He looks like he's still on the call on YouTube, because his, his.
I
C
Somehow yours is moving, but his is frozen. Like, I don't see us doing a 16-bit environment, even when that new old computer that Corey was excited about ships.
C
So right now, given that we only have 32 and 64, it doesn't, and, like, nobody has a 128-bit computer, and we don't expect to ever go back and support a 16- or 8-bit computer. I don't know that it adds value, other than now there's two different ways you can be set up for failure in the future.
A
C
E
C
C
Yeah, I think, I think it doesn't add enough value, and that it's really just two different ways of asking the same question. And in the defensive case that they're talking about of why they want to propose it, then we're in a worse situation than we are now if we have already added it, because right now every "else" thinks it's 32, and if we have a second supported way of doing this, then now every "else" means 64.
C
All right, you want to go real old school? 7-bit, 13-bit? I don't know. Yeah, so I think that until we understand what a third case that we would support would look like, we can't answer this now. Win16 or .NET Framework 1 had this problem: it supported 16-bit.
C
K
C
K
C
Yeah, I used to care about "synch" versus "sync": "synch" is not a thing, because the "h" is part of the "chronous" in "synchronous", but I gave up on that a long time ago.
G
H
All right, so this is another API proposal for the new connection abstractions. This is for ConnectionListener.AcceptAsync.
H
H
So it would be nice if AcceptAsync could return a ValueTask of a Connection which is nullable, and I've already had some discussion with, like, Jeff and Corey, and it seems like there's some support for this. Like, with managed sockets you're gonna end up throwing from, like, the inner call to AcceptAsync anyway, but perhaps with some other transports you could avoid that exception.
A
C
A
A
I mean, my problem with, like, you know, these null value instances is that there's very few cases where this actually makes sense, right? I mean, unless you have a really good null state that makes sense. Like, for example, an empty string, you could argue, is a really good representation for a null string. Most of the time, it's usually more trouble than it's worth, because now you have more error states in your program.
D
I mean, the analogy here, well, first of all, I think you're going to have an error state anyway, right? I mean, because today what happens is you always get an exception, so you're gonna have to write a try/catch around that and specifically look for ObjectDisposedException and say: oh, that means we shut down. And, well.
D
I hope it means we shut down and the object didn't suddenly dispose itself or something, and so just swallow that and pretend like everything's fine and go on. So the code's gonna have to handle this one way or another, unless it's, like, trivial, you know, prototype code. The question is just: what's the best way to represent it? As Steven said, the advantage of this is that you don't get an exception; you don't have to catch the exception.
D
You don't have to know which exception you're going to get and all that sort of thing. You know exactly when it returns, and all that means is somebody disposed the listener, and so there's no more to be done here. It's analogous to, like, Stream.Read returning 0 on EOF, right? It just says: nope, no more, stop.
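The two accept-loop shapes being compared can be sketched as follows. Note that ConnectionListener and Connection here are hypothetical stand-ins for the abstractions under review, not shipped APIs; the loop bodies are illustrative.

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical stand-ins for the types under review.
public abstract class Connection { }

public abstract class ConnectionListener
{
    // Contract under discussion: returns null once the listener is disposed.
    public abstract ValueTask<Connection?> AcceptAsync();
}

public static class AcceptLoop
{
    // Shape 1: null signals "listener disposed", so shutdown is an
    // ordinary control-flow path, no try/catch required.
    public static async Task RunAsync(ConnectionListener listener)
    {
        while (true)
        {
            Connection? connection = await listener.AcceptAsync();
            if (connection is null)
                break; // clean shutdown: analogous to Stream.Read returning 0
            _ = HandleAsync(connection);
        }
    }

    // Shape 2: without the null contract, every accept loop wraps the call:
    // try { connection = await listener.AcceptAsync(); }
    // catch (ObjectDisposedException) { break; }

    private static Task HandleAsync(Connection connection) => Task.CompletedTask;
}
```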
C
I give it, I guess: does it feel like a common problem to run into? Like, is it? I know that Krzysztof has a fun quote of people saying "exceptions are for exceptional behavior," but they're not. But my question here is: is it actually exceptional behavior to have disposed the thing while AcceptAsync has yet to finish? That feels super marginal to me, and introducing the notion of "you need to check for null to see if it was".
C
This case feels like we're just putting a burden on people when, if we return a singleton, if we're trying to avoid an exception or the performance problems with exceptions, then returning a singleton, an "I'm a useless connection; consider that I was returned and then the dispose happened just after I was built", because you're going to have a race condition no matter what.
H
F
C
C
Okay, so then, yeah, all right. So now it makes more sense to me why dispose would come into play. I was looking at this as: you set the thing up, you configure a few properties, you say go, and it's a one-time operation, that it's like "I'm now listening". But this is "listen once", not "listen", or.
H
Begins the notion of listening, right? It's not returning, like David suggested, an IAsyncEnumerable of connections, because then we would know when it ended.
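The alternative shape David suggested, exposing the listener as an async stream of connections, would look roughly like this. A sketch under stated assumptions: the generic TConnection stands in for the connection type under review, and the method names are illustrative.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public static class EnumerableAcceptLoop
{
    // If the listener were an IAsyncEnumerable<TConnection>, the accept
    // loop would end naturally when enumeration completes, which is how
    // the consumer "knows when it ended".
    public static async Task RunAsync<TConnection>(
        IAsyncEnumerable<TConnection> connections)
    {
        await foreach (TConnection connection in connections)
        {
            _ = HandleAsync(connection);
        }
        // Falling out of the loop means the listener shut down.
    }

    private static Task HandleAsync<TConnection>(TConnection connection) =>
        Task.CompletedTask;
}
```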
G
H
C
Is the, it returns when, in TCP land, when SYN-ACK is done and you have a connection. This isn't the "put the OS in the state where the port is open".
H
C
A
A
D
Right, and I don't think it's about performance; it's more about just the hassle of having an exception. You know, people see first-chance exceptions and go, sometimes they break on them in the debugger, and then they're like: oh wait, this is getting caught; I didn't really need to deal with this. You know, and then they have to write the try/catch to deal with it.
D
You know, it's not a huge deal either way. It just seems like, this is because it's feeling not exceptional in any way. Pretty much every piece of code that does the server is going to write an accept loop, and every single accept loop is going to have to deal with this, and so everybody's going to hit this: they're either going to throw and have to do a try/catch, or.
C
Yeah, now that I have reloaded networking terminology, I 100% agree. This is normal behavior: we've stopped listening before someone connected, and that's normal. And your analogy to ReadLine, or of Socket.Read, makes perfect sense: it's "we're not coming back from this". So.
I
A
Sounds good then. See you guys next week, unless something earth-shattering pops up.