From YouTube: GitHub Quick Reviews
Description
00:00:00 - Approved: Async System.Data resultset and database schema APIs https://github.com/dotnet/runtime/issues/38028#issuecomment-651967965
00:25:54 - Approved: Database-agnostic way to detect transient database errors https://github.com/dotnet/runtime/issues/34817#issuecomment-651987951
01:04:47 - Approved: Introduce SqlState on DbException for standard cross-database errors https://github.com/dotnet/runtime/issues/35601#issuecomment-651990751
B
Okay, so this one is pretty straightforward. System.Data, ADO.NET, has an API for getting schemas. There are two APIs: one where you have a connection and want to get the schema of the entire database, and a second where you have a reader for a query you executed and want to get the result set schema for that result set. These APIs are, I think, among the last in the provider abstraction layer which are still sync-only, and this is about introducing the async counterparts.
B
If we look, the first one is GetSchemaAsync on DbConnection. On the sync side there are actually three methods which are overloads; here I just collapsed them into one with optional parameters. That's about it. On the DbDataReader side it's a little bit more complicated, for reasons I don't have the complete context on. So, backing up: there are actually two reader schema APIs in ADO.NET. One is the traditional one.
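A rough sketch of the connection-side shape being described here (this is my paraphrase of the proposal, not the approved surface; the default body is an assumption):

```csharp
public abstract class DbConnection
{
    // Existing sync surface (abridged):
    //   DataTable GetSchema();
    //   DataTable GetSchema(string collectionName);
    //   DataTable GetSchema(string collectionName, string?[] restrictionValues);

    // The three overloads collapsed into one async method with optional parameters:
    public virtual Task<DataTable> GetSchemaAsync(
        string? collectionName = null,
        string?[]? restrictionValues = null,
        CancellationToken cancellationToken = default)
        => throw new NotSupportedException();
}
```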
B
This introduces async counterparts for both of these APIs. The first one is pretty straightforward: GetSchemaTableAsync on the data reader. The second one was introduced, I'm not sure why, via an extension method and an interface: IDbColumnSchemaGenerator is an interface that a reader can implement to signify that it supports this. Otherwise, the extension method just checks whether the reader implements it and, if not, does a shim kind of implementation.
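The sync dispatch pattern being described looks roughly like this (simplified sketch; `BuildColumnsFromSchemaTable` is a hypothetical stand-in for the shim logic):

```csharp
public static class DbDataReaderExtensions
{
    public static ReadOnlyCollection<DbColumn> GetColumnSchema(this DbDataReader reader)
    {
        // Readers opt in by implementing the interface...
        if (reader is IDbColumnSchemaGenerator generator)
            return generator.GetColumnSchema();

        // ...otherwise, shim: build DbColumn objects out of the
        // traditional GetSchemaTable() DataTable.
        return BuildColumnsFromSchemaTable(reader.GetSchemaTable());
    }
}
```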
B
So for now I'm proposing to add the async counterparts in exactly the same way, for consistency, rather than on the data reader itself. I think that's about it. One more thing I'm going to say: for now I'm proposing adding these methods as returning Task rather than ValueTask. These are schema APIs; they're not considered perf-sensitive at all. They will almost always complete asynchronously, except for some very minor usages.
B
Absolutely. Even the alternative API, the one which is a bit better because it doesn't go through DataTable, in the end is still basically supposed to allocate a bunch of structures describing your result set metadata, which are objects. So by nature it allocates in any case, and that additional allocation is not going to weigh in.
C
This all seems fine; the only question I would have is about the two reader APIs. So you mentioned the new ones: the sync versions were added because the DataTable one didn't exist, then we added back the DataTable one. Do people use the one that we added, that was meant to replace DataTable and then didn't? Or has it fallen by the wayside? Or do people actually prefer that one?
B
I don't know; that's a good question. I don't have any usage statistics on them. Again, there's backwards compatibility, so it works: providers don't have to implement it, it's going to work anyway. It's a good question. I'm not sure if it would impact this decision in any way, but yeah, it's a good question.
A
So I think we introduced it relatively early; I want to say something like 4.7.1. So it's already a few years out. Unless the implementation cost is high, I don't think it's a big deal.
A
So how do they implement it? Like, what does GetColumnSchemaAsync do?
B
So it's an extension method. It checks if the reader implements IDbColumnSchemaGenerator, which is the interface you see; if so, it calls into that method. Otherwise it's going to call into the traditional one, the one that returns a DataTable, and it's going to construct these DbColumns out of the DataTable, basically.
B
So, there is an option here of introducing the new async one directly on DbDataReader. Again, I'm just not sure why this was not done originally for the sync one, which is the only reason I'm proposing to do it this way. But if we really want to avoid shims, then we can just do this instead; it would be weirdly inconsistent, though.
D
I suspect that the reason for the interface is unification down to the .NET types and wanting to maintain compatibility with that, but still allowing people to do something different. But this is probably from back in the Portable Class Library days; that's when we did all these.
D
I wonder whether anyone is actually using GetColumnSchema, the one which returns the collection of DbColumn. I can't imagine why, now that DataTable exists, you would do that. But then, you know, we'd have to do verification of code out there to make sure that that's the case.
B
I personally kind of support it; I like and want to push forward this newer API, simply because it's better in the sense that it dumps the DataTable, which is weakly typed. It gives you strongly typed API access to the result set metadata, which is why I think it's a good thing in general.
A
Yeah, I suspect the only reason we added this interface was in order to avoid the...
B
So, are you saying you'd prefer just adding a virtual async method on the reader?
A
Yeah, in the same way that you already do it here, right? You would just add another one parallel to GetSchemaTableAsync; you would have GetColumnSchemaAsync, and then you would just call that guy from the extension method. And the extension method in and of itself doesn't seem to be super useful to begin with, right? It seems like we just did that because...
A
We have the exact same issue with WebSockets, where we have a bunch of async methods that are extension methods, and so the way we have done it on our side is: we basically kept the extension methods, and then we have effectively actual public methods on the underlying WebSocket one, but we don't expose them in the reference assembly. So as far as the user is concerned, there's only one home for the method, but for implementation reasons, doing everything in the extension methods is shitty, right, because you can't actually access any state.
D
So I found one issue that was filed in 2016. Let me see if I can put it in chat here.
D
It's asking about implementation of that, and the response is: you don't need to do it unless...
A
Yeah. Otherwise, I think the proposal makes sense. What are the arguments for GetSchemaAsync, like collection name and restrictions?
B
Yeah. So collection name is: what do you want to get out? Is it the list of tables, the list of columns, or the list of whatever; there's a set of fixed, stringly-typed things which you can ask for. And then restrictions is your way of saying "I want to basically filter down to only specific tables": I want all the information for these tables, rather than for all the tables in the database.
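For illustration, the sync API described above is used roughly like this (the table name and restriction positions are made up; the restriction layout is provider-defined):

```csharp
// Which collections can I ask for?
DataTable collections = connection.GetSchema();

// All tables in the database:
DataTable tables = connection.GetSchema("Tables");

// Only one specific table, filtered via the restrictions array:
DataTable oneTable = connection.GetSchema(
    "Tables", new string?[] { null, null, "my_table" });
```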
E
So yeah: the GetSchemaAsync that just takes the cancellation token defaulted to default, and then the GetSchemaAsync that takes the collection name, restrictions and cancellation token, the latter being the one that needs to be virtual and the simple one just being there as a forwarder. Having said that, I don't know that we've actually looked at that in the context of having a bunch of defaults on an async thing. So if that feels unnatural, then don't; because generally, when we've done it, it's been essentially chopping the whole signature. But since the async guidelines say to end with the cancellation token, that mixes interestingly. So I guess the question is: do you expect many people to call GetSchema and not specify the options? If so, then the "I just want to pass you a cancellation token and not specify a collection name and restrictions" case sounds...
B
I'd have to look at it, but if you don't specify a collection name... My expectation is that collection name is usually going to be passed, because you're going to have to specify whether you want tables or columns or what, since the default, if I'm not mistaken, is to get the list of possible collections. So if you call it without an argument, you're going to get a DataTable that contains the possible values you can pass into collection name (it's coming back to me now), which is not necessarily very useful; it's like a reflection kind of thing. However, the restrictions one I do suspect is going to be null most of the time, because you're going to want to get all the tables; you're not necessarily going to want to filter all the time.
B
So, if anything, I would say maybe there should be an overload that takes the collection name and the cancellation token, but not the restrictions, if we're looking at it that way. Does that make sense?
A
Yeah, I was about to say: just mirror the one that we have for the sync version, because the sync version basically has three overloads, right? One that takes no arguments, one with just the collection name, and one with collection name and restrictions. So I would suggest, in order to keep that symmetric, just adding two non-virtual methods, one that takes no arguments and one that just takes the collection name, and then keeping only the virtual one, with the cancellation token optional.
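A's suggestion, as I understand it, would give a shape roughly like this (signatures only; my paraphrase, not the approved surface):

```csharp
// Non-virtual convenience overloads that forward to the long one:
public Task<DataTable> GetSchemaAsync(CancellationToken cancellationToken = default);
public Task<DataTable> GetSchemaAsync(string collectionName, CancellationToken cancellationToken = default);

// The only virtual overload, mirroring GetSchema(string, string[]):
public virtual Task<DataTable> GetSchemaAsync(
    string collectionName, string?[]? restrictionValues,
    CancellationToken cancellationToken = default);
```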
E
I would then further say you shouldn't actually have this last one be public virtual; it should be public non-virtual and call a protected virtual. But if you already have public virtuals, it's probably weirder to mix.
B
So maybe the one thing is, I'm looking at the docs now: if you are invoking one of the overloads that accepts the strings, but you pass null, then you're supposed to get an ArgumentException, an ArgumentNullException basically. So the way the sync works is: if you basically don't want to specify a collection name, you have to call the overload without anything.
A
Okay,
that's
even
dumber,
I
think
I
mean
like
we
have
yeah
I've,
seen
that
as
well
in
the
bcl
and
we're
like,
since
we
had
the
double
annotations,
we
found
all
the
places
where
we
are
hideously
inconsistent
with
these
kind
of
things,
but
that's
generally
bad
as
well.
In
my
opinion,
like
you
like,
they
should
really
be
convenience,
overloads
right
with
that
dust
forward.
It
seems
very
weird
to
me
that
we
would
say
well,
if
you
don't
want
to
have
a
restriction,
then
you
can't
pass
in
now.
D
My experience of the System.Data APIs is that they didn't follow framework guidelines for .NET Framework 1.0, and ever since then they've been the same, with, you know, no breaking changes or anything. So they basically don't follow any kind of reasonable guidelines or any kind of reasonable design for most things. We can try and fix this when we're adding new async methods, and I'm not against making them better, but it is what it is in terms of the terrible design in System.Data.
A
We make it consistent with what the sync one looks like, right? I mean, the sync one basically has three overloads. We can either decide that we also have three overloads that are all virtual and have the same behavior as the sync ones, meaning they will throw if you pass null for any of the arguments; I'm not opposed to that. But I agree with Jeremy: having one method where everything is optional, we know that doesn't work very well for people.
D
You can't actually create a naked DbConnection; it's not abstract, but there's no way to construct one. But it is...
E
Okay. Because, on the one hand, the principle of least surprise says make your async look exactly like your sync does; but given that we're adding virtual methods late in the game now, there's the extra complexity question of: somebody updates to go add the async support, and do...
E
Do they really need to override all three virtuals, or should they be able to get away with only overriding the longest one? That's part of why the template method pattern suggests effectively never having a public virtual: if you wanted the behavior of the public method throwing when called with a null collection name, that would be fine; but when it calls the protected one, the protected one has the contract of "you'll get null if they called the simpler overload, and that means do your default thing", or you'll turn that into the string.
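The template-method pattern E refers to, sketched generically (hypothetical names, not proposed API):

```csharp
public abstract class SchemaSource
{
    // Public, non-virtual entry point: validates arguments once.
    public Task<DataTable> GetSchemaAsync(string collectionName, CancellationToken ct = default)
    {
        if (collectionName is null)
            throw new ArgumentNullException(nameof(collectionName));
        return GetSchemaAsyncCore(collectionName, ct);
    }

    // Protected core: derived types override this and never see a null
    // collection name, as long as callers go through the public method.
    protected abstract Task<DataTable> GetSchemaAsyncCore(string collectionName, CancellationToken ct);
}
```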
D
Not pushing back on what you're saying, but we frequently get people saying: okay, I'm trying to use Moq, I'm trying to create testables, and now the template methods are protected virtual, and that's a major, major pain for me. So yeah, it's okay, I don't disagree with it; and usually they're using Moq to create those.
D
It's
using
right,
you
can't
you
can't
access
the
protected
method
from
the
api
surface,
so
you
have
to
use
the
complex
method
of
override
telling
your
mark
to
override
a
protective
method,
because
you
can't
just
use
a
lambda
expression
to
refer
to
it.
For
example,
oh
this
is
that
particular
libraries.
D
Well,
it's
not
just
I
mean
yes,
it's
it's
mock
is
the
most
common
library
which
doesn't,
but
it's
not
a
a
a
a
restriction
that
mock
is
arbitrarily
producing
it's
that
you've
made
it.
So
you
can't
reference
this
publicly
by
design
right
and
therefore
nobody
can
strongly
reference
it
publicly
by
design
and
therefore
you
have
to
go
to
not
referencing
it
and
using
string
names
or
whatever
to
do
it,
regardless
of
whether
it's
mock
or
not.
It's
just
a
a
consequence
of
moving
things
off
the
public
surface.
D
Exactly right. So Moq is, like, the sixth most downloaded NuGet package; so, you know, it's not like everybody's doing that. I mean, we got rid of Moq for the same reasons a long time ago, and we do that as well, but I'm just saying it's feedback I have seen a lot over the years.
B
Yeah. Just, the one argument against adding three virtual methods here is that the implementation for the three sync ones is to throw, so providers have to implement them; but if we are now to add the async ones, they would delegate to the sync ones. So if a provider doesn't override those three, nobody would ever know; it would just, you know, work sync instead of async, which is not a good thing.
A
I see. So that means if we do the same thing here, where basically the async versions just call the sync ones, then you would get a, sorry, NotSupportedException when the provider didn't implement the sync versions either. So the only problem you have is when the provider overrides the sync behavior but not the async behavior; but that's always the case, no matter whether we have three virtuals or not, correct?
B
It's
true,
but
yeah
I
mean
that's
true,
but
assuming
he
you
know
a
reasonable
provider
overrides
like
the
one
with
all
the
arguments,
but
maybe
doesn't
the
other
two,
because
they
reasonably
expect
them
to
just
delegate
as
we've
been
discussing.
Then
you
end
up
with
two
methods
which
are
sync
and
one
which
isn't.
But
I
mean
it's
not
that's.
Right.
D
I
mean
it's
worth
pointing
out
that
the
behavior
here
for
almost
all
providers
when
we
implement
this
initially,
is
they're
going
to
call
getschema,
async
and
they're
going
to
get
a
sync
version
running.
That's
what
ado.ado.net
does
so,
given
that
things
like
oracle's
provider,
don't
don't
support
async
at
all.
D
I don't feel like we should make all of these virtual on the async side for any reasons related to that. For some provider writer to override this, they have to understand that it's there and why they're overriding it anyway, and a single virtual method to override makes sense for that.
E
Well,
the
I
mean
the
one
caution
that
I
would
give
for.
That
is
if
what
the
docs
suggest
the
right
answer
is
for
the
long
version
of
git
schema
is
that
it
should
throw
an
a
e
if
collection
name
is
null
if,
if
a
provider
has
actually
implemented
that
that
means
that
you
can't
have
the
simple
async
call
the
long
sync,
because
it
would
do
so
with
null,
which
means
you
do
have
to
do.
A
virtual
async,
deferring
to
your
virtual
sync
on
every
single.
A
Well, the other thing I just noticed is: assuming we would chain, then if you want cancellation you have to construct the longest one; but if you don't have restrictions, then how would I do that? Because I can't pass null, right? So that means you would have to add an optional cancellation token on all three methods as well, which seems rather weird. So maybe we should literally just make them non-virtual and just chain them and say: yep, that's it, that's...
E
Well, the problem is: if the sync implementation that the virtual is going to defer to by default doesn't allow a null collection name, then the path where the simple one calls into the more complex one now blows up, when, if the simple one had called the simple one, it would work. Yeah.
D
Right. We could take a totally different approach here and say: actually, we're not going to implement the async one to call into the sync one; we're just going to implement it to throw if it's not implemented. At which point you don't get people thinking that this is doing async when it's actually doing sync, and provider writers will need to override something; and in that case, since we're not calling into the sync methods ever, it can just be a single virtual method that they override.
B
I mean, yeah, just realistically: if we leave this to throw, this is basically not going to work for a lot of, like, some providers, at least until they do it; like, some people will never implement this, or whatever.
C
That's if we made that throw by default rather than delegating to the existing one. Because someone, you know, updates their code, they use the new one, they're getting concrete instances that all have it, and then all of a sudden they get one that doesn't, and they blow up, and now they're stuck between a rock and a hard place. They potentially blew up at a point...
C
...you know, in production, that they haven't really tested well, or something. But more so, now they're stuck choosing between using the new thing where it works and falling back to the old thing, and not being able to move to the new thing until everything they might possibly use appropriately overrides it. In those situations people add, like, you know, capability APIs, but that seems overly complex here, I mean.
D
I get that argument, but I'll point out again that in ADO.NET, masses of people are thinking that they're using .NET with async access to the database, and they're not, because they don't realize that, for example, the Oracle provider just delegates everything directly to the sync. So it looks like it supports async, and they think: oh, my .NET app is going to have all these benefits.
D
Well,
at
least
they
know
not
to
use
that
database
provider
and
expect
that
that
behavior,
their
alternatives
are
a
different
database
behavior
or
not
use
async
right
yeah.
That's
a
huge
hammer
that
you
have
to
change
the
database
that
your
application
is
using
as
its
backups
as
you
I
agree,
but
but
you're
saying,
if
you,
if
you
need
async
and
you're
not
getting
it,
then
you
can't
use
the
oracle
provider
period.
That
is,
I
mean.
Is
it
better.
C
There's
a
it's
not
a
you
know
a
yay
or
nay
decision
right.
It's
I
want
improved
scalability.
I
want
it's
a
scale
right.
I
want
it.
I
want
it
to
do
better,
so
I'm
going
to
take
advantage
of
async
wherever
I
possibly
can.
All
of
a
sudden
one
api
that
I
want
to
call
throws
an
exception,
because
the
database
that
I'm
using
throughout
my
company
doesn't
support
that
brand.
D
Yeah, well, depends what you mean. You cannot expect to have an application that uses SQL Server, change your provider to Oracle, for example, in EF, and expect your application to work. What...
D
Why would you have to do everything? If you're writing a method that calls GetSchema, then if EF did expose it, and it exposed an async version, then you call the async version if you use async; and if the async version isn't done, then for that method you call the sync version. It doesn't change.
C
EF in particular aside, that's not the case: generally you don't have a one-to-one correspondence between a public entry point and a single async API called in the entire chain. Generally you end up doing either multiple things asynchronously or multiple things synchronously, and that's dictated by the entry point.
D
I
agree
so
I
said
I
don't
wanna.
I
like
to
me
the
whole
thing
that
people
get
sync
when
they
think
they're
getting
async,
and
you
know-
and
we
frequently
see
that,
as
you
know,
people
complaining-
because
this
has
been
something
that
you
know
bit
them
in
the
ass
because
they
weren't
they
weren't
ready
for
it.
It's
you
know,
that's
an
issue
that
I've
seen
many
times
over
the
years.
I
get
the
reason
for
doing
the
other
way
and
I'm
not
pushing
back.
A
I understand the practical problems when you do something that is less ideal, but that's still better, in my opinion, than having class library authors being scared of methods because somebody might throw. Because what we see in the ecosystem then is that people basically go out of their way to not call these methods. So you basically hurt your adoption massively if you expose new APIs as throwing, exactly, and so you're almost always better just saying: yeah...
A
We accept that there's something less ideal, because at least the new thing gets adoption; and then, if the adoption is high enough, hopefully there's pressure on Oracle and others to actually do the right thing. But I guess the problem there is that customers aren't aware, right? So I guess...
B
So wait: there are a lot of async things that would never have been introduced in the first place by that bar, and that has nothing to do with their usefulness. It's just...
B
In this case, whether this is a perf win, you know, whether it makes sense or not, is going to depend on the application, and I know of people who do use this for every single thing. You could say they're doing it wrong, but I mean, the API is there and it's their application; I think, you know, they have...
B
...the right to do it. And to finish off: basically the goal here, I think, is that for all of the live API surface that's still being maintained in .NET, we basically don't want people to have to do sync I/O. It's that simple, and this thing is kind of trying to close that corner; that's all. Now, we can discuss how we're going to introduce it and whether it's going to defer or not, but I mean, just...
B
So that's the whole point of having this invoke the sync implementation: it will always work, and providers can implement it at their leisure, when they can. I know I'm definitely going to ship this for Npgsql; I know the MySQL guy is probably going to do it. SqlClient doesn't align with the same release schedule as, sorry, as the BCL; I'm not sure why we would, you know, kill this just because SqlClient might or might not pick this up.
C
What I don't want is for us to add an API, a virtual method, that no one overrides. If you're saying that the providers that we, you know, do think are important are going to very quickly respond to this by overriding them, then great, and I'd want to, you know, validate that as much as we can possibly do: at the same time this PR is going in, putting up the PRs that do that in, you know, those repos. Obviously they won't work, they won't compile...
C
So we ship .NET 5 on, you know, whatever the schedule is, and three months later, let's say, people are picking up .NET 5 after the holidays and they're starting to use it, and they go and grab the new version of Npgsql and whatever else. At that point, when they start using this new API, it should do the right thing.
D
Because
if
you
mean
it
should
behave
asynchronously
then
I
would
guarantee
I'm
pretty
much.
I
put
good
money
on,
but
other
than
npg
sequel,
and
maybe
my
sequel
are,
you
know
which
are
not
our
most
common
providers.
Sql
server
and
sql
are
most
common
providers.
I
will
pretty
much
guarantee
that
neither
of
those
will
do
it
asynchronous
in
that
time
frame.
In
fact,
sequel.
C
...is the first and foremost concern. I had a second thing that I was getting at: the second thing is that there are then some additional benefits to doing that code change, right? Just making that code change and having there be no asynchronous implementations from anything they might possibly use is wasted effort on their part; there's no benefit. They...
B
Arthur, if I may get a word in: if this is our bar, then we will never, ever introduce anything into ADO.NET ever again in the future, because there will never be a point, never be a point, where we can get something in and guarantee that all of our providers specifically... So, assuming we want to, at any point, introduce anything into ADO.NET, that logic can't work. So I...
B
Well,
I'm
not
done
all
I'm
trying
to
say
it's
very
simple.
So
there's
a
model
already
about
how
these
things
work
in
ado.net,
you
introduce
an
async
thing.
It
delegates
to
the
same
thing,
because
obviously
not
everybody's
going
to
implement
it.
It's
not
ideal
because
it's
going
to
work
synchronously,
but
not
asynchronously,
but
everybody
can
use
that
api
always
and
it's
always
going
to
work
and
as
time
goes.
D
...is not doing it. I'm just saying that if we're not going to take changes to ADO.NET unless there are actually, three months later, providers that implement them, then I agree with you: we should never take any more changes in ADO.NET. Which is why I don't think that statement makes any sense, but...
C
To
change
that,
my
three
months
is
me
waving
hands,
it's
a
saying
that
at
some
point
in
the
foreseeable
future
there
are
going
to
be
implementations
of
this
thing.
If
we
can't
make
that
statement,
then
there
is
zero
benefit
to
adding
the
api.
If
we
can
say
that
there
will
be
implementations
of
this
in
the
foreseeable
future,
then
there
is
benefits.
That's.
D
There will be implementations, yes; not for 90, 95 percent of our customers, but for the people using Postgres there will be an implementation, and for the few of those that are actually calling this API, there will be benefit. It's extremely minimal, it's extremely small, but that's just the way these kinds of things are.
A
...if you ship an abstraction that you don't implement. Because my confidence in us getting the abstractions right is very low, because abstractions are usually hard. So to me it's not so much important that by the time we ship everybody has it; it's more that we have done it enough that we have validated that our abstractions are correct and that they're actually viable, right? So if, for example, Shay does these methods, implements them in Postgres, and says "yep, they work fine", then, you know...
A
If,
if
we
don't
convince
the
sql
guys
to
override
these
methods
in
any
given
time
frame,
then
that's
unfortunate,
but
I
wouldn't
hold
that
necessarily
hostage
against
any
given
feature
right,
because
the
very
problem
that
we
have
it
at
the
at
the
bottom
is
that
almost
all
our
abstractions
are
implemented
by
a
gazillion
people
and
in
order
for
us
to
make
progress,
we
have
to
start
somewhere
right
and
if,
if
the
summer
is
everywhere,
then
it's
extremely
expensive
for
us
to
move
things
forward.
That
was
kind
of
a.net
framework
used
to
be
right.
A
So
I
I'm
kind
of
reading
with
shy.
Like
let's
pick
you
know,
let's
pick
some
area
where
we
can
make
progress.
If
not
everybody
comes
along
for
the
ride.
That's
fine,
but
I
also
want
to
make
sure
that
steven's
point
of
us,
not
shipping,
bogus
abstractions-
is
also
checked
off
right
so
like
so
I'm
honestly
fine
with
saying
if
we,
if
we
get
one
other
provider
to
do
the
work
before
we
ship,
so
we
validate
the
abstractions,
that's
fine
by
me.
If,
then,
nobody
overwrites
the
method,
that's
unfortunate,
but
I
don't
think
that's
a
deal.
B
I mean, it's a nice counterpart to an abstraction that's already there, so I think it's the safest kind of thing we could ever do in terms of introducing an abstraction. And yes, of course this is going to go into my provider, and I'm hoping it's also going to be in at least two others we do control: we have very good relations with MySQL, and we also control the SQLite one, if we care about that, because...
D
Yeah, SqlClient might do it, but they're not going to do it in .NET 5, because they have a different ship schedule, and if we put this in .NET 5 they're not going to do it before we ship .NET 5. And again, that is why I was saying: if that's what we're saying, we need to push this back to .NET 6, so we can align those things and get it, if we're okay with that.
D
On this issue, I don't think it's worthwhile. If we are going to do this, then I think we should just put it in there, like we always have done, and let providers implement it as and when they can, because that's what we've always done with ADO.NET, and it makes sense. I think it's pointless to try and wait for some number of providers to implement this; although, if that is the plan, we need to coordinate it and make sure we have that.
A
At the same time, it might be better to do it early in .NET 6, where you actually have more runway left to talk to the SQL folks, rather than doing it late in .NET 5, right? But I am personally of the opinion that this feature is small enough, and the abstraction is simple enough, that just having one person saying "yep, I was successful in implementing this" would be fine by me. If there were something more complicated...
D
Yeah, but that's not what I was hearing.
E
From what you have on screen right now, Immo, I think we do want the cancellation tokens and the simpler overloads, and I threw in chat an example of, essentially, a muxing/demuxing approach, if we think it's still valuable to have all three virtual; but we could possibly get away here with only...
E
It's
just
a
question
of
if
it's
important
to
maintain
the
argument
null
exception
on
collection
name,
and
I
don't
have
super
strong
feelings,
because
the
feelings
I
do
have
are
in
conflict,
one
is,
it
should
be
properly
delegating
and
the
other
one
is.
It
should
be
consistent
and
I
can't
have
both
so,
but
so,
if
we
think
it's
important,
we
leave
the
virtual.
If
we
don't
think
it's
important,
we
remove
the
virtual
but
yeah
I
I
threw
in
based
on
things
of
it
even
tries
to
pay
attention
to
the
cancellation.
E
Token,
though
I
did
this
from
memory,
so
the
the
words
that
I
used
may
not
be
right.
A
So I think, given the complexity of the existing design, I would probably go with what we currently have on screen, because it's the easiest one to do, right? You just follow the normal pattern where you override them in pairs: you override the sync one and the async one at the same time, and they're basically one-to-one. And the fact that there are three overloads doesn't matter, because, apparently, like with the sync ones, there are three independent things you can override.
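The override-in-pairs pattern described above can be sketched as follows. This is a hypothetical stand-in, not the real DbConnection: SchemaConnection and ToyConnection are invented names, and only the shape (optional parameters collapsing the overloads, a virtual async method that honors the cancellation token and then delegates to its sync counterpart) reflects what was discussed.

```csharp
// Minimal sketch of the proposed delegation pattern (names are invented):
// the async overload is virtual and, by default, checks the cancellation
// token and then wraps the synchronous call in a completed task. Providers
// with real async I/O would override the sync and async members in pairs.
using System.Data;
using System.Threading;
using System.Threading.Tasks;

public abstract class SchemaConnection
{
    // The three sync overloads collapsed into one with optional parameters.
    public abstract DataTable GetSchema(string? collectionName = null,
                                        string?[]? restrictionValues = null);

    // Default async shim over the sync path.
    public virtual Task<DataTable> GetSchemaAsync(
        string? collectionName = null,
        string?[]? restrictionValues = null,
        CancellationToken cancellationToken = default)
    {
        if (cancellationToken.IsCancellationRequested)
            return Task.FromCanceled<DataTable>(cancellationToken);
        return Task.FromResult(GetSchema(collectionName, restrictionValues));
    }
}

// A toy provider that only implements the sync path and inherits the shim.
public sealed class ToyConnection : SchemaConnection
{
    public override DataTable GetSchema(string? collectionName = null,
                                        string?[]? restrictionValues = null)
        => new DataTable(collectionName ?? "MetaDataCollections");
}
```

Because the default implementation completes synchronously, a provider that never overrides the async member still gets correct (if blocking) behavior for free.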
E
Yeah, since this isn't doing deferral, or even if it was doing deferral with passing null for collection name, really the simple method can honestly do whatever it wants for any provider. It just has a suggestion, which really means that this isn't... this is really more of an interface than a base class, but...
A
B
Okay, I'll have a go with this, then. So, okay, these are two issues that belong together. This one is about adding a bool property to DbException. So DbException is supposed to be the base class of all exceptions thrown by database providers.
B
The proposal would add IsTransient on it, to allow database providers to expose whether they think the exception is transient or not. What that means is whether retrying the operation that triggered this exception would be expected to succeed. Now, just to give a bit of background: with databases, you know, there's networking, there are various kinds of transient exceptions, some networking-related, some related to the database itself, and there's a whole slew of retrying strategies and various resilience kinds of things.
B
If people are familiar, maybe, with Polly, which is kind of like a resiliency package for .NET. The problem is that, right now, when I want to build resilience into my application and I use ADO.NET, it's all on me to know, you know, how my database driver works: which kind of error codes it exposes, which error codes are transient and which aren't. And this kind of thing happens again and again and again.
B
Now, the proposal here would be to move that into the driver, to make the driver take control of the concept of transience. Also, you know, as a new error code comes in, the driver can simply flag it as transient and report that up, so basically new execution strategies, using Polly or whatever, will now be able to use this piece of information automatically, and even in a database-portable way, if that's relevant. So: one property, virtual, false by default. That's basically it.
A
And then, so, the idea is that the driver will basically say, like, "oh, this is a networking issue, that might be something where a retry would make sense," and then they would basically have their own logic to say, "okay, this error code is one of those, this error code isn't," and then they just write themselves a helper method that basically does the switch, and then it would just say true or false based on that, right? It's moving the switch inside; that's basically what it is. That makes sense to me.
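The "switch moved inside the driver" idea can be sketched like this. SketchDbException, ToyProviderException, and the error numbers are all invented for illustration; only the shape, a virtual IsTransient defaulting to false that the provider's exception type overrides, is taken from the proposal.

```csharp
// Hypothetical sketch of the proposed DbException.IsTransient: the driver
// centralizes its "is this error code retryable?" switch, instead of every
// application maintaining its own list. Names and numbers are invented.
using System;

public class SketchDbException : Exception // stand-in for System.Data.Common.DbException
{
    public virtual bool IsTransient => false; // proposed default: don't retry
}

public sealed class ToyProviderException : SketchDbException
{
    public int Number { get; }
    public ToyProviderException(int number) => Number = number;

    // The driver owns the list of error codes it considers transient.
    public override bool IsTransient => Number switch
    {
        1205 => true,  // e.g. deadlock victim: retry may succeed (illustrative)
        -2   => true,  // e.g. timeout (illustrative)
        _    => false, // everything else: not known to be transient
    };
}
```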
A
B
So, I think there definitely is other state, but it's also not necessarily expected that this thing be the only thing that people look at when deciding if they want to retry something. So it's perfectly reasonable: even today, people can look at the DbException and they can look at other things. This is basically supposed to be expressing that part of it, but if you have anything else that's going to govern that decision, that's fine, and that stays there.
A
If I... basically, the question is: what should I do in order to get good behavior out of the system? Like, if the provider gives me a false return value here, where it says true when it really shouldn't have, would I have accidentally done something bad to my database, by, for example, having inserted the same data twice?
A
D
Yes. So, to give some context on this: right now in EF Core, and actually in EF6 as well, we have a list of SQL Server error numbers, which we update periodically based on information from the SQL Server team and SQL Azure, because they tell us that these are transient errors that you should retry on. Now, if your retrying is not buffering, or not doing something correctly, then, you know, everything goes to hell.
E
Now, this could still... even the EF Core thing, the current EF Core stuff and this, or the EF Core-based pattern, could lead you down a bad path. That's...
D
Not true, because, for example, if you're doing a query and you've returned some stuff from it, and then you get a transient error and you just restart that query, you're going to get duplicates, unless you've done something to avoid doing that. So, basically, if your retrying is not done correctly, if it's not using transactions correctly, or if you're doing non-idempotent stored procedure mappings, then, that last one, that is, you're right, up to you.
D
Everything else should be handled by the retrying strategy, and that's what EF does. But if you don't do that correctly, then yes, it's very easy to get into bad states by retrying when you shouldn't.
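On the consumer side, the retry strategies being discussed would key off the new property roughly as below. TransientRetry is a hypothetical helper, not an approved API, and it assumes, per the caveat just made, that the operation passed in is idempotent or runs in its own transaction, so a retried attempt never applies work twice.

```csharp
// Hypothetical sketch of a retry loop built on the proposed
// DbException.IsTransient: the exception itself says whether a retry is
// worthwhile, so the loop needs no per-provider error-code list.
using System;
using System.Data.Common;
using System.Threading.Tasks;

public static class TransientRetry
{
    public static async Task<T> ExecuteAsync<T>(
        Func<Task<T>> operation, int maxAttempts = 3)
    {
        // The caller is responsible for making `operation` safe to rerun
        // (e.g. each attempt opens its own transaction).
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (DbException ex) when (ex.IsTransient && attempt < maxAttempts)
            {
                // The driver says a retry may succeed: back off, then retry.
                await Task.Delay(TimeSpan.FromMilliseconds(100 * attempt));
            }
        }
    }
}
```

A non-transient exception, or exhausting maxAttempts, simply propagates to the caller unchanged.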
A
D
B
It's a question of scope, and the second point addresses this exactly. So, this doesn't tell you anything beyond that: the error itself indicates a condition that you know is transient, right? What you do with this, what kind of execution, retry or not, that's a completely out-of-scope thing, right?
A
Yeah, I think I buy that: it's kind of like a building block towards it, without providing any sort of guarantees, right? Then that makes sense to me. So, previous question: basically, you already said you have a table for SQL Server, so I suppose part of the work would be to work with that team to implement that switch statement on SqlException, right? Yes. And we would probably do the same for, I guess, the Postgres one that Shay owns, right?
D
Yep, and MySQL, I think, is on board to do something similar. Yeah, that makes sense.
E
That seems reasonable. So, I do know, or have observed, that in your notes you suggest they should err on the side of calling something transient, but you have the default as being false. So I assume that just means that, in docs, you're going to have it as: the suggested implementation is to err...
B
E
B
...towards transient. It's guidance only, because most errors in the world, I wouldn't think, are transient, so it's not a good default to say "always retry". However, when a provider writer is not sure, when something sometimes could be transient and sometimes might not be, then, in those kinds of situations, it's better to err on the side of saying yes, right?
E
B
I don't know if I'd go that far. I mean, the specific behavior inside is going to be provider-dependent. I still kind of... you know, the way I've seen this implemented, and the way it's implemented in my driver, is still like a list of things which are considered transient. So I still didn't implement it the other way, but it's a good question.
D
I think that what this note is actually trying to assert is that, for example, you could fail to connect because of a persistent failure, like somebody's actually cut the optical cable, right, and so in that case it wouldn't be a transient error. But you should err on the side of saying that, if you fail to connect, that is a transient error, and we should retry a few times, even though it might be permanent. I think that's what you were trying to say, right, Shay?
D
E
A
B
So this one is quite connected: we talked about IsTransient on exceptions, and this is another thing that I'm proposing we introduce on the same DbException. So there's a standard, SQLSTATE, which is in the SQL standard: a five-character code with a standardized meaning. This five-letter code means unique constraint violation, that five-letter code means something else. This is actually standardized.
B
Unfortunately, it's not completely followed, so it's not followed by all databases. SQL Server doesn't follow this, but the ODBC and JDBC layers have a shim which translates the SQL Server errors into this thing, and there's interest from the SqlClient team in doing some kind of translation also. In Postgres this is native, and in MySQL this is native as well. So it's not a 100% kind of mechanism, but it's still something that would allow exposing an exception code which is completely database-portable, and so to detect, in a database-portable way, at least for most databases, for example, a unique constraint violation.
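As a sketch of the kind of database-agnostic check this enables: the five-character codes below are standard SQLSTATE values (23505 is unique_violation, for instance), while the SqlStates helper itself is an invented name, standing in for whatever the proposed DbException.SqlState property would expose.

```csharp
// Hypothetical sketch of portable error detection via SQLSTATE. The codes
// are from the SQL standard; the helper takes the raw five-character code
// as the proposed DbException.SqlState would expose it.
public static class SqlStates
{
    public const string UniqueViolation      = "23505"; // unique_violation
    public const string ForeignKeyViolation  = "23503"; // foreign_key_violation
    public const string SerializationFailure = "40001"; // serialization_failure

    // Same check works against Postgres, MySQL, or a shimmed SQL Server,
    // without knowing any provider-specific error number.
    public static bool IsUniqueViolation(string? sqlState)
        => sqlState == UniqueViolation;
}
```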
B
So, I mean, it's a good question. Some database providers... SQLite doesn't have SQLSTATE at the moment, so unless it starts translating it internally, it could still implement IsTransient without even, you know, looking at SQLSTATE.
E
B
So in Postgres, that's the only... the only error code you have is SQLSTATE, so effectively IsTransient is already currently implemented on top of SQLSTATE for Postgres. But that's a Postgres thing, because that's already there; I don't have anything else to implement IsTransient over, if you will.
A
No, I mean, I guess the question is: if we assume a world where we have both, sorry, SqlState and IsTransient, right, on DbException, one thing we could be doing in the default implementation, as opposed to returning false, is to say: if SqlState is not null, do the translation, and then we have a better...
B
E
And then you just don't have to; it wouldn't be that everybody has the exact same switch statement for IsTransient, each exactly mapping, falling back to this SQLSTATE map. So...
D
We should probably, I mean, do the research on what the different providers do, but we can be somewhat conservative in the default implementation too, and then things like Oracle will get some value from this even if they don't update, and things like Postgres and SQL Server and MySQL can always do a more accurate translation, if necessary, on those providers.
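A conservative default translation from SQLSTATE to transience, as floated here, might look like the following sketch. The choice of which standard class codes to treat as transient (08 = connection exception, 40 = transaction rollback) is illustrative only, not approved behavior, and SqlStateTransience is an invented name.

```csharp
// Hypothetical sketch of a conservative default: derive IsTransient from
// the first two characters of SQLSTATE (its standard "class" code) when a
// provider exposes one, instead of always returning false.
public static class SqlStateTransience
{
    public static bool IsTransientByDefault(string? sqlState)
    {
        if (sqlState is null || sqlState.Length < 2)
            return false; // no portable information: stay conservative

        return sqlState.Substring(0, 2) switch
        {
            "08" => true,  // connection exception: may recover on retry
            "40" => true,  // transaction rollback, e.g. 40001 serialization failure
            _    => false, // anything else: not known to be transient
        };
    }
}
```

Providers like Postgres or SQL Server could still override this with a more accurate, provider-specific translation, as noted above.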
D
A
Then let me just try to find another slot. Where would this one make sense? Should there be anybody else here, or just you and Arthur again?