From YouTube: Entity Framework: .NET Community Standup - May 6th 2020 - Introducing the EF Core Community Standup
Description
Join members from the .NET teams for our community standup covering great community contributions for Framework, .NET Core, Languages, CLI, MSBuild, and more.
Community Links: https://www.theurlist.com/efcore-standup-2020-05-06
A: Do we have... yes, here we go, perfect. Thanks, John, appreciate it. I'm so nervous; John is like the maestro of live-streaming, and now he's here to judge our inaugural Entity Framework stand-up. Super excited to have everyone here today. You can see we've got several people from the team; I'm going to introduce the team and then we'll jump into this. We've got Arthur, who leads the team. Arthur, welcome.
C: Thanks, Jeremy. I'm an engineer with Microsoft. I don't work on Jeremy's team; I work for a group called Commercial Software Engineering, or CSE for short, since we have an acronym for everything. In fact, we have multiple acronyms for everything inside Microsoft, as it turns out. Jeremy invited me to be here, so definitely glad to be here.
A: Great, and thanks for coming on board. In future shows we plan to have guests from the community, whether it's MVPs, whether it's some of our community contributors, people working on tools. So we look forward to bringing those guests on, and certainly reach out to us if you feel like there's something we should highlight on this stand-up or someone we should have on board. For this one we've got a special project we want to share with Josh, so we'll get to that in a minute. But just for a quick format: we're going to go through a few links that we find interesting, talk about those links, introduce you to a few new features, Josh is going to share with us a little bit about his project, and of course, throughout, we're here to answer your questions. That's our real reason for being here in the stand-up.
A: No, you're good; I was just transitioning the window, and I'm gonna do one more transition here. All right, and we're gonna go top to bottom. So let's go ahead and expand this first link, "The Foundation for the Future." This is an article that Julie Lerman wrote for us, and if you're not familiar with Julie, she's been an incredible advocate for the Entity Framework team over time. She gives us a lot of support and a lot of valuable feedback, and this article really encapsulates the philosophy behind Entity Framework Core 3. And I love that in the article, if you want to scroll a little bit, it's not just a "here's how you get started with Entity Framework Core 3"; it talks about the history behind it, some of the decisions that were made, why we made some of the choices that we made. It talks about, basically, the great LINQ query overhaul, definitely something that we get a lot of feedback about: changes in the way that LINQ works and the way query evaluation works in 3.0 versus 2, and this dives into the background behind that. If you want to scroll a little more; we don't have to read the whole article, but the format here goes through some example changes, has some code samples, et cetera. So I really just wanted to highlight this, especially if you're curious about the mindset behind Entity Framework Core 3, or if you're new to Entity Framework; it's a great article to really cover all the bases of what's going on. Does anyone else have any comments on this one?
A: What this article does is it talks about a way to facilitate that in a very unique scenario. In this case they're using Identity, and Identity already has its own DbContext, et cetera, and so they're explaining how to create a special extension to that, so that you can intercept changes and actually track the events that are happening. So it's a pretty detailed article, but very informative on ways that you can extend Entity Framework to facilitate things like creating a set of events based on changes to the database.
D: That makes it pretty easy to hook up to Serilog if you're in a kind of ASP.NET Core dependency injection application; it's not so easy to do that if you're just in a console or some other application. But in EF Core 5 we've also introduced the LogTo method, which I'm not sure is covered in here or not, but that also makes it easier to just get simple logs out without doing any setup: you don't have to install any package, you don't have to configure anything in DI.
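The LogTo method Arthur mentions (introduced in the EF Core 5 previews) can be sketched roughly like this; the entity type and connection string are hypothetical, just to make the example self-contained:

```csharp
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging;

public class Blog
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs => Set<Blog>();

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder
            // No logging package or DI container required: LogTo sends EF Core's
            // log messages straight to any Action<string> delegate.
            .LogTo(Console.WriteLine, LogLevel.Information)
            .UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=BlogDemo");
}
```

With this in place, running a query prints the generated SQL to the console, which is the "simple logs out without doing any setup" scenario described above.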
A: So this is the first part in a five-part series on Channel 9. This is a video series that walks through the basics of getting started with Entity Framework, and if you scroll down, the first video that we have linked to has links to the other videos. You can see that they cover how to work with existing databases, things like change tracking, and querying using some more complex queries, so you're not grabbing everything that is available but grabbing just what you need. And then, I love this, "putting the CUD into CRUD," wrapping it up with adding, updating, and deleting data. So a fine series; check it out. They're less than a half-hour each, so it's good for just some casual watching and catching up on Entity Framework.
A: Definitely love that nice thumbnail on this one. So this is how to query a SQL Server XML data type column. The context here is that the way we actually use some of the expressions to do a search on a column isn't necessarily compatible out of the box with the XML column type, and so this article talks about a way that you can, again, extend how Entity Framework works to facilitate it, if you have that XML column type and you want to do a search inside the properties or attributes in that XML object.
D: So, interesting thing about XML columns: when I first joined the EF team 12 years ago, and we were planning for EF4, the first meeting I went into was about support for, you know, first-class mapping of XML columns. That got cut from EF4 and it's never come back. I would hope that we can actually do something in EF Core now, because we have a lot better infrastructure, without an EDM under it, to make it easy.
D: But what this shows is using these DbCommand interceptors, and this is something that is in EF Core now, and was actually very popular in EF6 as well, because it basically lets you intercept right before we send something to the database. So you can do things like manipulate the command text by parsing expressions. So this is a really good hook for being able to get into the low-level stuff and get it to do what you want. It's also great for shooting yourself in the foot as well.
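A minimal sketch of the DbCommandInterceptor hook Arthur describes; the specific rewrite here is a placeholder, not the XML-column technique from the article itself:

```csharp
using System.Data.Common;
using Microsoft.EntityFrameworkCore.Diagnostics;

// Runs right before EF Core sends a query command to the database,
// so the generated SQL can be inspected or rewritten.
public class CommandTextInterceptor : DbCommandInterceptor
{
    public override InterceptionResult<DbDataReader> ReaderExecuting(
        DbCommand command,
        CommandEventData eventData,
        InterceptionResult<DbDataReader> result)
    {
        // Example rewrite point: patch the command text before execution.
        // (This is the hook the XML-column article uses to adjust the SQL.)
        command.CommandText = command.CommandText.Replace("/* marker */", "");
        return base.ReaderExecuting(command, eventData, result);
    }
}

// Registered when configuring the context:
// optionsBuilder.AddInterceptors(new CommandTextInterceptor());
```

Because the interceptor sees the raw command, it is also exactly the "shoot yourself in the foot" surface mentioned above: a bad rewrite produces invalid SQL at runtime.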
A: So this is one of many articles that come from Erik. Erik is the author of a set of Power Tools extensions for Entity Framework, is very active as well with contributions and supporting the team, and is a very prolific blogger. And I love his posts, because they're just short and to the point. So this one is "I want to pass a dynamic or variable list of values as parameters," and Entity Framework gives us FromSqlRaw, so that lets you take raw SQL.
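The post is about passing a variable number of values safely; with FromSqlRaw you can build the parameter list dynamically, something like this sketch (the `Customers` set and `AppDbContext` are hypothetical names, not from the post):

```csharp
using System.Linq;
using Microsoft.Data.SqlClient;
using Microsoft.EntityFrameworkCore;

public static class CustomerQueries
{
    // Builds "WHERE Id IN (@p0, @p1, ...)" with one DbParameter per value,
    // so the values stay parameterized instead of being concatenated as text.
    public static IQueryable<Customer> ByIds(AppDbContext db, int[] ids)
    {
        var placeholders = string.Join(", ", ids.Select((_, i) => $"@p{i}"));
        var parameters = ids
            .Select((id, i) => new SqlParameter($"@p{i}", id))
            .ToArray();

        return db.Customers.FromSqlRaw(
            $"SELECT * FROM dbo.Customers WHERE Id IN ({placeholders})",
            parameters);
    }
}
```

The point of the pattern is that only the placeholder list is built as a string; every value still travels as a SQL parameter.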
A: And then this one. This was probably my favorite project type to work with. I don't know if any of the viewers have had the pleasure of managing their databases with dacpac files, but this is a project that defines the schema and information for a SQL Server database. And so what he's showing is, if you want to start with that definition and generate your classes from the database definition, these are steps that you can take to do that.
D: So one of the things that this shows is the EF Core Power Tools. This is one of Erik EJ's projects, and this is basically the GUI, the recommended way to do reverse engineering with a GUI, as opposed to just doing it on the command line. You can see he's got, you know, dialogs that you can go through in a kind of wizard-like way. It's really good, and he's very responsive about putting new stuff in there.
A: And the nice thing is he's got an open GitHub repo, so you can go in and file issues and request new features, and that's all available directly through that. So this one is on Entity Framework Core 5, that's our bleeding edge, we're on preview 3 in our release, and he was comparing that with SqlBulkCopy. And what I love about this article is...
A: This is a USA Today article, and it's about a project that Microsoft did in collaboration with a few other companies, and it is designed to fit a need during the pandemic, which is really matching people who have personal protective equipment to donate with places like hospitals that need that. But I'm just saying that at the high level. Josh, you were on the actual team that worked on this; I kind of gave the 5,000-foot view. You want to share a little bit more about the project at a high level?
C: Sure. We delivered this, so I'll get into that in a moment, and we'll talk about how we used EF Core and what our experience was there. But yeah, Jeremy did a good job of kind of summarizing it at a high level. The project basically is meant to link up donors of personal protective equipment with hospitals in particular, who have had the need for it.
C: It's a relatively straightforward app, not a ton of moving parts. But essentially, you as a donor, you come in either with a mobile app, or we have a web portal as well that kind of gives a similar experience. You come in to the front end of the app, you're authenticated, you give us a bit of basic contact information, and then you're kind of guided through a bit of an experience to determine the set of items that you have to donate, and we track things like sizes and other kinds of details.
C: We kind of validate them, know who they are, match them up with the hospitals that they represent, and then they have sort of a similar experience to be able to list out and make requests for donations, essentially the things that they need. On the backend, we have a matching algorithm that runs that essentially uses geography and some other heuristics, as well as the item catalog itself and the donations and the requests, and matches those up, and it has some elements of fairness to it.
C: So, you know, in the end, this is sort of like a logistics problem that we're trying to solve, and so there are aspects of fairness to it. You can naively build greedy algorithms that will satisfy requests for donations, maybe in a way that doesn't optimize spreading the donations around to the maximum number of folks so that everyone benefits. And so we went to some pains, again using some good work from our partners, to ensure fairness in the algorithm.
C: But we create these matches behind the scenes, and then the last step is to automate the fulfillment of those matches, the actual shipment out the door from the donor to the hospital where the match has been made. And so, yeah, we worked on this project over the course of about five or six weeks; we're just wrapping it up now, the first phase of it anyway. And yeah, obviously we're all aware of kind of the current situation in the world.
C: So there was quite a bit of urgency and need for this thing, obviously. And again, the group that I work with is Commercial Software Engineering in Microsoft, and we do projects like this with external partners quite frequently. Usually the projects are something like two to four months in duration, and this one sort of became our kind of internal joke: that we were delivering what might typically be a three-month project...
C: ...we were trying to deliver it in three weeks. And we ended up taking a little bit longer than three weeks, but we also ended up getting it out the door, so yeah, it was successful. We're now matching items on the supply and demand side and shipping those out via UPS. So it's been a great experience to work on. That's the high level; that's not the technical details!
A: Sure. So, getting into technical details now. First off, doing that much of a project in five to six weeks is pretty incredible. I've been on some of those projects where, you know, things don't wait; the pandemic isn't going to wait, so you've got to get it done now. You used Entity Framework Core as part of the project. Can you talk a little bit about the decision behind it and how that helped the project? But of course we also want to be transparent and talk about, you know, some of the challenges you may have faced as well.
C: Yeah, certainly. So, from an architectural perspective, you've got a mobile app, you've got a web app, this is all running in Azure, of course, and both clients are talking to an API built in .NET, in ASP.NET Core: a REST API that we're hosting in App Services. The REST API implementation itself, you know, relies heavily on EF Core.
C: All of our data is stored in Azure SQL DB. In earlier incarnations of the architecture we had talked about using Cosmos and some other things, but in the end we decided to go with the kind of tried-and-true relational database hosted in Azure SQL DB, and so EF Core was kind of the logical choice. Again, I mean, there are certainly other possibilities there, but the partners we were working with, who are external to Microsoft...
C: ...they were very familiar and comfortable with EF Core. Then we started doing performance testing and load testing. We had anticipated demand over periods of time, some projections we had done, and we started doing load testing at many multiples of the anticipated data load that we expected to face.
C: We found that EF Core worked very well. I mean, even when we went down into traces and looked at the queries that were being generated, we were very happy with what was essentially the output of the queries themselves. Admittedly, most of what we were doing, at least on the front-end side, was fairly straightforward, kind of CRUD-oriented operations.
C: On the back end, we were also using EF to do more kind of batch-style work: a lot of heavy reads, and then some batch-style writes when we're doing the matching and fulfillment side. But even then it worked very well, or it held up very well, in all of our testing. One of the things in particular that we thought worked really well, and allowed us to keep moving at this very rapid, kind of iterative pace...
C: ...almost on a day-to-day basis, one of the things we actually thought really helped us was using migrations. We even had some, I'll call them skeptics, on the team, who maybe had prior experiences with migrations and kind of thought, "oh, is that really what we want to do here?" We didn't have somebody who was sort of dedicated, from the, you know...
C: One of the patterns that, you know, I've certainly seen before on other projects is that you have one person, or some small group of folks, who are dedicated to the care and feeding of the database itself, and so all kinds of changes go through them. We didn't have that in this case, and so we probably shot ourselves a little bit in the foot kind of up front, because we were moving so rapidly.
C: We had many engineers kind of making small incremental changes to the database and to the schema, really as we went. So we had lots of migrations happening, lots of incremental changes in migrations happening up front. Occasionally we shot ourselves in the foot and maybe had to backtrack a little bit, or kind of wipe away a test database and rebuild it from scratch, and that was a little frustrating; we had some feedback on that. But, in the end, migrations did allow us to move.
C: You know, when you take a slight step back over the course of two, three, four weeks, migrations were one of the key things that kind of kept us in line and kept us all synchronized. Otherwise it would have been sort of chaotic to kind of manage the rapid evolution of this database and the entire solution kind of all at once. So that was a big win. The other one I'll mention: actually, Arthur, a moment ago we were joking about logging, right?
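The migrations workflow Josh describes is driven by the EF Core command-line tools; a typical loop during rapid schema changes looks roughly like this (the migration name here is just an example):

```shell
# Scaffold a migration after changing the model; EF generates the diff as code.
dotnet ef migrations add AddDonationSizeColumn

# Apply any pending migrations to the configured database.
dotnet ef database update

# When a test database gets into a bad state, drop it and rebuild from scratch
# (the "wipe away a test database and rebuild it" step mentioned above).
dotnet ef database drop --force
dotnet ef database update
```

Because the migrations are checked-in code, every engineer pulling the branch gets the same schema changes, which is the "kept us all synchronized" effect described below.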
C: We actually took full advantage of the Serilog integration, because again, we're running ASP.NET Core on the backend, and our matching and shipping components are running in Functions, and so that's all very easy to wire in using Serilog, so that you've got a nice abstraction layer there. But then we were piping everything into App Insights. It was super easy for us to go into App Insights, and that's where we could go in and look at the actual queries that were being generated.
C: You know, we could sort of do the "trust but verify" thing, right? Like, what is EF actually doing for us behind the scenes? And in this particular case, that actually helped us: the ability to go into App Insights and track down specific queries actually helped us uncover a couple of places where we made some assumptions. You know, you're using IQueryable, right, and so you're doing projections into the database, and you want all that stuff to happen down at the...
C: ...database level, so that you're not doing, like, weird filters and pulling half your database out into memory and that sort of thing. But there were a few cases early on where we ended up doing that accidentally, and it was very straightforward for us to go into App Insights, look at those queries, and say, you know what, we're actually pulling too much data into memory and then doing filtering and sorting; we need to go fix our expression.
C: You know, our query expression that we were using. I'm thinking of one example in particular, where there were a couple of us kind of zeroing in on this, and from the point where we identified the issue to where we resolved it and pushed a PR, so that we could get it merged and fix the problem, it was maybe an hour, kind of round-trip.
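The "pulling too much data into memory and then filtering" problem Josh mentions usually comes down to where the query switches from IQueryable to in-memory LINQ; a sketch with hypothetical entity names:

```csharp
using System.Linq;

public static class DonationQueries
{
    public static object Bad(AppDbContext db) =>
        // AsEnumerable() switches to LINQ-to-Objects: the Where, OrderBy, and
        // projection run in memory, after the whole table has been fetched.
        db.Donations.AsEnumerable()
            .Where(d => d.State == "WA")
            .OrderBy(d => d.CreatedAt)
            .Select(d => new { d.Id, d.ItemCount })
            .ToList();

    public static object Good(AppDbContext db) =>
        // Staying on IQueryable lets EF Core translate the filter, sort,
        // and projection into the SQL statement itself.
        db.Donations
            .Where(d => d.State == "WA")
            .OrderBy(d => d.CreatedAt)
            .Select(d => new { d.Id, d.ItemCount })
            .ToList();
}
```

The generated SQL for each variant is exactly what the team was inspecting in App Insights to tell the two cases apart.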
D: Yeah, before we get into challenges: we talked about this kind of offline a couple of days ago, and one of the things that I found really interesting was your decision near the beginning to not use a customized repository pattern, to not use Cosmos, and to decide to use, you know, IQueryable. And that's kind of one of the themes that I've seen quite often in a lot of apps: people start out with this idea that they need all of these abstractions and all of these additional...
C: ...you know, the DbContext, and you don't have to kind of go through all the gymnastics, like you say, of abstracting it behind a repository or something like that, just to make it testable, and that was a huge win for us. In fact, that was a lesson learned. We just recently did a retro on the project, and one of the folks who was working on that particular bit even called that out and said, yeah, that was something that, you know...
A: We had a comment and a question from Michael Powell on YouTube. His comment was: "Lesson: migrations cannot happen in a vacuum, and not without communication." And I think the reality on theirs was that they were moving, like, at a sprint, so they were a little bit out of breath to shout over the line, but yeah, good advice there. And then there's a question...
C: ...an external partner, Merit Solutions, who were really good partners for us on the engineering side, and then Kearney, who did some of the work specifically around the matching algorithm. So yeah, that's all public knowledge, at least I hope so, given that, again, it's all in USA Today, yeah.
C: I think I mentioned the kind of yin and yang of migrations, and I think it was Michael who chimed in there as well, and, you know, exactly right. I mean, migrations, they're a powerful tool, but they're a tool, and that means you can misuse the tool. And luckily, I would say we didn't grossly misuse the tool. I think, if anything, we were just maybe a little overeager, overly optimistic, and frankly, we were scrambling around a little bit.
C: I mean, it literally was the kind of thing where, on a day-to-day basis, almost during stand-up, we would just remind one another: hey, we're actually trying to save lives here. And, you know, just beyond EF and any technical discussion at all, that really gave a really interesting spin on the entire project, and yeah, it really helped.
C: It helped you focus in on what was important, but it also made it hard, because, you know, at the end of the day, we're all engineers, right, and we know that even if we want to think faster or do it faster, it still takes time. And that was frustrating, it was hard, but it was also just the reality. And so, yeah, I mean, that manifests itself...
C: ...in, you know, if we look at migrations in particular: everybody's just, like, in the mad scramble, sort of figuring out, you know, that early stage of the project, where you're sort of sorting through: what does the domain look like? What are the entities you're working with? What are their relationships with one another? If you don't do that deliberately, then yeah, you end up tripping over yourselves a little bit. Definitely not the fault of migrations; it was just our use of them.
C: I think probably something else that comes to mind that's kind of worth calling out: we had a need to use encryption, to essentially encrypt pieces, not like large swathes of data, but pieces of data that we were reading in and out, or flowing through EF Core. And it ends up there's really two broad options there. Arthur, you can probably talk a little bit more about this.
C: What we ended up doing instead was basically defining some custom code, using some custom serialization code along with some custom attributes, to sort of discreetly identify particular properties that we wanted to encrypt, and then kind of doing the right thing as the data flows in and out of EF, to do encryption and decryption, wiring it into a key, basically using an encryption key that we were storing in Key Vault. And then, you know, I mean, that pattern worked. It was fine.
C: We were pretty happy with it in the end, and it's actually some code that I think we are actually planning to put on GitHub at some point soon. But again, you know, I would turn this back to Arthur. I think probably his guidance in general was more: you know, Always Encrypted is probably your friend here, because that lets you avoid writing your own custom code, particularly custom security code, which is always a little bit... you know, can be a little bit... yeah.
D: So, a couple of things about that. Certainly the value converters feature, which essentially lets you, you know, convert any value from one form to another when you're saving, and then do the opposite when you're reading. We did consider that people would want to use that for doing encryption, and it's somewhat built into the ASP.NET Core identity model for that. I haven't actually seen a lot of people using it, and that may be because Always Encrypted, at least on SQL Server...
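A value-converter-based approach like the one Josh's team built might look roughly like this; the `Crypto.Encrypt`/`Crypto.Decrypt` helpers are placeholders standing in for real cryptography wired to a Key Vault key, not actual library calls:

```csharp
using Microsoft.EntityFrameworkCore;

public class Donor
{
    public int Id { get; set; }
    public string Phone { get; set; }   // stored encrypted at rest
}

public class AppDbContext : DbContext
{
    public DbSet<Donor> Donors => Set<Donor>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // HasConversion runs the first delegate when saving and the second
        // when materializing: the "convert one way when you're saving, do
        // the opposite when you're reading" behavior described above.
        modelBuilder.Entity<Donor>()
            .Property(d => d.Phone)
            .HasConversion(
                plain => Crypto.Encrypt(plain),    // hypothetical helper
                cipher => Crypto.Decrypt(cipher)); // hypothetical helper
    }
}
```

One known trade-off of this approach: because the stored value is opaque ciphertext, you lose the ability to filter or sort on that column in SQL.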
D
If
you're
using
sequel
server
is
available
and
it
works
pretty
well
with
EF
core
I,
wouldn't
say
it's
like
perfect.
Yet
and
there's
a
there's
a
bit
of
history
behind
that,
and
this
is
kind
of
technical.
But
you
know
we're
on
a
run
in
the
F
core
stand
up,
so
I'm
gonna
be
a
bit
technical.
So,
basically,
back
in
the
day.
Historically,
both
sequel,
server
and
sequel
client
have
been
very
relaxed
about
the
type
of
parameters
that
you
use.
D
So
you
can
pretty
much
send
a,
for
example,
a
date
time
parameter
and
if
your
columns
date
time
it'll
just
coerce
it
and
use
it
now.
You
might
have
perfect
pact
in
doing
that
because
of
indexes
and
all
kinds
of
stuff
and
depending
on
what
you're
doing,
but
it
would
work
when
they
introduced.
So
none
also
related
to
that
is
that
sequel,
client
and
has
historically
not
been
great
about
how
it
handles
parameters
in
some
ways
so,
for
example,
decimal
parameters.
D
If
you
specify
18
as
the
precision
and
scale
it
will
then
truncate
before
it
sends
your
data
to
the
database
to
a
precision
of
to
visit
the
scale.
That's
the
precision
right,
I,
don't
know
whichever
one
it
is,
the
one.
That's
the
one.
That's
after
the
decimal
point,
that's
the
one
that
gets
truncated
and
so
that's
not
ideal
because
sequel
server,
if
you
don't
do
that,
will
round
so
by
setting
the
parameter
values
in
sequel,
client,
you
end
up
with
a
different
behavior
than
if
you
don't
so.
D
A
couple
of
things
check
caused
issues
with
that
in,
in
always
encrypted
always
encrypted,
basically
started
requiring
the
parameters
exactly
match.
The
types
in
the
database,
so
all
of
this
kind
of
it
just
works
even
though
it
might
not
be
quite
correct,
it
kind
of
goes
away
and
you
have
to
have
everything
precisely
mapped.
So
that's
in
general.
You
can
do
that
with.
Of
course
it
might
be
a
pain,
but
you
know
that's
what
always
you
know
always
encrypted
does
if
your
reverse
engineering,
your
database,
will
do
it
for
you.
D
Otherwise
you
know
you
can
always
just
tell
it.
You
know
this
is
a
de
type
2
or
whatever,
if
you're
doing
your
converse
mappings
manually,
but
then
that
has
this
issue
with
decimals,
where,
if
we
set
18
to
in
the
parameter
which
we
need
Troy's
encrypted
it'll
start
truncating,
which
we
don't
want
it
to
do.
So.
D
Luckily,
since
the
sequel
client
has
now
moved
to
an
external
package
and
it's
not
tied
to
the
dotnet
framework
and
so
is
actually
able
to
make
sensible
changes
to
evolve
itself,
it
has
now
fixed
that
which
is
existed
for
12
years.
So
now,
if
you
do
18
to
in
sequel,
client
with
the
new
Microsoft
data,
a
sequel,
client
and
there's
a
preview
of
the
200
out
now
it
will
not
it
will
round
rather
than
truncate,
and
so
the
behave
you
know
aligns
with
sequel
server.
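The "everything precisely mapped" requirement Arthur mentions is expressed in the model configuration; a sketch (entity names hypothetical):

```csharp
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class OrderContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Maps Total to decimal(18,2) so the parameter type sent by SqlClient
        // exactly matches the column type, which Always Encrypted requires.
        modelBuilder.Entity<Order>()
            .Property(o => o.Total)
            .HasColumnType("decimal(18,2)");
    }
}
```

Reverse engineering an existing database scaffolds this configuration for you; when building the model by hand, it has to be stated explicitly, which is where the decimal truncation behavior Arthur describes used to bite.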
B: Maybe I have one more word to add on that. So, as was mentioned here before, Always Encrypted is a SQL Server feature, so obviously that's gonna work if you're on SQL Server; if you're on SQLite, you're out of luck. So then you have to use some sort of value-converter-based solution, or there are other options, by the way, as well. And Always Encrypted also, unless I'm mistaken, I'm not a big expert, but it's a whole-database kind of solution.
B
So
you
don't
get
to
pick
and
choose
which
columns
you're,
gonna
encrypt
in
which
you're,
not
that
might
be
a
problem
in
I
mean
that
might
not
be
what
you
want,
what
you're
looking
for.
So
once
again,
you
may
I'm
just
basically
trying
to
say
that
there
are
some
valid
scenarios
where
you're
gonna
prefer,
maybe
a
value
converter
based
solution
or
another
one.
So
in
other
databases,
sometimes
you
handle
encryption
by
simply
doing
filesystem
encryption.
Now,
that's
the
Postgres
kind
of
recommendation.
B
You
encrypt
a
file
system
where
the
files
are
that
takes
care
of
the
at
rest
aspect
of
it.
Of
course
you
use
something
like
to
you.
That's
connection
encryption
to
make
sure
that,
on
the
network,
everything
is
encrypted
and
these
kinds,
these
two
things
together,
gives
you
something
which
is
like
what
sequel
server
does
with
always
encrypted,
so
there's
various
solutions
around
encryption.
It's
always
you
know
a
rich
topic
so.
C: I'll make sure, Jeremy, that we circle back with our team, where I think we're gonna put at least, you know, a simplified version, or an isolated version, of our value converter solution on GitHub. So I'll make sure you get the link to that, so that you can circulate it.
A: Nice. Well, we'll bring you back on for that, but yeah, we'll look for that link. Now, we have several questions come up, and some of them aren't directly related to what we're talking about, so I'm gonna move those to the end. But there is one that said: "Wait, so IRepository is a bad pattern now, or just when used with EF Core?" And I think the point is not that it's necessarily a bad pattern; it's a question of how you're approaching the application.
A: But the idea of the DbContext is that the DbContext is something that you can mock and that you can test, and so, if testability is what you're targeting, it's not necessarily a reason to go to another pattern; there's a lot you can do with DbContext directly. It doesn't mean those patterns are invalid. It means, instead of using a pattern up front...
A: ...we really encourage people to evaluate what the needs are and find the simplest approach that meets those needs, because it's very easy to overcomplicate a solution. And the last thing that you want is to be on a project that has, you know, four layers of abstraction, an IRepository&lt;T&gt; that goes to a DataAccess&lt;T&gt;, and you make one change to one column and you have to change it in five different places, because it propagates through all that, as opposed to a DbContext, which will just update with the model. So it's not that IRepository is bad.
D: Absolutely. It's interesting; I mean, we only have 20 minutes left, and I'm not sure we want to spend the time, but I do have something up here that I could share, if you want. So, stepping back slightly: I wrote recently, a couple of weeks ago, new documents on how you do testing with EF Core, and as part of that we have a sample, and I could show a few parts of that sample if we want to.
D: Yeah, let's do that. Well, actually, no, let's just talk about this last one here, because this is a good one to talk about. So, with EF Core 3.0 we released the Cosmos provider, the Cosmos DB provider, as finally an RTM version. There are still a bunch of missing features in there; missing, as in, we're not 100% sure what exactly people want to do, and so we're kind of gathering feedback.
D: You can do WithPartitionKey and tell it which partition key, and then it will extract that out of the query and pass it through the Cosmos API, to make the query as efficient as it can against the Cosmos model, and jump in if I got that wrong at all. But this is a really cool way that... and we have many, many contributors. Going back to the blog post, again, we try to list these; these are just the people, so this is the team.
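If I heard the feature name right, the partition-key hint would be used roughly like this against the Cosmos provider; the entity, context, and key values are hypothetical, and the API was still in preview at the time:

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

public static class OrderLookups
{
    public static Order Find(OrdersContext db, string region, int id) =>
        // WithPartitionKey tells the Cosmos provider which partition to hit,
        // so the value is passed through the Cosmos API directly instead of
        // producing a more expensive cross-partition query.
        db.Orders
            .WithPartitionKey(region)
            .Single(o => o.Id == id);
}
```

Without the hint, the same query would have to fan out across all partitions of the container, which is exactly the inefficiency the feature avoids.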
A: ...to say on that. So I think, instead of going through those, can we highlight the docs page really quick, and then let's look at your testing example, and then we'll answer some of these pending questions. So, just for those who have not visited our documentation recently: we overhauled our hub page. The idea is to give you faster access to what you need without drilling down several layers deep in the table of contents. So you can do one click and get started with a very basic console Entity Framework Core app.
A
We take you into some of the different intro references, supported databases, et cetera, and then we have these cards so that you can see at a glance what databases we support, what our providers are, how to create a model, query and save data, et cetera. So we encourage you to take a look at that hub, and this is your page; we're creating this page for the community. So we want to make sure we get feedback, and we have a documentation GitHub project. We do everything in the open, whether it's code or documentation.
D
So what we have here is a very simple web API application. It's got a DbContext in there, and we've got two entity types, Item and Tag. I'll show this very quickly. These are also written in ways that are more like how you might write real entity types, rather than just auto-properties with getters and setters.
D
So, for example, I have a private read-only field for the primary key, so I don't expose it to the application, because it's really a database concern. And then we have this private constructor, and this is what EF will call when it's setting the primary key for you. Or you can create a new item without the primary key set publicly and then save that, and EF will generate the key and set it into this field. There are a few other things in there, but let me move on to the testing. So I have some tests here.
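An entity type in the style just described might look like the following sketch. The names are illustrative rather than the exact code from the sample, and the key field would still need to be mapped in the model configuration:

```csharp
// Sketch: an entity whose primary key is a private read-only field,
// hidden from the application because the key is a database concern.
public class Item
{
    private readonly int _id;          // backing field for the key; map it in OnModelCreating

    private Item(int id, string name)  // EF Core uses this when materializing from the database
    {
        _id = id;
        Name = name;
    }

    public Item(string name)           // application code uses this; EF generates the key on save
    {
        Name = name;
    }

    public string Name { get; private set; }
}
```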
D
So we have the test class, which is abstract, because it's derived from for each of the multiple providers that I want to show testing with, and it takes DbContextOptions in the constructor, which will be set up for the provider; I'll show that in a minute. Then, when the test instance is constructed, it runs a Seed method. This deletes the database and creates it again, which is a really easy way to ensure you have a clean database for each test, and then we add some seed data and save it.
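The constructor-plus-seed pattern described above can be sketched like this, under the assumption of a hypothetical `ItemsContext` and `Item` type standing in for the sample's real ones:

```csharp
// Sketch: abstract test base that takes provider-specific options and
// recreates and seeds the database for every test instance.
public abstract class ItemsTests
{
    protected DbContextOptions<ItemsContext> Options { get; }

    protected ItemsTests(DbContextOptions<ItemsContext> options)
    {
        Options = options;
        Seed();
    }

    private void Seed()
    {
        using var context = new ItemsContext(Options);
        context.Database.EnsureDeleted();    // clean slate for each test
        context.Database.EnsureCreated();

        context.AddRange(new Item("ItemOne"), new Item("ItemTwo"));
        context.SaveChanges();
    }
}
```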
D
Also, we want to do tests that actually mutate the database. So in this test I can add an item. Here we create one context instance and we use it, and this PostItem method actually saves to the database, and then we create another context instance to verify it. If we just verified with the first context, we wouldn't necessarily be checking that we'd actually saved anything to the database. Okay, so look at this: this is the SQLite version of the test.
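The two-context pattern just described can be sketched as follows; `ItemsController` and `PostItem` are hypothetical stand-ins for the sample's web API pieces:

```csharp
// Sketch: mutate with one DbContext instance, then verify with a fresh one,
// so the assertion really checks what was saved to the database.
[Fact]
public void Can_add_item()
{
    using (var context = new ItemsContext(Options))
    {
        var controller = new ItemsController(context);
        controller.PostItem(new Item("NewItem"));     // saves to the database
    }

    using (var context = new ItemsContext(Options))   // fresh instance for verification
    {
        Assert.True(context.Items.Any(i => i.Name == "NewItem"));
    }
}
```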
D
And you see here, all I'm doing is UseSqlite and giving a file name, creating a test database in a file on disk, which for SQLite is super fast; it's just an in-process file. If you've got an SSD, you're not going to run into perf issues with that. So I can run those tests. I'm sure this is tiny on all your screens, but it's running, and it has said in this case that they have all passed.
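The file-based SQLite setup mentioned above amounts to something like this minimal sketch (the file name is illustrative):

```csharp
// Sketch: configure the test context to run against an on-disk SQLite database.
// SQLite runs in-process against a local file, so this is fast on modern hardware.
var options = new DbContextOptionsBuilder<ItemsContext>()
    .UseSqlite("Data Source=items_test.db")
    .Options;
```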
D
So take my word for that, or maybe you can see a bit of green down there. We can do the same thing with the SQLite in-memory database. SQLite allows you to create a connection with the filename ":memory:", and that means it creates the database in memory. The thing with the SQLite in-memory database is that its lifetime is tied to the connection.
D
So in this case we have this method: it creates the connection and opens it. Now, that's important, because once it's open, EF will not then open and close the connection, which means this connection is going to stay open for the test, which means the in-memory database is going to stay around for the test. So we do that in the constructor, and then we use this; look, this is obscure. We need to make this better; you shouldn't have to know about this. You know, it wasn't written for this purpose, but it works.
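The keep-the-connection-open trick can be sketched like this, assuming Microsoft.Data.Sqlite:

```csharp
// Sketch: a SQLite in-memory database lives only as long as its connection.
// Opening the connection ourselves means EF won't open and close it per
// operation, so the in-memory database survives for the whole test.
var connection = new SqliteConnection("Filename=:memory:");
connection.Open();                       // keep open: the database exists while this is open

var options = new DbContextOptionsBuilder<ItemsContext>()
    .UseSqlite(connection)               // pass the open connection, not a connection string
    .Options;
```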
D
So if we run any of these SQLite tests... let me just demonstrate, by running in a debugger, what's actually happening when we run these things. Sorry, that's the wrong one. Let me go here and debug the unit tests; I've set up some breakpoints so you can see what's happening when I run these tests. First off, we get the test class created. This is xUnit, but most unit-testing frameworks do the same thing, creating an instance of the class.
D
We run our Seed method, which deletes and creates the database, and now we're running the test. Now, when we run the next test, you can see we go back to the top: we create a new instance of the test class, so there's a new instance per test. We seed the database, we keep going, and we do that for each test.
D
Okay, so there's some interesting stuff in the article about testing with the in-memory database. For example, I have a version using the in-memory database here, and actually one of the tests will fail, and that's because the in-memory database doesn't support one of the things we're testing. That's a very useful thing to learn about, but I'm not going to go into it now for the sake of time. Instead, I'm going to go over here to where we have shared-database tests. Basically, these are essentially the same tests that we saw before.
D
I have, you know, CanGetItems; basically all the same tests. However, what we're doing here is using a shared database fixture, which is an xUnit concept that allows you to create something that's going to be used for all executions of the tests. What this allows us to do is basically not recreate the database for every test we run, but only create the database and seed it once, for all of the tests.
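The shared-fixture idea uses xUnit's `IClassFixture<T>` mechanism and might be sketched like this; the class and entity names are illustrative stand-ins for the sample's:

```csharp
// Sketch: an xUnit class fixture that creates and seeds the database once
// and is then shared by every test in the class.
public class SharedDatabaseFixture : IDisposable
{
    public SharedDatabaseFixture()
    {
        Connection = new SqlConnection(
            @"Server=(localdb)\mssqllocaldb;Database=ItemsTest");  // hypothetical connection string
        Seed();                                                    // runs exactly once
        Connection.Open();
    }

    public DbConnection Connection { get; }

    public ItemsContext CreateContext() =>
        new ItemsContext(
            new DbContextOptionsBuilder<ItemsContext>().UseSqlServer(Connection).Options);

    private void Seed()
    {
        using var context = CreateContext();
        context.Database.EnsureDeleted();
        context.Database.EnsureCreated();
        context.AddRange(new Item("ItemOne"), new Item("ItemTwo"));
        context.SaveChanges();
    }

    public void Dispose() => Connection.Dispose();
}

public class SharedDatabaseTests : IClassFixture<SharedDatabaseFixture>
{
    public SharedDatabaseTests(SharedDatabaseFixture fixture) => Fixture = fixture;

    private SharedDatabaseFixture Fixture { get; }
}
```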
D
We can demonstrate that by running these again in the debugger, if I can do that; we go here and debug the tests. So what we should see here is... the first thing that happens when I run these is, well, I didn't actually show the fixture before; this is the fixture. We're creating an instance of the fixture, which is creating a connection, and that will be shared by all of the tests, and then we run the database seeding.
D
So it's seeding. This is going to delete the database and create it again, but it's only going to do it once, so we won't see that hit again. And I'm running this actually on SQL Server, so that just dropped and created a SQL Server database there. It's entirely clean, seeded, and ready to go, and now each test is going to run.
D
One last thing, because I know we're short on time. When you're mutating a database in a test like that, if multiple tests are then going to use that same database and they're expecting a well-known set of data in it, that's not going to work if you mutate it halfway through. So what we do here is a fairly common trick: we create a transaction. We have a fixture method here; well, that's not the fixture method, that's on a DbConnection. We have a fixture method here that creates the context and takes the transaction, so each context is enlisting in this transaction. We can still have one context save stuff, create another context, read it back, and have that see the mutated database. But then we never commit that transaction, and when you dispose a DbTransaction without committing it, it gets rolled back. So when this using block ends, the transaction gets rolled back, and the changes are not visible to any of the other tests.
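The rollback trick can be sketched as follows, assuming a fixture whose hypothetical `CreateContext(transaction)` overload calls `Database.UseTransaction` so every context enlists in the same transaction:

```csharp
// Sketch: all work in the test happens inside one transaction that is never
// committed, so disposing it rolls everything back and the shared seeded
// database is untouched for the other tests.
[Fact]
public void Can_add_item_without_disturbing_shared_data()
{
    using var transaction = Fixture.Connection.BeginTransaction();

    using (var context = Fixture.CreateContext(transaction))
    {
        context.Add(new Item("Transient"));
        context.SaveChanges();                       // visible only inside this transaction
    }

    using (var context = Fixture.CreateContext(transaction))
    {
        Assert.True(context.Items.Any(i => i.Name == "Transient"));
    }

    // No transaction.Commit(): disposing rolls the changes back.
}
```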
A
That's actually interesting; it ties into a question. First, someone asked if we linked this project: yes, in our documentation we have a detailed page that describes the testing and then links out to the sample, so this sample is publicly available. The other question was: are transaction scopes supported by Entity Framework Core?
D
SQL Server supports it; I believe Postgres does too, right, Shay? But I think SQLite, for example, doesn't yet support that, so it depends somewhat on the database provider. Also, this is only System.Transactions support for individual transactions; support for distributed transactions does not exist in .NET Core. Okay.
A
And that was a helpful clarification. Just so everyone knows, if you go to live.dot.net (live, then the word "dot", then .net) and link into the show on YouTube, or actually right on that page, we have a link to our show notes, and those notes have all the links we cover in the show. So that's how you can get to the link that leads to this testing page.
B
One thing to keep in mind is parallelization, because if you use something like xUnit nowadays, then by default it's going to parallelize different test classes. Keep that in mind if you're mutating; once again, even if you're using transactions, you might see some effects, so it might just be better to disable parallelization, right?
D
There's a note about the parallelization in the doc, and while it's not actually necessary in this case, because we only have one test class and one fixture, so they're not going to run in parallel, this is why we have this locking code in here: so you can share this fixture between multiple test classes. It globally locks, which won't be an issue; you don't need lock-free code or anything like that, just your normal bog-standard lock there, and that'll ensure that even with multiple test classes you get one clean creation and seeding of the database.
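The locking described above can be sketched like this inside the fixture's seed method; the flag and lock names are illustrative:

```csharp
// Sketch: a plain static lock plus a flag ensures the database is deleted,
// created, and seeded exactly once, even when multiple test classes share
// the fixture and xUnit runs them in parallel.
private static readonly object _lock = new object();
private static bool _databaseInitialized;

private void Seed()
{
    lock (_lock)
    {
        if (!_databaseInitialized)
        {
            using var context = CreateContext();
            context.Database.EnsureDeleted();
            context.Database.EnsureCreated();
            context.AddRange(new Item("ItemOne"), new Item("ItemTwo"));
            context.SaveChanges();

            _databaseInitialized = true;   // later callers skip straight past
        }
    }
}
```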
D
EF6 has very poor support for unique constraints, and that's primarily because the EDM itself doesn't have very good support for unique constraints. In fact, one of the things we were looking at, back when the idea that became EF Core was proposed, was adding unique constraints to EF6. It's not there, and it's not going to be there, because EF6 is what it is and it works for a lot of people; EF Core is the architecture going forward.
A
So it's something the team wants to make sure you have available, and we're working on making sure the guidance is designed in a way that really makes you fall into a pit of success, right? We don't want to mislead; we want to make sure our samples do all the right things, and that's part of why it's taken some time.
D
We've... ah, so it says on the stream that everybody lost audio. Okay, we're back. Okay: Entity Framework Plus, extensions and so forth. Yeah, there's a lot of good stuff there, and we're certainly very happy for people to use external third-party extensions if they work for what you need. Some of those extensions do things in ways that we wouldn't necessarily want to do, for various reasons due to the architecture.
D
But a lot of the things in there are places where we have ideas to introduce those features. You know, we're a small team: we have eight people, including myself, who is really a manager and so not supposed to code, and Jeremy, and then six full-time developers on the team. Some of those things we will add, but they're lower down the list.
D
We're also trying to make it possible for the community to contribute. So Maurycy on our team, who's not on the call, has been doing a lot of research into temporal tables, so that we can actually put some guidance up on that, and potentially a prototype that maybe the community can use to work on it. I think it's worth saying that we really, truly believe in open source; I do, everyone on the team does, and Microsoft in general does.
D
Open source is the thing now, and it's not just lip service; we believe it. So putting this stuff out there, having it on GitHub, having people be able to build extensions, having people potentially fork things... there's a fork of EF6 called EF Classic, which is a commercial product, and that's fine. The reasons for being open source are so that we can do all of this stuff and people can use the code in different ways. So I just think it's great that there are extensions and everything, yeah.