Description
ML.NET is a free, cross-platform, and open source machine learning framework for .NET developers. It is also an extensible platform that powers Microsoft services like Windows Hello, Bing Ads, PowerPoint Design Ideas, and more. This session focuses on the release of ML.NET 1.0. If you want to learn the basics about machine learning and how to develop and integrate custom machine learning models into your applications, this demo-rich session is made for you!
Welcome to the session, folks. My name is Ankit Asthana. I hope you're all enjoying Build this year. This is the last talk at Build, and they say you save the best for last — that's certainly what we're hoping for with this session. So let's get started. Before we go further, I have a question for the audience: how many folks here are just getting started with machine learning? Can we get a raise of hands? Awesome, thank you.

So in this session we're going to talk about how you can enter the world of machine learning with ML.NET 1.0. I'm joined here by my fellow PMs, Cesar and Chris Lauren, and we're really excited to be here today. As you've probably already seen at Build this year, .NET is a great tech stack for building a wide variety of applications: you can build apps, and you can build web apps with ASP.NET.
For those of you in the room who are perhaps new to machine learning, one way to think about it is that machine learning is about programming the unprogrammable. For example, if I asked you folks in the crowd to go ahead and build a function which takes in an image and returns whether that image has a face in it or not, you might not know where to start. Take another example, where I give you a description of a shirt like the one I'm wearing right now and ask you: can you write me a function that returns the price of the shirt? In this case you might be able to look at some keywords like "long sleeves" or "business setting" and try to guess the price, but if I then go ahead and ask you to scale this up to the thousands of products that online stores sell, you might actually be a bit challenged there.
So even though you perhaps don't know how to get started writing these functions, what you do have in front of you are examples. You have examples of images with faces in them, and you have examples of images that do not have a face in them, and what machine learning is really about is learning from these examples and then building a function — what we call a machine learning model — that you can then use. Another way of thinking about these machine learning models is as an intelligent function which takes an input, an image in this case, and returns whether that image has a face in it or not.

So with ML.NET you can go ahead and build image classification models like the one I just showed you, which can detect whether there is a face in an image or not. You can build other models, for regression, which help you answer questions like "how much" and "how many".
If you look at the history of ML.NET, its roots come from a Microsoft Research project called TLC, which has been in the company for a number of years and is used extensively by products like Azure Machine Learning Studio and Azure Machine Learning, and experiences like Windows Hello, Office PowerPoint, and Azure Stream Analytics, to name a few. Since we're targeting developers with ML.NET, we've designed the framework with that in mind, and with 1.0 we're adding some new features — AutoML, and tools like Model Builder and a command-line interface — which make it really easy for you to build custom machine learning models. From the get-go we've also thought of extensibility as a key value prop, a key design principle, in ML.NET.
So not only can you leverage the ML.NET trainers and transforms that it comes with out of the box, but you can also benefit from popular open source frameworks like TensorFlow and ONNX, using the same uniform API set that ML.NET offers. And just one more point I want to make on this slide: like everything else in .NET these days, ML.NET is free, it's cross-platform, and it's open source.
So that sounds great — let's talk about some of the things you can do with ML.NET 1.0. There's a number of things on this slide that you can already do: you can build sentiment analysis models, you could build a product recommender, you could do price prediction, you can classify images, you can build a sales forecast, and a whole lot more.
What you're going to see here in a second is the .NET website. If you're looking for ML.NET, you can click the machine learning section on our website and you arrive at the ML.NET landing page. If you go further, you're going to see the various samples that we have, and clicking any one of them is going to take you to our samples repo, where you can learn how to build them from scratch.

So let me show you the samples repo very quickly. Here you can see we have samples for fraud detection; you can classify issues; you can recommend products; you can do sales-spike anomaly detection; and you can also use TensorFlow and ONNX to do things like image classification and object detection. For each of these samples we have both the training code for the models and the consumption code.
In each one of these samples we describe where the dataset comes from and what kind of machine learning task the sample represents, and then we show you the code for building the model and for consuming it. So if you're new to machine learning and just want to get started, chances are that if you visit our samples repo, the scenario you want to enable is perhaps already there, and looking through these samples and learning from them would be a great start for you.
Next, let's take a scenario like sentiment analysis, and let me show you a live app that uses it. What you have in front of you here is a Blazor + ML.NET app: it's using Blazor, and the model running behind it is an ML.NET sentiment analysis model. Let me go ahead and type some text so you can see this in action. If I say "machine learning is fun", you can see that's a very positive sentiment, so it's working well. However, if I go and say "machine learning is not fun", you'll see that the sentiment drops. So that's a very simple example of ML.NET at play.

So what if I want to build a sample like this from scratch? Let me show you how you can do that. I'm going to bring up Visual Studio here.
This is Visual Studio 2019, but you can use any version of Visual Studio to play with ML.NET. The first thing you want to do when getting started with ML.NET is to go ahead and acquire our NuGet package, which is called Microsoft.ML. Once you've acquired that package, you can see that all our source code currently lives under the Microsoft.ML namespace.
If you want to go ahead and check out that code, you can certainly do that. Once I've set up my NuGet package and my namespaces, I can go ahead and start creating my machine learning context. For those of you in the room who are familiar with Entity Framework, this is very similar to creating a DbContext. The next thing I have to do, once I've created my context, is read in my datasets for training and testing.
If you look over here on the right side in Solution Explorer, you're going to see that I have two datasets: the Yelp labelled train dataset and the Yelp labelled test dataset. If I explore one of these datasets, you're going to see that it has two columns. The first column is the text column — the actual text — and the sentiment column is the second column, which holds a value of 1 or 0 representing whether the sentiment is good or not.

The way I read this dataset into my ML.NET environment is by using an input class. In this case I've already created my input class, and it has two fields. The first field is called Text and is of type string, and what it represents and maps to is the first column here in this dataset.
I'm going to pass in my input class here, which tells the loader how to read the dataset, and then as parameters I'm going to pass it the path to my training dataset and a parameter that basically tells it, hey, I already have a header row. While I'm at it, I'm also going to go ahead and read my test data very quickly. I'm going to use my training data to train my model, and I'm going to use my test data to determine how well my model is actually performing.
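The loading steps described here can be sketched roughly as follows. The class, field, and file names are assumptions for illustration, not necessarily the exact ones from the demo:

```csharp
using Microsoft.ML;
using Microsoft.ML.Data;

// Input class: maps the two TSV columns to typed fields.
public class SentimentData
{
    [LoadColumn(0)]
    public string Text;      // the review text (first column)

    [LoadColumn(1)]
    public bool Sentiment;   // 1/0 label, read as a bool (second column)
}

public static class LoadDataExample
{
    public static void Main()
    {
        // The MLContext is the starting point for all ML.NET operations,
        // similar to a DbContext in Entity Framework.
        var mlContext = new MLContext();

        // Load the training and test sets; hasHeader skips the header row.
        IDataView trainData = mlContext.Data.LoadFromTextFile<SentimentData>(
            "yelp_labelled_train.tsv", hasHeader: true);
        IDataView testData = mlContext.Data.LoadFromTextFile<SentimentData>(
            "yelp_labelled_test.tsv", hasHeader: true);
    }
}
```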
So at this point I've read in my datasets. Now, the way machine learning works is that this data I've just read in — the text column here, or text in general — needs to be converted into a particular format that certain machine learning algorithms can understand. So what I need to do next is take this textual, string column and create from it a numeric vector. The way I do that in ML.NET is by creating an estimator pipeline. So in step two I'm going to build an estimator pipeline which transforms my input data, and once my data is in the right format, I can actually add an ML trainer to it. I'm going to say var, and the type here is an IEstimator of ITransformer.
I'm going to build this estimator in a stepwise manner. In the first step I'm going to go ahead and transform my text. For this transformation I'm going to use a text transform called FeaturizeText, which uses something called n-grams. It takes the output column, which I'm going to call "Features" — this is where the result of the featurization will go — and then I'm going to pass in as input the text column.

Since we are essentially trying to perform sentiment analysis — that is, we're trying to predict whether the sentiment is 0 or 1 — this is an example of a binary classification problem. So I'm going to pick binary classification off my context, and then I'm going to say Trainers, and now I can choose among one of the many trainers that ML.NET provides; this is what the IntelliSense is currently showing me. In this case I'm just going to use logistic regression, and I'm going to pass in as input here
the label and the Features column that I just transformed. All right — so now I've built a pipeline that can actually take input data, transform it into the right format, and then I've added my machine learning algorithm, in this case logistic regression. The next step here is to train my model.
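The pipeline and training steps just described can be sketched like this. The trainer name follows the ML.NET 1.0 API (`LbfgsLogisticRegression`), and the column names are assumptions carried over from the loading sketch:

```csharp
// Step 2: estimator pipeline — featurize the text, then append a trainer.
var pipeline = mlContext.Transforms.Text.FeaturizeText(
        outputColumnName: "Features",
        inputColumnName: nameof(SentimentData.Text))
    .Append(mlContext.BinaryClassification.Trainers.LbfgsLogisticRegression(
        labelColumnName: nameof(SentimentData.Sentiment),
        featureColumnName: "Features"));

// Step 3: train the model on the training data.
ITransformer model = pipeline.Fit(trainData);
```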
So that's my next step. Then I will say var predictions equals model.Transform. I trained my model on the training data, but I want to perform transformations, or predictions, on new data, so in this case I'm going to use my test data, which I loaded earlier. Once I've performed the predictions that I need, I can go and see how well the model is working: I'm going to evaluate the model and store the result in this var called metrics.

So that's pretty much all I need to do to build a sentiment analysis model. I start with loading my data, I then convert it into the right format, add a machine learning algorithm, train my model, do transformations on new data — the test data in this case — and then finally go and evaluate the model. At this point I'm just going to add a breakpoint here and run this in the debugger, so I can show you what kind of accuracy we're getting with this model.
So I'm going to step over here, and what we're going to see under metrics is an accuracy field. This accuracy field tells you how well the model is performing: 0.80 suggests that this model is about 80% accurate right now on the test data, which is not bad given we just got started.
One last thing I want to quickly show you as well is a concept that we came up with: a convenience API called a prediction engine. What the prediction engine allows you to do is predict on a single instance of data. So as the last step I'll create this prediction engine and have it predict on a single instance of data.

I'll reset my breakpoint here so I can show you how this prediction actually works. If I just step over this code, what we're going to see in the debugger is that the prediction for this particular sentence is positive, and the probability and score tell you how confident that prediction is. So that's a very quick example of how you can get started with ML.NET and build a sentiment analysis model. Hopefully that makes sense, even if some of these concepts seem a bit strange to you.
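The single-instance prediction above can be sketched as follows; the output class and its field names mirror the conventional ML.NET binary-classification output columns, and the sample sentence is illustrative:

```csharp
// Output class: fields filled in by the model for a single prediction.
public class SentimentPrediction
{
    [ColumnName("PredictedLabel")]
    public bool Prediction;   // true = positive sentiment

    public float Probability; // confidence that the label is positive
    public float Score;       // raw score from the trainer
}

// Convenience API for predicting on a single instance of data.
var engine = mlContext.Model
    .CreatePredictionEngine<SentimentData, SentimentPrediction>(model);

var result = engine.Predict(new SentimentData { Text = "Machine learning is fun" });
Console.WriteLine($"{result.Prediction} ({result.Probability:P0})");
```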
Do not worry — we're going to cover this at length in our deck. The next thing I want to cover here quickly is what some of the new features are that we're adding with ML.NET 1.0. The first thing I want to talk about is AutoML, or automated machine learning. This is a new feature that we've added.
If you remember the coding example that I went over, when I went ahead and chose a binary classification trainer, what you might have observed is that in ML.NET we have a number of trainers: we have averaged perceptron, we have linear SVM, we have logistic regression, and so on. For a person who's starting out new with ML, it might be tricky to see which one actually performs best for your scenario. Likewise, even if you've figured out the right trainer for your scenario, you can further fine-tune these trainers with settings called hyperparameters — so again, if you're new to machine learning, this might be really challenging for you. But with AutoML you kind of solve this problem: AutoML automatically builds these models with the best-performing trainer and settings for you.
It's going to use your local computer to figure out the best combination and provide you the best-performing model. You can run the AutoML experience that we have locally, and we currently support three tasks there: regression, binary classification, and multiclass classification. You can use the AutoML experience that we built into ML.NET in different ways: you can use the Model Builder tool that I'm about to show you next, and you can use the ML.NET CLI.
We also announced at Build this year a new tool called the ML.NET Model Builder. This is a Visual Studio extension, and what this tool does is provide a very simple UI that allows you to build these custom models automatically using AutoML. Along with generating the models, what it also does for you is generate code for both model training and consumption.
The way you can launch this tool is by right-clicking a project, choosing Add, and clicking Machine Learning. If you want to figure out where you can grab this tool, you can go to our website very quickly — it's just dot.net/ml — and click the Model Builder link. This will take you to the Model Builder page, which shows you the different features of Model Builder and also allows you to download the Visual Studio extension for it. I've already downloaded that extension, so I don't need to do that again.
So this is the first screen in Model Builder. This first screen shows you the different scenarios you can solve with machine learning using this tool. We have two examples here: price prediction, which is an example of the regression ML task, and sentiment analysis, which is an example of the binary classification task — but you can also build other scenarios using the custom scenario template here. I'm just going to go ahead and click sentiment analysis so I can show you this tool for now.
We've just connected to our database, and now we can actually pull in one of the datasets. I'm going to pull in the same dataset that I used to build the sentiment analysis model earlier; this database is called YummyFood, and I'm going to click OK. I can then choose the table — in this case I'm going to choose the same sentiment review table — and what this tool is going to do is show you a preview of what you would see. The next thing the tool asks me to provide, or rather to choose, is the column that we're going to predict. In this case we're trying to predict the sentiment, so I'm going to choose the sentiment column as the column I want to predict. Once I've done this, I can go to the next phase, which is the training phase.
I'm going to go ahead and train for about 30 seconds here, and as this tool — as Model Builder — is actually training, what we're going to see at the bottom is the accuracy of the best model it has found so far, the best algorithm it has chosen, and the different algorithms it is trying out. So the more time you give this tool, the more time it has to traverse different algorithms and explore different models for you. In this case my dataset was fairly small, but if you have bigger datasets — for example, a dataset that's more than a gig — training might take a couple of hours, and with a dataset that's about a terabyte it might take a couple of days; we've tested it for both scenarios. So in this case I just trained this model for 30 seconds, and you can see that the accuracy it got here was 86%.
If you remember from my last example, when I created the sentiment analysis model by hand, it only had 80% accuracy — so using this AutoML experience you're already seeing better accuracy with just 30 seconds of training. Once my training is complete, I can go to the next screen, which is the evaluate screen. The evaluate screen allows you to look at the best model's performance, the different models it explored, and so on.
If at this step you're unhappy with the performance or the accuracy that you got, you can go back and train for longer, which will perhaps result in better accuracy, or you can essentially add more data to the problem, and so on. The last thing Model Builder allows you to do is generate code automatically. So when I click "Add projects" here, what that's going to do is create two projects.
The first project it creates is a class library, and what this class library has is the model.zip file — the trained model that I just produced in the tool — along with the input and output classes. We also create another project, which has the training code and the consumption code. If I just show you the training code very quickly, you can see here that it's essentially doing the same featurization we saw earlier.

The other file you'll see in this project is the Program.cs file, and what we're showing in this file is how you can now use the trained model. You start, again, with creating your context, then you load the ML.NET model, you create the prediction engine again, and then you can start making predictions. I'm just going to hijack this code a little bit here and add my own single instance of data.
Model Builder is in preview and we are looking for feedback on this tool, but hopefully it makes building machine learning models easy with ML.NET. I'm going to switch back to the deck here quickly. The next thing I want to show you is a couple of other tools, but for that portion of the talk I'm going to leave you with Cesar, who's going to cover this. Thank you very much.
Cesar: One reason could be that, because this is cross-platform, you can run the CLI on a Mac, Linux, or Windows. Another reason is that once you know the process — maybe you want to generate a model every day, or with new data, or you want to automate with the CLI in a different pipeline, or whatever — then it's very useful to also have a CLI, right? So let's see a demo of it.
Basically, this CLI is a .NET global tool, so you install it as a global tool, like you can see here, with dotnet tool install, and mlnet is the name of the NuGet package that you will get automatically installed. I just did this, so I don't need to do it now. The other thing I want to show you is how it works: you can write mlnet auto-train and then the task, and we now also have tab autocompletion.
So I can press Tab and see the different ML tasks that are supported right now, which are binary classification, multiclass classification, and regression, and in upcoming versions we will add the rest of the ML tasks that we have in ML.NET. So let me run one sample using a very similar dataset to the one we were using. Basically, you can see here: mlnet auto-train, the task is binary classification, and the name of the output folder is going to be SentimentModel.
The name of the dataset is the Yelp labelled TSV — so it's a text file with tabs between columns — and finally I need to provide the label column name, which is the column I want to predict, the target. And finally, the time that I'm going to spend looking for better models: in this case just 15 seconds, because it's a quick demo, but when you're working with large datasets you might need many minutes or even hours, as Ankit mentioned. So you can see it running.
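The commands just described look roughly like this. The flag names follow the preview-era ML.NET CLI and the file/folder names are assumptions from the demo, so they may differ in later versions:

```shell
# Install the ML.NET CLI as a .NET global tool (one-time step).
dotnet tool install -g mlnet

# Let AutoML explore trainers and settings for 15 seconds on the Yelp TSV.
mlnet auto-train \
  --task binary-classification \
  --dataset yelp_labelled.tsv \
  --label-column-name Sentiment \
  --max-exploration-time 15 \
  --name SentimentModel
```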
You can see here the accuracy and other metrics like area under the curve, and you can get more information about these metrics at this URL that we have here. Most importantly, we are generating the code and the model — the zip file model that was created when training, and the code for training it and for scoring. Something that maybe was not clear when doing the previous demo: once you have trained a model, you can save it as a zip file.

You can see that now we have this new folder — I was just starting with the TSV, so this has been generated — and then we have the same projects: the class library and the code for training and scoring. It's exactly the same as what you saw with Visual Studio, and it is the same because Visual Studio is doing it on top of the CLI, so we are consistent — in reality it's the same thing. So let me go back to the deck.
The next topic is about scaling and going to production. So far we've seen how you can train your model and how you can test and validate it, but what about putting it into your real applications — .NET applications, ASP.NET Core applications, or ASP.NET? We support .NET Framework and .NET Core, so it's also cross-platform: you can run the model on Linux, Windows, or Mac. When moving to production, it's not just about preparing your data and building and training; the important point is also about running that model in your application.

So you might have questions like these: how can I optimize this for running in an ASP.NET Core application, for instance so it's scalable and multi-threaded? How can I include the creation of the model that we were seeing with Ankit into my DevOps, into my CI/CD pipelines, and how can I automate that? So let me do one demo about how you can use a model in an ASP.NET Core application.
Here you have a solution where I have a Web API that we'll take a look at in a minute, and then we have two projects that are precisely the ones generated by the CLI or Visual Studio: you see the console app for training the model, and the model project that has the zip file and the data classes that Ankit was showing. We're going to show that later, maybe when doing the unit testing, but now I want to focus on how I can use this model zip file in my Web API.

There are basically three classes that you need to use when running a model. One is the MLContext, as Ankit said, and another is the ITransformer, or model. Those are thread-safe: you can make them a singleton or static, and that would be better for performance, since you can reuse them from different threads in your application. But the third object is the prediction engine that we also talked about, for doing single predictions, and that object is not thread-safe.
So you need to use it in a special way when you are running a multi-threaded application like ASP.NET Core. This is what I'm going to show you in the Web API. You can see that I have copied the zip file for the model, and I also have the data classes for reading and using the model. The thing is, to do this in a scalable way you might need to use object pooling for the prediction engine, and that can be complex — I wrote a blog post about how you can do that, but it's kind of complex. So what we did, after a meeting a few months ago in collaboration with the ASP.NET team, with Glenn Condron and Ryan Nowak, is say: hey, let's create an ASP.NET extension package, so you can use it with dependency injection in ASP.NET, the same way you can use, for instance, SignalR or Entity Framework. And then it's going to be scalable. So it's super easy.
You go to the Startup class and ConfigureServices, and you just need to register the prediction engine pool with the same data classes that we were mentioning when creating the model, and then load it from a file, which is the path to the zip file. So we just register that — you would do a similar thing with Entity Framework — in the dependency injection container.

And finally, you just go to your controllers, and simply in your constructor you get the prediction engine pool injected, and in the method of your controller — PredictSentiment — you just predict with the data. I'm going to run it very quickly, but you can see it's just this: use dependency injection, get the prediction engine pool object, and call Predict with the data that came from HTTP. So, for instance, you can see here that we are sending this.
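The registration and controller described here can be sketched as follows, reusing the SentimentData/SentimentPrediction classes from earlier; the route, file path, and class names are assumptions:

```csharp
// Startup.ConfigureServices: register a pooled, thread-safe prediction
// engine in the DI container (requires the Microsoft.Extensions.ML package).
public void ConfigureServices(IServiceCollection services)
{
    services.AddPredictionEnginePool<SentimentData, SentimentPrediction>()
            .FromFile("MLModels/SentimentModel.zip");
}

// Controller: the pool is injected and can be shared safely across requests.
[ApiController]
[Route("api/[controller]")]
public class SentimentController : ControllerBase
{
    private readonly PredictionEnginePool<SentimentData, SentimentPrediction> _enginePool;

    public SentimentController(
        PredictionEnginePool<SentimentData, SentimentPrediction> enginePool)
        => _enginePool = enginePool;

    [HttpGet]
    public ActionResult<SentimentPrediction> PredictSentiment(string text)
        => _enginePool.Predict(new SentimentData { Text = text });
}
```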
Another thing is about CI/CD and DevOps. It's not just about running this in production; you also want to be able to have code that is consistent with the model that was trained with particular data, and for that you need to engage the creation of the model in your pipeline. So I'm going to do a demo about that — I'll try to do it quickly.

Basically, I'm going to start from a trouble case: I have a couple of Azure web apps, on Linux and Windows, where I want to deploy that Web API. But then one of my peers changed the data, the dataset, pushed that into GitHub, and even deployed it, maybe directly from Visual Studio into Azure. And the thing is that right now the Web API is working — you can see this is a URL in Azure, deployed in App Service — but it's working wrong.
As you can see, for a positive text the positive sentiment comes back false, and if I say "the food was horrible", the positive sentiment is true — which is wrong. So I go ahead to my build pipelines and, yeah, it looks like when the wrong dataset was published I started getting errors here, and I can see that I have quite a few tests that didn't pass. So at least I can fix this.

The build is starting, and this is going to take a couple of minutes. While this is working, first of all I want to show you the code of the tests, which is interesting. In this case, the unit tests that I have here are testing the model, and you can see that you can run these tests here.
One test would be about simply checking that a negative statement — a negative sentence like "this movie was very boring" — predicts false, or that "ML.NET is awesome" predicts true, and so on for all the tests. Even more interesting: I'm loading the model and then getting the metrics, like Ankit showed, and I'm saying, hey, if the accuracy is not higher than 80%, consider the test failed, so break the build. And even further, you can also get a lot of records — hundreds — and just test all of those, as I showed here. This is precisely what we are doing in the build. I want to edit the build just to show you a few steps of it; you can also do it with the new YAML.
The tests pass, and then the build finishes and triggers our release as well. You can see it was triggered because of the artifact, and you can see this release that is now publishing the Web API into my staging environment, which is two different Azure App Services, one running Linux and one running Windows. When this is done, it will ask me if I want to go to production, but in this case I have a manual approval step. So it's going to finish in a minute — well, it is finishing.

Okay, so it's published already in QA, and production is pending approval; I could approve it, but I'm not going to do it. Then I just refresh, and you can see the API running now in Azure — this one as well — and now the sentiment of "ML.NET is awesome" is true, running in Azure, and "the food was horrible" is false here now. Okay, and that's it. Thank you.
Chris: So we've covered a lot of text and numeric data, types that you're already pretty familiar with how to work with, and hopefully you've seen how we can add some intelligence to your applications. Now I'm going to show you how we can use the preview features of ML.NET to incorporate pre-trained deep learning models into your applications, to be able to work with other data types like images, speech audio, and more.
We currently support taking in pre-trained TensorFlow models and ONNX models. Now, ONNX, for those of you who are not aware, is an open source initiative in which we've partnered with folks like Amazon and Facebook, and many hardware providers like NVIDIA, Intel, and others, to create a cross-platform, industry-standard representation of trained machine learning models. That means any machine learning models that are trained in scikit-learn or TensorFlow or PyTorch, etc., can be converted to ONNX and therefore used inside your .NET applications by way of our ONNX integration.
C
So many of you have probably seen examples of using deep learning before, where we use a classic pre-trained model that's available on the web to simply download. This one's called YOLO, "You Only Look Once," and it's an object detection example I'm showing here in an ASP.NET app. We can select different images, and it identifies some bounding boxes.
C
It identifies what's in those bounding boxes and tells you the probability, or the confidence level, that yes, this really is a sheep, for example. Now I'm going to show you how to actually build this here in Visual Studio, not just the kittens Cesar showed you before. We'll start with an MLContext, and even though we have pre-trained machine learning models, in this case the ONNX YOLO model, you're oftentimes going to need to do some pre-processing or transformation of your data before feeding it through that model.
C
Rather, it was trained on a lot of regular, everyday objects, like images of bikes and trees and cars and people and stuff like that. And we used a technique here called transfer learning, if you will, which takes all the knowledge learned in one context, where somebody else has trained a model for maybe hours or days on lots and lots of data, and uses that in one part of your ML.NET pipeline to train another model that's specific to your task.
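The transfer-learning idea described here can be sketched in a few lines. This is a toy illustration in Python, not the ML.NET API; the featurizer below is a made-up stand-in for a pretrained deep model's output layer, and the classifier is a deliberately tiny nearest-centroid model.

```python
# Minimal transfer-learning sketch: a frozen "pretrained" featurizer
# (standing in for the TensorFlow model) feeds a small classifier that
# we train on our own labeled data. All names here are illustrative.

def pretrained_featurizer(image):
    # Stand-in for a deep model's penultimate layer: we never retrain it,
    # we just reuse its output as a feature vector.
    return [sum(image) / len(image), max(image) - min(image)]

def train_classifier(samples):
    # Tiny nearest-centroid classifier trained on featurized samples.
    grouped = {}
    for features, label in samples:
        grouped.setdefault(label, []).append(features)
    return {
        label: [sum(col) / len(col) for col in zip(*vectors)]
        for label, vectors in grouped.items()
    }

def predict(centroids, image):
    features = pretrained_featurizer(image)
    return min(
        centroids,
        key=lambda label: sum((f - c) ** 2
                              for f, c in zip(features, centroids[label])),
    )

# "Train" on a handful of labeled images (flattened pixel lists):
# only the small classifier learns; the featurizer stays frozen.
data = [
    (pretrained_featurizer([200, 210, 205]), "bright"),
    (pretrained_featurizer([190, 220, 200]), "bright"),
    (pretrained_featurizer([10, 30, 20]), "dark"),
    (pretrained_featurizer([5, 25, 15]), "dark"),
]
model = train_classifier(data)
print(predict(model, [195, 215, 210]))  # → bright
```

The design point is the same one made in the talk: the expensive knowledge lives in the frozen featurizer, and only a small, cheap model is trained on your task-specific data.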
C
In this case, I've used a bunch of different data from the grocery store to train a model that sometimes is pretty darn good, and sometimes, like this one, well, I don't really know if it's soda or not, because some of the cans of soda look a lot like bottles of juice, so that's something the model might get a little bit confused by. But it's still pretty darn accurate. So we can see this one's definitely juice, this was coffee, cake, etc. So, to train this machine learning model:
C
So this is the input for training our new model. It's gonna load these up, and then it's going to iterate over them: it's again going to load the images, resize them, and extract the pixels. And here is where we'll load the TensorFlow pre-trained model, using the input model location defined above, and then we'll train an additional multiclass classifier.
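A minimal sketch of that kind of chained pipeline, load, resize, extract pixels, then score, with toy stand-in stages rather than the actual ML.NET transforms:

```python
# Sketch of a chained data-prep pipeline: each stage is a plain function,
# and the pipeline applies them in order, the way estimators are chained
# in a training pipeline. All stage implementations are toy stand-ins.

def resize(image, size):
    # Naive nearest-neighbor resize of a 1-D pixel row.
    step = len(image) / size
    return [image[int(i * step)] for i in range(size)]

def extract_pixels(image):
    # Scale raw 0-255 pixel values into the 0-1 range a model expects.
    return [p / 255 for p in image]

def make_pipeline(*stages):
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

pipeline = make_pipeline(
    lambda img: resize(img, 4),
    extract_pixels,
)
print(pipeline([0, 51, 102, 153, 204, 255, 255, 255]))
# → [0.0, 0.4, 0.8, 1.0]
```

In the real demo the last stage would be the loaded TensorFlow model; here the point is just that each preprocessing step is a reusable transform composed into one callable.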
C
But you can see we can iterate over all the different classes and print out how many samples there are of each one of these classes. Now, if the model is not predicting well on one of those classes, just like Cesar showed you earlier, usually you'll need to debug your data, not debug your code, and so you might add some additional pictures into one of those folders to get more samples of one of those classes. Then we'll load up the TensorFlow model, we'll train the classifier, and then we'll output.
C
So we just iterate over a bunch of these different images in the test folder instead of in the train folder, because you want to make sure that you're testing with images that your model didn't see while training; otherwise you might overfit the model, which would be bad. So I can certainly publish this using Azure DevOps and proceed down that path, but I want to go into a little bit of detail covering one other topic, which is model explainability. Interpreting these machine learning models can be pretty tricky.
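The point about keeping test images out of the training folder can be sketched as a simple holdout split. This is illustrative Python, not ML.NET code; the file names are made up.

```python
# Sketch of a train/test split: the model only ever trains on the
# "train" portion, and we measure accuracy on held-out items it never
# saw, so the score reflects generalization rather than memorization.
import random

def split(items, test_fraction=0.2, seed=42):
    shuffled = items[:]
    random.Random(seed).shuffle(shuffled)  # fixed seed keeps it repeatable
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

images = [f"img_{i:03}.jpg" for i in range(10)]
train, test = split(images)
print(len(train), len(test))           # → 8 2
assert not set(train) & set(test)      # no leakage between the folders
```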
C
A lot of you are pretty darn familiar with how to test an application, but testing a machine learning model can be tricky, and explaining how it works to somebody else in a way that makes you sound competent is even trickier. So there are some pretty standard techniques in the industry that enable you to do this, so that you can explain and debug your models. But more importantly, many of you, I know, work in industries where there are regulatory requirements, whether it's financial or health care or whatnot, and oftentimes they won't let you ship something to production.
C
"Well, here are the characteristics that you could improve; this is why it predicted that you are not 100 percent healthy," right? And when training or debugging your model, you might look at all of the data in that data set to understand the distribution of the features and their relative importance in training that model. So I'm going to show you exactly how you can do that with ML.NET.
C
So I have built an extremely fancy WinForms application, just for fun, using the New York City taxi fare data set, because it's a very simple data set to understand. Most folks have taken a taxi trip before, and there's data in this data set like the length of the trip, both distance as well as time, how people paid, whether it's cash or credit card, and other features as well, on a per-trip basis.
C
Then we can load up the test data set and iterate over the different features in this particular model to understand why it predicted the fare is gonna be ten dollars. Well, the trip distance was super important in this one relative to the others. But for some of these trips, the different features like trip time and trip distance are both relatively important, and then for others, sometimes you'd say, okay, well, on this particular trip the trip time was a little bit more important than the trip distance.
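For a linear model, a per-trip explanation like this reduces to weight times feature value, which is why the same model can rank trip distance highest on one trip and trip time highest on another. The weights and trip values below are made-up illustrative numbers, not the real taxi-fare model:

```python
# Per-prediction feature contributions for a linear model: each
# feature's contribution is simply weight * value, so the ranking of
# features can differ from one taxi trip to the next.
# Weights and trip values are illustrative, not a trained model.

weights = {"trip_distance": 2.5, "trip_time_min": 0.3, "passenger_count": 0.1}
bias = 2.0

def explain(trip):
    # Contribution of each feature to this one prediction.
    contributions = {name: weights[name] * value for name, value in trip.items()}
    fare = bias + sum(contributions.values())
    # Rank features by the size of their contribution for this trip.
    return fare, sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

fare, ranked = explain({"trip_distance": 3.2, "trip_time_min": 14,
                        "passenger_count": 2})
print(round(fare, 2))  # → 14.4
print(ranked[0][0])    # → trip_distance (most influential for this trip)
```

A short trip stuck in traffic would instead put `trip_time_min` at the top of the ranking, which is exactly the per-trip variation described above.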
C
We can add one final thing here, which is CalculateFeatureContribution, and calculating the feature contribution is what's going to return that explanation of why the model performed the way that it did. Additionally, now when we train this model, not only will it include that particular step, but we can also run through and load up some test data set, and we can see how this is actually transforming the data. Remember, I talked about one-hot encoding and normalization; this is what the computer actually sees.
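A minimal sketch of what one-hot encoding and min-max normalization do to a raw row; the column names, categories, and value ranges are illustrative, not taken from the actual demo pipeline:

```python
# What "one-hot encoding and normalization" turn a raw row into:
# categories become indicator vectors, and numeric columns are
# rescaled into [0, 1] before the learner ever sees them.

def one_hot(value, categories):
    # One slot per category, 1.0 for the matching one.
    return [1.0 if value == c else 0.0 for c in categories]

def min_max(value, lo, hi):
    # Rescale a numeric value into the [0, 1] range.
    return (value - lo) / (hi - lo)

payment_types = ["cash", "credit_card"]

def featurize(row, distance_range=(0.0, 20.0)):
    return one_hot(row["payment_type"], payment_types) + [
        min_max(row["trip_distance"], *distance_range)
    ]

print(featurize({"payment_type": "credit_card", "trip_distance": 5.0}))
# → [0.0, 1.0, 0.25]
```

That final numeric vector, rather than the original strings and raw magnitudes, is the representation the model is actually trained on.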
C
So I want to thank you all. The journey for ML.NET from 0.1 last year at Build to 1.0 today has been pretty fantastic. There have been a whole bunch of downloads and a whole bunch of commits from a whole bunch of people in the community. It's been really fantastic to work with you all on creating a machine learning framework for .NET developers.
C
There are a number of customers that are using it in production today and getting great results. It's made it super easy for them to incorporate machine learning into their .NET applications, deployed in Azure just like you can with any .NET application. And the automated machine learning capabilities, hopefully, are a great way to get you started, like they have for some of our other customers, because I know that learning this machine learning stuff can be pretty tricky, but the tooling should help make it easier. We're gonna continue to ship regular releases.
C
We're gonna continue to improve the capabilities that we've just talked about. Some of them, like Model Builder and the automated machine learning stuff, are in preview right now, so we're of course going to continue to improve them and get them to GA; we want your feedback as quickly as possible, so go out and try it today. We're gonna make it easier to train your models and scale out on Azure, and add support for new types of tasks in machine learning.