From YouTube: ASP.NET Community Standup - April 21st 2020 - ML.NET + Blazor with Bri Achtman and Luis Quintanilla
Description
Join members from the ASP.NET teams for our community standup covering great community contributions for ASP.NET, ASP.NET Core, and more.
Community links for this week: https://www.theurlist.com/aspnet-standup-2020-04-21
Code from the show:
https://github.com/luisquintanilla/ASLModelBuilder
ML.NET Info:
Upcoming Events
Virtual Azure Global Conference: https://virtual.globalazure.net/sessions
Virtual ML.NET Community Conference: https://virtualml.net
Resources:
ML.NET website: https://dot.net/ml
ML.NET Docs: https://aka.ms/mlnet-docs
ML.NET GitHub repo: https://aka.ms/mlnet
ML.NET samples GitHub repo: https://aka.ms/mlnet-samples
I added a CO2 and temperature meter to my office in here, because I read some random thing and it got me worried. But now I can watch my CO2 levels. They go up in the room until they go over that threshold where it starts to affect your cognition and stuff, and then I'm like: oh, better open the door.
Everybody can hear okay? We can hear — somebody on the Twitch stream or YouTube, all good? And we're live. Well, welcome — welcome back as always, Damian, and welcome to Bri and Luis, who are gonna be talking about Blazor and ML.NET. So excited. I'm gonna jump right into Community Links — hitting the button, making sure I'm still unmuted — wonderful, okey doke.

So first we have a Blazor section. So 3.2 Preview 4 is out. Some of the stuff they'd already shown or talked about on a recent Blazor show, but some cool stuff: integration with host environment, and logging. There's the Brotli precompression, which is pretty neat — showing a WebAssembly app now down to 1.86, which is amazing, yeah. So that —

That is, yep, yep, exactly. And it says a minimal app without the Bootstrap CSS is 1.6, so it's pretty darn amazing. Other stuff, like loading assemblies and the runtime in parallel, the IL linker, localization — so some cool stuff — a few debugging limitations, and one thing in here about improving the Blazor docs that I'm actually going to call out separately; I have a separate link for that at the end. Cool, good stuff.

Two cool things from Michael Washington. So one is pointing out this thing about the repeater syntax. And what does he call it here?

Right — Razor itself has always had a couple of very sort of esoteric, not very well-known features for doing templating-style stuff, and we changed it a little bit in Core: we had to remove some things when things went async. And then Blazor has its own flavor of Razor, which has its own stack, because RenderFragment is a Blazor-specific thing; it's not in Razor Pages, for example. So I had not actually seen this one before — I'm gonna try this out myself, yeah.
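As a rough sketch of what that Blazor-specific templating flavor looks like (hypothetical component and parameter names, not the exact code from the post), a repeater built on Blazor's `RenderFragment<T>` primitive might look like this:

```razor
@* Repeater.razor — a hypothetical templated component *@
@typeparam TItem

@foreach (var item in Items)
{
    @ItemTemplate(item)
}

@code {
    [Parameter] public IEnumerable<TItem> Items { get; set; }

    // RenderFragment<T> is the Blazor-specific piece mentioned here;
    // it is not available in Razor Pages.
    [Parameter] public RenderFragment<TItem> ItemTemplate { get; set; }
}
```

Consumers would then write `<Repeater Items="people"><ItemTemplate>@context.Name</ItemTemplate></Repeater>`, with `@context` bound to each item.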
I don't know — it's magic. Cool, okay. Another one from Michael Washington: this is a whole book, in the Succinctly series here. I'm actually showing that there's an online view of it and the PDF download — you know, lots of good stuff. I pointed out a whole PDF book last week too — you know, beautiful, really nice quality — and then this is another nice book that was really cool to see.

You know, some pretty good books available covering this stuff in detail. Alright, on to Tye — nice posts on Tye. So first, Stafford Williams doing a walkthrough on Tye and pointing out some kinds of problems and solutions that he ran into along the way. So, you know, showing off how to get it set up and run locally, and then some troubleshooting things. Here he ran into an authentication issue due to using uppercase in his registry names — apparently that's not a good thing to do.
Then, also, there were some issues that he ran into and just logged issues on these, and I think, from looking at these, some of these already had fixes available. And then there are, you know, workarounds for some things, like secrets not being accessible, where you can also just set an environment variable. So, good stuff.

You know — and I keep saying it every time I mention Tye — their docs, right from the beginning, were really nice. All the walkthrough and tutorial docs: really, really well done. Next up, another post about Tye as well — some cool, kind of in-depth stuff here, pointing out things like some command-line options. For instance, when you're debugging, he showed the syntax here: tye run --debug *. He ran into one issue. Some noteworthy things: don't run Tye with launchSettings, because that's at the project level, and Tye is working at the solution level.
So you really kind of want to think of Tye as managing all that stuff. And then also a thing he pointed out here: launchSettings don't exit gracefully, leaving containers and artifacts behind. And then some other issues — like, for instance, ports will change, etc. Cool.

This is a really interesting one from John — he's writing about using ASP.NET Core 3 and System.CommandLine together. And one cool thing he's talking about here is using System.CommandLine — that's a thing that allows, like, doing strongly typed argument parsing, and it writes out help, and it can support colorization and all kinds of stuff, so you can have a really nice CLI experience. So he talked about setting that up, and then he also talks about some things that he needed to do in order to support branching between the two — the web API and, you know, running the CLI as well. So, yeah.
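For a flavor of what that looks like — hedged, since System.CommandLine's API has changed across its preview releases, so this is a sketch against a recent beta rather than the code from the post:

```csharp
using System;
using System.CommandLine;

// A strongly typed option: System.CommandLine parses "--port 8080" into an int
// for us, and it generates --help output (with colorization support) for free.
var portOption = new Option<int>("--port", () => 5000, "Port to listen on");

var rootCommand = new RootCommand("Run the app as a web host or as a CLI tool");
rootCommand.AddOption(portOption);

rootCommand.SetHandler((int port) =>
{
    // Here is where you'd branch: start the ASP.NET Core host, or
    // run a one-off CLI command, depending on the parsed arguments.
    Console.WriteLine($"Would start listening on port {port}");
}, portOption);

return rootCommand.Invoke(args);
```

This needs the System.CommandLine NuGet package; the post's branching idea is deciding inside the handler whether to build the web host or run a command and exit.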
You know, talking about dependency injection and extensibility points, and then he goes through and also talks about validation and help as well — System.CommandLine includes support for that — and then even digging into testing as well. Good stuff.

All right — and Andrew Lock, who I called out on Twitter this past week as one of the best people writing these tech blogs recently; I love his format, he just always nails it with these. So here he's talking about choosing a free port in ASP.NET Core 3 — you know, talking about how the ports are selected.

If you want to randomly pick a free port, you can just specify the port as 0, and then it'll, you know, pick one that's available. But then a little more interesting is: how do you actually find out which port that was? So then he digs into that a little bit more, looking at how you can go through and find that port — and, you know, you can dig it out through middleware, yeah.
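The port-0 trick is operating-system behavior rather than anything Kestrel-specific. A minimal sketch of the underlying mechanism with a raw `TcpListener` (the post itself reads the assigned address back through ASP.NET Core's `IServerAddressesFeature`, which this doesn't show):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

// Binding to port 0 asks the OS to pick any free ephemeral port —
// the same thing Kestrel does when you configure http://127.0.0.1:0.
var listener = new TcpListener(IPAddress.Loopback, 0);
listener.Start();

// After Start(), the OS has assigned a concrete port we can read back.
int assignedPort = ((IPEndPoint)listener.LocalEndpoint).Port;
Console.WriteLine($"OS assigned port: {assignedPort}");

listener.Stop();
```

In an ASP.NET Core app the same read-back happens after the server has started, which is why the post ends up digging it out of middleware or server features.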
Alright — next up: he's doing his A-through-Z series again, and he's up to production tips. So this is kind of a nice, you know, refresher on things to pay attention to for 3.1 — talking about things like deployment slots for managing staging and production. That is a huge tip: if you're doing anything in Azure App Service, deployment slots is, like, the first thing to check out. Environment configuration, handling migrations — including things like scripting out your migrations, so that's handy, being able to script.

It shouldn't be scary for anyone who's ever done handler or module mapping. Anyone who's ever done dynamic handler mapping in IIS via code — this is pretty standard. You have to understand the order in which things are registered and the order in which they execute, in order to be able to write code that does it at the right time. But, as you said, ASP.NET Core is implemented in IIS as a module, and it's just that, typically, it's added as a wildcard module, so it takes the entire pipeline.
Star-dot means it can handle everything, and then verb equals star above that — yep. And so the order you put them in there is important. So the fact that the extensionless URL handler is first, at star-dot, means it's gonna get first chance to handle any request. Now, typically, if it doesn't handle it, it'll be terminal, and then you'll just get a 404. By adding the code, as I understand it, you can then basically say: okay, well, that didn't have anything to do.

We can add code in there to make it hard-coded based on whatever route you want, for example. So some useful examples of doing this might be: hey, you've got an API and you've got versions for your API in the URL. Wouldn't it be great if v1 of your API just all went to MVC — or Web API — and then you could just have a small piece of code.
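For reference, the ordering being discussed lives in the site's web.config handler list. A hedged sketch (handler names and attributes vary with what's installed on the server; this is illustrative, not copied from a real site):

```xml
<system.webServer>
  <handlers>
    <!-- Listed first: gets first chance at extensionless URLs (path="*.") -->
    <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="*"
         type="System.Web.Handlers.TransferRequestHandler"
         preCondition="integratedMode,runtimeVersionv4.0" />
    <!-- ASP.NET Core added as a wildcard entry: takes the whole pipeline -->
    <add name="aspNetCore" path="*" verb="*"
         modules="AspNetCoreModuleV2" resourceType="Unspecified" />
  </handlers>
</system.webServer>
```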
It ends right there — it is right in front of me. So he set up a similar system for SignalR Core. So this is pretty cool: here's a generated spec, and it was generated off of the SignalR hub, and then also generating TypeScript code. So, you know, that's pretty cool — to be able to write the hub and then generate, you know, your client-side TypeScript stuff, yeah.

All right, and then the last one here is Dan Roth's — they're asking for, you know, help, or for ideas from the community, on Blazor improvements. So there's an issue here on GitHub, and he points out, you know: first of all, reply on this thread if you have issues or, you know, feedback; and then also you can improve the Blazor docs directly — at the bottom of each one you can just leave feedback directly on, you know, an item in there. But if there are bigger-picture things, if you have other ideas — and you can see there's, you know, communication going on in there now.

So that is it. I'm gonna switch over to this — I will make sure, yep, our audio should all still be good. All right, so welcome, Bri and Luis.
For context on ML.NET — what it is: it's our open-source, cross-platform machine learning framework for .NET. The history here is, about 10 years ago Microsoft Research developed this machine learning framework to be used within the company, called TLC, and over time it developed into such a powerful framework that we decided to make it open source and cross-platform and release it as ML.NET. I think that was about three years ago. Two Builds ago we released the first private preview, and then last Build was our big GA —

— our GA release. So what's really great about ML.NET is that it can run anywhere that .NET can run. You can build and consume ML.NET models on-prem, locally, or in the cloud, which is Azure. It's cross-platform, so of course it runs on all OSes, yeah. And so today we'll show you how we integrated that into a Blazor application with image classification. So it's really, really cool.
So ML.NET is really built for .NET developers, meaning that people who know .NET, live .NET, love .NET can use their existing .NET skills and tools to integrate custom machine learning into their .NET applications. So they don't need to go and learn a new ML tech stack, or any language that's commonly associated with machine learning.

If you don't have a great idea of what machine learning is or what it can do, here are just some examples. These are actually real-life examples of how people are using machine learning — or using ML.NET, specifically. I have some internal ones from really popular Microsoft products, and then I also have some external companies, who we've written stories with, who have been using ML.NET for really cool things. So, for instance, Microsoft Defender uses ML.NET for antivirus protection.
There's a company called Evolution Software who's using ML.NET to predict the moisture level of hazelnuts, and there are a lot of other really, really cool use cases — things that you might not even have thought you could do before. And one that I really love is a smart-systems company that was using it for grocery demand forecasting — we're actually writing a story about it — but essentially they've saved, like, millions of pounds of CO2 emissions in one year, just by using ML.NET.

So people use machine learning for a ton of different things, and specifically, here's the list of what ML.NET supports, okay. And so we've actually got a demo. We're gonna show off Model Builder in Visual Studio, which uses something called automated machine learning, or AutoML, to run through different models and select the best model for you. So this is where not needing that machine learning background comes into play.
Image classification — and you can see you can train locally or in Azure. For this case I have thousands of images, so I want to scale that out to Azure, and when I hit that and set up the workspace, this is actually integrating with Azure Machine Learning. And once this dialog loads up, you can see you can actually choose your subscription and your workspace and all the other necessary things you need in order to run an Azure ML run in Visual Studio.

So once that's all set up, you kind of get a little summary of what's happening. You move on to data, and this is where you can select what you want your training data to be. So in this case I've got this — okay, so ASL stands for American Sign Language, and here I've got about 3,000 images per category for the different letters. And once I hit train, all I have to do is start training, and what it does —
— is it uploads that data to Azure and will actually start training in Azure ML. Since this would probably take about — I think when I did it on this, it took about 20 minutes — I'm not going to train the whole thing right here, of course. So I'm just going to show you the second instance of Visual Studio I have, where I already finished that training.

...to do that — and Luis might be able to speak to it more. Yeah, you absolutely can. So what we're showing here is the tooling side of things, but underlying it is, as we mentioned, automated ML, which is sort of like another layer that supports the tooling — and it has an API as well — but really, at the foundation, it's just a standard .NET API that you can use.
So you can see here that we've completed the training. So — I should go back and show you — this is the same dataset that I had before, those three thousand images. And if I move on to Evaluate, you can see the accuracy that it gave and the amount of time that it took to train. And then, if I go here, I should be able to test within the UI, to see that everything's good with my model.

So in this case it's predicting that this is a B, which is true. And then the last step is to just add the projects to the project we wanted to add machine learning to — or add them to your solution — and it'll automatically add the references for you, so you just have to copy-paste this code here and then get it to start working. So I will show you — I'll actually have to turn off my webcam in order to show off this application, so let me do that.
So you all should be able to see me now — yep. So then what I can do — and I'll show you using my right hand here — is, once I hit Capture... So the first time it takes a while, because what it's doing is loading the model to be used; once it's loaded, then all the next images will take a lot less time. So you can see it's predicted A.

...video-based, like — could you couple this today with other video-analysis services? I think Azure has some, actually, don't they, where they can deconstruct video streams into individual frames? And then you could potentially feed this through, like, a stream-processing thing. Or is the idea — I mean, I'm assuming someone could probably do a lot of work to put something like that together today — or are we planning on building higher-level, like, modules or whatever?
...the model in itself doesn't take video data, so we'd definitely have to be capturing frames — yeah, frame by frame — and we haven't actually tried it out yet, but I assume that it will work, as long as the model can predict that fast. But I don't think we've hashed out all the details yet on how it's gonna work; we were going to try out a few different approaches. I don't know — Luis had a few ideas, if you want to...

Is there anything that would prevent someone from using ML.NET to have a model that, say, requires — hey, I need more than one frame? So, like, I need to see 20 frames, or whatever, because say the model is trained on 25 frames — like, it's a second of video data: let me give you 25 frames, which are 25 images, and have the model crunch that and come back and say, hey, yeah, I think this is a whatever. Would that be possible today?
You certainly can. So, what we're using now — and as I dive deeper into the app and kind of how it works — what we're using here is this sort of convenience API called a prediction engine, and what that does is it just makes a prediction on a single instance of the data. However, there is another piece of the API called Transform — a transform operation — and what that allows you to do is basically make predictions on multiple instances of data.

So this is the current experience we have and how it all works. So, in Model Builder, you've got your data locally, you select image classification as your scenario — because that's currently the only one that supports Azure training — you choose Azure as your training environment, and then you set up those Azure Machine Learning experiment settings, like I showed you in the dialog.
So what you do is: choose an existing Azure subscription; choose an existing workspace or create a new workspace; choose an existing compute or create a new one; and then choose an existing experiment. Then what you'd do is select your data, from a file or from SQL Server, in Model Builder, and once you hit Start Training, what will actually happen is it uploads the data to Azure Blob storage.

It creates a new Azure AutoML run in that selected experiment that you chose through Model Builder, and then a trained ONNX model is actually downloaded and sent back to Model Builder, and then Model Builder wraps that ONNX model to get an ML.NET model that you can consume — and it also gives you the code for consumption. Luis, do you want to share?
So, in the solution itself there are four projects, right? There's the API, which is the back end for our application — that's just an ASP.NET Core Web API. We have the wasm app, which is the WebAssembly Blazor app — the front end that Bri showed off, where it was capturing her camera input. And then you have the console app and the model project, which are auto-generated as part of the Model Builder tooling and the training process.

So if we go ahead and we take a look here at the model — which is really the core of this application — there are several assets that you're gonna notice in here. One of them is this one here. So, as a result of your training process, the way that your model is represented — or the way that you can reuse your model once you've trained it — is through these files. Your model is essentially serialized in some format and then sort of saved as a file, right.
So what you have here is the ONNX version of the model, this best-model .onnx file. ONNX stands for the Open Neural Network Exchange, and what ONNX allows you to do — it's a format that gives you sort of interoperability between different frameworks. So, for example, let's say you, for some reason, said: hey, you know, I don't necessarily want to consume this within a .NET application; perhaps I want to use it inside of a Flask app or something else. With ONNX, you certainly can do that, using the ONNX Runtime, which allows you to basically score and make predictions with this model. And, conversely, right, you can have another framework, such as TensorFlow or PyTorch or anything else, basically create a model there. So, for example, if you're a data scientist — or you're working with data scientists and they provide you with an ONNX model — you can go ahead and consume that ONNX model from within ML.NET, which is exactly what's kind of happening here.
You have this best-model-map JSON file, and essentially all that it contains is just all the letters. So these are all the different letters that the model will go ahead and predict — it's A through Z, including space as well. Something else that you have here is this zip file. So the zip file is the serialized version of the ML.NET model.

In the model input you have the path of where the image that we're trying to make predictions on lives, and then the label, which is basically the letter that belongs to that image — basically, what that image represents: which letter the image is representing. So this is actually useful only for training; when you're making a prediction, you only care about the path of your image, because, you know, it's expected that you don't know what it is that you're trying to predict.
That's what the model is there for. And on the output, again, you have that sort of prediction there — this is what the model is going to output once it makes a prediction on your image — as well as this property called Score. The Score property is basically a float array that contains all the probabilities for your predictions. So, for example, it may choose the one with the highest probability — in this case, let's say it was... I believe it was the letter A that Bri was making predictions on. It would show: hey, we think it's an A, because the probability is, like, sixty percent or something like that. But then you also have the probabilities for all the other letters as well. So that's essentially it.

You have other files in here as well, and these are basically custom transformations. So again, I was talking about performing different operations on the data.
One of the things that we do here is we have this label mapping class, and what the label mapping class contains is a custom transformer — or custom transform. That's one of the neat things about ML.NET: there are a lot of common things that you can do with it out of the box, like normalizing your data and things like that. But, for example, let's say there's no built-in way for you to get something done. The way to get around that — or to basically build on top of what's already there — is to use custom transforms. And essentially, this label mapping transform — what it's doing is it's taking the output from the model and then mapping it to one of the letters here in the best-model-map JSON, right. So that's essentially this model project here. So how exactly is this used?
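The heart of that label-mapping idea can be sketched outside ML.NET entirely: given the model's Score array (one probability per class) and the label list, take the label at the index of the highest score. The labels below are a hypothetical stand-in, not the sample's real best-model-map JSON:

```csharp
using System;

// Maps a Score array to the label at the index of the highest probability —
// the same idea the custom label-mapping transform implements in the pipeline.
static string MapScoreToLabel(float[] scores, string[] labels)
{
    int best = 0;
    for (int i = 1; i < scores.Length; i++)
    {
        if (scores[i] > scores[best]) best = i;
    }
    return labels[best];
}

// Hypothetical stand-in for the letters loaded from the best-model-map JSON.
string[] labels = { "A", "B", "C", "H" };
float[] scores = { 0.05f, 0.10f, 0.25f, 0.60f };

Console.WriteLine(MapScoreToLabel(scores, labels)); // → H
```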
If we take a look at our API here, there are a few areas where we basically make use of the model. So one of them is here: we're in the back end of our application, an ASP.NET Core Web API, and, as you can see, we have everything that we would typically expect to see inside of an ASP.NET Core Web API.

And if we take a look here in the Startup class, this is exactly where we're going to go ahead and register our model and basically load it, right. And what we can do in that case is, just like we would register any other service: we go into our ConfigureServices method here, inside of our Startup.cs class, and then what we do is basically just say AddSingleton, and the type of service —
— the thing that we're trying to register here is a prediction engine, which, as I mentioned earlier, provides a sort of convenience API, so that you can basically provide an instance of data. In this case, it's expecting you to give it a ModelInput as input, and in return it's gonna give you a ModelOutput. And the way that we go about sort of setting up the initialization logic for this particular service is with MLContext.

So MLContext is the entry point of all ML.NET applications, and you can think of it very much like DbContext, all right — so if you have your app DbContext, it's kind of similar. It's not exactly the same, but it's a way to think about it. And then, in addition to that, because we have these custom transforms, what we're doing is registering the assemblies associated with them.
So again, we have this normalize-mapping custom transform, which is gonna normalize our images, and we have this label-mapping transform, which, again, is gonna take the output from the model and map it to one of the letters that we expect as output. We provide a model path in this case.
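Put together, the registration being described looks roughly like this — a sketch assuming the Microsoft.ML package and the generated `ModelInput`/`ModelOutput` and custom-transform classes; names and paths may differ from the actual sample:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    // One PredictionEngine for the whole app, registered like any other service.
    services.AddSingleton(provider =>
    {
        // MLContext is the entry point of every ML.NET application —
        // loosely analogous to EF Core's DbContext.
        var mlContext = new MLContext();

        // The custom transforms (normalize mapping, label mapping) live in this
        // assembly; ML.NET needs it registered to deserialize the saved pipeline.
        mlContext.ComponentCatalog.RegisterAssembly(typeof(LabelMapping).Assembly);

        // Load the serialized model (the .zip file) and wrap it in a
        // convenience engine that predicts on one ModelInput at a time.
        ITransformer model = mlContext.Model.Load("MLModel.zip", out _);
        return mlContext.Model.CreatePredictionEngine<ModelInput, ModelOutput>(model);
    });
}
```

One caveat worth knowing: `PredictionEngine` is not thread-safe, which is why ML.NET also ships a `PredictionEnginePool` (in Microsoft.Extensions.ML) aimed at web scenarios like this one.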
Once we have this sort of API set up, it's just a matter of consuming the API, alright — and that's exactly what we're doing inside of our wasm application. If we go to Pages here, we have only a single page, right — there's that camera page, the front end that Bri was showing — and we just have a few elements. Let me kind of go down to the code here, to kind of show you what's going on. So again, when Bri clicked that Start Camera button, what that did was it started capturing video from her webcam, and the way that it did that is using JS interop — again, because with Blazor, one of the limitations is that you can't necessarily interact directly with some of the DOM elements and some of the DOM APIs. So we're using a little bit of JavaScript for that, to perform those operations, right. And to kind of show you what the interop looks like — it's exactly this. So what will happen —
I'm gonna close these pinned windows — it's okay. So what we're doing here is we're calling that initialize function, which is what's registered here, and what that's gonna do is it's going to set up a stream, and it's gonna basically assign that as the source for the video element — and the video element, again, is just sort of this.

Then there's a function here, snap, right. So it's gonna call the snap function, and all that's gonna do is basically take what it's capturing on the video, take a snapshot, and basically copy it to a canvas element inside of our DOM. And then getImageData — what that's going to do is convert the data from the canvas into —
D
Basically,
it's
gonna
get
the
data
URL
part
of
that
which
contains
the
the
base64
representation
of
the
image,
and-
and
that's
that's
basically
it
in
terms
of
what
you
need
to
do
on
the
J's
Interop
side.
Again,
that's
only
there
so
that
you
can
interact
with
with
the
dom
elements
and
so
that
you
can
basically
interact
with
the
Dom
API.
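On the server side, what arrives from that interop call is a data URL, so before the image can be saved or fed to the model, the base64 payload has to be peeled out. A minimal, self-contained sketch of that step (the actual sample may do it differently):

```csharp
using System;

// A data URL looks like "data:image/png;base64,<payload>".
// Everything after the first comma is the base64-encoded image.
static byte[] DecodeDataUrl(string dataUrl)
{
    int comma = dataUrl.IndexOf(',');
    string base64 = comma >= 0 ? dataUrl.Substring(comma + 1) : dataUrl;
    return Convert.FromBase64String(base64);
}

string sample = "data:image/png;base64," + Convert.ToBase64String(new byte[] { 1, 2, 3 });
byte[] bytes = DecodeDataUrl(sample);
Console.WriteLine(bytes.Length); // → 3
```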
What I'm using is this class here, called prediction response, to basically get back the response from the ASP.NET Core Web API, and then this piece here — all that it's doing is just manipulating the DOM, so that you can basically see what's being predicted by the model. So that's — that's essentially it, again.

Nothing special is happening in the front end, on the Blazor side. On the Blazor side you're just doing what you normally would if you wanted to do some sort of image capture; the back end is really what's doing everything for you. And again, as you saw, it doesn't take a lot of steps to basically do that — as long as you can register a prediction engine, you're pretty much solid there.
Luis, we've had a question about why — or why not — couldn't this also be done by just hosting it in the browser? Like, there are quite a few examples on the net of folks hosting ML.NET directly in Blazor WASM and having the model just do its work there, rather than having to post to a back end. Could we migrate this sample to do that, or is there something stopping that today?

There might be, yes — there might be, and part of that has to do with WASM. So I actually wrote a post on trying to do something similar. I haven't tried it with image classification, but ideally, yes — ideally, it would be really nice if you could do this in WASM directly, and the reason for that is, as you can imagine: number one, you sort of cut out that back end, right — you don't have to basically set up a whole back end to do this, yeah.
More importantly: the resources, right. You're using the client's resources, rather than basically performing all these operations on the server side. You know, that's sort of something that TensorFlow.js does, and ONNX Runtime — they have a JS client that you can use inside of the browser — and again, with WASM, it would be nice if that sort of capability could be lit up. But there are a few things with WASM at the moment that don't necessarily, you know, play nicely.

So there are some algorithms that support you being able to do what you just mentioned, but not all of them. Yeah, there are definitely some issues, so, you know, I would say, if you try to do that, just use some caution, because, again, it won't work in all cases.
Yes — yes, and there's no documentation on part of that, because, you know, with WASM there are still a lot of things changing there. Yeah, so that piece of it — there's still a lot of stuff that's changing. But also, you know, on the .NET side — the ML.NET side — there's also some question as to, you know, what would it take for these algorithms to run in WASM, yeah? Okay, well —

That might be something for us to chase up — given Blazor WASM will be out in a month, it'd be good to have something a little more concrete there, so folks could know: hey, yeah, this stuff will run in the browser on WASM just fine; this other stuff is still waiting on, you know, this and this. And then folks could give us feedback on what's important and what's not. Yeah, cool — absolutely.
Yeah, yeah — so, essentially, that's — and I can kind of show you what's happening here. In terms of — let me start this up. So, to see it running, I kind of set a breakpoint, just to kind of go through what's happening. So now that it's on, right — yeah — let me start the wasm application. Actually, I'm probably gonna have to stop my camera to do that, so let me do that.

Image — alright, sorry about that. I think that's what it is — basically, grab the whole thing. So if I generate — there you go — so that's basically the image that I gave it. So we can see that it is, in fact, getting that image. Then what we're doing here is we're just creating a path, saving it somewhere locally on our computer, and then, when we get back here to our prediction — if we were to inspect that, right —
So here we have the prediction, and you'll see that it thinks that the image that I gave it is an H. But, most importantly, there's that Score property containing the float array with all the probabilities — all these probabilities sum to one. Now, here I can't necessarily tell you which one is H — I'm gonna guess it's the one with the highest probability — and that's exactly what that label-mapping custom transform was doing, right: it was taking these probabilities and mapping them to a letter, right.

So that's kind of why you needed that custom transformation there, to basically perform this mapping — because if you were just to take a look at this, you know, it would be pretty hard to say, like: okay, well, which one of these is H? So that's essentially what's happening there, I mean.
D
Exactly, yeah, exactly. So that's exactly what the label-mapping transform is doing, and then, yeah, you're just returning that back to your front end, and again it's going to basically output that it was H, or whatever I gave it. So yeah, that's essentially how this app works.
B
D
You mean when you start your application? Yeah — sorry, the first load of the model, yeah, because it took a while for this to load. Well, there are two parts to that. Part of it is that the model is actually somewhat big — I think this one is about 90 megabytes — and it's loading that the first time. And again, part of that delay is also because, you know, nothing happens until you hit that endpoint, right, or until you hit that controller.
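That lazy-load behavior — nothing loads until the first request reaches the controller, so the first request pays the ~90 MB cost — can be sketched like this (a minimal stand-in, not the actual app code):

```python
# Minimal sketch of lazy model initialization: the expensive load runs
# only on the first request, which is why that first request is slow.
from functools import lru_cache

load_count = 0  # tracks how many times the "model" was actually loaded

@lru_cache(maxsize=1)
def get_model():
    """Stand-in for loading the ~90 MB model file from disk."""
    global load_count
    load_count += 1
    return {"name": "asl-model", "size_mb": 90}

def handle_request():
    """Stand-in for the controller action: grab the model, predict."""
    model = get_model()  # first call loads; later calls hit the cache
    return model["name"]

for _ in range(3):
    handle_request()
print(load_count)  # the model was loaded exactly once
```

Loading eagerly at startup would move that cost out of the first request, which is the alternative discussed next.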
D
That's when this actually gets initialized, right? So I guess the alternative to that would be that you could probably sort of instantiate it somehow, and that way, when you call it from within the controller, that's not the first time the model itself is being loaded — you've already done the initialization inside of your Startup class, right?
D
Yeah, so in this case you can't, and I think in general you can't. Yeah, I think in general it's as-is, and that's actually — you know, you actually bring up a really good point. In these applications, right, it's a web app; you're assuming that you have some server you can probably scale, right? So size is not necessarily going to
D
Be
that
big
of
a
concern
right,
but
when
you're
doing
this
and
IOT
devices
right,
you
don't,
you
may
not
have
ninety
Meg's
to
spare,
so
it
certainly
becomes
a
challenge
and
there's
some
techniques
out
there
that
can
basically
help
you
with
this
and
basically
make
these
models
smaller.
So
that's
actually
another
approach
that
you
can
take.
You
can
make
these
models
smaller
the
trade-off
there
so
there
it's
like
this
technique
called
quantization.
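The core idea of quantization is simple enough to sketch: store float32 weights as int8 plus a scale factor, cutting memory roughly 4x at the cost of a little precision. The weight values below are made up for illustration:

```python
# Rough sketch of post-training quantization: floats become int8 values
# in [-127, 127] plus one shared scale factor.

def quantize(weights):
    """Map floats to int8 range with a single symmetric scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.513, -1.27, 0.024, 0.891]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# The restored weights are close to, but not exactly, the originals:
print(max(abs(a - b) for a, b in zip(weights, restored)))
```

That small reconstruction error is the trade-off being referred to: a slightly less precise model in exchange for a much smaller one.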
B
D
Yes, so there certainly are differences, and there are some similarities, so I can start off with the similarities. At least locally, this is certainly the case: locally, when you want to train a machine learning model using the image classification API that ML.NET provides, it's actually using TensorFlow on the back end. So it's leveraging this community project, TensorFlow.NET, which provides .NET bindings for TensorFlow, right. I believe TensorFlow is built in C++, and TensorFlow.NET —
D
Again, a community project — provides .NET bindings for TensorFlow. And then the higher-level API sort of sits above that, providing an even higher level of abstraction, where all you're able to tell it is basically: here's my data, here's the architecture that I want to train. There are a few neural net architectures that are already available for image classification, such as ResNet-101; there's Inception; there are a few architectures that you can choose from. And again, in the back end —
D
It's leveraging TensorFlow — I believe it's TensorFlow 1.14 — to train this model and basically perform transfer learning on it. In terms of differences: with ML.NET, on top of just building models with ML.NET, the other thing that you can do with it, of course, is consume models, right? And that's really where one of the bigger benefits comes from: regardless of whether your model is built in TensorFlow, PyTorch, or whatever, you can use this format.
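The transfer-learning idea mentioned above — keep a pretrained architecture's weights frozen and train only a new head on your own labels — can be shown with a deliberately tiny toy. The "backbone" here is a made-up fixed function standing in for a real pretrained network, and the data is invented:

```python
# Toy sketch of transfer learning: a frozen "backbone" produces features,
# and only a small linear head is trained on top of them.

def frozen_backbone(x):
    # Stand-in for a pretrained network (e.g. ResNet): it maps the raw
    # input to a fixed feature vector and is never updated.
    return [x[0] + x[1], x[0] - x[1]]

def train_head(samples, labels, epochs=20, lr=0.1):
    # Train a single linear unit (the new classification head) on the
    # frozen features with a simple perceptron update rule.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = frozen_backbone(x)
            pred = 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0
            err = y - pred
            w = [w[0] + lr * err * f[0], w[1] + lr * err * f[1]]
            b += lr * err
    return w, b

def predict(w, b, x):
    f = frozen_backbone(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

# Linearly separable toy data: label 1 when the inputs are both large.
data = [(0.0, 0.1), (0.2, 0.1), (0.9, 1.0), (1.0, 0.8)]
labels = [0, 0, 1, 1]
w, b = train_head(data, labels)
print([predict(w, b, x) for x in data])  # prints [0, 0, 1, 1]
```

Only the head's weights change during training, which is why transfer learning needs far less data and compute than training the full network from scratch.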
D
B
A
Alright, does that work? Yep, awesome. Yeah, so the resources are the ones on the bottom here. We've got the main ML.NET website, which is on the .NET website; I've got a link to the docs here, to our GitHub repo — ML.NET is open source — and also our samples GitHub repo.
A
What's really nice about this one is that there are a lot of different examples of things you can do, and it lays out exactly how to do them. So a lot of people get started with our samples and then put in their own data and go from there. Other people use our tooling, so that AutoML takes care of choosing the model for them. And then there are some events coming up as well for ML.NET.
A
B
D
Thanks for having us, and yeah — definitely submit your talks if you're interested in this stuff; submit your talks to the virtual conference we're running.
A
B
Cool, awesome. And I have one — I would just love it if you could somehow use ML to create what we'd call a "dramatic zoom out." What it would do is, at the end of our show in OBS, it would just dramatically zoom out on my face — because somehow we don't have that technology yet. Alright, awesome — thanks a bunch, and that's it. We'll talk to you later. We have the Thursday show coming up and lots of good standups going on. So thank you — thank you very much.