From YouTube: S311 - Cognitive Services in Xamarin Applications
Description
Are you interested in machine learning but don't know where to start? Worry no more! You don't need special training to integrate artificial intelligence into all kinds of applications quickly and easily. I'll show you all the benefits of Cognitive Services. You'll feel like a real data scientist.
Recently I started playing with Xamarin. In my day job I'm working with .NET CMSs like Episerver and Sitecore. At some point I was using PHP, MySQL, Drupal, and Rails. I'm not working with all those technologies right now, but maybe I'll get back to them at some point. My hobbies are dancing, traveling, and aerial yoga.
So here's the agenda for the next, probably, 45 minutes. I'll start with artificial intelligence and machine learning and tell you some basic information about them.
Then we'll move to the basics of Microsoft Cognitive Services, and I'll tell you about the groups and individual services there. Then we'll move to the specifics of integrating Microsoft Cognitive Services with Xamarin and Xamarin.Forms applications, and then the best part: the Custom Vision Service demo. So definitely stay till the end.
So let's talk about artificial intelligence and machine learning. All big tech companies are investing lots of time and money into artificial intelligence and machine learning. Those are big things currently, and we all kind of understand what's going on there. We have lots of tools available, but sometimes it might be confusing how artificial intelligence relates to machine learning. Are they working together? Are they completely separate?
Let me clarify that for you, and I'll start with artificial intelligence. Technically, artificial intelligence has more than 50 different definitions, and in general it's about machines doing things that we as humans can do but machines normally cannot. So we are trying to mimic human intelligence using logic, if-then rules, decision trees, and other cool stuff.
Next we have machine learning, and machine learning is a subset of artificial intelligence. Basically, it's a combination of statistics and algorithms. When we use those techniques, we enable machines to improve at tasks with experience. And next we have deep learning, and deep learning is a subset of machine learning. So when we have lots of data, we try to organize it and create neural networks with data nodes, and the machine learns using those neural networks.
Microsoft Cognitive Services are sets of APIs and SDKs that you can use to make your applications smarter and more interactive for your users. They are really easy to use: you just need to get a key and write a couple of lines of code, and you get access to all those cognitive services through a REST API.
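As a sketch of that "key plus a couple of lines of code" pattern, here is roughly what a minimal Computer Vision REST call looks like. The talk's demos are in C#/Xamarin; this is a Python sketch, and the endpoint region, API version, key, and image URL below are placeholder assumptions — only the `Ocp-Apim-Subscription-Key` header is the real authentication mechanism the services use.

```python
# Build (but don't send) a Computer Vision "analyze" request. Every Cognitive
# Services call boils down to an endpoint URL plus a subscription key header.

def build_analyze_request(endpoint, key, image_url,
                          features=("Description", "Tags")):
    """Return (url, headers, body) for a Computer Vision analyze call."""
    url = f"{endpoint}/vision/v2.0/analyze?visualFeatures={','.join(features)}"
    headers = {
        "Ocp-Apim-Subscription-Key": key,   # the key you get when signing up
        "Content-Type": "application/json",
    }
    body = {"url": image_url}               # or POST raw bytes instead
    return url, headers, body

url, headers, body = build_analyze_request(
    "https://westus.api.cognitive.microsoft.com",  # placeholder region
    "<your-key>",                                  # placeholder key
    "https://example.com/cat.jpg",                 # placeholder image
)
```

From here it is one `requests.post(url, headers=headers, json=body)` (or `HttpClient` in C#) away from a JSON description of the image.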
All those Microsoft Cognitive Services are created and tested by super smart people from Microsoft Research, Bing, and Azure Machine Learning. Those people actually got their degrees in machine learning, and I'm sure they know what they're doing. So we don't have to get our own degrees in machine learning and spend years digging into those topics.
There's huge community support and really awesome examples online. You can check MSDN, you can go to GitHub, and if you notice that something is not right or you want to improve something, you can actually get the code, fix it, and submit a pull request. I'm sure Microsoft will be super happy to have your code there.
B
First,
one
here
really
interesting:
computer
vision.
Api
these
service
actually
can
detect
objects
on
your
arm
images
and
also
it
can
before
optical
character
recognition.
So
if
you
have
written
text
somewhere,
you
have
a
picture
of
it.
You
can
just
upload
that
picture
to
the
service
and
it
will
translate
it
to
actual
text
next.
One
here
is
face
API
and
recently
they
integrated
emotions,
API
interface,
it
yet
so
now
it
can
detect
specific
people
on
your
images
and,
at
the
same
time
check
their
emotions.
So
you
see
how
happy
or
sad
they
are.
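To make that concrete, here is a sketch of reading the emotion scores out of a Face API detection result. The trimmed response below is made up for illustration (real responses carry many more attributes), but it follows the shape the service uses: one object per detected face, with per-emotion scores when emotion attributes are requested.

```python
import json

# Hypothetical, trimmed Face API response for one detected face.
sample = json.loads("""
[{"faceId": "abc",
  "faceRectangle": {"top": 10, "left": 20, "width": 100, "height": 100},
  "faceAttributes": {"emotion": {"happiness": 0.92, "sadness": 0.01,
                                 "neutral": 0.07}}}]
""")

def dominant_emotion(face):
    """Pick the emotion with the highest score for a single face."""
    scores = face["faceAttributes"]["emotion"]
    return max(scores, key=scores.get)

print([dominant_emotion(f) for f in sample])  # -> ['happiness']
```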
B
If
you
check
all
those
vision,
cognitive
services,
you
didn't
find
anything
that
can
be
suitable,
for
you
will
have
specific
set
of
images.
You
want
to
search
and
get
specific
information
from
those
images
you
can
go
ahead
and
use
custom
vision,
service
and
using
custom
vision.
Service
is
really
interesting.
You
can
feel
yourself
like
you're,
actually
machine,
learning
and
expert,
because
you
can
flow
the
images
you
can
train
them
all
over
yourself.
Here is an example from the Cognitive Services website, and it's an example of the Computer Vision API. You can just go to the Cognitive Services website, find that specific cognitive service — the Computer Vision API — and you can either upload your own image right on the page or use the images that are provided there. Then you can see the output: you can see the JSON, and you can see what the Computer Vision API returns to you.
So you don't have to actually get a key, you don't have to write any code, you don't have to do anything — you don't even have to use Postman. Just go to the website and see the JSON that the service returns. You can see lots of information there. Most applications don't need that much, but you know, if you want, you can just go crazy and use all that information.
The next group here is Speech, and it's all about spoken information. There are several of them that are really interesting for me personally. The first of them is Speech to Text. This one is pretty straightforward: your users can use your application — maybe it's a bot you can talk to — and then your bot is going to use the Speech to Text API to translate speech to text and then maybe pass it along so other tools can analyze the text.
The next one here is Speech Translation, so this service can actually translate speech into other languages. The whole speech suite was used in a new add-in for PowerPoint. It's really convenient if you travel to a foreign country and you don't know the language, but you still need to present your PowerPoint.
You have your slides, so that add-in can actually translate your slides, or you can set up subtitles, so the add-in will show translated subtitles right on top of your presentation. That way your audience can see what you are talking about, or they can scan a QR code with their mobile devices and see those subtitles right there. They can also ask you questions from their devices, and you see those questions and can answer them. Really useful too.
Here you have the option either to play samples that are provided on the website, or you can record your speech, and then you can see how the service works. You also have the opportunity to select the language there — really useful. It's all on the website, all online; you can go and check it out. The next group here is Language, and that is all about language analytics. The first cognitive service here is LUIS. I'm sure most of you have heard about it — it's really popular right now. It's widely used in bot design and development.
B
Also,
it's
used
in
virtual
system,
skills
creation,
language
understanding,
intelligence,
service
or
Lewis
can
understand
intent
and
can
parse
those
little
variables
of
needed
time
and
please
from
your
users
conferences.
So
your
user
can
talk
to
you
about.
Then
you
speech-to-text
the
translating
spoken
words
in
text
and
then
pass
the
text
to
Lewis
and
then
Lewis
will
understand
what
your
user
wants
and
based
on
that
it
will
return
information.
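A sketch of what "LUIS returns the intent" looks like in practice. The response below is a trimmed, made-up example in the shape LUIS uses — a top-scoring intent plus a list of entities — parsed here in Python rather than the talk's C#; the intent and entity names are illustrative assumptions.

```python
import json

# Hypothetical, trimmed LUIS response for "turn on the kitchen lights".
sample = json.loads("""
{"query": "turn on the kitchen lights",
 "topScoringIntent": {"intent": "HomeAutomation.TurnOn", "score": 0.97},
 "entities": [{"entity": "kitchen", "type": "Room"}]}
""")

# The bot branches on the intent and fills its parameters from the entities.
intent = sample["topScoringIntent"]["intent"]
rooms = [e["entity"] for e in sample["entities"] if e["type"] == "Room"]
print(intent, rooms)  # HomeAutomation.TurnOn ['kitchen']
```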
It works great. The next one here is the Bing Spell Check API. This service is really useful — I think everyone should use it. I have lots of spelling mistakes all the time, so definitely go and check it out, especially if your users need to write a lot and it's important for them to spell everything correctly.
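As a sketch, here is how an application might apply the service's suggestions. The response below is a trimmed, made-up example in the flaggedTokens shape the Bing Spell Check API returns; the correction logic itself is an assumption on the client side, not part of the service.

```python
import json

# Hypothetical, trimmed Bing Spell Check response for the text "Bill Gatas".
sample = json.loads("""
{"flaggedTokens": [
  {"offset": 5, "token": "Gatas",
   "suggestions": [{"suggestion": "Gates", "score": 0.89}]}]}
""")

def apply_corrections(text, flagged):
    """Replace each flagged token with its top suggestion."""
    # Apply right-to-left so earlier offsets stay valid after replacements.
    for tok in sorted(flagged, key=lambda t: t["offset"], reverse=True):
        best = tok["suggestions"][0]["suggestion"]
        start = tok["offset"]
        text = text[:start] + best + text[start + len(tok["token"]):]
    return text

print(apply_corrections("Bill Gatas", sample["flaggedTokens"]))  # Bill Gates
```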
Okay, and here's an example of LUIS. It's also from the Cognitive Services website. You can either type a command or pick one of the predefined commands there, and it's really interesting here, because you are not only seeing the JSON response from the service — you can also see how it actually works. So here I picked "switch all lights to green", and you can see that the light turned green, and the JSON is also here. So you know what you're working with and what kind of data you're getting from the service.
B
So
you
can
just
upload
the
PDF
document
with
questions
and
answers
or
provide
a
link
to
your
page,
your
FAQ
page,
and
then
this
courier
service
will
parse
it
split
it
into
questions
and
answers
and
voila
your
users
can
go
talk
to
you,
bot
type,
their
questions
and
get
their
answers
right
away,
and
you
don't
have
to
do
a
lot
there
and
here
is
an
example
from
cognitive
Services
website.
They
have
questions.
Okay, the next group is Search, and that's the biggest group, because all of those services probably started with the Bing research team and then grew into actual cognitive services as we see them right now. Here you have all kinds of searches: Bing Image Search, News Search, and Web Search. If you need some specific information in the search, or you want to search through a specific data set, or you want to create some kind of customization there, you have access to Bing Custom Search — definitely go and check it out.
So we went through all five main groups of cognitive services, and now there's a sixth group. It's called Labs. I personally think it should be an official group, because that's the future. Going there and checking the labs, you can see where Cognitive Services are going, what the next step can be, and what researchers are currently working on. All those labs are experimental, so I don't recommend you use them in a production environment, but definitely go check them out, definitely provide feedback, and maybe submit pull requests with your fixes and your code.
So one day, one of those labs — or maybe all of them — can become actual cognitive services, and there are a couple that I really like. For example, Project Gesture. If you use Project Gesture, your users don't have to use a keyboard or mouse anymore; they don't even have to use touchscreens anymore. That is insane.
They can just use the camera, and using gestures they can interact with objects on the screen. They can enlarge those objects, they can move them around — that is really cool, and there's an awesome video on the website so you can see how it actually works. The next one here is Project Event Tracking, and this lab provides information about events based on Wikipedia and your interests. And another really interesting and really, really useful one is Project URL Preview.
When you use that service, you enable a preview for pages that your users are trying to load, so they're not loading any malicious content and they're not loading any adult content — they can preview the page before actually loading it. So this one is really useful too. There are 13 labs currently available on the Cognitive Services website, and I think six are available at Microsoft AI Labs. Definitely go and check them out.
Host: Veronica — yes, there's a question in the chat that I think this is a good place to ask. Warden94 is asking: are the Cognitive Services provided by Microsoft maintainable? What if, for example, the speech-to-text service needs more learning on the Arabic language — how can we maintain that?
So actually, that's a good question. There are several options — definitely use them all. And a little trick there: the more you use the service, the better it gets. So if you integrate it with all your applications and use it a lot, and you have a specific group of users, those services will definitely learn from experience and they will get better.
There is also a free tier available for all those cognitive services. It depends on each service how many transactions are allowed there for free; for example, for the Computer Vision API the free tier includes five thousand transactions per month. But before signing up, definitely go and check the documentation — they have a calculator there, so you can see how much you would spend on this or that service. And actually, I want to tell you that five thousand transactions per month is more than enough for testing and playing.
B
Definitely
not
enough
for
production
environment
is
once
your
users
start
using
your
app.
Then
all
those
transactions
are
flying
just
like
that
and
if
you,
for
some
reason,
don't
have
as
a
subscription
then
go
get
it
but
seriously.
You
can
try
calling
it
your
services
without
edit
subscription.
You
can
go
to
cognitive
services
website
and
try
all
those
services
for
free
and
free
option
is
available
for
a
month.
So
you
can
play
those
services.
B
You
can
get
the
same
keys
from
the
website
as
you
can
get
them
as
a
portal
and
another
cool
thing
here
is
custom
vision
service.
It
actually
has
mobile,
auto
expert,
so
you
can
export
your
model
and
tensorflow
file
or
quarter-mile
files.
So
you
can
either
access
customization
service
with
your
custom,
trained
model
use
an
API
like
the
other
service,
or
you
get
download
your
petrie
model
and
put
it
on
your
server,
for
example.
So,
even
if
you
don't
have
internet
connection,
your
model
still
work
and
it's
mutual
results
to
users.
But if you're really experienced with machine learning, you know a lot about it, and you are ready to build your own models, ready to train them, play with them, go crazy — you can use custom AI, and that's Azure Machine Learning. Azure provides the whole infrastructure we're used to working with for AI on data: databases like Cosmos DB, and AI compute power like Spark and IoT Edge, different kinds of CPUs and GPUs — everything that you might need. They also have prebuilt virtual machine images with everything you might need.
B
So
you
don't
need
to
set
up
anything
yourself.
It's
being
up
virtual
machine
and
work
with
models,
build
your
machine
models,
train
them
work
with
them.
Everything
also
you
can
have
access
to
coding
and
management
tools
by
your
tools
for
I
and
as
machinery
here,
and
you
have
access
to
deep
works
like
int
cake
and
also
to
something
it's
like
transfer
flow
and
coffee.
B
So
you
I,
hope
you
all
know
the
difference
between
xamarin
and
lemon
forums
and
diamond
forums
we're
trying
to
share
as
much
code
as
possible,
but
sometimes
we
need
to
tweak
it
here
and
there.
If
you
want
to
tweak
it
specifically
for
device
and
if
it's
a
front-end
change,
then
you
can
use
device
class,
and
here
I
have
an
example
of
it.
If you want to access platform- or environment-specific functionality, then you can use dependency services. Dependency services give you access to all kinds of environment-specific functionality for Android, iOS, and the Universal Windows Platform. You can access the camera and location settings — all of that. And pretty much you need to connect to the camera, microphone, and other device-specific parts in order to work with Cognitive Services, because usually you are providing images, or some information from the microphone like user speech, or other application-based information to the service.
B
Lots
of
really
talented
developers
went
out
there
and
created
common
api's,
so
you
can
have
access
to
microphone
and
camera
and
settings
and
all
other
stuff.
You
have
all
list
of
common
api's
online
they're.
All
on
github.
You
can
talk
to
developers.
You
can
medium
submit
your
full
request
with
your
code
changes
and
updates.
So
that
is
really
awesome.
I used the Xam.Plugin.Media plugin that was created by James Montemagno. This specific plugin gives you access to the camera and also to your users' media folder on their device. I used that one when I worked with the Computer Vision API, but you can also use it with the Custom Vision Service and whenever you want to connect to the camera.
But now we have Xamarin.Essentials, and I hope you checked out James Montemagno's session two days ago — he was talking about Xamarin.Essentials. It's still in preview, but I heard that they're releasing it soon, so stay tuned. With Xamarin.Essentials you just need one NuGet package for all your device-specific functionality. You can go to the website and check what the Xamarin.Essentials library actually has and what kind of functionality you can access using Xamarin.Essentials.
B
And
if
you
are
using
cognitive
services
and
you
try
and
connect
them
to
your
application,
you
definitely
need
to
get
a
new
get
packaged
right
and
in
order
to
find
those
new
catechesis,
you
need
to
search
for
Project
Oxford
project.
Oxford
is
a
regional
name
of
cognitive
services,
and
here
you
can
see
that
I
used
Project,
Oxford,
dot,
vision
for
computer
vision
API,
but
you
have
speech,
recognition
and
common
and
emotions.
They
don't
have
emotions
anymore,
because
they're
part
of
face
API
by
I'm
sure
you
have
Firefox
word
face
API
there.
Now, the Custom Vision example — how you can use the Custom Vision Service with your Xamarin applications. I already have a couple of projects here, but when you first go to that website, you can create a new project: type a name for the project, type a description, and then you can select a resource group. For me it's connected to my Azure subscription, but you can create a new resource group, and you can get the limited trial.
The limited trial is perfect for playing with the service and seeing how it all works. You have two project types here, and object detection is currently in preview. You can create different classification types, like multilabel and multiclass, and there are all kinds of domains here. If you are planning to export your custom trained model, you need to select a compact domain, because with the domains that aren't compact you won't be able to export those models. But there is a little trick there.
B
You
can
always
change
your
domain
if
you
want
to-
and
here
I
really
have
my
project
setup-
it's
called
cats.
So
if
you're,
not
a
cats
fan-
or
you
are
more
like
a
dog
person-
I'm
sorry
I'm,
a
big
cats
fan
so
I
created
and
cats
example
here
with
all
kinds
of
caps,
the
only
part
is
that
I'm
not
really
good
with
their
breeds,
so
I
just
classify
them
based
on
their
appearance.
Like,
for
example,
here
I
have
my
color
orange.
B
When
you
click
it,
you
can
see
all
of
them
and
I
uploaded
all
those
images
to
the
project.
You
can
do
it
with
that
bottom
and
images,
and
you
have
your
images
here:
I
have
the
whole
folder
example
this
one
you
open
it
and
then
you
can
tag
it.
You
can
write
a
couple
of
tags
if
you
trying
to
classify
them
by
a
couple
of
options.
There
I
have
only
one
tag,
and
here
I
have
predefined
tags
because
I
really
use
them.
You can select images and add tags here, and when you've actually uploaded all the images and tagged them, you can train your model. My model is already trained, and then you can quick test it: you can give it an image here and it will return you the information — so most likely it's an Angora.
Host: Veronica, there's a really good question as you're demoing this, from Hagar. He's asking: will it be possible to add images that we don't have on our PC? Do we have to download the images and then upload them, or can they be pulled from an API or from some other data source to be able to process them?
Again, I want to make it clear: you can either export your model, so it can be stored on your user's device or on your specific server, or you can just access your Custom Vision model through the API. Since I downloaded the model as TensorFlow, I created a Xamarin application for Android, and here in the assets folder I actually put those two files for the TensorFlow model — model.pb and labels.txt — and they both came from my Custom Vision model.
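The two exported files fit together simply: labels.txt holds one tag per line, in the same order as the model's output scores, so pairing the two gives the predicted tag. A sketch — the tag names and scores below are made up for illustration:

```python
# labels.txt from a Custom Vision export: one tag per line, ordered to match
# the model's output vector. Both the tags and the scores here are made up.
labels = "angora\ncolor orange\ntabby".splitlines()
scores = [0.05, 0.91, 0.04]   # what the model might emit for one image

# Pair each label with its score and take the most confident one.
best_label, best_score = max(zip(labels, scores), key=lambda pair: pair[1])
print(best_label, best_score)  # color orange 0.91
```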
Here I wrote really simple, really straightforward code. I'm creating a structure here with label and confidence, because that's what we're going to display, and then I am connecting to the camera. So you'll see that I get a really simple screen with pretty much one button: you click it, you take a picture, and then you actually send the picture to your model — and here I'm actually using a TensorFlow library for Android.
B
There
is
also
a
couple
of
other
tensorflow
libraries
for
c-sharp
developers
like
she's,
like
one
created
by
Miguel
de
Icaza.
It's
really
popular
I,
don't
think
it
currently
supports
xamarin
but
I'm
looking
forward
when
it
actually
started
supporting
them
and
Here
I
am
getting
back
to
my
code,
I'm,
actually
getting
the
get
in
the
model
and
opening
the
labels
file
and
I
am
working
with
the
image
that
I
get
from
I
got
for
my
camera.
Here
we
are
normalizing.
The
image
actually
tensorflow
requires
images
to
be
normalized
in
a
specific
way.
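What "normalized in a specific way" means varies by model. One common convention — and it is only an illustrative assumption here, since each exported model documents its own mean and scale — is to center the 0–255 pixel bytes around zero:

```python
# Hypothetical normalization: map 0..255 pixel bytes to roughly [-1, 1] by
# centering on 127.5. Check your model's export notes for its actual
# mean/scale values before feeding it images.
def normalize(pixels, mean=127.5):
    return [(p - mean) / mean for p in pixels]

print(normalize([0, 127.5, 255]))  # [-1.0, 0.0, 1.0]
```

Feeding a model pixels normalized with the wrong scheme is a classic source of confidently wrong predictions, so this step is worth double-checking.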
It's better when you have actual cats — I don't have one, so we are working with whatever we have. Okay, so the model thought that that was my "color orange" cat, and the confidence is pretty low: 3.3 percent. It's not a real cat, it's just a picture, and I probably don't have enough light here, so the confidence is pretty low. But the model returns the information that that was my "color orange" cat, and that is pretty accurate, right, even when the confidence is not that high.
Host: For someone who doesn't do Xamarin, how can I quickly get started on this outside of the NuGet packages? Is there something that I can just download or clone from GitHub to be able to get started? How would I go about doing something like that?
Yeah, it's a good question. There are actually lots of examples online — lots of articles, lots of GitHub examples. I have my personal GitHub account: you can go and get the project that I showed just a few minutes ago, and a couple of other projects. There are also projects available from people who actually work with Xamarin every day — they are really awesome.
B
Also,
there
are
lots
of
courses
online
like
examine
university
courses
or
Pluralsight
courses,
so
lots
of
information
available
here
I
have
a
couple
of
useful
links,
so
you
can
see
that
is
cognitive,
Services
website
and
you
can
see
the
commutation
and
an
article
about
optical
character,
recognition
also
if
you
want
to
do
more
mobile
development,
even
if
you
do
use
a
marine,
you
can
take
iOS
and
Android
specific
development
courses.
So
you
know
more
about
environment
and
just
some
information
about
me
so
follow
me
on
Twitter.
Host: Sorry, I was just unmuting, pushing all the different buttons to make the magic happen with this. Thank you so much, Veronica — this has been really great. And all of you out there who are watching us live on Twitch: we've got more stuff coming up, tons of sessions, so please get engaged. Ask your questions on the Twitch channel so we can monitor them here and relay them back to the speakers. Any closing thoughts?