Description
Learn how to build and deploy resilient containers and microservices at scale with .NET Core and .NET Framework on Azure Service Fabric. We'll walk you through development tools (Visual Studio, VS Code) and debugging, CI/CD integration, and diagnostics and monitoring in production. We'll also share tips and techniques for creating and managing your microservices on Azure Service Fabric.
A
Then we're also going to dive a little bit into monitoring with App Insights. We'll show you how we monitor performance counters as well as traces, and a little bit of searching for the different kinds of traces you get during upgrade scenarios and the like. Then we'll move on and show you some of the advanced techniques we use to scale Service Fabric applications in and out for .NET Core, and we'll wrap it up with a little bit of the future stuff...
A
...that's coming for Service Fabric, with a focus on the .NET Core work that we're doing. So with that, let's just jump right in and go straight to the demos. I think I'm starting on the demo, so let me go first. Okay, let me show you what we have. We're going to start with an application that we've probably shown you in the past: this is actually our QuickStart application, the voting app, where we have a very simple back end and a very simple front end, both services written in .NET Core. What I'm going to do initially is just walk through the basics and the structure of how this application works.
A
This is just to get you familiarized with how .NET Core applications are written on Service Fabric. If you're a Service Fabric veteran, this will be a lot of review for you, but do pay close attention, because we're going to show you how some of this stuff is changing, evolving, and improving in Service Fabric in the future. So let's start with the very basics, right up at the beginning. We have two services in this Visual Studio project, and I'm going to start with the back-end service.
A
This is a stateful ASP.NET Core service where we inject Reliable Collections, which are our built-in replicated data structures and data store for Service Fabric, and we're going to show you how this project is structured and how you do ASP.NET Core on Service Fabric. Right up at the beginning, things get interesting right away: as we jump into the main entry point of the program, we already have some Service Fabric concepts to think about.
A
The way this whole thing works is that every service that you run, whether it's in a container or not, always has to have some host process. This is just a regular process, an executable, whatever it is, and it actually runs your service in it. So the very first thing you have to do when this process comes up is tell the Service Fabric runtime: hey, this thing is a host process, and I want my services to run in here. So the very first thing we do...
A
...is register what we call a service type. A service type is a pretty important concept, and it's something we're going to revisit later. This essentially says: I have this compiled set of code and config, which represents the service and all the bits and everything we need to run it. That's what we call a type. It's a similar concept to a class in an object-oriented language, where you define the class once, and then later you can go and create as many instances of that class (objects) as you want. That's all this line of code is saying: this host process can run this service type, and you can instantiate it in here.
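As described above, the registration is a single call at the top of the host process's Main. A minimal sketch follows; the type name "VotingDataType" and the class VotingData follow this demo's QuickStart naming, so treat the exact names as assumptions:

```csharp
using System.Threading;
using Microsoft.ServiceFabric.Services.Runtime;

internal static class Program
{
    private static void Main()
    {
        // Tell the Service Fabric runtime that this host process can run
        // the "VotingDataType" service type. The factory callback is invoked
        // once for each replica of the service placed in this process.
        ServiceRuntime.RegisterServiceAsync(
            "VotingDataType",
            context => new VotingData(context)).GetAwaiter().GetResult();

        // Keep the host process alive; Service Fabric controls its lifetime.
        Thread.Sleep(Timeout.Infinite);
    }
}
```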
A
Once you've done that step, you can move on to the actual service class itself, which is what we see here. You can see this class inherits our StatefulService base class, and again, this is another Service Fabric concept. What's happening here is that every time you create an instance of this service, it's going to instantiate an instance of this VotingData class, the service class, inside the host process, and that's what that registration was all about.
A
So at this point, you're still kind of bootstrapping your service and your application to run in a Service Fabric environment; this is what we collectively call the Reliable Services framework. Once you get down to this point here, you're opening up listeners and communication endpoints for the service, so that clients and other services can connect to it, and this is where we actually start up ASP.NET Core, finally, if you're using ASP.NET Core for your application. So now this is where it becomes a little more familiar.
A
If you've done ASP.NET Core before, you can see the web host comes in: you start bootstrapping Kestrel, you start adding MVC, you start adding App Insights and all that goodness. You can also see where Service Fabric kind of plugs itself in here: we inject the Reliable Services state manager as a dependency into the built-in dependency injection system in ASP.NET Core. By doing that, once you get into the actual nuts and bolts of your application, like the controllers, you'll then get access to Reliable Collections directly within those controllers.
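The wiring described above typically looks roughly like this inside the stateful service class; this is a sketch based on the Service Fabric ASP.NET Core integration, with the class name VotingData taken from the demo:

```csharp
using System.Collections.Generic;
using System.Fabric;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.ServiceFabric.Services.Communication.AspNetCore;
using Microsoft.ServiceFabric.Services.Communication.Runtime;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class VotingData : StatefulService
{
    public VotingData(StatefulServiceContext context) : base(context) { }

    protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
    {
        // Open a Kestrel-based listener so clients and other services
        // can reach this replica over HTTP.
        return new[]
        {
            new ServiceReplicaListener(serviceContext =>
                new KestrelCommunicationListener(serviceContext, (url, listener) =>
                    new WebHostBuilder()
                        .UseKestrel()
                        .ConfigureServices(services => services
                            // Inject the service context and the reliable state
                            // manager into ASP.NET Core DI, so controllers can
                            // use reliable collections directly.
                            .AddSingleton(serviceContext)
                            .AddSingleton(this.StateManager))
                        .UseStartup<Startup>()
                        .UseUrls(url)
                        .Build()))
        };
    }
}
```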
A
So once you're at this point, you're kind of back in the land of a vanilla ASP.NET Core application, and you just go about writing ASP.NET Core like you normally would. Okay, so I'm going to launch this application here on my local cluster, which may or may not be running at the moment; I had to do a quick reboot before I came on here.
A
So let's see how this goes. Since this is an initial launch, when I hit F5 (or Ctrl+F5) in Visual Studio to get the debugger going and get it to launch, it's actually going to spin up that local development cluster on my laptop. I've got it configured to just use a one-node cluster, and I also have something configured here, which is in fact the default, called Refresh mode. That's the type of debugging mode in Visual Studio that will...
A
...allow you to just make changes directly to the application without having to redeploy it every single time you go and change something. This is especially helpful if you're doing web development, where you need to make frequent changes to HTML files, JavaScript files, or CSS files: you can actually just make those changes directly in your Visual Studio IDE without having to redeploy the whole thing, so you keep that really fast kind of development that you're used to with web applications...
A
...where you make a change, hit save, refresh your browser, and the change shows up. You can do that by default; the default Visual Studio Service Fabric tooling configuration is set up for that, so you can get that really fast development cycle. All right, it's going to take just a moment here to spin up the cluster, so while it does, I'll walk you through a couple of the Reliable Collections lines of code here and how this stuff works.
A
So the basics of it: you ask this state manager component to go and grab a reliable collection for you, and the first time you do that, it actually has to do an operation that replicates out to the other nodes, to register that collection on them. That's where the replicated, highly available part kind of comes in. That's why this is an asynchronous operation, and you see we have this await here: that first time, it's doing the replication operation.
A
Every subsequent call to this is going to be really fast, because once the collection is created, it gets cached and you get it back very quickly. So the recommendation when you're writing this kind of code is to actually not cache these things yourself if you don't have to. It's better to just do what you see on screen right now, which is to call the state manager every time, because every subsequent call should be very quick. This ensures that you get the reliable collection created the right way every single time.
A
If you end up caching this in a private member variable, you could get some strange behavior depending on timing, when the code runs, and whether or not you're listening on secondary replicas, and all that kind of stuff; it gets a little more complicated, and there are a few more caveats in that case. It's usually better to just call this every single time.
A
Now, in the next lines of code, you can see this is all transactional. You just create a transaction off that state manager, then you go into your operations on the collection and commit your transaction like you normally would. There's an important thing to remember, though, and here's an important tip for you.
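Put together, the two points above (ask the state manager for the collection every time, then do the work inside a transaction) look roughly like this in a controller. The dictionary name "counts" and the controller shape are illustrative, not the exact demo code:

```csharp
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data;
using Microsoft.ServiceFabric.Data.Collections;

public class VoteDataController
{
    private readonly IReliableStateManager stateManager;

    public VoteDataController(IReliableStateManager stateManager)
    {
        // Provided by the DI registration shown in the service class.
        this.stateManager = stateManager;
    }

    public async Task AddVoteAsync(string item)
    {
        // Ask the state manager on every call. Only the first call pays the
        // replicated-creation cost; afterwards this is a fast cache lookup.
        IReliableDictionary<string, int> counts =
            await this.stateManager.GetOrAddAsync<IReliableDictionary<string, int>>("counts");

        using (ITransaction tx = this.stateManager.CreateTransaction())
        {
            // Insert the item with a count of 1, or increment the existing count.
            await counts.AddOrUpdateAsync(tx, item, 1, (key, oldValue) => oldValue + 1);

            // Nothing is replicated or durable until the commit.
            await tx.CommitAsync();
        }
    }
}
```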
A
If
you're
using
these
reliable
collections,
you
have
to
make
sure
that
all
the
values
you
put
in
there
are
read-only,
so
you
never
want
to
pull
a
value
out
of
a
reliable
collection
and
just
and
just
make
changes
to
it
in
memory.
What
can
happen
if
you
do
that
before
you
actually
commit
it
you're
actually
changing
the
object
in
memory,
but
if
that
transaction
rules
back,
we
don't
roll
back
the
change
that
occurred
in
memory.
A
So what will happen is you'll have secondary replicas holding copies of the data structure that have not been changed yet, but because you changed it locally in memory, you'll be operating on a changed data structure. If a failover happens, suddenly that data will be lost, and the client that originally sent that data to you won't know it. So you have to make sure that everything you put into a reliable collection is read-only; that's a very important thing to do.
A
Ideally,
what
you
want
to
do
is
do
a
deep
copy
of
these
objects.
Every
time
you
pull
an
object
out
and
you
want
to
change
a
property.
Do
an
entire
copy
and
memory
copy
of
that
whole
object
before
you
put
it
back
in
and
commit
the
transaction.
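A sketch of the copy-before-mutate rule described above; MyItem is a hypothetical type stored in the collection, assumed to have a copy constructor for the deep copy:

```csharp
using (ITransaction tx = this.stateManager.CreateTransaction())
{
    // TryGetValueAsync hands back the replica's in-memory object.
    ConditionalValue<MyItem> current = await items.TryGetValueAsync(tx, key);
    if (current.HasValue)
    {
        // Wrong: current.Value.Count++ would mutate the shared in-memory
        // object, and that mutation is NOT undone if the transaction aborts.

        // Right: deep-copy, modify the copy, write the copy back.
        MyItem updated = new MyItem(current.Value);  // hypothetical copy ctor
        updated.Count += 1;
        await items.SetAsync(tx, key, updated);
        await tx.CommitAsync();
    }
}
```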
Okay, so I think the cluster is up and running. I'm going to check real quick in the cluster manager; I should be able to actually just hit it locally here.
A
Let me open up a new Edge browser, and we can just go to localhost on port 19080, and that'll open up Service Fabric Explorer. All right, I've got three applications running. One of them is there, but we don't need to worry about that one; that's not the one I want to show you today.
A
Okay, so that's my full computer name there, and it's on port 8081, so I can just hit that on localhost port 8081, and it should go ahead and load up the application once the web stack spins up. Okay, while we're waiting for that... there it is, great. So, like I said, this is kind of a voting sample; it's our QuickStart, and you've probably seen it before.
A
You
can
add
some
values
here
and
then
you
can
vote
on
them
cool.
So
it's
pretty
simple
and
we're
using
this
real,
simple
one,
just
because
we
don't
want
to
get
too
much
into
the
details
of
the
application
itself,
because
we
want
to
show
you
some
of
the
cool
stuff
around
it.
Okay,
so
I
think
we
should
probably
show
how
this
is,
how
we
deploy
it
through
a
CSU
environment.
B
So on the screen you can see that I have an Azure DevOps project open, and within it the things that we particularly care about are Azure Repos and Azure Pipelines. If you go into Azure Repos, you'll see that we stored Voss's QuickStart application as a Git repository on Azure Repos, and in addition to that, we also have the ARM templates that we used to deploy the cluster, plus all the artifacts that we use to spin up the build and release pipelines in Azure DevOps.
B
Now, all of this will be shared at the end of the talk, so you'll have access to this entire repository. But in addition to the repo, we also need to think about the pipeline. When it comes to CI/CD, there are two stages in Azure Pipelines: build and release. So before I show how to create a build pipeline, let me just go ahead and kick-start a build, so it runs while I show you how to create a new one. I'm going to go ahead and queue it.
B
We have stored all of our code in Azure Repos Git, so I'm just going to go ahead and choose that. You can pick the branch of the source code that you want to build off of, and because we already have a build on master, I'm going to pick one of the other branches, so it doesn't interfere with the existing build. Go ahead, continue, and then there's a template step. Here you can click and select from a variety of templates for the type of application or the type of solution that you're building in Azure DevOps, particularly in build.
B
We already have a template for Azure Service Fabric applications, so I'm going to go ahead and apply that, and you will see that there are a bunch of tasks that run as part of this pipeline. First, there's an agent job; the agent pool is Hosted VS2017. What this means is that the build runs on an agent that already has all the dependencies for Service Fabric applications, so you don't need to worry about it.
B
You don't need to manage any machines; an agent with all the dependencies for a Service Fabric application already exists, hosted in this Hosted VS2017 agent pool. There are a bunch of tasks that run, which you can see on the left side: Use NuGet, NuGet restore, then Build solution and Build .sfproj solution. This is essentially what Voss did on his local machine: when he hit F5, it restored the NuGet packages, built his solution, and then built the SF project. Essentially, this build in Azure Pipelines is automating...
B
...all of that for you. Then there's an Update Service Fabric Manifests step, which will take care of updating your Service Fabric manifests if there are any changes. So each time your repository is built, if there are any updates to the repository (which is what this checkmark controls), it'll go ahead and update your Service Fabric manifests. The last two stages in the build pipeline take care of copying the artifacts, the results of the build, and moving them into an artifacts folder.
B
So that takes care of the build pipeline, and that is exactly what we just queued off. The final thing that you'd care about in the build is triggers. In order to enable continuous integration, you need to go to the triggers and select a branch to enable continuous integration off of, so here we're going to go ahead and specify that the branch we want to enable continuous integration off of is my original branch. With that, that concludes the build pipeline, and now I'm going to show you the release.
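For reference, the classic-editor build described above corresponds to roughly this azure-pipelines.yml in YAML form. The task names are the standard Azure Pipelines tasks; the branch name, solution, and project patterns are placeholders:

```yaml
trigger:
  - my-branch                       # continuous integration off this branch

pool:
  vmImage: 'vs2017-win2016'         # hosted agent with the Service Fabric SDK

steps:
  - task: NuGetToolInstaller@1      # "Use NuGet"
  - task: NuGetCommand@2            # NuGet restore
    inputs:
      restoreSolution: '**/*.sln'
  - task: VSBuild@1                 # build the solution
    inputs:
      solution: '**/*.sln'
  - task: VSBuild@1                 # build/package the .sfproj
    inputs:
      solution: '**/*.sfproj'
      msbuildArgs: '/t:Package'
  - task: ServiceFabricUpdateManifests@2   # bump versions in the manifests
  - task: CopyFiles@2               # stage the packaged application
    inputs:
      targetFolder: '$(Build.ArtifactStagingDirectory)'
  - task: PublishBuildArtifacts@1   # publish as the "drop" artifact
```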
B
So once a build is done, the second part of CI/CD is to take the artifacts of the build and actually release them to some sort of endpoint. In order to create a release pipeline, I'm going to create a brand-new one, and once again you'll see that you're presented with the option to select a template. We're going to go ahead and look for the Service Fabric deployment template, which exists, and we're going to apply it.
B
We need to select the artifacts that the deploy stage is going to get all of its artifacts from, and the artifacts that we want it to pick up are from the build that we just configured. So we're going to go ahead and select the brand-new build that we just made, hit edit, and now this deploy stage is configured to pick up those artifacts for its deployment tasks. To configure the deployment, you need to specify the cluster connection. Now, this is the cluster...
B
The
service
or
cluster
in
Azure
or
in
any
on-premise,
is
setup
that
you
want
to
connect
to
so
the
credentials
that
you
need
to
provide
a
connection
name,
which
is
generic.
The
cluster
endpoint
of
your
cluster,
the
thumbprint
of
the
certificate
that
secures
the
cluster.
So
this
would
be
the
cluster
certificate
that
actually
secures
the
cluster
and
locks
it
down
and
then
the
base64
encoding
of
that
certificate.
B
Now, we've actually gone ahead and created this already, so I'm just going to select the one that we already have, and you'll notice that, similar to the build, there's an agent job that runs, once again on the Hosted VS2017 agent. This agent is once again configured with all the dependencies needed to deploy a Service Fabric application. In the second part of Voss's demo, when he hit F5, once it built his VS2017 project and solution, it actually went ahead and deployed the application to the local cluster for him automatically.
B
So let's go check out what the deployment properties are. In here, you'll see that there's a publish profile. The publish profile is contained in the artifacts folder, and it has all of the settings for your publish. You can also override and specify application parameters.
B
Now, the thing of note is upgrade settings. In upgrade settings, you can specify what happens when the deployment being made is an upgrade. To show this, I'm actually going to override all the publish profile upgrade settings. If you didn't do this, the upgrade settings would come from the publish profile, but because I want to show them exactly and configure them myself, I'm going to go ahead and override them. You will see that there are...
B
...three different upgrade modes: Monitored, UnmonitoredAuto, and UnmonitoredManual. Monitored means the Service Fabric platform will actually monitor the upgrade, and it'll roll it back if there are any issues. UnmonitoredAuto means the Service Fabric platform will not monitor your upgrade and will just push the upgrade through. UnmonitoredManual means that at each upgrade domain...
B
...when an upgrade domain finishes unmonitored, you have to go ahead and click to continue the upgrade. For the upgrade failure action, there are two types, Rollback and Manual, and they only apply if you have the Monitored upgrade mode. What Rollback does is: if there's a monitored upgrade and it fails at a particular upgrade domain, the Service Fabric platform will automatically roll back that upgrade for you and make sure your application stays up.
B
The very final thing I want to do in this release is enable continuous deployment: I want this release to be kicked off every time my build completes. So I'm going to go ahead to my artifacts and enable the continuous deployment trigger, and with that I actually have a complete build and release pipeline. I have an Azure pipeline that now takes any new changes that are pushed to my branch, builds them on an agent pool that's already pre-configured, and then releases to the Service Fabric cluster. Nice.
A
The things that Sadhana was talking about, the upgrade settings, whether it's monitored, the rollback settings, all of that is basically just running the same PowerShell commands or the same CLI commands that you'd use in your own deployment environments, and that Visual Studio also uses. So it all kind of funnels down to the same set of commands being run; they're just different ways of doing it. Yeah.
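For reference, the underlying PowerShell that all of these front ends funnel down to looks roughly like this; the cluster address, package path, application name, and version here are placeholders:

```powershell
# Connect to the cluster using the cluster certificate.
Connect-ServiceFabricCluster -ConnectionEndpoint 'mycluster.westus.cloudapp.azure.com:19000' `
    -X509Credential -ServerCertThumbprint $thumbprint `
    -FindType FindByThumbprint -FindValue $thumbprint `
    -StoreLocation CurrentUser -StoreName My

# Copy the application package to the image store and register the type/version.
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath '.\pkg\Release' `
    -ImageStoreConnectionString 'fabric:ImageStore' `
    -ApplicationPackagePathInImageStore 'Voting'
Register-ServiceFabricApplicationType -ApplicationPathInImageStore 'Voting'

# Start a monitored rolling upgrade that rolls back automatically on failure.
Start-ServiceFabricApplicationUpgrade -ApplicationName 'fabric:/Voting' `
    -ApplicationTypeVersion '1.0.1' -Monitored -FailureAction Rollback
```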
B
Back on the screen, you'll notice that in the Service Fabric cluster that's running on Azure, the service has been deployed. This was the build that I configured initially, and you'll notice that it's identical to the one that was deployed on Voss's local machine. If I refresh, you'll see that there are two services: the voting web, which contains five instances, and voting data. So it's the same one that was deployed, now running in Azure, and if I hit the endpoint in Azure...
B
Why don't you go ahead and approve this? You'll notice that when I complete and merge this PR into master, the build that I've created will trigger automatically, and it'll do an entire Service Fabric build and release to the cluster while upgrading my existing application. So let's go see if the build has triggered. Now I'm going to hand it back to Voss, who's going to walk through the extended voting application code and explain how it's different and what improvements he made.
A
Let me jump back in, and I'll show you what it is. All right, so we made a few changes to this. Now, the original voting app that we showed, admittedly, is a little boring. It's not the most interesting thing in the world, but it gives you kind of a baseline and somewhere to start. Now, the main problem with that initial voting app: the back-end service was a stateful service, and it was partitioned, but the voting app only represents a single poll. So I can put up...
A
...one poll, and I can put up as many candidates as I want and vote on them. By partitioning that thing out, I can scale that back-end service out and put up a lot of candidates in that one poll, which is fine. The problem is that the partition count on that back-end service, which you set up initially, is fixed. Once you pick a partition count, you're basically stuck with it forever.
A
The way we get around that: there's an interesting trick that you can do in Service Fabric, and it's actually a fairly common architectural pattern that really takes advantage of what you might consider the microservices pattern. Instead of just doing a single service, we've extended this voting application with just a couple of simple modifications, so that you can now actually create multiple polls, and there's a little trick to how we did it. So here's the updated version.
A
Let me run this again real quick just to get it deployed, and I'll show you what we've done. The first thing I want to point out is up in the application manifest here. If you've used Service Fabric, you've probably seen that in your application manifest you come down here, you import your services, and then you have this thing called default services. You've seen this; you've probably configured some of the settings for how you want your service to run and put it down as a default service.
A
Now, default services is kind of a shortcut. It's not really the way that you would typically create service instances in an application. When you spin up an application in Service Fabric, initially it's normally empty, and you then go and programmatically create service instances inside that application.
A
Instead, what you'd do is have your build-and-deploy pipeline, your CI/CD environment, run an extra command that says: go create my service instances. The reason you do that is that you can then go later and update those service instances kind of on the fly using an update command, and you can also do this really cool trick where you create a service instance programmatically from another service. That's what we've done here with this voting application, so let me show you how that works.
A
If you look down here under service types, this is that service type concept that I mentioned at the very beginning, which is pretty important: it's where you register with the cluster a set of binaries and configs that make up your service. We call it a service package, and the service package has everything needed to run the service instance. And so you can see down here...
A
...the web one came from the default service that we had specified here, and you'll see that I actually don't have a back-end stateful service running at all right now. What I did is make a little change to the service, and now, if you go to the website again, you can put in, for example, the name of a poll that you want to create. What that's going to do is hit this home controller and execute some code here.
A
You can see this is my index action now, and it takes in the name of the poll. What we're doing here is actually using the Service Fabric client to go and instantiate a new service instance. That's the CreateServiceAsync call, and we're saying: create one of this type, with this partition scheme and this number of replicas, and so on and so forth. You can parameterize this; basically, one is created every time there's a request for a new poll.
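The programmatic creation described here goes through FabricClient. A sketch follows; the application name, type name, and sizing values are assumptions matching the demo's naming:

```csharp
using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading.Tasks;

public static class PollFactory
{
    public static async Task CreatePollAsync(string pollName)
    {
        var fabricClient = new FabricClient();

        var description = new StatefulServiceDescription
        {
            ApplicationName = new Uri("fabric:/Voting"),
            ServiceTypeName = "VotingDataType",
            // Embedding the poll name in the service name gives each poll
            // its own tracked service instance: fabric:/Voting/<pollName>.
            ServiceName = new Uri("fabric:/Voting/" + pollName),
            HasPersistedState = true,
            // Sizing can be parameterized per poll, e.g. more partitions
            // for polls expected to hold more data.
            TargetReplicaSetSize = 3,
            MinReplicaSetSize = 3,
            PartitionSchemeDescription =
                new UniformInt64RangePartitionSchemeDescription(1, 0, 0)
        };

        await fabricClient.ServiceManager.CreateServiceAsync(description);
    }
}
```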
A
So what I did is I constructed the name of this service and put the name of the poll right there into the service name. Then, when I go to that poll here, I can just put in, say, for example, cats and dogs, whatever I want, and this will hit the reliable collection in the back end and create that poll. So then, if I want to start another poll, for example to vote on cars...
A
...now it's just created another poll for me, and what this does is create an entirely new service instance to represent that poll. So what I've effectively done is create a way where I can create as many polls as I want, and I'm letting Service Fabric actually do the job of tracking the services that are running those polls. And the cool thing about that is...
A
...I don't then have to go and write my own UI or management tooling or anything to actually keep track of all of those services I'm creating, because they're all just listed right here; they're just more services. So any time I want to end a poll, I can just come in here and delete it. If I want to keep track of a poll, I can come in here and look at its status. And even better than that...
A
...every poll I create can be given different parameters, as you saw in this code: every poll that you create can actually be parameterized. So if I have a poll that's going to have a lot of data, that's going to hold a ton of votes or a ton of different candidates, I can actually create it with more partitions. So overall, this voting application now scales out per poll. I'm not really limited by a fixed number of partitions; I can always create more service...
A
...instances on the fly, and so effectively your scale-out is pretty much unlimited in terms of the capacity that you set up up front. It's just a matter of whether you have enough hardware to manage all the data and all the traffic that you want coming in, but the architecture allows you to just keep scaling out and out and out as much as you need.
A
So this is a fairly common pattern that we see with Service Fabric, because of its unique ability to create these service instances on the fly. Now, the reason that we have these service types in here is that the services I'm showing you right now are actually just running as EXEs; they're not running inside of containers. You could do the same thing inside of a container as well.
A
I wouldn't change a whole lot. But because Service Fabric has the ability to run services that are not inside of containers, you have this problem of packaging up all those binaries, and you have to do something with them: you have to put those binaries somewhere so they can be provisioned to the cluster. So there's a stage when you're deploying in the CI/CD environment that Sadhana showed you in DevOps...
A
...that does this for you automatically, along with Visual Studio. There's a provisioning stage which copies all those bits up onto the cluster, tells the cluster about this type and about the version of the type, and then, once that's up there, you can just go and instantiate these things as much as you want. So it's kind of a thing that's very unique to Service Fabric. The only other change we had to make was just here on the front end: we had to make a little change to the JavaScript.
A
You can see we've just added the name of the poll into this JavaScript, so when you're clicking vote or something like that, it goes to the right poll. That's the one thing we added, and then we just changed the routing a little bit down here in the ASP.NET Core MVC route. You can see we just changed this up a little bit, but that's really all it took. The interesting thing is that the back-end service, the data service, actually didn't change at all.
A
We didn't change any code whatsoever in the data service. It stayed exactly the same, because we're just creating more and more instances of it. So the code that we have to write for the data service is actually really simple, because there's no need to keep track of multiple polls in there. The service only has to keep track of one poll, so the code you write just keeps track of one poll, and you just create multiple instances.
B
Yeah, so what we're going to show is that in Azure Pipelines, the build that we kick-started automatically is complete. You can see on Azure Pipelines, under Builds, that the build that was triggered completed and moved all of the Service Fabric bits into the artifact called drop, which was then picked up by the release that we configured, and that release is still running. So let's go ahead and check out the logs of this release.
B
So when upgrades like this fail, or any time you do any sort of cluster-level operations, one of the other things that you care about is your logs, and we've actually configured this cluster and these applications with Application Insights. So if you go ahead and check out the Application Insights overview, within Application Insights we're going to show two things.
B
We're going to show how you can get metrics, so perf counters on the cluster: the performance, the memory, the CPU usage. And we're also going to show how you can query the analytics store for things like traces, specifically around our upgrades, so you can recognize and figure out where in the upgrade something went wrong and what's happening at the cluster level. So if I click on Metrics, you'll see that there are two metric namespaces; the standard Application Insights metrics are the ones that are automatically configured by Application Insights...
B
So if you are interested in checking out what went wrong with that upgrade, you can use the Application Insights overview once again. This time we're going to go to the Analytics tab, and here you'll see that what we're really doing is querying a data store that is already provisioned for you in App Insights, called traces, which has a huge dump of all the traces generated by the Service Fabric platform. This can be pretty verbose and hard to read.
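A typical Analytics (Kusto) query for narrowing that trace dump down to recent upgrade-related platform traces might look like this; the filter string is an assumption, so adjust it to the traces you actually see:

```kusto
traces
| where timestamp > ago(1h)
| where message contains "upgrade"
| project timestamp, message, severityLevel
| order by timestamp desc
```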
A
Usually what will happen is we'll kind of retry: we'll restart the process if it crashes, which can happen sometimes, or run it on a different node. So what you see is that between each upgrade domain, as the upgrade rolls through, there's a period of time where Service Fabric waits to see if the health either stabilizes or stays in error; if it stabilizes, it'll continue on to the next upgrade domain. So that's what we saw here; now we're going on to upgrade domain 3.
A
Here, let me jump back in real quick. Excellent. All right, once that comes back up, let me show you my template. Cool, here we go. So this is actually fairly simple to set up, and I'll show you what I've done. Here I've got the ARM template open in Visual Studio Code. If you come down in the ARM template to the virtual machine scale set, this is the underlying infrastructure, the VMSS, that the cluster actually runs on.
A
So this provides the VMs, and the Service Fabric software that stitches the cluster together is set up as a VMSS extension; that's what you're seeing here. Once you get down into this extension, you'll see the Service Fabric extension and a few settings, etc. Once we get down here to diagnostics, however, you'll see that I have Azure Diagnostics set up right here, and inside this WAD config we've got an Application Insights sink. The sink is basically telling us...
A
Where do you want those traces to end up once they get written out from the Service Fabric cluster? So if we scroll down a bit further here, you can see the configuration for the sink. Now, what I did here is a little different from putting in an Application Insights key directly. One way you can do this is to set up Application Insights ahead of time, go grab the key, which is, you know, a GUID or some string like that, and then paste that into here.
A
The way I set it up here, though, is I actually have the Application Insights resource as a part of the same ARM template. So when I deploy this ARM template, not only does it create the Service Fabric cluster, it also sets up App Insights simultaneously. And then, from that resource, you can see here I'm referencing that resource ID for insights, and I'm just grabbing the instrumentation key out of there.
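A sketch of what that looks like in the template. The sink name, resource name, and API version here are illustrative placeholders, but the SinksConfig shape and the reference() expression follow the pattern the Azure Diagnostics (WAD) extension expects:

```json
"WadCfg": {
  "SinksConfig": {
    "Sink": [
      {
        "name": "applicationInsights",
        "ApplicationInsights": "[reference(resourceId('Microsoft.Insights/components', 'myAppInsights'), '2015-05-01').InstrumentationKey]"
      }
    ]
  }
}
```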
A
So I never have to go and grab the instrumentation key myself, put it in here, and worry about encryption or anything like that; it all just happens automatically for me. Once you have that set up, you're pretty much most of the way there; then it's just a matter of setting up your ETW event sources. These event sources are the ones that we have set up for the Service Fabric infrastructure, the system so to speak, and then the application platform part of it.
A
So if you're using Reliable Services, you can set up that ETW event source here, and that'll give you things like RunAsync has started or RunAsync is canceled, all those kinds of traces. Now, the application I wrote actually doesn't have any log output whatsoever, which is terrible. But despite the fact that I didn't do a good job of putting in any sort of logging, we still got pretty rich traces coming out into Application Insights, which gives us at least some indication of what was happening during that rolling upgrade.
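In the same WadCfg, the ETW event sources look roughly like this. The provider names below are the well-known Service Fabric Reliable Services and Reliable Actors event sources; the table names and transfer periods are illustrative:

```json
"EtwProviders": {
  "EtwEventSourceProviderConfiguration": [
    {
      "provider": "Microsoft-ServiceFabric-Services",
      "scheduledTransferPeriod": "PT5M",
      "DefaultEvents": { "eventDestination": "ServiceFabricReliableServiceEventTable" }
    },
    {
      "provider": "Microsoft-ServiceFabric-Actors",
      "scheduledTransferPeriod": "PT5M",
      "DefaultEvents": { "eventDestination": "ServiceFabricReliableActorEventTable" }
    }
  ]
}
```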
A
We did, so I can at least see how it's going through upgrade domains, how long each upgrade domain is taking, and any errors that happened to come up. Now, the errors that you'll see in App Insights in this case will be only the errors that Service Fabric can detect, and that's fairly limited to things like the process crashed or there was an unhandled exception, not anything that's happening within your code.
A
Just, yeah, I'm just saying, man, come on, that's rough. But luckily we had App Insights set up, so we can at least see what's happening, yeah, and the performance counters as well. So these are the performance counters that I put in, and this is just kind of copied straight out of perfmon locally, or whatever you want.
A
These are the custom ones. As in, alongside the bunch of perf counters that were just kind of added automatically by App Insights, these are custom ones I put in to monitor disk activity, process, and memory. Now, here's an interesting thing about the way I set up the process one here: I wanted to see the processor time for just one specific process on the machine.
A
I don't want to see processor time for the entire machine, because that doesn't really help me much. So what you have to do to get this set up, and this gets a little bit tricky, is put the name of the process into the parentheses here, which does mean that you have to know the name of your host process ahead of time. So if you go back into Visual Studio here, you can check that out, if you go and, say, look at the properties I have.
A
Actually, yeah, if you want perf counters for both services, which have different process names, you just put in two of these and put in the different process name for each. The one thing that is not supported here, unfortunately, is that I can't just do something like this: I can't just do voting star. Now, that's something you normally can do outside of Application Insights; unfortunately, it's not supported here in Application Insights, so just be aware of this one.
A
If you try to use this asterisk to say, I want any process name that starts with voting, this will not work; you won't get any performance counters coming into App Insights if you do it this way. So you have to actually put in the full process name, like so, and then you'll get those perf counters coming into App Insights. So that's kind of the basics of it.
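Putting that together, the performance counter section of the WadCfg ends up looking something like this. The process name VotingData is a placeholder for your actual host executable name, and note the full-name requirement: a wildcard specifier such as \Process(Voting*)\% Processor Time will silently produce no data in App Insights:

```json
"PerformanceCounters": {
  "scheduledTransferPeriod": "PT1M",
  "PerformanceCounterConfiguration": [
    {
      "counterSpecifier": "\\Process(VotingData)\\% Processor Time",
      "sampleRate": "PT15S"
    },
    {
      "counterSpecifier": "\\Memory\\Available MBytes",
      "sampleRate": "PT15S"
    }
  ]
}
```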
A
That'll keep going now, keep going, and at some point there we can see those logs, exactly. Cool, alright, so we'll come back to that in a bit. Now I want to show you some of the work we're doing to make .NET Core applications on Service Fabric a little bit easier. There were a lot of concepts that we talked about: service types, we looked at an application manifest, we looked at these service manifest imports.
A
It's kind of a lot to take in, and it's very heavily tied into Service Fabric, even the code itself. If we come back and look at that again, and that's just the start of the code here, even right at the entry point you're already tied into Service Fabric by having the service runtime, and then you're registering the service. So your code is very, very heavily tied into the platform.
A
So it's not very portable: I couldn't take this exact same ASP.NET Core application and go run it outside of Service Fabric. So some of the work we're doing is to make that a lot simpler and to make applications for Service Fabric much more portable. So I'm going to switch over to this kind of new style of application. This is the exact same voting data backend stateful service that we just saw, and if I go into the controller, you can see it's the exact same code that we looked at a little bit earlier.
A
It's the exact same thing: I've got a reliable dictionary, I've got the state manager, I've got transactions, everything's in here. So I actually didn't have to change any code: I could pick up all my ASP.NET Core code that I had in my old services, just drop it down into here, and not have to actually change anything except maybe a namespace or two. The big difference, though, is, number one, the project type: this is just an ASP.NET Core project. It's not even a Service Fabric project in this,
A
in this case, in this example. And if I go into the program, there is no service registration, there's no type registration; there's actually no Service Fabric code in here whatsoever. This is actually a straight-up vanilla File, New Project ASP.NET Core on .NET Core; I didn't do anything else. The only thing I did was, in here, I added use reliable collections. So this is the only thing I now have to add, and you can see there is nothing else in here.
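For contrast, here's a minimal sketch of what that Program.cs looks like in the new style. UseReliableCollections stands in for the preview extension method from the reliable collections NuGet package shown in the demo; the exact package and method names were still in preview and subject to change:

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // A plain ASP.NET Core host: no ServiceRuntime, no service type
        // registration, no Service Fabric entry-point code at all.
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            // The one Service Fabric-specific line (preview API): wires up
            // the reliable collections state provider for this host.
            .UseReliableCollections()
            .Build()
            .Run();
    }
}
```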
A
When you're doing this new kind of Service Fabric application, because it's really not even a Service Fabric application anymore at this point, it is really just an ASP.NET Core application, I've just added a NuGet package that has reliable collections in it, and then I've said use reliable collections, and I've got it available to me in exactly the same way as I did before. So I can remove a bunch of that code
A
that ties me down to the platform, and I can still run it on Service Fabric. Now, you're probably thinking to yourself: but you still have reliable collections in there, so you're still tied to the platform, right? Fair enough. So let me do this: I'm gonna take my local cluster here and I'm gonna stop it, so we're not going to use this anymore, just to show you how this works. Okay, so that's gone. Now I'm gonna take this thing here and I'm just gonna run it as is; I'm just gonna go F5 and I'm
A
just gonna run this ASP.NET Core application. So this is not being deployed to a Service Fabric cluster; this is just running on, probably, IIS Express. Alright, you can see my local cluster is stopped, it's not running, but the application that I just ran, the ASP.NET Core application, will actually work, and I think this is just running on IIS Express locally. So wait for that to come up... here we go. So I just ran that, and I've already, well, I ran this once before
A
just to test it out, and I've already got a value in there. So this is actually hitting that reliable collection completely on its own, without running in the runtime. And just to show you, we'll hit a breakpoint in here: I put a breakpoint on this Put method, where I grab that reliable collection out. So what I'll do is I'll just open up my command prompt here and I'll...
A
So now when I say this is truly portable, it actually is. These reliable collections that are running here are still transactional, and they're still storing state down onto disk; but because it's not running on Service Fabric, it's not being replicated, so you only have one copy of it. So you can take this application and run it elsewhere, run it anywhere you want; you just don't get the benefit of the built-in replication from Service Fabric. That replication layer happens down below the application, in the runtime.
A
So you still need the cluster running in order to get replication, but I don't necessarily need it just to run the thing. So if I'm, for example, developing this application and I want to hand it to a colleague to do some of the front-end stuff, because I hate doing front-end JavaScript but he loves that stuff, I can just hand it to him, and he doesn't have to worry about going and installing the Service Fabric SDK or running a local cluster. He can just run it normally.
A
He can run it on his Mac, because it's .NET Core, and he can iterate on it without having to install the runtime or anything like that. So it makes it very easy for other developers to just grab it, treat it as a regular, plain web application, do development on it, and then later on we can deploy it to a Service Fabric cluster and have it run. Yeah, and that pattern that we saw earlier of creating services dynamically: since I don't have that service type registered,
A
how do I then do that? Well, you can actually still do that, because there's still a piece here that defines how this application would run on Service Fabric, if you were to deploy it to Service Fabric. And this is kind of a new style of describing applications: it's different from the application manifest XML and different from the service manifest XML; it's a very, very simplified version of those things. We still have a concept of applications and a set of services within that application.
A
So this right here, when I say create services async and I give it a bunch of parameters: these parameters are actually what I'm now describing declaratively here in this YAML file. So I'm saying: here's a container image I want to run that has everything in it; this is an endpoint I want exposed.
A
These are environment variables I can set up; the resource constraints for CPU and memory; here I define my reliable collections, etc., etc.; and I can define a network that I want it to run in. So I'm doing a completely declarative model here, without having to register that type. And the reason this works is that this newer model is all kind of based around containers, because we're defining everything in a container.
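The resource file itself looks roughly like this. This sketch is modeled on the Service Fabric Mesh preview YAML schema; the image name, ports, and schema version are illustrative placeholders:

```yaml
application:
  schemaVersion: 1.0.0-preview1
  name: VotingApp
  properties:
    services:
      - name: VotingData
        properties:
          osType: Windows
          codePackages:
            - name: VotingData.Code
              image: myregistry.azurecr.io/votingdata:1.0   # container image with everything in it
              endpoints:
                - name: VotingDataListener                  # endpoint to expose
                  port: 8080
              environmentVariables:
                - name: ASPNETCORE_ENVIRONMENT
                  value: Production
              resources:
                requests:                                   # CPU/memory constraints
                  cpu: 0.5
                  memoryInGB: 1
          replicaCount: 1
          networkRefs:
            - name: VotingAppNetwork                        # network to run in
```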
A
You can use something like Azure Container Registry or Docker Hub to solve that problem, and everything is contained in that container (I feel like I'm saying container a lot); it's all contained in there. So when you go and deploy it, there's actually nothing to upload to the cluster anymore, and that's why we can do this kind of new style, where there's nothing to provision to the cluster.
A
So it's a similar style to how you would do containers on Service Fabric today, but it's drastically simplified, and you can get all the benefits of using stateful applications and reliable collections; you get replication and everything without having to actually tie yourself down to the platform. So that's kind of a neat thing here. You can see I have my Dockerfile here to create the container and then go and upload it.
A
Absolutely, and Visual Studio actually has tooling for this, which unfortunately I don't have installed at the moment, but effectively you get the same style of project that looks very similar to this, where you can set up an application and put in as many services as you want, and it will do the scaffolding, the container building, and the container deployment and registration, all that stuff, for you. Then you can deploy it out to your local environment or out into the Azure environment. Now, why did we go and do this?
A
So let me switch back here and show you a little bit more about some of the cool stuff that's coming up. Let me get this going... here we go... and we'll get down to here. Okay, so here's kind of the difference in what we looked at. There are effectively three ways that you can write applications at this point. Docker Compose is sort of just a way to support existing applications.
A
If you're using Docker Compose today, you can always deploy those to Service Fabric, but the two that I want to focus on are the ones below. The application and service manifest style is what we showed you initially with the voting application: that was the style where we were defining types, you're using the Reliable Services framework, and you're deriving from base classes, and this gives you a lot of low-level control over the Service Fabric runtime and the platform.
A
This environment is a multi-tenant environment; everything runs in containers, and because it's a multi-tenant environment, it's a little more restrictive than if you were to have your own dedicated cluster. So what we did is we designed this new kind of resource model around that environment and set it up so that it's container-based and universal. Once you write an application using these resource files, you can deploy it anywhere Service Fabric runs.
A
Any one of these environments will be able to run those applications, so they're extremely portable, and you can even run them outside of these environments. And the Service Fabric Mesh environment, this is of course only available in Azure, that's an Azure exclusive, and, as I said, this is the fully managed, serverless one. In this case there's no cluster management to do. You saw the Service Fabric Explorer that we were kind of showing, so
A
you're just writing your application and deploying those containerized applications up into that environment, described by those very, very simple YAML or JSON files. And so the goal of that is really to simplify application development and to simplify the operational side of managing those applications; even the logs that we saw would be drastically simplified. Yeah.
B
A
Yes, yeah, alright. So I think with that we'll probably wrap it up here. If you want to check out this demo, I'll be posting it up to my GitHub account fairly soon; that's up at the top here, so just make a note of that. If you want to start playing around with Service Fabric Mesh, that's also available, it's in preview: just go to our GitHub repo here at Azure/service-fabric-mesh-preview. It's a public preview.
A
You can get started right away; there are some cool demos and samples that you can go run and deploy out into the Mesh environment. Of course, download the Service Fabric SDK if you just want to do Service Fabric development in general, and of course visit us on GitHub: Service Fabric is open source, and we're continuing that open-source effort, moving all of our development processes and everything out onto GitHub. Please come visit us there at Microsoft/service-fabric, open up issues, and play around with the code. Awesome.
C
We just wanted to say thank you so much for watching the first two days here in the Channel 9 studios. I wanted to give you a little bit of stats. The longest watched: Bermuda comes in at six hundred and twenty-one minutes; you guys are watching this show consistently. At number two, of all places, was Barbados at two hundred and four minutes; I mean, sheesh, I would think we'd be on the beach or something. Anyways, most views came from the United States, then the UK, Canada, and India. Thank you guys.
C
We love you. For our local events, we had 40 watch parties yesterday watching the keynote live; that was amazing. Check out the dotNETConf hashtag; we have a ton of pictures of people having a great time yesterday. But go ahead and go to the dotnetconf local events page: there are a hundred and fifty-one total events, and they're all running through October 31st. You can attend a live event and learn more about .NET. And we've got more coming up on Twitch; we've got a whole day 3 going, right? Oh.
D
Yeah, so before we get into day 3, I just want to give a great shout-out to our crew here at Channel 9; it's been awesome to work with everyone. Michael O'Neill has graciously been super helpful; Matthew Pugh on the Mobe, who is the guy behind the camera right now; we've got Caitlyn doing all of our backend stuff and helping us, and people like Christiana, big help. Thank you all. But the party isn't over yet; it's not over. You still have...