Description
Setting up Azure DevOps Pipelines for SharePoint Framework solutions demo taken from the SharePoint dev Special Interest Group recording from the 27th of December 2018.
Presenter - Velin Georgiev (Pramerica)
More details on the SharePoint dev community calls at http://aka.ms/sppnp.
Guys, my name is Velin; I've been a SharePoint developer for a long time. If you hear a big noise, these are my kids singing happy birthday to Vesa downstairs, so don't freak out. Today we're going to talk about automated pipelines, and the demo that I'll try to show is built from scratch.
Roger is trying to take control; I'll just reject that again. So what we will do is: we have a SharePoint Framework solution, our source code is in a GitHub repository, and we'll link that GitHub repository with Azure Pipelines. Then we will bundle and package the solution in an automated way whenever you push a new piece of code into the repository. Once we've built the new package, we will use a simple script using the PnP Office 365 CLI to deploy it to our tenant, to deploy the changes we've made to the solution.
I already have my tenant prepared, and this is the tenant app catalog. My package is already deployed there, and the web part within that package is already installed on a site within that tenant; it shows version 12 at the end, and we'll have version 13 by the end, hopefully, if we can set everything up within the time frame. So let's assume that I just manually published that version here for now.
I want to create an automated pipeline where everything is fully automated end-to-end, so I don't have to manually drag and drop here again. So here is my GitHub repository; I had to upload my solution there in advance to save some time. It's called pipe one, and it has nothing to do with Azure Pipelines yet. It's not automated, it's just a bunch of code in a GitHub repository, and we'll start from here.
So the first thing to do is to install Azure Pipelines, and I'll go to the marketplace to show you where that's located.
So if you go to the marketplace and you go to the deployment apps, you'll find Azure Pipelines there; you can click on it and simply install it. If you haven't done that before, you get an install button here, but because I've already installed the free tier, I have different buttons here. It's the free tier, and it's good enough for all your experiments, so you can experiment right after the demo.
So that repository is not part of the list here yet, and you can see all my previous tests, but I'll just select that one and add it there. Once I save that, it will redirect me to Azure DevOps, and with Azure DevOps you need an account, so you have to create an account with Azure DevOps and log in with it. Then you go through a nice wizard that will link GitHub with Azure DevOps.
So I have to select my organization here, and this is the correct one, and I'll create a new project there and call it pipe one, exactly as my repo is named in GitHub. Then I'm navigated to a narrow window where, as you can see, it's more of a wizard experience: you're clicking and navigating, and that thing will open Azure DevOps with my new project. So now I'm in Azure DevOps, and this is my new project here, and I have to select my GitHub repo, and this is the correct one.
We've already linked GitHub here, so then it goes into my GitHub repo, identifies what kind of code I have there, and tries to recommend a configuration. As you can see, it's smart enough to figure out that this is Node.js, or at least has something to do with Node.js, and I can use that or I can select from different templates here. For me, Node.js is good enough.
I'll just pick it, and it will generate a YAML file. The YAML file is my build. So we'll have two pipelines: the first pipeline will build our package, our .sppkg package, and the second pipeline will release that package to the tenant. What I'm doing at the moment is using that YAML file to create my first pipeline, which will be the build pipeline, and, as you can see, there is YAML markup here and we can easily recognize the trigger: master.
That means that whenever I push new code to the master branch, it will trigger that build and create a new build for me. Then the pool is what type of virtual machine I'll be using to build the package; this will be Ubuntu in my case, but you can use Mac or Windows here. I'll stick to the defaults that are recommended. The next step, once we get the virtual machine to do the compute for us, is that it will install Node.
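The pieces described so far (the trigger, the pool, and the Node install) look roughly like this in the generated azure-pipelines.yml; the exact task version, Node version spec, and VM image name here are assumptions, not the presenter's exact file:

```yaml
trigger:
- master                       # any push to master starts the build

pool:
  vmImage: 'ubuntu-latest'     # hosted Ubuntu build VM (Mac or Windows images also work)

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '8.x'         # the SPFx toolchain of that era targeted Node 8
  displayName: 'Install Node.js'
```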
Yes, so that was "Install Node.js 8", and after that it runs a script that says npm install. npm install is fine for us, but we don't need npm build, because SharePoint Framework uses gulp. So after that there is an extra step here, and I have to say there is another script, because I don't need the build here.
Instead, I can say test, because I have unit tests, and the Jest testing framework will run them; if they fail, they'll fail the build. So I can replace it with test here. But after that I have to do gulp clean, gulp bundle --ship, and gulp package-solution --ship, and this will create the .sppkg file there. So that's my next step, but I'm not done yet. I have to do two extra steps, one after another, to complete the whole flow here; they are tasks, and this is all documented within the documentation.
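Sketched as YAML steps, the script section just described would look something like the following; this assumes gulp is resolvable on the agent's PATH (for example installed globally, or invoked via npx instead):

```yaml
- script: npm install
  displayName: 'npm install'

- script: npm test
  displayName: 'Run Jest unit tests'   # a failing test fails the build

- script: |
    gulp clean
    gulp bundle --ship
    gulp package-solution --ship
  displayName: 'Bundle and package'    # emits sharepoint/solution/*.sppkg
```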
So you can go to that URL there, and you can see how to build those steps, the markup, and what each item means. But basically the last step for me is this: the virtual machine that was spun up for me will build the package, but that package is on the virtual machine, and once the build is done, the virtual machine will be closed or deleted or whatever, and my package will be gone.
So there should be a way for me to store it in Azure DevOps, and this is the last step here. What I'm saying is: get it from the virtual machine and store it in Azure DevOps. It will be stored in a folder called sppkg, and I can just store it there and use it with my release pipeline, or whatever I want to release out of that build.
But I need another extra step here, and now I'll just paste it here, and I'll have another artifact out of that. So this will be my package here, and this will be my release script; basically, I'll create two folders stored in Azure DevOps, one to hold the .sppkg and another to hold the release script. I'll show you the release script in a moment, but it is basically a bash file that will connect to the tenant whenever I start the release pipeline; it will grab the package from Azure DevOps and upload it to the tenant.
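The two storage steps could be sketched with the stock copy and publish tasks; the folder names, globs, and artifact aliases here are assumptions, not the presenter's exact values:

```yaml
# copy the generated package out of the build folder into the staging area
- task: CopyFiles@2
  inputs:
    Contents: 'sharepoint/solution/*.sppkg'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

# publish it as a named artifact that outlives the build VM
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'sppkg'

# publish the release script as a second artifact
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: 'release'      # hypothetical folder holding the bash script
    ArtifactName: 'script'
```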
But I'll put it here for now, and this is my build. As you can see, these are all the steps, and I'll just save and run. Okay, that will save the YAML file directly into source control, so it can be used everywhere.
It immediately goes back to GitHub, saves the YAML file, and starts the process. So it will try to find an available Ubuntu agent and run all the tasks from the YAML file. It's initializing the job, then it's installing Node.js, and then it's doing npm install, and that might take a while. While this is running, I'll just show you the deployment script.
So, as you can see in the source code, if I refresh, you'll see a YAML file here, and this is exactly what we've built previously; there is the YAML file. Now I have an extra piece of script, which is a bash script, and it's using the CLI. We will run it when we have the build and when we have all the artifacts built, and this is how it looks, but I'll use that script in a moment.
It's just for you while we're waiting for npm install. It accepts three parameters. Once we get those three parameters, it will install the Office 365 CLI and output the version. Then it will connect to my tenant using the credentials. Then it will add the package to the tenant app catalog. Then it will deploy it with skip feature deployment; that means the new package will be available to all the sites within the tenant.
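A sketch of such a deployment script is below. It cannot run without a real tenant and credentials, and the command names follow the current CLI for Microsoft 365 (the 2018-era Office 365 CLI used an o365 prefix and slightly different command names); the package file name, paths, and parameter order are all assumptions:

```bash
#!/usr/bin/env bash
set -e

# the three parameters passed in by the release pipeline (order is an assumption)
username=$1
password=$2
appcatalogurl=$3

# install the CLI on the release agent and print its version
npm install -g @pnp/cli-microsoft365
m365 version

# connect to the tenant with the supplied credentials
m365 login --authType password --userName "$username" --password "$password"

# add the package to the tenant app catalog, overwriting the previous version
m365 spo app add --filePath ./sppkg/pipe-one.sppkg \
  --appCatalogUrl "$appcatalogurl" --overwrite

# deploy with skipFeatureDeployment so all sites pick up the new version
m365 spo app deploy --name pipe-one.sppkg \
  --appCatalogUrl "$appcatalogurl" --skipFeatureDeployment
```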
It's a very simple script, and we'll use it when we build the release pipeline. So let's see what is happening here: it's almost done with npm install. Then it will run the tests, because I have tests within my SPFx solution. It will spin Jest up after this is done and execute the tests, so the test execution will start here.
It's always good to have unit tests when you have an automated pipeline, because in case of a test failure you can just fail the build and ask whoever is responsible to fix the solution. So npm install is done, test is done, and that was very quick, but I can show you that all the test suites were run here: I had 20 tests and they all passed, just for your information. Then bundle and package are done, and then it's creating the artifacts.
So after it's done with the artifacts, we'll get a nice blue button here that says the artifacts are ready, and these are basically two folders with files. The first one is the script that I'll be using in the release pipeline, and the second one is the package itself. They're saved in Azure DevOps, and I'll use them in the next step. So this is the build pipeline; it's done, and it was successful.
So this is the place where you select your artifacts, and I know that my artifacts are stored in Azure DevOps, so I'll just find those artifacts here. As you can see, you can pick up artifacts from different providers here, even from Jenkins, but mine are stored here, in pipe one. And then, that's important: there's the alias that you set up, and because my PnP Office 365 CLI script uses that alias, I'll just replace it there.
This alias is basically the folder where your package is. Okay, and then once we have the artifacts here, we can click on that button and enable continuous deployment. That means that with every build it will deploy to our dev tenant, if we have a dev tenant or a different environment. I have to do one additional thing after enabling it here as well.
Again, because I'm using the CLI, I can use whatever I want here, but then I'll add one extra step. It will find a release agent for me, which is Ubuntu, and it has to install Node again, because the CLI is using Node. So I'll install Node here, and another extra step, after Node is installed, is to run a bash script. That bash script is part of the artifacts that we've just built, so I can click on that thing here and find my artifacts, as you can see.
The package is here, and the script that I'm going to execute is here, so I just select the script. Then, that script requires three parameters, so I have to set up the three parameters there, and the way to do that is to pass arguments here. You could hardcode them all in here, but obviously I cannot hardcode a password here. So you have a place where you store variables, and I'll take them from there; I'll create those three variables there, so you can see how it works. So I'll create the first variable.
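Secret pipeline variables are not exposed to scripts automatically, which is why they go into the bash task's Arguments field, for example `$(username) $(password) $(appcatalogurl)` (the variable names here are assumptions). On the script side they simply arrive as positional parameters:

```shell
# the release pipeline's Arguments field supplies the three values;
# inside the script they are read as positional parameters
deploy_args() {
  local username=$1 password=$2 appcatalogurl=$3
  echo "Deploying to $appcatalogurl as $username"
}

# example invocation with placeholder values
out=$(deploy_args alice secret https://contoso.sharepoint.com/sites/appcatalog)
echo "$out"
```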
Then I have to pull from the repository, because Azure DevOps modified the repository itself, so I say git pull --rebase, and that should be good, and then I have to push to the repository. Once I push it, that will go to the master branch, and I'll show you what the experience is like: I have three commits here, and the third commit is in progress. As you can see, the previous build was a success.
Now I have another build that's running, and if you click on that dot here you can get more details, or you can go here, where it still says in progress, but you can also go directly to Azure Pipelines. So I can open that, and I'll be navigated to Azure Pipelines here, and you can see my build running at the moment. Again, npm install and all these steps will be repeated, and if I click on my builds, you see that that's my second build, because I made a change.
Something else: you can clone this again, and you can have a dev environment, a production environment, and so on. The same thing with the production environment: you can set it up, for example, so that there's a release manager approver or something, so it's not automated there. It will only be automated when it goes from build to dev, and your dev will always be up to date, but then you have approvers here and here, and that works very well. Let's see what is happening with the build; it's still running. Okay, I have two more minutes.
Good, what's happening? Still running, okay, almost there; I'll make it, I'm pretty sure. So here is my catalog, and I deployed an hour ago. After all these automated pipelines run, this will refresh, and my web part will hopefully show version 13, if I change that, and we'll have the changed data. So let's see how it goes.
While that's doing its thing, let me also clarify the fact that, since we are talking about SharePoint Framework web parts, and that SharePoint Framework web part is using tenant-scoped deployment, whenever you have a new version of the package deployed to the app catalog, it will be automatically updated. So you don't have to go to the site level and upgrade the web part, because the instance is always coming from the app catalog site collection, and that is a big difference between SharePoint add-ins and SharePoint Framework.
The dev release is running, so the build is done and the dev release is running, and it says in progress. We'll just check the dev one, and we won't check the rest, but, as you can see, it spun up an Ubuntu release agent, it installed Node, and now it's running the bash script, and everything seems pretty good so far. At the end it will connect, it will add the new package there, and it's done.
Everything seems good here, and if I go to my tenant and refresh here, you'll see that the time here should be different; it should be a few seconds ago. Let's see: there you go. And also the web part on the site where it's been installed has changed to version 13. There you go: a 20-minute automated pipeline, and everything seems to be working just fine. I did the integration with GitHub, but you can use Azure DevOps alone, where your git repo is in Azure DevOps, if you will. A few more things to notice.
This is not the absolute best practice; for me it was just about setting it up in 20 minutes and showing you how it works, and we can talk in a separate session about what the best practices are, what quality gates are, what a pre-flight build is, and things like that. So this is not the true best practice, but, as you can see, you can set up something automated in 20 minutes, which is great, and it's for free. You can start experimenting even now.