Description
GitHub Actions gives us the power to use our repositories to speed up the delivery of our software and applications, all from one central point of truth. This workshop will provide you with hands-on experiences with GitHub Actions to leverage GitHub package registry and safely deploy applications to the cloud.
GitHub Satellite: A community connected by code
On May 6th, we threw a free virtual event featuring developers working together on the world’s software, announcements from the GitHub team, and inspiring performances by artists who code.
More information: https://githubsatellite.com
Schedule: https://githubsatellite.com/schedule/
Oops. So I just want to talk about the agenda we have today, and then we'll get started. This workshop is going to pick up from where the previous workshop ended, so there are things that the previous workshop covered that I won't be covering here.
All right, sorry about that. We're going to start off by setting some context about what continuous delivery is, and then we will jump in and do the first workshop. That's going to be a workshop where we build a Docker image and store it in the GitHub Package Registry, and we're going to use that image in a subsequent workshop. Then we will go and do the second workshop, which is going to be continuous delivery to AWS.
And lastly, if we have extra time, we can cover doing continuous delivery to Azure and to ECS. That's taking the container image that we created and then deploying it to ECS.
So with continuous delivery, you are automating the key elements of software development: building your code, running your tests, packaging the dependencies, and then creating a release. Maintaining that code takes a higher priority than deploying new features and capabilities to your software. What it means in practice is that we're going to keep the code always deployable, we're going to bring the configuration of the different environments we deploy the code to into version control, and we're going to be able to do automated deployments to any environment if desired.
We've already seen that continuous delivery is not continuous deployment. For continuous deployment, you have to think about things such as feature toggles and A/B testing, and you have to shift certain aspects, such as change management, security, and compliance, left. Continuous delivery, on the other hand, doesn't have to be a fully automated process: we can have gating steps in continuous delivery where code is promoted from the feature branch to production, and it's certainly not restricted to deploying to the cloud.
One of the key things when we talk about the ability to deliver code is the ability to make changes frequently and release frequently, and GitHub flow enables teams working on features to do this in a safe and secure manner.
So the key thing with GitHub flow is: there is a master branch which is kept in a releasable state. When a team wants to develop a new feature, they create a feature branch and make commits to that feature branch. Once commits are checked in, your CI pipelines run, and they make sure that the code meets your code quality standards and that your unit tests and your integration tests pass.
At that point, there is going to be a manual step to trigger the continuous delivery workflows. The continuous delivery workflows deploy the code to a staging environment and run acceptance tests, which could be manual or automated. Once those tests pass, the code is merged to master and deployed to production.
Now, if you're doing continuous deployment, you could just as well deploy to production while you're still on the feature branch, and then use feature toggles to do some A/B testing. Once the tests pass and you're happy that the code is going to work fine in production, you can then merge the code to master.
With that ground setting, let's get started on the actual workshop itself. For the workshop, the prerequisites are: you need a GitHub account, you need the Learning Lab, you need access to Actions, and you need to make sure that you have signed up for access to GitHub Package Registry. We will also be using a personal access token, so we should create that. On the AWS side, you need an AWS account.
And in this workshop, we're going to pick up from where we left off in the continuous integration workshop. We're not going to use the exact same workflow, but we're going to take a continuous integration workflow, modify it to create a Docker image, and store that Docker image in the GitHub Package Registry.
So what I'm going to do is work through the pieces of the workshop, and then I'm going to wait so that we can all catch up.
I've already signed up here, so once you sign up for the Learning Lab, you're going to see this screen. The Learning Lab is going to create a repo in your GitHub account, create the steps for you, and start you at the first step.
The tests themselves are a matrix test, like we saw in the previous workshop. We are using Node.js 10 and 12, and we're going to run on two runners: the ubuntu-latest runner and windows-2016.
There is a bug in this code: it says ubuntu-lastest where it should be ubuntu-latest. It shouldn't really have an impact, though, for another reason: this variable is not used anywhere. It should be used here, in the runner field, but it's not.
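For reference, a matrix like the one described, with the typo fixed and the os variable actually wired into runs-on, might look like this (a sketch, not the exact workshop file; the job name is made up):

```yaml
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-2016]
        node-version: [10.x, 12.x]
    # Use the matrix variable here; the workshop file hard-coded the runner instead.
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm test
```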
All jobs run in parallel by default unless you specify a dependency. For example, you can give a job a dependency that says this job needs the previous job to have run.
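As a minimal sketch (the job names and commands are hypothetical), that dependency is expressed with the needs keyword:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci && npm test
  deploy:
    # Without "needs", this job would start in parallel with "build".
    needs: build
    runs-on: ubuntu-latest
    steps:
      - run: echo "Runs only after the build job succeeds"
```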
In this pull request, we are going to create a Docker image. Docker images give you the consistency and immutability that guarantee that whatever is being tested is the same thing that's going to run in production. So you can have the confidence that your acceptance tests are testing the same code that's going to run in production.
Another thing to notice is that it's using a community action, built by Matt Davis, who's at GitHub. For this workshop we're going to use this action, but for your own work you can pick and choose any action that suits your needs.
Another thing to notice here: because we are storing this image in the GitHub Package Registry, you need credentials, and we are passing a credential called secrets.GITHUB_TOKEN. This secret is automatically generated for the workflow.
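As a sketch of the idea (this is not the exact community action used in the workshop; the image name "demo" is a placeholder), a workflow step can authenticate to the registry with that automatically generated token like this:

```yaml
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # docker.pkg.github.com is the GitHub Package Registry docker host;
      # GITHUB_TOKEN is injected automatically, no personal access token needed here.
      - name: Log in to GitHub Package Registry
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login docker.pkg.github.com -u ${{ github.actor }} --password-stdin
      - name: Build and push
        run: |
          docker build -t docker.pkg.github.com/${{ github.repository }}/demo:latest .
          docker push docker.pkg.github.com/${{ github.repository }}/demo:latest
```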
You may ask: why is it running four combinations? I did not specify ubuntu-latest as the runner in my workflow; I only specified it in the strategy. The strategy simply takes all combinations of the variables and creates a job for each.
While this is running, I will quickly show another repo which has the completed exercise, so we don't have to wait for the Docker image to be there.
All right, I'm going to switch back here so you can actually look at the links, and I'm going to monitor the Slack channels for any questions. Please go ahead and start the workshop, and we will stop at the point where your jobs run and create the Docker image.
All right, let's get back to this workflow.
I already have several Docker images there, but I can do a docker login, give it my credentials, my personal access token and my username, and log into the GitHub Package Registry. Then I should be able to do a docker pull and get this image, which I can copy from here.
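From a local shell, that flow looks roughly like this (a sketch with placeholders: GITHUB_PAT is your personal access token in an environment variable, and OWNER, REPO, and IMAGE are whatever your run produced):

```shell
# Authenticate with a personal access token (read:packages scope)
echo "$GITHUB_PAT" | docker login docker.pkg.github.com -u OWNER --password-stdin

# Pull the image the workflow pushed
docker pull docker.pkg.github.com/OWNER/REPO/IMAGE:latest
```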
And then the next step: we're going to take it another step forward and deploy the application to AWS. In order to deploy to AWS, we are actually going to deploy it as a serverless app, so we're not going to use the container that we created; we'll cover that later in the workshop.
But what we're going to do is create a workflow, and we want a manual trigger to kick off the CD workflow. So we're going to create a trigger based on a label: we're going to create a label called stage, and we're going to deploy when the label called stage is applied to the pull request.
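Sketching that idea in workflow syntax (the job name is made up; the condition checks for the stage label on the pull request):

```yaml
on:
  pull_request:
    types: [labeled]

jobs:
  deploy-staging:
    # Run only when the pull request carries the "stage" label
    if: contains(github.event.pull_request.labels.*.name, 'stage')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: echo "Deploying to staging..."
```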
So in this pull request, again, the Learning Lab has already created a template for us, the staging deploy file. We just have to start modifying it to build the different jobs in it.
So the Learning Lab bot has responded and said that you need to do these things, and what we're going to do now is create a job conditional.
So what we want to do is end up with something like this. The neat thing with Actions is that the editor gives you some assistance: I can go here and start typing, and you can see how Actions prompts you with code assist. So I can type if.
And before we start this part, because this one needs your AWS account to be set up, I'm going to just take a pause here, maybe for five minutes, while you all get caught up.
All right, let's get started. I'm going to quickly switch my monitor; I see on Slack that people are having difficulty seeing my screen. So I'm going to stop the share and then switch the monitor. Please bear with me.
All right. So right now we created a workflow, and in the workflow we just added a job conditional, but we haven't actually done anything to deploy the code to AWS.
So next, we're going to go here and create an AWS account, if you don't have one, and then we're going to create an IAM user. If you go to AWS and log in, one of the services will be IAM; you can go there and create an IAM user.
Add a new job here: just copy this entire file and then replace it.
Actually not this one; this one was created by GitHub, sorry. So I'm going to commit this change.
So here, what we're going to do is go to AWS and create an S3 bucket. Once the S3 bucket is created, please make note of the region that you're using; that has to be consistent across all the files. Then we're going to create an AWS config file and update it with those entries.
I also kept the stack name the same as the bucket name; it doesn't have to be. My region is us-west. Make sure that wherever the bucket is, that's the region here, and then you commit the change.
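To give a feel for how the region and credentials typically flow into the deploy job (the env names and bucket value here are placeholders, not the workshop's exact file):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      AWS_REGION: us-west-2        # must match the S3 bucket's region
      S3_BUCKET: my-sam-bucket     # placeholder bucket name
    steps:
      - uses: actions/checkout@v2
      # Credentials come from repo secrets created from the IAM user
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
```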
And it's just like any other Terraform, sorry, CloudFormation script for Lambda, and it's setting the URLs for you. You can configure this URL to whatever you want, but leave it as it is for this workshop.
So what we have done so far: we have created a staging workflow. In the staging workflow, we created a trigger based on a labeled event, and there's a job that gets triggered conditionally based on the value of the label; we're looking for a label with a value of stage. That job builds the application, and then there's another job that runs subsequently and needs the previous job to complete.
So it runs sequentially, and that job checks out the artifacts from the previous job and deploys them to AWS. In order to deploy to AWS, we have to create an IAM account, create an AWS access key and secret key, and store those as secrets with these values.
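The artifact handoff between the two jobs can be sketched like this (the artifact name, path, and commands are made up for illustration):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci && npm run build
      # Publish the build output so a later job can pick it up
      - uses: actions/upload-artifact@v2
        with:
          name: app-build
          path: dist/
  deploy:
    needs: build          # forces sequential execution
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v2
        with:
          name: app-build
      - run: echo "Deploy the downloaded artifact to AWS here"
```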
But we can get started with the staging workflow, and we can take about 15 minutes here before we get started back up.
Okay, great, I think we are good. So let's go to the next step here.
We create that, and pretty much this is all we need to do for the workflow to run.
So now you're going to see the bot come back and say you're done. If you go to the Actions tab, you will see that the merge event triggered the workflow to run, and the workflow is going to do the build step and then the deploy step.
So what we've done in this course is: we have created a staging workflow that we trigger based on a particular label when the feature branch is ready to be merged. That triggers the workflow to run, do a build, run some tests, and then deploy the code to a stage environment, which in our case is on AWS. Once the acceptance criteria are passed and testing has been completed, we can merge the code to master.
We also created a master workflow, which gets triggered when there's a push to master. The master workflow, sorry, the production workflow, is going to trigger a build, deploy the code to AWS, and also create a Docker image and store it in GitHub Packages.
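The trigger side of that production workflow is just this (a minimal sketch; the job body is a placeholder):

```yaml
# Runs on every push to master, e.g. when a pull request is merged
on:
  push:
    branches:
      - master

jobs:
  production:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: echo "build, deploy, and publish the image here"
```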
We could then take the container image and deploy that same container image that we deployed to stage to production. In the example that we saw, we were not deploying a container; we were deploying a serverless app. But you could always take the container that we created and deploy it using ECS, for example, or you can use Azure and deploy it to an Azure App Service.
I just have a shortcut for it here, but you can go to this learning lab and sign up. This learning lab is very similar to what we did with AWS, except it's going to deploy to an Azure environment, and it's going to use containers to deploy to an Azure App Service.
And we can look at the workflow here. The nice thing with Actions is that it's not just for CI/CD: you can also use Actions for doing infrastructure as code, and this is an example where we are using Actions to spin up an Azure environment and then spin it down.
App Service is something similar to ECS: it's a deployment methodology where you're running a container-based application in an app. In this case, we are using a free tier, so when you sign up for Azure, you can run this and keep the SKU as F1, and it's going to be available for you.
You can also have another job here to destroy the Azure resources, so you don't have to worry about tearing things down manually: you just apply this destroy-environment job, and it's going to tear down your Azure infrastructure for you. When we look at the code here, let's look at the pull request, and we can walk through a few things that need to be set up.
So what we need for this deployment is an Azure account, and then you need the Azure CLI. With the CLI, you create a service principal, which is similar to an IAM user, with the role of contributor, scoped only to that particular subscription that you have.
You can also do more fine-grained control here. Once you create your service principal, you provide those values as secrets in your repo: you're giving the Azure subscription ID and also the Azure credentials. The Azure credentials are basically the JSON that you get when you run this command, and you store that. Once you have that information, all you need to do is create the secrets.
The job itself has an Azure login; this is where it uses your Azure credentials to log in to your Azure environment. Basically, what's happening is that from your runner you're logging into the Azure environment, then you're creating a resource group, and once you've created a resource group, you're creating an App Service plan with the particular SKU.
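A sketch of those infrastructure steps (the resource names and region are placeholders; AZURE_CREDENTIALS is the service-principal JSON stored as a repo secret):

```yaml
jobs:
  provision:
    runs-on: ubuntu-latest
    steps:
      # Log in with the service principal created via the Azure CLI
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Create resource group and App Service plan
        run: |
          az group create --name demo-rg --location westus
          # F1 is the free-tier SKU mentioned above
          az appservice plan create --name demo-plan --resource-group demo-rg --sku F1 --is-linux
```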
So when you look at this one, the workflow is very similar to what we did with AWS. You build the application.
You build the image, store the image in the GitHub Package Registry, and then you deploy it. So in this case, the deployment is dependent on the container being available in the Package Registry, and it's going to deploy the container with zero downtime and then start the application with the new container there.
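The deploy step itself can be sketched with the azure/webapps-deploy action (the app name and image path are placeholders, not the lab's exact values):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      # Point the web app at the image that the build job pushed
      - uses: azure/webapps-deploy@v2
        with:
          app-name: demo-webapp
          images: docker.pkg.github.com/OWNER/REPO/IMAGE:latest
```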
There you go. It took a little while; I think it's just probably warming up. It doesn't keep the nginx container running all the time, so now you have it up here.
So this is a nice alternative: if you don't have AWS credentials, if you don't have an AWS account, then you could always do this.
So you can do the Azure learning lab, and it's pretty similar to what we did with AWS.
Another option: if I'm running on AWS itself but I want to deploy containers, then you can also do that using ECS.
I have another shortcut for this. This is the same lab that we did for AWS, but I modified it a little bit so that I can deploy to a container service.
In this case, I have added code in this particular branch. The way you do it is: on AWS, you create an ECS cluster and an ECS task definition. The task definition can be just a template. If you look at this task definition, I have specified an image here, but in order for this image to be pulled from the GitHub Package Registry, an extra credential is needed.
I need to create a repository credentials secret in AWS, storing your personal access token, your repository URL, and your username. Once you create the repository credentials, you take the ARN, and right here when you configure the task definition, you provide that ARN. I'm using Fargate here so that I don't have to manage the underlying servers.
I'm in the right place, yeah, right here. So in this case, I'm using the AWS action to create the task definition. What this is going to do is take the task definition file that we have and replace the image with the container image that the workflow builds. Once the task definition is created, it's available for the subsequent tasks in the job.
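Those ECS steps come from AWS's published marketplace actions; a sketch of how they chain together (the file, container, service, and cluster names are placeholders):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Substitute the freshly built image into the task definition template
      - id: render
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: task-definition.json
          container-name: web
          image: docker.pkg.github.com/OWNER/REPO/IMAGE:latest
      # Register the rendered task definition and update the service
      - uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.render.outputs.task-definition }}
          service: demo-service
          cluster: demo-cluster
```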
It basically deploys, but you don't get the URL here, so one thing you need to do in that case is go to AWS to get the URL. That URL is really something that comes from what you specify when you create the cluster.
So that's pretty straightforward. In order to create an action to deploy the container to AWS, all you need is to use the AWS-provided actions that are there in the marketplace, create an ECS service, create your ECS cluster, and copy that task definition file and put it here. That's pretty much all you need to do, and then you should be able to deploy it.
There's a whole bunch of actions available, so you could also deploy to a Kubernetes cluster from your workflow file.
So we've seen that Actions is really fantastic for doing continuous delivery. First of all, you're not managing and maintaining your CI/CD infrastructure; you're not responsible for its care and feeding. If you're using GitHub-provided runners, then there's a low threshold and little need for any kind of operational overhead.
It's cloud agnostic. You can provide your own runners, running on your own premises, and target your jobs to those runners. You can also deploy to any kind of cloud environment or a non-cloud environment.
Every action links to its actual repo, so you can go and find the action in its GitHub repo. You can customize it and you can fork it, so you have all that flexibility.
We also saw that it's easy to debug your actions using the console, and you can download your action logs. The most important thing is that by using Actions, your workflow code becomes a first-class citizen with your application code, and you're keeping it all in the same repository, which is just the best practice.
And I'll leave you with these links. I hope you enjoyed the Learning Lab workshops, and I hope you will continue practicing and experimenting with Actions. At this point, I will look at the Slack channel and see if there are any questions, but we can also answer some questions that were raised previously.
A: All right, so I will go through the questions that we have. Let's see.
Okay: how safe is it to use third-party actions? Is there any audit? I'm a bit afraid of passing keys to non-certified actions.
Yeah, that's a very valid question. You have to treat third-party actions just like any kind of third-party library, so you need to have some amount of trust.
One thing you can definitely do is make sure that you check out a specific version by its SHA. In that case, there's no possibility of somebody changing that reference to point to another commit which has something that is not safe for you.
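Pinning looks like this in a workflow file (the SHA below is an obvious placeholder, not a real commit; you would use the full 40-character SHA of the release you audited):

```yaml
    steps:
      # Pin to an immutable commit SHA instead of a movable tag like @v2
      - uses: actions/checkout@0000000000000000000000000000000000000000  # placeholder SHA
```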
And there are some best practices and blog posts on that, so it's definitely a good practice to do. Are there any repo requirements? Thomas, do you want to add something to that? I know you posted on it.
B: Yeah, the only thing I wanted to add was, just in a general sense, you should treat it, like you said, as a third-party package. So if it was NuGet or npm or anything else open source that you're consuming, you would want to do whatever security reviews you need for that. The great thing about actions on the Marketplace, however, is they are all open source by requirement.
B: That's kind of what I just answered with the, yes, yeah, sorry. Specifically, part of the concern was: does the action have a security review done before it's posted on the Marketplace? The answer to that is no, similar to potentially malicious code being in any open-source repository.
A: All right, thanks, Thomas. The next question is: are the slides going to be available? What about the recording? Yes, the slides are there in the repo, and this presentation is also recorded, so it should be available to you; the recording will be emailed sometime after the session. Next question: I'm looking for deploying to Azure; would this workshop cover that too? So we did talk a little bit about deploying to Azure. I hope that was helpful.
What scopes do I need on my personal access token? It's great that somebody has already answered that: read:packages and repo. It's always a good idea to restrict the scope of your personal access token when you're setting this up.
Time check. Okay, all right, we're about on time now, so maybe I'll speed up. There's a labeled section in the workflow; does that mean every PR needs a label attached? What happens if no labels are attached? Basically, no, every PR doesn't need a label attached. What it means is that when you want to deploy a particular PR to a staging environment, that's when you apply the label.
Maybe that's what your question is. And that's just one way to trigger it: you can also use a comment in your PR to trigger the deployment. There are several ways to do it.
All right, the last question is very important: what AWS resources do we need to remove in order to avoid billing? Definitely the S3 bucket and the Lambda function need to be removed. You also created the CloudFormation stack by virtue of setting up the environment, so you should delete that stack as well. Thomas?
B: Since we're out of time, I just wanted to say I posted the FAQ to the repository. I'll post it in the channel right now as well; it has relevant links and answers for each of the questions that we just discussed, and I'll update it in just a moment with the links that are on the screen right now, so they're easy to click.
A: Great, thank you all. I hope you enjoyed it. Sorry, it was a long workshop, but I hope you enjoyed it. Thank you.