Description
Set up a managed Kubernetes cluster with Azure Kubernetes Service (AKS) (Part 1 / 3)
For the full series: https://youtube.com/playlist?list=PL0lo9MOBetEEk9gIFox8EbCf1Co_4ppIO
0:00 - Start
2:38 - Intros
5:42 - Reference Architecture
23:31 - Actions at scale repository: https://github.com/link-/actions-at-scale-ghoh
26:49 - Creating resources in Azure
34:49 - How provisioning works as it happens
43:42 - Application Gateway benefits
54:41 - Applying the ingress configuration
Demo repo: https://github.com/link-/actions-at-scale-ghoh
Contact Us: https://resources.github.com/devops-learning-journey/
Good morning, good afternoon, and good evening. I am so happy to be presenting this series of the GitHub Office Hours, where we're going to be talking about adopting GitHub Actions at scale in the enterprise. But first: my name is Bassem Dreddy and I am part of the Expert Services team at GitHub. My team and I work with customers across the globe to help them figure out solutions for different types of problems, adopting GitHub Actions at scale being one of them. So what is this series about?
There will be four events during which we're going to tackle this set of questions and much more, starting with: how can I deploy and auto-scale self-hosted runners? Can I use Kubernetes to deploy self-hosted runners? How can I manage access to my self-hosted runners? Can I create custom self-hosted runner images and deploy them? And how can I create my own scaling solution? All of these questions will be answered, and we're going to be talking about best practices, how you can maintain this infrastructure, and how to secure it.
So there will be four episodes in this series. We're going to start with episode one, which is today's episode, where we're going to be setting up a managed Kubernetes cluster on Azure using AKS. In episode two we're going to install the actions-runner-controller on AKS; this controller will allow us to auto-scale the runners and will hold all the scaling rules, and I'm going to show you how you can set that up. In episode three we're going to customize our self-hosted runner images, show you how you can configure them to your liking and add the tools that are necessary for your CI/CD pipelines or any other workflow runs you need, and I'm going to show you some security best practices as well. And in episode four I'm going to show you how you can create your own custom auto-scaling solution for your self-hosted runners using webhooks. So today we're going to start with setting up our AKS, or Kubernetes, cluster.
The session is still beneficial for you; there's a lot for you to take out of it, and it will also be a nice crash course on how Kubernetes works in general. You have probably noticed that these episodes are not live. They are pre-recorded because of the complexity of the demos: we don't want something to go wrong and waste your time while you watch us troubleshoot it. These episodes are going to go live once every week starting from today, and don't worry about taking notes or about where you're going to get the information from. I'm going to make all of the necessary material available for you, including a public GitHub repository with step-by-step instructions for how you can set up the environment yourself. These episodes will be made available on GitHub's YouTube channel, and you can revisit them anytime if you want to go through the details, especially when you're trying to set up the environment on your end. Enough with the introductions; let us discuss what we are going to be building in today's session and what this environment is that we're going to be setting up.
It's the environment that will prepare us for adopting Actions at scale. This is the reference architecture that we're going to be seeing over and over again throughout these episodes, and we're going to start on the left-hand side by talking a little bit about GitHub. So you are a GitHub Enterprise customer: you are using either github.com, with enterprise managed users or without them, or maybe you are using GitHub Enterprise Server because you want to deploy GitHub on premises. This solution will basically work for both options.
If you are on github.com, you have the option and the added benefit of using GitHub-hosted runners. These are runners that you don't manage yourself. But there are many use cases when you will need self-hosted runners, and these use cases are quite varied; they differ from one customer to another. In general, you might have some specific build pipelines that require access to on-premises assets that you don't want to open up to the internet, and therefore not to the GitHub-hosted runners either. You might even have some very specific build jobs that need beefy runners that GitHub does not provide off the shelf. Maybe you also need to provide some form of authentication to these runners, or you need some tools that are not available on the GitHub-hosted runners by default. So there are many, many reasons, and we have already established that there's a benefit to hosting your own self-hosted runners.
The question is: how do you manage the hosting of these runners, and how do you make sure that you don't get overwhelmed whenever you need to deploy them at scale? This is what we're going to be talking about in further detail today. In our enterprise, which we're going to call Enterprise R, we're going to assume that you have organizations: you are structuring your teams within organizations first and foremost, and then your projects live either in one repository or are distributed across multiple repositories.
A runner group gives you a set of self-hosted runners that we can then assign to an organization or to repositories, and only the repositories that are assigned to this group will be allowed to use these self-hosted runners within their workflows. So that's the first level of how we're going to organize our self-hosted runner setup. Now, we still have not created the runners, right? So how can we achieve this?
We can create the runners very easily using the runner registration process, which leverages the REST APIs, and as you can see here on the right-hand side I have the API and webhooks layer. However, registering and adding runners manually can become a very daunting task, especially if you have a large team, or large teams, that all require runners on demand.
They want these runners to be available for them, and at the same time they're not going to be managing them themselves; you're going to be managing them for them. That becomes really complicated and very difficult to do when we talk about hundreds or thousands of self-hosted runners. The solution that we're going to be talking about today allows you to scale up and down, meaning that you can add runners and shut down runners, because you don't want your infrastructure to become very expensive.
Self-hosted runners use resources. If we're talking about resources on-prem, for example, they are expensive; if you're talking about resources in the cloud, not so much, but you're still paying for them. So you don't want to have these runners available all the time, running indefinitely; you would like to shut them down when there's actually no need for them. This is why adopting Kubernetes in this case is a viable option, because Kubernetes allows us to have self-hosted runners running in pods.
We can spin them up very quickly whenever there's a need for them, and once they finish executing the workflow they can shut down and we don't have to pay for those resources anymore. So we can create a dynamic infrastructure that scales up and down with the demand of the team. Now, how do you know when you need to scale up and when you need to scale down?
This is where the concept of webhooks is very handy. In GitHub we have webhook events specific to workflow runs, called workflow_job events, meaning we get an event whenever a workflow is created on a certain repository. So let's say someone triggers a workflow run in repository X.
This workflow run will generate a webhook event that is later dispatched to something that's going to create a self-hosted runner for us. The job that was created in repository X will be assigned to that self-hosted runner, the runner is going to execute the job, and once it completes the work it's going to shut itself down. This is how it's going to be provisioned. To make all of this work, we're going to need a GitHub App, and this GitHub App will allow us to do two things.
Now, I'm going to show you how to create the GitHub App as part of this series. I'm going to show you which events we need to register as part of the webhook events, and I'm going to show you how we can create the server on the Kubernetes end and which controller we're going to be using so that we can scale the runners up and down based on these webhook events.
Now, this is the GitHub part. So we started with an organization; inside this organization we have one or multiple repositories, and inside these repositories we have one or multiple workflows that we need to run. These workflows will require self-hosted runners to execute the jobs in them, and we're going to rely on the API layer, as well as the webhook layer, to generate the webhook events whenever a workflow job has started and whenever we want to register or de-register self-hosted runners.
We're going to go to Azure in this case, and as you can see here on the right-hand side we have the Azure region: we're going to be hosting our Kubernetes cluster inside a specific region, and we're going to be deploying all the other tools necessary for all of this to work inside that same Azure region. We're going to create a resource group, which we're going to call the GitHub Actions runner resource group, because we want to contain all of the infrastructure we create inside one resource group.
There are many reasons why you would want to have resource groups; they're really nice to work with, and I highly recommend that you adopt them in your workflow. Next, inside this resource group, we're going to create two VNets. The first VNet is going to be used to host the Application Gateway. We're going to need an Application Gateway that will act as the ingress controller for our Kubernetes cluster. Why do we need the Application Gateway as opposed to just using a generic NGINX ingress controller or something else? Well...
The Application Gateway is a nice service that provides us with WAF (web application firewall) capability, DDoS protection, and a bunch of other nice features that will protect our webhook server for the self-hosted runner scale-up from attacks. These features are not available with your normal or standard NGINX ingress controller, and that's why I have opted for the Application Gateway. Of course, you don't necessarily need the Application Gateway. You can just expose the ingress controller via a load balancer, for example, or an application load balancer, or some other mechanism, but you will not have the firewall capabilities in front of your ingress controller. So that's why we're going to be deploying our own Application Gateway, and this is also where we're going to be doing the TLS termination. We're going to be using TLS for obvious reasons.
We don't want our traffic to pass unencrypted throughout our network. It's not going to be end-to-end encryption, because the traffic is not going to reach our self-hosted runners encrypted; it's sufficient for it to arrive encrypted at the ingress controller and then get passed to the actions-runner-controller. I'm going to discuss what these are in a bit, but what I'm trying to say is that it's sufficient for the traffic to come encrypted to the ingress controller.
Beyond that, it should be acceptable for it to pass unencrypted to the controller, and I'm going to tell you why in a little bit. So the first VNet is going to be for our Application Gateway, and the second VNet is going to be for our AKS, or Kubernetes, cluster. This is our cluster, and you can see here that we will be creating VNet peering, because we want both of these VNets to communicate with each other.
Now you might say: Bassem, why don't you have the Application Gateway in the same VNet as your Kubernetes cluster? Well, because the Application Gateway requires a subnet range that is large, and so does the Kubernetes cluster, and we don't want to create overlaps and clashes in the IP assignments in these subnets. So you are better off creating a separate VNet for the Application Gateway and creating peering between them. This should work totally fine, and I'm going to show you how you can set this up. Now, inside our Kubernetes cluster...
Now, what will be in this namespace? First of all, we're going to be deploying the actions-runner-controller webhook server, or the actions-runner-controller in general; we're going to be installing it on our cluster. It's very important for you to be aware that this controller is an open source project. It is not built by GitHub and it is not supported by GitHub. We will help you set it up and we will show you how you can figure it out as part of our Expert Services engagements, but this is not a product that is generally supported by GitHub. We will try to answer you and we will help you, of course, but this is not software that we maintain ourselves, even if we are engaged with the maintainers of this open source controller. With that said, what will this controller do?
It will create CRDs, or custom resource definitions, in your Kubernetes cluster, and these will be responsible for receiving the webhook events and deciding whether to add more self-hosted runners or destroy existing self-hosted runners that have already been created. This is going to be the brain of the whole operation, and it has the logic to parse the webhook events as they are received from the GitHub side.
This controller will also create the self-hosted runner containers, put them inside pods, and deploy them in our cluster. It is responsible for fetching the self-hosted runner Docker container image, spinning up runners from it, and managing the lifecycle of these runners inside one or multiple namespaces. Also, as part of our cluster, we're going to have another namespace called cert-manager, and we're going to be deploying cert-manager to take care of our TLS certificate generation, the TLS termination process, and the whole setup.
Of course, I'm going to be using Let's Encrypt for this demo, but you can also create your own certificate, get it signed, store it in a vault, for example, and then use cert-manager to fetch that certificate from the vault and use it for TLS purposes.
Now, in addition to what we just discussed, we're going to be creating a public IP address and a DNS alias, because we want to expose the ingress controller and the Application Gateway to the internet. We want to receive the webhooks from GitHub, and we cannot really create a direct connection between github.com and our VNets, so we're going to need a public IP address that we will add to our GitHub configuration so that webhook events are pushed to that IP or to that DNS name.
Next, we're going to be using ACR, the Azure Container Registry, to hold our Docker images, in addition to the Helm charts we're going to be using for this entire setup. Now, in addition to our GitHub Actions runner resource group, we're going to have another resource group created automatically when we provision a new AKS cluster. It's called the node resource group, and inside the node resource group we're going to have the VMs and the scale set that are going to be used as our cluster nodes.
These are going to be doing all of the work for us; this is where the containers are going to be provisioned, and these are the main VMs that do all of the work. We're also going to have a load balancer, which is on demand: it's not created by default, but it could be created if you want to expose your services via the load balancer, as opposed to creating your own Application Gateway or your own ingress controller.
Let us dig right into the fun stuff. Today I'm going to be using my command line interface to create all of this infrastructure using the Azure CLI. You can pretty much choose anything you're comfortable with: maybe you want to use PowerShell, maybe you want to use the Cloud Shell, or maybe you want to use ARM templates.
But at the same time, I want you to keep in the back of your mind that these resources we're going to be creating today are not free and they're going to cost you money. So if you are doing this for exploration purposes, just keep that in mind. It's not a huge amount, probably a few dollars or something like that, but it's not free, so I just wanted to make sure that you are aware of this before we proceed.
Let us jump to my screen. I have here the actions-at-scale repository, where I have all of the configuration files in addition to the step-by-step instructions. I'm going to be dropping the link for this repository in the description of the videos when they are published, and I'm going to be dropping it in the comments right now. You need to be aware that this is a public repository, so if you just go to my profile and then actions-at-scale, you should be able to access it right away.
Now, let us jump to our Azure portal just so that we can do some quick browsing around, and then we're going to jump to the terminal so that I can show you how you can set up the whole infrastructure. I have my portal already running here, and as you can see, I have some resource groups that are already created, but these are default resource groups; we're not going to bother with those.
If I go here, you're going to see that I have these three. We're going to be creating the two new resource groups, or rather the one resource group plus the supplementary resource group that comes with it, and we're going to provision all of our infrastructure in them. You should already have access to this portal if you have an Azure account; that should not be a big deal. Because I want to start in a fresh environment, I'm not going to be running the commands from my Mac directly; I'm actually going to be running them inside an Ubuntu container. I already have the container spun up; if I run docker ps I can see it, and it's called playground. So we're going to go inside the container and run bash, and if I go to my working directory you should see that I have the repository already checked out.
Don't worry too much about spinning up your own container; just work in the environment that you are operating in. I just wanted to use something a little bit more standardized that more people might be familiar with. In this container I have a bunch of tools already installed and ready for me to use. First of all, I have Azure CLI version 2.31.0; I have Helm, version 3.7.2; and, other than Helm, I have kubectl, version 1.23.
You can use these versions; there's really no hard requirement to use these specific versions, but in case you bump into problems, you might want to fall back on the versions that I am using myself. Once we have these three main tools in place, we're going to start by creating our infrastructure. But first, let's make sure that we are actually authenticated. So we do an az login; we need to make sure that az has the proper credentials for it to access our environment.
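As a rough sketch, the tool check and the login look like this (the versions are just the ones shown in the demo; any reasonably recent versions should work):

```bash
# Check the tool versions mentioned in the demo
az version                 # Azure CLI (demo used 2.31.0)
helm version               # Helm (demo used 3.7.2)
kubectl version --client   # kubectl (demo used 1.23)

# Authenticate the Azure CLI against your subscription
az login
```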
I'm going to clear my screen, and now I can use az to create resources in my account. After authenticating, I'm going to run a very quick command, az account list-locations, and this is going to list all the different regions available in Azure. This is a quick verification that I am properly authenticated. Now, the first thing we're going to do is create the resource group that is going to contain all of the different services, and we can do that by using this command:
A
Az
group
create
and
we're
going
to
specify
the
name
of
the
resource
group,
as
well
as
the
location
or
the
region
we're
going
to
be
using
and
I'm
going
to
be
using
west
europe,
because
this
is
closer
to
me
once
this
resource
group
has
been
created.
You
can
see
here
that
the
provisioning
state
is
succeeded
next,
because
I
want
to
be
using
the
monitoring
add-on
on
my
azure
kubernetes
cluster.
This
will
give
me
some
insights
into
the
containers
and
how
much
resources
they
are
using
and
a
bunch
of
other
useful
stuff.
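Roughly, the command behind that step looks like this; the resource group name is a placeholder, since the demo only refers to it as the GitHub Actions runner resource group:

```bash
# Assumed placeholder names reused throughout the rest of the setup
RESOURCE_GROUP="github-actions-runners-rg"
LOCATION="westeurope"

# Create the resource group that will contain all of the demo infrastructure
az group create --name "$RESOURCE_GROUP" --location "$LOCATION"
```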
Next, because I want to be using the monitoring add-on on my Azure Kubernetes cluster (this will give me some insights into the containers, how many resources they are using, and a bunch of other useful things), I want to make sure that the outcome of these two commands is registered. These are prerequisites for the monitoring add-on to operate correctly, and I'm going to be running them now: first of all, we want to make sure that Microsoft.OperationsManagement and Microsoft.OperationalInsights are both registered in my account. Once this is done, I'm going to clear my screen one more time, and now we are ready to create our Kubernetes cluster.
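The two provider registrations mentioned here are presumably along these lines:

```bash
# Register the resource providers required by the AKS monitoring add-on
az provider register --namespace Microsoft.OperationsManagement
az provider register --namespace Microsoft.OperationalInsights

# Optionally confirm the registration state afterwards
az provider show --namespace Microsoft.OperationsManagement --query registrationState
az provider show --namespace Microsoft.OperationalInsights --query registrationState
```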
As you can see here, I did not enable the ingress add-on straight away; I'm going to be doing that at a later stage, and I'm going to explain why in a little bit. Then I'm specifying that I want just one node, one worker node or one VM, attached to my cluster, and the last thing is that I want the Azure CLI to generate a public/private key pair.
This will store the public key inside my AKS node, which will allow me to SSH into that node using the private key, which will be stored on my machine, or in this case inside my container. I'm going to run this. It's going to take a little bit of time to spin up because it's going to create a lot of different resources, and once it's complete I'm going to jump to the portal and show you what it has created and where those things are available.
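Putting those options together, the cluster creation command is presumably something like the following (the cluster name is a placeholder; the monitoring add-on, single node, and generated SSH keys match what was described):

```bash
AKS_CLUSTER="github-actions-runners-aks"   # assumed placeholder name

# Create the AKS cluster with the monitoring add-on, one node,
# and an auto-generated SSH key pair (the ingress add-on is enabled later)
az aks create \
  --resource-group "$RESOURCE_GROUP" \
  --name "$AKS_CLUSTER" \
  --enable-addons monitoring \
  --node-count 1 \
  --generate-ssh-keys
```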
If I go to my Azure portal and click on resource groups, I should be able to see them right here. The GitHub Actions runner resource group is over here, and then the second resource group, the MC_github-actions-runner..._westeurope node resource group, is also created. If I click on this resource group, I will be able to see what's inside it, and you can see here that we only have the Kubernetes cluster as part of it.
That's totally fine, because the rest of the things are inside the associated node resource group, and here you can see all the rest of the stuff: the agent pool, the network security group in this case, the route tables, the VM scale set. You can see the VNet, and you can see the Kubernetes cluster over here as well. So all of the building blocks of our Kubernetes cluster reside in this accompanying resource group. Perfect, this is exactly what we wanted to see. Now our cluster is up and running.
The next thing we want to do is fetch the credentials for this cluster so that we can use kubectl to interact with it, and we can do this with az aks get-credentials; we need to specify the resource group name as well as the AKS cluster's name, and then we run it. It should be a very quick setup. Because I've already run this demo before, I already have some credentials in my kubeconfig file.
A
That's
perfect,
because
that
means
that
we
are
connected
to
our
kubernetes
cluster
and
we
can
also
have
more.
You
know
confidence
by
running
cubecardo
config.contexts,
and
you
can
see
here
that
the
this
is
the
proper
name
of
the
cluster
and
that
we
are
properly
authenticated
to
it.
Now,
with
our
cluster
up
and
running,
I
want
to
show
you
a
little
bit
a
couple
of
nice
things
that
you
can
do
if
you
want
to
add.
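In command form, that step is presumably:

```bash
# Merge the cluster credentials into the local kubeconfig
az aks get-credentials \
  --resource-group "$RESOURCE_GROUP" \
  --name "$AKS_CLUSTER"

# Sanity check: the new context should be listed and marked as current
kubectl config get-contexts
```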
First of all, we can see the nodes that are linked to our cluster; you can see here we have one VM. If you want to see them in more detail, you can go to the portal, click on the GitHub Actions runner resource group, and click on the Kubernetes service (AKS) icon over here. If we go to the node pools, you will be able to see that we have one node pool, and if we click on that, and then on nodes...
We can see that we have one VM that is already set up; we are using Ubuntu 18.04 as the base image and it's running containerd as the container runtime. If I click on this node for further details, you can see here that it has an internal IP address and it's in West Europe. Awesome. Now, if you want to add more nodes to this pool, you can do it from here.
You can go to the node pool and then click on the scale node pool option, so you can add more VMs to it if you have a need for that, or you can also upgrade Kubernetes or upgrade the image for the nodes that you are using. I also want to show you here that we are using the Standard DS2 v2 nodes, so these are general-purpose VMs.
They should do the job fine, and for self-hosted runners they should be totally okay. But if you have more specific workflows, with more specific requirements in terms of IOPS, disk I/O, network I/O, CPU, or the amount of RAM provided for your VMs, and hence for your self-hosted runners, you might want to consider different node sizes. I'm going to show you how we can scale up inside our terminal.
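The scale-up (and, as noted below, the scale-down) is presumably a single command along these lines, with only the node count changing:

```bash
# Scale the default node pool up to two nodes
az aks scale \
  --resource-group "$RESOURCE_GROUP" \
  --name "$AKS_CLUSTER" \
  --node-count 2

# Watch the new node join the cluster and move from NotReady to Ready
kubectl get nodes --watch
```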
I just want to show you here how the provisioning process happens. You will start seeing some new VMs popping up in a few moments; as you can see here, our VM is starting to be provisioned and it is in a not-ready state. So we're going to wait for it a little bit more, and there we go: our VM is now ready to be used as part of our node pool. We're just going to wait for this command to finish execution, and we should be good to go.
We should be able to start using this node to schedule containers. All right, this is it, we're done, so I'm going to close this window over here and we're going to go back to our process. By the way, scaling down is just running the same command but reducing the node count; it's that easy. The next thing we're going to do is create our container registry. Why do we need a container registry?
We just need to run this command, az acr create, and then we specify the same resource group as our cluster, and then we give the container registry a name; we're going to call it the GitHub Actions office hours ACR. We're going to specify the Basic SKU, which is fine for general-purpose container images and Helm charts. There are other SKUs with more advanced features, but they are more expensive. For now...
The Basic SKU should be sufficient for our purposes, and the ACR should take a few seconds to be provisioned. There we go. Once that is done, we need to attach our ACR, or container registry, to our Kubernetes cluster, and we can also do that very easily with az aks update; then we specify, again, the same resource group, the name of the Kubernetes cluster this time, and then we provide the name of the ACR that we created in the previous command.
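As a sketch, those two steps probably look like this; the registry name is a placeholder derived from the spoken name, since ACR names have to be alphanumeric and globally unique:

```bash
ACR_NAME="ghactionsofficehoursacr"   # assumed placeholder; pick a unique name

# Create a Basic-tier container registry in the same resource group
az acr create \
  --resource-group "$RESOURCE_GROUP" \
  --name "$ACR_NAME" \
  --sku Basic

# Grant the AKS cluster pull access to the registry
az aks update \
  --resource-group "$RESOURCE_GROUP" \
  --name "$AKS_CLUSTER" \
  --attach-acr "$ACR_NAME"
```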
Right, it seems that our container registry has been successfully attached to our Kubernetes cluster. Now we need to verify this, and the way we do that is, first of all, by fetching the container registry's URL, or fully qualified domain name. We can do this with the command az acr show, where we query for the value loginServer and specify that the output should be a tab-separated value, because we want it without the double quotes; we don't want the JSON value. Why?
Because we want to store this URL inside a variable we're going to call ACR_URL, and we're going to use this variable in a subsequent command to verify, or do a quick check on, ACR and make sure that it is properly configured. So we run this command; I have appended an echo of the variable as well, just as a quick sanity check that the URL has been properly returned, and as you can see here, the URL for my container registry is the following.
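The verification presumably looks like this; az aks check-acr is the command that performs the pull-permission check described next:

```bash
# Fetch the registry's fully qualified domain name (login server) into a variable
ACR_URL=$(az acr show \
  --resource-group "$RESOURCE_GROUP" \
  --name "$ACR_NAME" \
  --query loginServer \
  --output tsv)
echo "$ACR_URL"

# Validate that the cluster can resolve the registry and pull images from it
az aks check-acr \
  --resource-group "$RESOURCE_GROUP" \
  --name "$AKS_CLUSTER" \
  --acr "$ACR_URL"
```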
You can safely ignore this warning right here; it should be totally fine. As you can see, the checks have succeeded: we're able to resolve the registry and we have enough permissions to pull images from it. If you see this message, you're good to go; if you see something else, you might want to start troubleshooting that problem before we proceed.
Now, what do we have so far? We have a Kubernetes cluster that has been created. We have scaled up the node pool to have two nodes for our cluster, as opposed to the one that comes by default. And the next thing we did is create a container registry and attach that container registry to our Kubernetes cluster.
The next thing we're going to do is the slightly more complicated part, which is to create the Application Gateway, connect that Application Gateway to our Kubernetes cluster, and use it as our ingress controller. The process here is a little bit more involved, because there are networking elements that we have to deal with, so bear with me and I will try to explain them as best as I can.
It's always going to be the same resource group name, and then we give this public IP a name; we're going to call it the application gateway public IP so that we can immediately recognize it once we see the name. We're going to specify that this is a static allocation, so it never gets destroyed unless we remove it ourselves, and then the SKU is Standard, because we don't need more than that for the purposes of our demo today.
So the public IP should be fairly easy to create, and there we go: the public IP has been created, and this is the public IPv4 address that we have. I did not provision IPv6; we're not going to need it for this demo, but it might be a good idea for you to provision IPv6 as well. It should come with no additional charges.
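Assuming the standard CLI for this step, the public IP creation is roughly:

```bash
# Create a static Standard-SKU public IP for the Application Gateway frontend
az network public-ip create \
  --resource-group "$RESOURCE_GROUP" \
  --name AppGatewayPublicIp \
  --allocation-method Static \
  --sku Standard
```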
We don't want to overlap with the IP allocations for the cluster, so it's better to keep them separate, completely isolated, and then we connect both of these VNets so that the Application Gateway can reach the cluster. I'm going to show you how to do that right now. I'm going to clear my screen one more time and I'm going to create the VNet. We do this with az network vnet create; I'm going to name this VNet the application gateway VNet and specify, one more time, the resource group.
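A sketch of that VNet creation; the address space and subnet prefix are assumptions, chosen only so that they do not overlap with the AKS VNet:

```bash
# Create a dedicated VNet and subnet for the Application Gateway
# (address ranges are illustrative; pick ranges that do not clash with the AKS VNet)
az network vnet create \
  --resource-group "$RESOURCE_GROUP" \
  --name AppGatewayVnet \
  --address-prefix 10.10.0.0/16 \
  --subnet-name AppGatewaySubnet \
  --subnet-prefixes 10.10.0.0/24
```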
It shouldn't take a long time, and you can see here that it has also succeeded; we are good to go. The next thing we need to do is create the Application Gateway. We have the public IP, we have the VNet, so we have all of the prerequisites to set it up. We can create the Application Gateway with this command, az network application-gateway create, and then, again, the same resource group; we provide the name of the application gateway and the region.
We are installing the Application Gateway with the Standard_v2 SKU. You can replace this with the WAF SKU if you want to enable the WAF by default; I'm not going to enable it by default now, but we might tackle this in a future episode. Then, in this parameter, I'm going to give the public IP address: I supply the name of the public IP resource we created a couple of steps ago.
Please keep these commands in a file next to you, because you're going to be reusing a lot of these names again and again. So here we supply the name of the resource, and then we also need to supply the name of the VNet we just created for the Application Gateway, as well as the name of the subnet we created for it. Once we provide these parameters correctly, we will run this, and this process should take a little while, because it's also going to provision VMs.
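Assembled from those parameters, the command is presumably something like this; the names match the placeholders used earlier:

```bash
# Create the Application Gateway with the Standard_v2 SKU
# (swap the SKU for WAF_v2 if you want the web application firewall enabled)
az network application-gateway create \
  --resource-group "$RESOURCE_GROUP" \
  --name AppGateway \
  --location "$LOCATION" \
  --sku Standard_v2 \
  --public-ip-address AppGatewayPublicIp \
  --vnet-name AppGatewayVnet \
  --subnet AppGatewaySubnet
```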
The reasons for choosing it are many, a few of which are: we have a WAF, a web application firewall, that we can set up with the Application Gateway in front of our endpoint; we also get DDoS prevention and a lot of security features that come with the Application Gateway and don't come with other resources; and at the same time we have a nice managed service and an interface where we can see all of the different rules for the routing that is happening.
We can do routing via HTTP, we can do routing via the URL path, we can do routing via header parameters, we can do routing via multiple different mechanisms, and we can also monitor the behavior of the Application Gateway and push these metrics to Prometheus, Grafana, or whatever other logging and monitoring solution you'd like to use.
So as a front end for our cluster, I think the Application Gateway is a pretty good solution for us to use. It's a managed service, as I mentioned, and it gives you a lot of very nice features that will definitely come in handy, especially as you scale your infrastructure further and you want to troubleshoot problems and see what is really happening under the hood. Of course, it's totally optional.
You can replace the Application Gateway with a basic load balancer, or you can even use a standard custom Kubernetes ingress controller that you install on your Kubernetes cluster; that's totally fine. Fantastic, our Application Gateway has been created successfully. As you can see from the response, the provisioning state is "Succeeded"; everything looks good so far. The next step is to attach this Application Gateway to our Kubernetes cluster, or to AKS. How do we do this?
First of all, we need to fetch the Application Gateway ID, and we can do this by running the command az network application-gateway show, where we query for the ID and, as usual, return the output as TSV and store it in a variable, the application gateway ID. Let's run this very quickly; you can see here the ID that has been returned.
It looks correct, because this is the name I gave the Application Gateway. The next step is to run az aks enable-addons, where we specify the ingress-appgw add-on to be enabled (this is provided for you with AKS), and then we provide the Application Gateway ID over here. Once we do this, our Application Gateway will be connected to our Kubernetes cluster and we can start using it as our default ingress controller across namespaces. Great, enabling the Application Gateway ingress add-on has been successful.
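In script form, those two steps are presumably:

```bash
# Look up the Application Gateway's resource ID
APPGW_ID=$(az network application-gateway show \
  --resource-group "$RESOURCE_GROUP" \
  --name AppGateway \
  --query id \
  --output tsv)

# Enable the Application Gateway Ingress Controller (AGIC) add-on on the cluster
az aks enable-addons \
  --resource-group "$RESOURCE_GROUP" \
  --name "$AKS_CLUSTER" \
  --addons ingress-appgw \
  --appgw-id "$APPGW_ID"
```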
This looks good so far. Now, the sharp ones among you might ask me: but Bassem, how did this connection work? These are two separate VNets, and we did not connect them yet. You are totally correct; that is the next step for us, which is creating the VNet peering between the Application Gateway VNet and the Kubernetes cluster VNet. How do you achieve this? It's a bit of an annoying process, because we need to create the peering on both of these VNets separately, so just follow along.
I will explain the different commands that we need to run. First things first, we need to get the node resource group name. As a reminder, the node resource group is the resource group that is created automatically for us when we create an AKS cluster; you don't have much control over it, and I highly recommend that you don't touch this resource group at all. But in this case we're going to be creating the VNet peering connection against the VNet inside this resource group.
So, first of all, we're going to fetch the name and store it inside this variable over here. The next thing we need to do is get the AKS VNet name, and we're going to fetch it using this query; you can see here it returns the AKS VNet. Of course, you can go to your portal and copy the names from there, but we're just using the CLI now, so I don't see a point in context...
...switching here, and especially if you want to automate this process, you will definitely need to do the same in your scripts. So the next thing is that we need to get the VNet ID. Here we are getting the AKS VNet name, and here we need to get the AKS VNet identifier, or ID. Once we have this, we need to create the peering resource between them; this one is the peering from the Application Gateway VNet to the AKS VNet.
So we just need to run this command, and then we provide the names, as well as the AKS VNet ID, which is stored inside this variable that we have over here, and we can run it right now. I think everything looks good; I'm just checking whether there are placeholders that I need to update, but it looks good so far. This will take a few seconds to run, and then we can proceed to the final step of our exercise today, or sorry...
...the final step of the peering exercise, not the whole exercise. So the peering connection has been created successfully, by the looks of it. Now we need to create the opposite peering connection, so we are going to fetch the Application Gateway VNet ID in this case; I'm going to run this command right here and we're going to store it inside this variable.
And then we are going to create the second peering resource and, of course, supply all of the different variables, and we're going to say allow VNet access. Once we run this, our Application Gateway VNet is going to be connected to our AKS VNet, and we are able to integrate them and utilize them, as I will demonstrate here by creating a small test application; I'm going to create the proper routes and we're going to test connectivity to that application.
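A rough sketch of the whole peering dance described above; the variable names are mine, and the key point is that a peering has to be created in each direction:

```bash
# Find the automatically created node resource group and the AKS VNet inside it
NODE_RESOURCE_GROUP=$(az aks show \
  --resource-group "$RESOURCE_GROUP" \
  --name "$AKS_CLUSTER" \
  --query nodeResourceGroup --output tsv)

AKS_VNET_NAME=$(az network vnet list \
  --resource-group "$NODE_RESOURCE_GROUP" \
  --query "[0].name" --output tsv)

AKS_VNET_ID=$(az network vnet show \
  --resource-group "$NODE_RESOURCE_GROUP" \
  --name "$AKS_VNET_NAME" \
  --query id --output tsv)

APPGW_VNET_ID=$(az network vnet show \
  --resource-group "$RESOURCE_GROUP" \
  --name AppGatewayVnet \
  --query id --output tsv)

# Peering 1: Application Gateway VNet -> AKS VNet
az network vnet peering create \
  --resource-group "$RESOURCE_GROUP" \
  --vnet-name AppGatewayVnet \
  --name AppGwToAks \
  --remote-vnet "$AKS_VNET_ID" \
  --allow-vnet-access

# Peering 2: AKS VNet -> Application Gateway VNet
az network vnet peering create \
  --resource-group "$NODE_RESOURCE_GROUP" \
  --vnet-name "$AKS_VNET_NAME" \
  --name AksToAppGw \
  --remote-vnet "$APPGW_VNET_ID" \
  --allow-vnet-access
```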
Fantastic, the final resource has been created and I think our infrastructure setup is now complete. I'm going to clear my terminal and jump very quickly to my portal: I'm going to go to the home screen, then resource groups, then my GitHub Actions runners resource group, and I'm going to see all of the different resources that have been newly created. First of all, we have the public IP that we generated; we have the Application Gateway VNet...
...we have our container registry over here, and we have the Application Gateway. All of them are provisioned, all of them are in a good state, and all of them are running as expected. Everything looks really good so far, so we're going to be deploying a test application. How are we going to do that? If we go to the repository I talked about at the beginning of this episode, you will see that I have the folder apps, and inside this folder I have created a configuration for a test application that we can run.
We're going to use it to do some sanity checks on our cluster. The resource is called test app, and inside this test app we have a very basic deployment: we just have one replica, we are pulling the image from the public Microsoft container registry, and we're going to be deploying this container and publishing port 80. We are also configuring a small environment variable here.
It carries the message that the ingress test is successful. And of course, in addition to the deployment, we're going to be creating a small service that is going to expose port 80, mapping to port 80 inside the container, and then we are going to specify the selector here to be app: test-app. This is it; it's very simple, there's nothing really major going on here, and we're going to use this as a quick sanity check.
Because I have this repository already checked out in my container, I have the apps folder over here and I have the configuration file right away, so I can immediately say kubectl apply -f apps/testapp.yaml, and I can also specify the namespace to be default. Now we're going to wait for this app to be created. It has been deployed successfully; as you can see, we can do kubectl get pods -n default.
-n is short for --namespace, and you can see here that the container image is being pulled and the pod is being created. So let's do a quick watch -n 2 on this and wait for the container to be running. Perfect: as you can see here, the status now is Running, which means that our small app is correctly deployed.
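The deployment and the status check, in command form (the file name matches what was said in the demo; adjust the path to wherever the repository is checked out):

```bash
# Deploy the test application into the default namespace
kubectl apply -f apps/testapp.yaml --namespace default

# Check the pod status; repeat (or wrap in `watch -n 2`) until it shows Running
kubectl get pods -n default
```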
Obviously, we still cannot access it from the internet, because we have not configured our ingress controller yet and we have not configured routes. I have already provided you with the basic ingress configuration, so if we do a quick ls on the ingress folder, you will see here that I have the ingress.yaml file. This is my basic ingress configuration.
I also have the ingress TLS configuration, which I'm going to demonstrate later, in the next episode, once we set up the actions-runner-controller, but for now we're going to be satisfied with the basic ingress configuration, and I just want to show you the content of this file. You can see here that it's just a very basic Ingress. Pay attention, please, to the annotations, because they are very important; this will save you hours of experimentation. In the annotation here we are specifying that the ingress class should be azure/application-gateway.
This instructs the Kubernetes cluster to use the Application Gateway as our ingress controller, and then we are specifying the rules. The rules are simple: we're just saying that I want to create a new path, the main path /, and I want to route all the traffic that comes to this path towards the service that has the name test-app, which we created in the previous configuration file, on port 80. So now let us apply this ingress configuration, and we can do that with kubectl apply -f ingress/ingress.yaml.
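For reference, a minimal manifest matching that description might look like the following; the exact file in the repository may differ, so treat this as an illustrative sketch applied via a heredoc:

```bash
# Illustrative ingress matching the description above (see ingress/ingress.yaml in the repo)
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-main
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test-app
                port:
                  number: 80
EOF
```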
It doesn't really matter which namespace you deploy it to, because our configuration now runs across namespaces, and I'm going to explain this concept a little more in a future episode: what happens if you have multiple ingress rules defined in different namespaces, and how does the Application Gateway manage those? I'm going to explain that a little bit later.
For now, we are just going to be satisfied with creating this ingress rule, and we can very quickly check it with kubectl get ingress. You can see here that the ingress has been created and the public IP address has been correctly assigned, and if we do kubectl describe ingress and then provide the name of our ingress, which is ingress-main, you can see here that the path / is now being routed towards the test app on the back end.
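Those two checks, spelled out:

```bash
# Confirm the ingress exists and has been assigned the Application Gateway's public IP
kubectl get ingress

# Inspect the routing rule: / should point at the test-app service on port 80
kubectl describe ingress ingress-main
```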
So now, if I copy this IP address, go to my browser, paste the IP address, and hit enter, you will see here that I am receiving the successful response, which is "AKS ingress test is successful", and that is exactly what I was hoping to see. If I go back to my portal and go to, let's see, HTTP settings, I should be able to see a new setting defined here, which is great. But I also want to have a quick check on where this is going to.
So if I go to backend pools, I think I have one target here. Perfect, this is it, and this IP address you can see here should belong to my test app container; it's a private IP inside my cluster. How can I figure this out? If I do kubectl get pods, I should be able to get the name of the pod, and then I can say kubectl describe pod and provide the name.
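A quick way to cross-check the backend pool target against the pod IP, as described:

```bash
# List the pods; the wide output includes each pod's private IP directly
kubectl get pods -n default -o wide

# Or describe a specific pod to see its IP (substitute the real pod name)
kubectl describe pod <test-app-pod-name> -n default | grep IP
```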
I should be able to see the IP address of this pod, which is over here, 10.2.1.4, and it should match the IP address of my target. As you can see here, my Application Gateway has been appropriately configured. Fantastic, and this is it, folks. I'm going to switch now to the live mode, where I will be answering your questions live; please make sure you drop them in the comments.
I'm going to cover anything that we have discussed in this session today, and if you have inquiries about how certain things work or why we did things in a certain way, this is the proper time to ask. I'm going to be going live right now. Awesome, thank you very much, everyone, for sticking around until this point; I really appreciate you. A small reminder: if you have any questions, please drop them in the chat right now so that we can tackle them.
Otherwise, don't forget that there is a feedback form in the chat as well. We would love to hear from you: your input, your feedback, your comments, whatever you'd like us to enhance for future episodes and for future office hours sessions.
Don't forget that there is another episode, which is the continuation of this one, next week at the same time. Please share the registration link with the people who you think might be interested in this topic and in this setup, and, as always, come prepared for the next session if you'd like. I will be going live towards the end of these sessions...
...to answer your comments and whatever thoughts you might have. If you have concerns about how these things work, or how you can get them up and running in your environment, and so on and so forth, I would love to tackle those questions. Of course, these videos are going to be available on YouTube once this broadcast is completed, so you can also share them with your colleagues and any other stakeholders or parties that might be interested in this.
I will stick around for maybe another 30 seconds. If I don't see any questions from anyone, I want to wish you a great evening and a wonderful weekend, and of course feel free to reach out to us, through Expert Services, through your account managers, or through anyone at GitHub, so that we can have deeper conversations about what you've just seen today. And with this, I think we're going to wrap up our session for today.
Sorry, checking it, one moment please. All right, so I see here, interesting: "Looking good so far, looking forward to part two. Has there been any comparison made with Brigade.js?" I'm not sure I am familiar with that tool, to be honest; let me quickly check it.
"Event-driven scripting for Kubernetes." Oh, interesting, yeah. I mean, next episode we're going to be tackling the actions-runner-controller; we have not done such a comparison. There are definitely many ways you can do this implementation. You can also approach it from a function-as-a-service perspective (there's also OpenFaaS) and have your containers, or runners, run as an event-driven setup. So definitely, everything is feasible.
If you'd like to go for sort of a makeshift type of setup or solution, that's feasible; you might want to think about all the edge cases that you will be tackling with it. But Brigade.js specifically, we have not really explored it or tested it. Next episode it's going to be...
A
Sorry,
it's
going
to
be
a
sorry,
a
kubernetes
native
controller
that
we're
going
to
be
discussing
we're
going
to
be
installing
and
we're
going
to
be,
showing
you
how
that
one
will
work
and-
and
we
believe
that
is
the
solution
that
we
want
to
support
from
the
github
side
and
that
we
are
endorsing.
Even
though
that
controller
is
a
an
open
source.
Community
driven
effort.
A
A
Okay, I have a question from Fakundu (sorry if I'm mispronouncing your name): "Is there a benefit to using two separate virtual networks, or VNets, as opposed to one?" Yes, and I will explain why. You might want to have your Kubernetes cluster in a private VNet, and you don't want to expose it or make it public. So you want your API gateway, because it's facing the internet, to be in a public VNet.
A
That's
why
we
have
two
v-nets
and
that's
why
we
are
peering
them
and
at
the
same
time
also,
you
don't
want
the
application
gateway
to
be
in
the
same
v-net
as
your
kubernetes
cluster
because
of
cider
block
sizing
considerations,
the
aks
cluster
takes
a
wide
range
of
subnets
and
you
want
to
you:
don't
want
to
be
an
overlap
between
your
application
gateway.
You
know.
Subject
requirements
and
your
kubernetes
cluster
so
put
them
into
separate
v-nets
and
stay
comfortable
that
way,
and
also
a
little
bit
more
safe.
A
All
right
anything
else,
I
can
help
you
with.
I
was
not
expecting.
I
was
not
expecting
the
questions
to
come
in,
but
please
feel
free
to
drop
anything
you
you
have
on
your
mind,
any
questions
that
you
you
thought
about
any
questions
that
you
are
curious.
I
can
share
with
you
some
insights
also
about
future
events
without
really
spoiling
it
for
you,
but
for
sure
anything
that
comes
to
mind
feel
free
to
drop
it
in
the
chat
and
would
love
to
take
them.
Take
them
on.
All right, this time we're going to wrap up for real. Thank you very much, everyone, for watching and for your questions. I'm looking forward to the next episode; take care, and we will see you next week. The questions you asked just now, I'm going to answer in the next session. Thank you very much.