From YouTube: Sponsored Demo/Tutorial Session: Kubernetes GitOps with Rancher Continuous Delivery - Arsalan Naeem
Description
Sponsored Demo/Tutorial Session: Kubernetes GitOps with Rancher Continuous Delivery - Arsalan Naeem, Rancher (now a SUSE company)
Take a look at how Fleet was built from the ground up to perform Kubernetes GitOps at scale. Dive into the basic Fleet architecture, how labels are our friends, how to treat K8s clusters as cattle, and how to automate the deployment of cluster tools (monitoring, logging, OPA policies) after a cluster is provisioned.
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
If I wanted to, I could create a new cluster. So if I did Add Cluster, I could import a cluster that already exists. I could adopt a hosted Kubernetes offering, so EKS, GKE and AKS, for lifecycle management in Rancher. I can also do other clusters: this one gives me a kubectl command that I can drop into my existing Kubernetes clusters and bring them into Rancher as well, to manage with Fleet and Rancher itself.
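
The import option described above generates a one-line kubectl command to run against the existing cluster. A hedged sketch of what that command typically looks like (the server address and token are placeholders, not values from this demo):

```shell
# Placeholder sketch: Rancher's UI generates a manifest URL containing a
# one-time registration token; applying it installs the Rancher agents
# that register the cluster back into Rancher.
kubectl apply -f https://<rancher-server>/v3/import/<registration-token>.yaml
```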
I then also have the option to create an RKE cluster with bare metal or existing VMs. It gives me a docker run command for this, and I can just drop that in; with the prerequisites met it's OS agnostic (Linux or Windows), and it will set up an RKE cluster and register it back into Rancher, and again we can use Fleet to manage it. The next option is: if we have no infrastructure, we can tell Rancher to go and create the virtual machines, set up Kubernetes as well, and let it register back into Rancher.
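
For the custom RKE cluster option, Rancher displays a docker run command to execute on each node. A hedged sketch of its general shape (the server URL, token, and version are placeholders; the role flags vary per node):

```shell
# Placeholder sketch: run on each node you want to add. The role flags
# (--etcd, --controlplane, --worker) depend on what the node should do.
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:<version> \
  --server https://<rancher-server> --token <token> \
  --etcd --controlplane --worker
```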
So if we did, say, Amazon, and called this test-cluster, gave it three nodes with etcd and control plane, I can then come down here to the environment section. I can add in the label env again, with the value test, and then, if I have my Fleet rules, they would pick this up and deploy my applications for test and prod. We can talk about that.
A
So,
let's
just
go
to
prod
and
let's
just
watch
a
cube
ctl
here
and
she
will
stop
a
new
tower
with
cluster
explorer
and
it
will
always
keep
ctl
so
this
loads.
Whilst
we
continue,
so
this
is
the
cluster
explorer.
So
this
just
gives
you
an
overview
of
what's
going
on
with
your
cluster.
What's
visually
running
at
the
top
left,
we
go
to
continuous
delivery,
so
this
is
fleet-
and
this
is
part
of
every
retro
install
now.
So the first thing we'll do is create a cluster group. I have one here already, so I'll create a new one and we'll call this env-prod. So we say this is the cluster group for environment prod. We again add that selector: we say if a cluster has this label set to this value, and you can obviously change that, and we hit Create, and then it should bring in our cluster.
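
The cluster group created through the UI corresponds to a Fleet ClusterGroup resource. A minimal sketch of the equivalent manifest, assuming the env=prod label used in this demo:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: ClusterGroup
metadata:
  name: env-prod
  namespace: fleet-default
spec:
  # Any cluster carrying this label is pulled into the group.
  selector:
    matchLabels:
      env: prod
```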
So if we go here, we can see that our cluster's been brought in. It has three nodes and it's ready to go. So we go to Git Repos and we do Create Repo, so now we can deploy to the cluster. Here I am using the fleet-examples repo on the Rancher GitHub, and we'll call this... we'll just call this "example", and we'll do the simple test first. So we'll just deploy these Kubernetes YAML files; these are just vanilla Kubernetes YAML files. We'll just pick up this whole URL.
So we can go to Code and copy the HTTPS URL; for SSH we would need our keys. Since this is unauthenticated, we'll just leave it as it is. We can pin this to a version, or we could go to a branch. For authentication, you could use SSH keys or basic auth, and the GitHub repo, or the Git repo, doesn't have to be on the internet. It could be in your isolated environment, as long as you can resolve the DNS and reach it. The certificate can be signed by your own CA or it could be self-signed.
So you have the option to do both; since I'm using GitHub, we can just leave the cert as is. For paths: this is the path from the root of the repo, so here it'll be from the root of the repo, and I'll choose simple. Then I select where I want to apply it to. I pick my cluster group, which is env-prod, or I can deploy straight to one cluster. Obviously I want to be managing multiple clusters, so I hit that, and now we see it's ready to go.
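
The form filled in above maps onto a Fleet GitRepo resource. A minimal sketch, assuming the fleet-examples repo and the env-prod cluster group from this demo:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: example
  namespace: fleet-default
spec:
  repo: https://github.com/rancher/fleet-examples
  # Only the listed paths within the repo are deployed.
  paths:
    - simple
  # Target the cluster group rather than a single cluster.
  targets:
    - clusterGroup: env-prod
```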
We can go back to our other tab and do kubectl get pods, and we can see our application running, started seven seconds ago. We can look at the services, and we can see our front end's been deployed as well. So we'll just copy the node port, and we will get the IP address of this node. So if we go here, we can go to any one of these nodes, so 100.1.100, drop that in, and we can see our application working. Awesome.
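
Reaching the front end via a node's IP works because the Service is of type NodePort, which exposes the same port on every node in the cluster. A minimal sketch of such a Service (the name, ports, and selector here are illustrative, not taken from the example repo):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend          # illustrative name
spec:
  type: NodePort
  selector:
    app: frontend         # illustrative selector
  ports:
    - port: 80            # cluster-internal port
      targetPort: 8080    # container port
      nodePort: 30080     # reachable on every node's IP
```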
Git repo: we will go back one to this page, and we will go to, say, single-cluster helm, so we'll just deploy this Helm chart. Again, it's the same application, just as a Helm chart. So we'll come back here, go to Create, we'll say helm-example, and we'll drop in the URL again. So we need this URL; we'll leave everything default, and we'll add a path, because we need to go to single-cluster/helm, so we can drop that in there, putting in the forward slash.
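
When a path points at a Helm chart, Fleet can read an optional fleet.yaml alongside it to customize the release. A hedged sketch of such a file (the namespace, release name, and values shown are assumptions, not the repo's actual contents):

```yaml
# fleet.yaml - optional per-path configuration read by Fleet
defaultNamespace: example   # assumed namespace
helm:
  releaseName: frontend     # assumed release name
  values:
    replicas: 2             # assumed chart value override
```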