From YouTube: Sponsored Lightning Talk: GitOps at Scale with Rancher Continuous Delivery - Arsalan Naeem
Description
Sponsored Lightning Talk: GitOps at Scale with Rancher Continuous Delivery - Arsalan Naeem, Rancher (now a SUSE company)
Take a look at how Fleet was built from the ground up to perform Kubernetes GitOps at scale. Dive into the basic Fleet architecture, how labels are our friends, and how to treat K8s clusters as cattle — including how to automate deployment of cluster tools (monitoring, logging, OPA policies) after a cluster is provisioned.
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
Hey guys, welcome to cdCon. Today we're going to be talking about scaling Kubernetes GitOps. My name is Arsalan, and I'm a field engineer over at SUSE Rancher. So let's get started. We'll be covering what GitOps is, what Fleet is, how Fleet differs from its competitors, and scaling Fleet up to a million clusters. GitOps is a way we can give developers tools to deploy their applications and their workloads from a Git repo.
If I were to go and modify that infrastructure manually, and say remove a resource, then my GitOps tooling would be able to alert me that my infrastructure is out of sync with my single source of truth, the Git repo, and remediate that situation. This is where Fleet comes into the field of GitOps.
Rancher is an open-source management solution for Kubernetes: we can create and deploy clusters all through Rancher. What we've learned from the lessons of creating Rancher and managing thousands of nodes is that there needs to be a solution for managing thousands of clusters, and that's where Fleet comes in. Fleet is deployed automatically when you deploy a Rancher management server.
So why one million? With k3s being donated to the CNCF, a lot more people are going to be running small Kubernetes clusters on the edge, whether single-node or just small clusters; there are going to be a lot more of them. The world is going to be running on Kubernetes in a couple of years, because Kubernetes runs on top of Linux.
This is why we need Fleet: these clusters are the future, and we need to be able to manage them at scale. With Fleet, we can point at a Git repo, similar to competitors like Argo and Flux, and it can look at the YAML files, Helm charts, and Kustomize files and deploy them to a cluster or clusters.
Where it differs, though, is that we have a centralized server: when you deploy Rancher, you deploy the Fleet server that comes along with it. Fleet uses a two-stage pull model, which makes it scalable; we'll talk more about that in a second. Fleet runs on top of Kubernetes components and is driven entirely by Kubernetes APIs.
This means it's easy to deploy and easy to manage. It also means we can have features like RBAC built on top. The Fleet agent is deployed on the downstream clusters, so you only need one agent, and it doesn't always have to be connected to the Fleet server: we can provide a small amount of information and it will go and do the rest for us. It can also provide feedback on what's actually happening: when I roll out a new application or change something, it will let us know how that application is reacting and give us useful feedback, so we can figure out whether there's a problem or whether our application was rolled out successfully. So typically we would have Fleet here, on the Rancher server. It would pull from Git, or we could interact directly with the Kubernetes API, and it would push out these bundles to our cluster groups.
A
You
know
we
can
just
use
metadata
on
those
clusters
and
have
create
cluster
groups
based
on
those
labels,
and
then
we
can
roll
out
actual
applications
and
workloads
to
those
clusters
based
on
what
they
are.
So
we
could
have
say
a
a
edge
cluster.
We
could
also
have
a
gpu
load
cluster
and
with
the
labels
we
can
roll
out
different
workloads
to
different
clusters.
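The label-based targeting described here can be sketched as the `targets` section of a GitRepo spec; the label keys and values below are hypothetical examples, not from the talk:

```yaml
# Hypothetical sketch: routing different workloads to differently-labelled clusters.
targets:
  - name: edge
    clusterSelector:
      matchLabels:
        env: edge            # lightweight workloads go to edge clusters
  - name: gpu
    clusterSelector:
      matchLabels:
        accelerator: gpu     # GPU workloads go to clusters labelled this way
```

Because the selectors match labels rather than cluster names, newly registered clusters pick up the right workloads automatically — the "clusters as cattle" idea.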
The main features of Fleet are that we have the ability to manage policies and deploy Kubernetes YAML files. Everything in Kubernetes now is basically being built so that any mundane task or system task we used to do through the command line can be done through the Kubernetes APIs. This means more and more applications and workloads are going to be tailored towards Kubernetes, and if we can deploy them at scale, that solves a lot of problems, especially around infrastructure maintenance, deploying applications, and having governance over all the RBAC policies. With the monitoring, we can get a live status update on how the rollout is going, whether the configuration is in sync, and whether there are any issues or problems, and you can get logs on that.
The centralized control plane also gives us visibility into all of the downstream clusters, and we can apply RBAC on top. This means that in larger organizations you can have strict RBAC controls in place to govern who can deploy what and what can be changed.
The main competitors similar to Fleet are Argo and Flux. Argo is very similar to Fleet, but it has its own engine and a push architecture, and the push architecture is harder to scale. Flux is a simpler design, but there's no centralized control plane, so we can't add in RBAC, dashboards, or monitoring of the rollout stage. This is where Fleet comes in with its two-stage pull.
We pull the information into Fleet, and Fleet pushes that out to its edge. We also use Helm as the deployment engine, because we know Helm works: it creates and updates charts, and we can view the changes. This means we can use it to deploy our changed state, and that's how we get programmatic access to these bundles; working with Git programmatically isn't as easy as it seems.
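For the Helm-driven engine, Fleet reads a `fleet.yaml` file in each directory of the repo. A minimal sketch, with placeholder chart paths and values, of how Helm options and per-cluster overrides might be expressed:

```yaml
# Hypothetical fleet.yaml sketch: bundle-level Helm options plus per-cluster overrides.
# Chart path, namespace, and values are placeholders.
defaultNamespace: monitoring
helm:
  chart: ./chart             # chart rendered by Fleet's Helm-based engine
  values:
    replicas: 2
targetCustomizations:
  - name: edge
    clusterSelector:
      matchLabels:
        env: edge
    helm:
      values:
        replicas: 1          # smaller footprint on edge clusters
```

Fleet renders this into a bundle, and each agent applies its customized variant as a Helm release, which is what makes rollouts trackable and upgradable.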
If we registered these nodes into clusters with Rancher and pushed out configurations directly, that could be very chatty and very noisy, and the problem with that is you have all these resources tied up chugging through figuring out what to deploy where. This is where Fleet comes into it: we push that compute off to the edge.
A
We
just
say
you
know
freak
pools,
you
know,
gets
all
information
from
the
git
repo
and
then
we
post
that
we
send
a
couple
of
kilobytes
of
files
down
to
the
actual
agent
and
we
let
that
process
where
it
needs
to
deploy
all
the
files
on
that
cluster.
This
means,
you
know
we
are
basically
making
it.
You
know
the
control
plane.
Server
doesn't
have
to
be
a
huge
workload,
because
all
that
compute
power
is
being
pushed
down
the
line
and
to
the
edge.
So that was a quick overview of what Fleet is. There will be a demo; if you guys have any questions about Fleet, check out the Slack channel and the SUSE Rancher community for more information about Fleet and our other projects.