From YouTube: Kubernetes Infrastructure the GitOps Way
Hello everybody, welcome to this webinar hosted by CNCF together with its partner Kubermatic. My name is Mikhail Mancho and I will be guiding you through the webinar, whose main topic is spinning up Kubernetes infrastructure using GitOps tooling. So have fun and let's get started right away.
Just a brief introduction about me: I live in the Czech Republic and I work as a consultant and cloud architect at Kubermatic, helping customers with their cloud native journey. Feel free to connect with me on LinkedIn if you are interested, and if you like this webinar I will be happy to respond to questions during it.
I will also explain the motivations for why we created such a tool and what kind of tools from the CNCF landscape we are using. I will explain a bit more about the GitOps concepts and on which levels we can utilize them, and last but not least I would like to give some focus to security, which is an important piece of the puzzle in this setup.
Hopefully we will also have a live demo, so I hope we will fit nicely in the given time.
Let me start with an introduction of Kubermatic, just so you are aware: it's a European-based company with employees all over the world, so everybody works in a fully remote corporation, and it's one of the top Kubernetes employers in Europe. It builds tools like KubeOne and the Kubermatic Kubernetes Platform, and those are the tools that will be involved in today's webinar as well, so I will briefly explain them.
We are heavily focused on automation and on simplifying the operations connected with running workloads and applications on a cloud native, Kubernetes-based stack.
First of all, I've already mentioned the KubeOne and KKP tools. We won't be focused that much on KubeCarrier today, so we can skip that for now. Let me start with KubeOne: KubeOne is a tool used for the automation of a single cluster.
It's completely vendor neutral, so you can use it to deploy vanilla Kubernetes clusters on all well-known public cloud providers, as well as in environments like on-premise, OpenStack, vSphere, or providers like DigitalOcean, Hetzner, Packet, and so on. Effectively, it's a CLI tool built from developers for developers, so there is no UI or anything like that.
It's just a CLI tool with a YAML definition, with which you control the provisioning of your Kubernetes cluster. The result usually created after you execute KubeOne is a deployed cluster where there is typically a load balancer used for accessing the Kubernetes API, the control plane nodes are provisioned, and the worker nodes are provisioned as well; those are managed, again in a declarative way, using a resource called MachineDeployment.
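To make the declarative definition concrete, a minimal KubeOne manifest might look roughly like this. This is a hedged sketch, not a file from the webinar; the API version, Kubernetes version, and provider block are placeholder assumptions that may differ between KubeOne releases:

```yaml
# kubeone.yaml -- hypothetical minimal example (values are placeholders)
apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
versions:
  kubernetes: "1.24.8"   # placeholder version
cloudProvider:
  aws: {}                # provider selection; credentials come from the environment
  external: true         # use the external cloud-controller-manager
```

The worker nodes themselves are then described by MachineDeployment resources, either inside this manifest or applied to the cluster afterwards.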
So this is one part, that's KubeOne, and the second one is the abbreviation KKP, which I will be using during the webinar. This tool is effectively a platform that is used to provide a single UI for the management of Kubernetes clusters in a multi-cloud environment. Effectively it gives you a one-stop shop for all of your Kubernetes clusters across different environments, including the public cloud providers, your on-premise environments, or, newly with KKP 2.19, essentially bare-metal clusters as well.
I will briefly explain some of the major concepts, because they will be handy to understand throughout the next steps I will be describing. Usually the setup looks like this: first of all, you provision one, let's say management, Kubernetes cluster with KubeOne, and on top of this management cluster you install the KKP platform. It consists of a couple of core components like the API, the operator, the dashboard, and so on; the cluster where these components are deployed is usually called the master cluster. Next to that (it can be either a separate cluster, or it can also be deployed on the same cluster) we provision a so-called seed cluster. The seed cluster again is a set of operators and controllers, but mainly it is the cluster where we provision the containerized control plane components of your user clusters. The user cluster is the end entity, the cluster that is actually used by the end users. By a user cluster you can imagine a Kubernetes cluster on AWS, on Google, and so on; effectively it is represented by the workers provisioned in the given environment, while all of the control plane components run as containers on the seed cluster. I'll keep it at this level of detail for now.
The overall idea is to go through the wizard. The wizard is composed of six steps, and based on your selections the content will be generated and downloaded. You can take this as a starting point to spin up your Kubernetes infrastructure in a very easy way within a couple of minutes, instead of spending a couple of days or weeks trying to do the same just by following the documentation.
I will briefly describe the available steps. In the first step of the wizard you choose the Git provider hosting the repository that will represent your infrastructure as code; we automate everything, so for sure we want you to have a declarative definition of all the configuration files. Here you pick one of the most popular Git providers, and I would like to highlight that this week we actually added Bitbucket support.
After selecting the Git provider, you select the cloud provider. For now we have added support for the five most common ones: the public cloud providers AWS, Google Cloud, and Azure, and from the on-premise environments we have picked VMware and OpenStack.
You can also just use it for learning and onboarding purposes, and based on that build your own structure in a similar way. In the next step there is the cluster configuration: here we ask for specific details, like which version of Kubernetes you would like to use for the main cluster.
The next step asks for the details of the KKP: here we ask what domain will be used to expose KKP through the Ingress resource, plus information like the email and the user that will be used for authentication. Then, in the bootstrap section, we also demonstrate how you can manage some resources inside the KKP platform in a declarative way. Effectively, you will be asked to provide a name for the project that will be created, and the configuration of the seed cluster: in which location the seed cluster will run and how it will be configured. You can also optionally provide the credentials that will be used for provisioning of the user clusters.
The content may look like this, for example. You can see the structure of the archive: first of all, it has some README files. I will talk about these README files a bit later, but in general they are the instructions for making this work in your repository or on your local machine.
Then, if you choose for example GitHub as the Git provider, you will receive a GitHub workflow definition. If you choose GitLab, you will have a .gitlab-ci.yml; if you choose Bitbucket, you will have a bitbucket-pipelines.yml instead. So it is very dynamic, and all of the content is generated based on your selections. One thing I didn't mention yet: as the GitOps tool we chose Flux version 2.
That's why we have the flux directory here, and you can see there is a structure like flux/clusters/master; effectively these are all the resources that are delivered to the master cluster. Then we have an additional directory called sops. I will talk about this a bit later; it serves the purpose of providing encrypted values directly to your Kubernetes cluster. Another top-level directory is kubeone, which includes the declarative definition of the KubeOne cluster.
We also need two configuration files. One is values.yaml, which is effectively the set of values for the Helm charts installed as part of the installation, and the second one is the custom resource called KubermaticConfiguration, which includes high-level configuration about authentication, which features will be enabled on the KKP, and so on. Then there are two more files. One is called secrets; I will talk about this later, but effectively here we are generating some secrets for you, for example an encryption key pair or the user password, and this is the file where we provide these generated values to you. And then there is the terraform directory, which is used together with KubeOne: first you provision the cloud resources with Terraform, and KubeOne then has a native integration with the Terraform output.
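As an illustration of the KubermaticConfiguration mentioned above, a trimmed-down sketch could look like this. The field names follow the upstream CRD as I understand it, but the API version and values here are assumptions, not the generated file itself:

```yaml
# KubermaticConfiguration -- hypothetical sketch, values are placeholders
apiVersion: kubermatic.k8c.io/v1
kind: KubermaticConfiguration
metadata:
  name: kubermatic
  namespace: kubermatic
spec:
  ingress:
    domain: kkp.example.com      # the domain you entered in the wizard
  featureGates:
    OIDCKubeCfgEndpoint: true    # example of a feature toggle
```

The companion values.yaml then carries the per-chart settings for the Helm charts installed alongside it.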
So, first of all, just to recap: you went through the wizard, then you downloaded a zip archive which may look like this (this is just an example), and the next step is: how do you actually deliver it? There is a clear separation of responsibilities regarding what is delivered by the automated pipeline, because for sure we are not able to start everything from scratch; it's the common chicken-and-egg problem. So, first of all, we provision the master cluster and install KKP using the automated pipeline.
Here is a schema of how the pipeline looks. There are stages like validation of the Terraform, application of the Terraform itself; then we apply KubeOne, install the KKP, and then we initialize the Flux tool, or the GitOps tool in general. This is all managed by the pipeline. But then there is the second part of the responsibilities: all of the resources that can be defined declaratively are fully managed by Flux. Any time you update, add, or delete some files under the flux directory, it will be the responsibility of Flux to reconcile the state of these resources on your target cluster.
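The stage layout described above could be sketched like this in GitLab CI terms. The stage names are my assumptions for illustration; the generated .gitlab-ci.yml contains the real job definitions:

```yaml
# .gitlab-ci.yml -- sketch of the stage layout only, not the generated file
stages:
  - validate      # terraform fmt / terraform validate
  - provision     # terraform apply creates the cloud resources
  - kubeone       # kubeone apply provisions the HA Kubernetes cluster
  - kkp           # kubermatic-installer deploys the platform
  - flux          # flux bootstrap wires this repository to the cluster
```

Everything after the flux stage is then reconciled from Git rather than run by the pipeline.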
We also provide two ways to spin everything up. First, you can utilize the automated pipeline of your Git provider, so GitLab pipelines, GitHub Actions, or Bitbucket Pipelines. Or there is an alternative way, which can be very handy for understanding the product and all of the steps, by doing them yourself.
You will still utilize all of the main concepts, and you will have the infrastructure as code in your Git repository, but the provisioning of the main cluster won't happen via the automated pipeline. Just to recap some of the motivations for doing this: as you may imagine, we are doing these kinds of installations in various environments, for various customers, and so on, and with this project we wanted to simplify the bootstrapping and onboarding of customers, so that we can really start in a very quick way.
Customers can also try this by themselves and, based on that, decide whether they like the platform or not. The wizard and the documentation are very detailed at this moment, so it should be easy to follow; if not, we always welcome any feedback from the community, as all of the stuff we have is open source.
We truly try to avoid any manual steps at all costs. By using this, we are pretty sure you will only do a couple of preparation steps, and you will have a fully automated pipeline which does everything for you. On top of that, we also set up the Flux tool, which will be used for management of the resources in the GitOps way on your Kubernetes cluster. And last but not least, we wanted this to be secure, so that you can really put all of your configuration files in the Git repository.
We wanted to avoid situations where you are limited in what you can commit to the repository; and, of course, it is always bad practice to put any plain-text values and secrets in a repository. To avoid this, we decided to use Mozilla SOPS, which, by the way, has native support in Flux version 2: you can integrate with it directly by using the decryption provider in Flux.
We will see later how we utilize that. Another reason we chose SOPS is that we are not only delivering Secret resources in Kubernetes; for that you could use tools like Sealed Secrets and similar. There are also other files that include sensitive configuration, like a values file or, for example, the preset, and other files may contain sensitive values as well.
I found about 20 projects that are connected and actively used as part of the Start KKP project, because we are not only delivering a plain Kubernetes cluster: together with the KKP platform we directly provide support, for example, for an observability stack, meaning monitoring, logging, and alerting. This is everything we deliver out of the box. We utilize, for example, cert-manager to provide the certificates.
We use NGINX to expose the applications, and in both KubeOne and KKP there is new support for the Cilium CNI that was recently added to the landscape as well. So this is just a very brief overview.
That is the kind of projects and tools involved in the Start KKP project, which you can try on your own and use very quickly. To move on, I also wanted to briefly explain how it actually works under the hood. Right now I will try to be a bit more technical, and I will briefly explain the specific steps which happen, either executed by you or by the pipeline.
First of all, we start with provisioning of the cloud resources using Terraform. For each provider we have a Terraform example that can be used, and then the output from Terraform is consumed by KubeOne to provision the KubeOne cluster, or simply an HA Kubernetes cluster.
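The Terraform-to-KubeOne hand-off described above roughly amounts to the following commands. This is a hedged sketch assuming the directory names from the generated archive; the exact paths may differ:

```shell
# Sketch: provision cloud resources, then feed the Terraform output to KubeOne
cd terraform/aws
terraform init && terraform apply
terraform output -json > ../../tf-output.json

# KubeOne reads the Terraform output to discover the provisioned machines
cd ../..
kubeone apply --manifest kubeone.yaml --tfjson tf-output.json
```

This is the native integration mentioned earlier: KubeOne does not re-create the machines, it installs Kubernetes on what Terraform reported.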
This will usually be the result, as already mentioned before: there will usually be a load balancer service, a set of control plane machines (virtual machines or whatever is available at that provider), and also a set of workers that will be used for running the workload on the master cluster, so for the actual installation of KKP these workers will be utilized. So let's imagine this is the empty, so-called master cluster.
In the terminology we use there is also a concept of add-ons. Out of the box, let's say, we provision a storage class, and we also set up the node autoscaler, so you don't have to care that much about scaling the machines up and down based on the current utilization; it will all be managed automatically by the node autoscaler. You just configure the maximum and minimum number of nodes in use.
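Configuring those bounds typically comes down to annotations on the MachineDeployment. The annotation keys below come from the Kubernetes cluster-autoscaler project's Cluster API provider; treat the resource shape as a hedged sketch rather than the exact file the generator emits:

```yaml
# MachineDeployment autoscaling bounds -- illustrative sketch
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: pool-1
  namespace: kube-system
  annotations:
    cluster.k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
```

The autoscaler then grows or shrinks the pool between those limits based on pending pods.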
After that, so for now we have an empty, vanilla Kubernetes cluster, we use the KKP installer. KKP officially delivers an archive that includes a binary called kubermatic-installer, and after running this installer, providing the values file and the KubermaticConfiguration, four namespaces are created. The first one is kubermatic, where the already mentioned dashboard, API, and operator components and controllers will be running.
Next to that, for exposure of the applications we deploy the NGINX Ingress Controller, and for provisioning of the certificates cert-manager is installed. These are, let's say, the core components installed by the installer. But we continue, because we would like to demonstrate how to automate the other stuff as well. So we continue with, okay, one more preliminary step.
After the previous step there is a required step: to be able to access the UI on a specific domain, you have to register the DNS endpoint. This is very much cloud-provider specific; again, there are instructions on how to deal with it in the documentation, and for AWS, for example, we also provide an automated module in the Terraform which can take care of this step for you. Anyway, after you have the KKP installed and the domain is registered, the certificates will get provisioned automatically and so on, so at this moment you can start using the KKP in your browser.
The next step is the installation of the Flux tool. For version 2 there is a CLI tool called flux that has a bootstrap command, and it will effectively bootstrap Flux into your Git repository: it creates the commits with the definitions of its components, so all of the Deployments, all of the ServiceAccounts, and so on.
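The bootstrap step roughly corresponds to a command like this. The group, repository, and path names are placeholder assumptions matching the generated layout, not values from the webinar:

```shell
# Sketch: bootstrapping Flux v2 against a GitLab project
export GITLAB_TOKEN=<personal-access-token>   # placeholder
flux bootstrap gitlab \
  --owner=my-group \
  --repository=my-kkp-infra \
  --branch=main \
  --path=flux/clusters/master
```

After this, Flux watches that path in the repository and reconciles whatever it finds there.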
So it will all be created declaratively in your repository, and it will also create a so-called Flux Kustomization; please do not mix that up with the native Kubernetes templating tool kustomize. This is the Flux-specific Kustomization, an API resource that is used to describe: from this repository, from this path, deliver the resources to my Kubernetes cluster. So this is just a very brief description.
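A Flux Kustomization of the kind described above looks roughly like this. The names and path are assumptions chosen to match the generated repository layout:

```yaml
# Flux Kustomization -- "from this repo, from this path, deliver to my cluster"
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: master
  namespace: flux-system
spec:
  interval: 10m               # how often to reconcile
  sourceRef:
    kind: GitRepository
    name: flux-system         # the repository bootstrap registered
  path: ./flux/clusters/master
  prune: true                 # delete resources removed from Git
```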
At this step, resources like monitoring, logging, OAuth2 Proxy, and MinIO are installed by Flux in an automated way. These are examples of the Helm charts that are delivered from the Kubermatic repository, and next to that we also deliver a couple of Kubernetes API resources, based on what you have provided in the wizard.
We provide some additional KKP settings, controlling things like which features should be visible in the UI, some custom links, and so on. And then, sorry, I skipped one more step; actually this is what gets delivered first. We also deliver another Kustomization, which has the already mentioned SOPS decryption provider defined.
It means we are saying: okay, we have another directory in the Git repository which may include encrypted YAML resources, and from this path, please deliver the resources to my cluster as well. This is how we deliver, for example, the preset or the so-called cluster templates and so on; these are additional resources managed by this Kustomization.
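That second Kustomization differs from the first mainly in its decryption block. Again a hedged sketch with placeholder names; the decryption fields themselves are standard Flux v2 API:

```yaml
# Kustomization for the encrypted directory, with the SOPS decryption provider
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: sops-managed
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./sops                # directory holding SOPS-encrypted YAML
  prune: true
  decryption:
    provider: sops
    secretRef:
      name: sops-age          # Secret holding the age private key
```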
So this is, let's say, the complete picture of what you will get as the result. You can see a lot of logos over here, a lot of well-known tools that will be pre-configured automatically for you, and you can start provisioning the Kubernetes clusters on which you will run your actual applications and workload.
I also wanted to mention that this effectively gives you huge power in what you can do with GitOps, and I wanted to demonstrate a bit how we manage everything in the declarative way.
Flux itself is also managed in the declarative way: if you decide, for example, to upgrade the tool itself, you can do it by updating these files in the repository, or of course you can upgrade the flux CLI locally and, if you perform the bootstrap again, it will update the components and potentially update the synchronization files as well. But more in the context of the KKP, on the right side I also mention some more examples of what you can manage in the declarative way.
First of all, I already mentioned the concept of the cluster template. This can be used to really simplify the bootstrapping of Kubernetes clusters for your end users. If you are, for example, offering a Kubernetes platform as a service to other customers, you can use the multi-tenancy features: set up the roles and permissions for each customer in a different project and, for each customer, declaratively define different options for the cluster templates and so on.
There is also a concept of add-ons that you can use, and it will again all be declarative. There is native support for the OPA policies and a bunch of other KKP resources. There is one exception: the user cluster itself is not yet supported as a resource that you can manage through YAML or kubectl.
That one can only be managed through the Kubernetes API directly, but this is about to change in one of the upcoming KKP releases. And right now I would like to mention a kind of inception that you can do with this platform, because first of all we created, in an automated way, the management (master) cluster with the KKP and so on.
From this KKP you can create other Kubernetes clusters, and on these Kubernetes clusters you can again install the GitOps tool, which will again be automatically delivering the resources or your applications to these user clusters. In the community repository we have an example of the Flux 2 KKP addon, as well as Argo CD; so if you have a preference and you are using Argo CD, for sure you can use it in this way as well.
Next to that, if you are a programmer, you may not like going through the wizard in the browser and the UI over and over again if you will be doing that multiple times. Everything, of course, can be managed and controlled by the API: the wizard itself, running on start.kubermatic.com, has an API.
Here I have added a little disclaimer: I will be happy when we provide an OpenAPI definition for this API, so that you can easily get an example of how the structure of the payload should look. But then you would simply run a single API call, and out of that you will receive the zip file with everything pre-configured. So this is also possible with the project.
Right now, I believe, you are interested in the security aspects. As I already mentioned, one of the main goals was to really put one hundred percent of the sensitive configuration, everything, in the repository, and not have a mixture where we manage, say, 80 percent declaratively in the GitOps way and have to do 20 percent manually. We truly wanted to avoid this. For this we have chosen SOPS, which can be used with different encryption backends.
Out of the box we are using age; that's a Go binary used for encryption and decryption via a key pair of a secret key and a public key. But you can also configure SOPS to use Vault or some PGP keys and so on. A very nice feature of SOPS, as you can see from this GIF example taken from the public repository, is that the files are still human readable and only the sensitive values are encrypted.
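The age-backed encryption workflow described above can be sketched with a couple of commands. The file names and field names are placeholder assumptions for illustration:

```shell
# Sketch: generate an age key pair and encrypt only the sensitive fields
age-keygen -o age.key                 # writes the private key, prints the public key
export SOPS_AGE_KEY_FILE=age.key      # lets sops find the key for decryption

sops --encrypt \
  --age <public-age-key> \
  --encrypted-regex '^(accessKeyID|secretAccessKey)$' \
  --in-place preset-aws.yaml
```

After this, preset-aws.yaml still opens as normal YAML; only the matched fields are replaced with ciphertext.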
We also provide some cheat-sheet documentation on how to use it properly, and there is an example file that we generate, part of the archive that you download, called secrets.md.
It points to the age key, effectively the secret key that is used. This one, for sure, has to be usable by the pipeline, so we guide you through the steps of setting up your Git repository to make this secret available; or, in case you are doing this locally, it will never be exposed anywhere and the secret key will only be on your machine. The second place it lives is the Kubernetes cluster where Flux is configured.
On the Flux side there is a definition of the decryption provider, sops, which points to a Secret with a given name. In the steps and instructions that we deliver, we give you guidance on how to properly create this Secret, so that the secret key is available to Flux on the Kubernetes cluster. And here on the right side you can see an example of the preset. As I already mentioned before, a preset is a set of pre-configured cloud credentials.
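Creating that Secret for Flux is typically a single kubectl command. The Secret name and key file name below are assumptions matching the Kustomization sketch of the decryption provider; the generated instructions may use different names:

```shell
# Sketch: make the age private key available to Flux on the cluster
kubectl create secret generic sops-age \
  --namespace=flux-system \
  --from-file=age.agekey=age.key
```

Flux then mounts this Secret when it reconciles the encrypted directory and decrypts the values in memory.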
It is usually used for provisioning of the user cluster. This is an example of the AWS preset, which accepts the access key and secret access key; next to that there is also the VPC ID that is used for the deployment. You can see that the VPC ID is a plain-text value, as there is nothing secret about it (in case it is secret for you, you can for sure configure it to be encrypted as well), but the two specific values are encrypted, and only the public key is mentioned over here.
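The shape of such a preset, with the mix of encrypted and plain fields, might look roughly like this. This is a hypothetical sketch: the API version and field names follow the KKP Preset CRD as I understand it, the VPC ID is a made-up placeholder, and the ciphertext is shortened:

```yaml
# AWS Preset -- illustrative shape only, not a file from the webinar
apiVersion: kubermatic.k8c.io/v1
kind: Preset
metadata:
  name: aws-demo
spec:
  aws:
    accessKeyID: ENC[AES256_GCM,data:...,type:str]      # encrypted by SOPS
    secretAccessKey: ENC[AES256_GCM,data:...,type:str]  # encrypted by SOPS
    vpcID: vpc-0123456789abcdef0                        # plain text, not secret
```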
You can see that there is some additional metadata for SOPS here. There are a couple of ways you can configure the backends but, as already mentioned, we are using age, and there is for example also the regex that was used for encryption of this file; with that you can control which fields are encrypted and which are not. So right now it's demo time, and I will be happy to demonstrate live how it looks.
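Those encryption rules are usually pinned down in a .sops.yaml file at the repository root, so every contributor encrypts consistently. A hedged sketch, with a placeholder recipient key:

```yaml
# .sops.yaml -- creation rules controlling which files and fields get encrypted
creation_rules:
  - path_regex: .*\.yaml$
    encrypted_regex: ^(accessKeyID|secretAccessKey|password)$
    age: <public-age-key>   # placeholder recipient
```

With this in place, a plain `sops --encrypt --in-place file.yaml` picks up the regex and recipient automatically.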
I have prepared a GitHub scenario as well, and I have a live repository that I prepared in advance. I have also run the pipeline already, because it takes about 15 to 20 minutes to spin everything up, and I wanted to avoid that during the live demo. But anyway, I will still show you the live run of it.
Here you can see the six steps I was talking about, so right now we will try to go through them. First of all, you select a Git provider. You can notice that if I choose GitHub, only AWS and Google Cloud are available. This is because of the limitation that we need to store the Terraform state somewhere, and for these providers we directly utilize an S3 or Google storage bucket.
But if you pick GitLab, for example, we directly utilize the GitLab feature that supports storage of the Terraform state directly in your project or repository; so if you pick GitLab, you have all of the currently supported cloud providers available. So let's pick Google, or maybe let's pick AWS, it doesn't really matter for the demo.
Here you can see that you can define your own cluster name, which is used as the prefix for most of the cloud resources being created. You can select the version that will be used for provisioning, you can select the container runtime, and you can enable or disable the autoscaler add-on on the cluster. Here we are saying that we will let the autoscaler go from 1 up to 10 workers.
Another input is the cluster issuer; for that we need an email which gets notifications about potential expirations and so on. We are also using OAuth2 Proxy for authentication, and we can, for example, control that only users with the kubermatic domain will be able to access the monitoring services like Grafana, Alertmanager, and so on. But for sure there is a very large set of options that can be used.
This is just a very small example of how it can be configured. Right now I'm at the KKP bootstrap step, so let's continue.
Here you fill in details like the region, and I won't be providing the real values right now, but here you can generate the credentials; I would for sure recommend having a separate IAM user in AWS and generating for it the secret keys that will be used for provisioning of the user cluster.
And here you have to provide the specific values, and those will be encrypted inside the preset resource in your repository out of the box.
Right now we are at the last step, which is effectively the summary and recap of all of the inputs you have provided. As you can imagine, and based on what you have seen, there are a lot of options and some conditional steps. Here you should really see all of the options and inputs you provided, and as soon as you click Generate, it will download the archive and you effectively start becoming the automation superhero, and so on.
The next step is that you just unzip the archive. I will demonstrate this on the live repository already: I have prepared the repository for this demo, and I decided to use GitLab in combination with Google Cloud. But there is already a huge matrix, so you can do your own combination and try it with your favorite Git provider and cloud provider.
Here I just wanted to show you what you get out of the box, live. First of all there is the .gitlab-ci.yml. It's not that complex; it's only about 200 lines and it is split into five stages. I can show you the real pipeline that I executed a couple of hours back.
These are the stages; they match the image and the flow I was describing at the beginning. Effectively, here it creates the cloud resources with Terraform; here the Kubernetes cluster is provisioned with KubeOne; then the KKP is installed, and then Flux is bootstrapped. That's it, nothing else. This is the responsibility of the pipeline, but we are also providing all of these files pre-configured, and so on.
Here is, for example, the KubeOne part, and here you can see, for example, the KubermaticConfiguration. All of the values we provided somewhere are defined somewhere in the specific configuration files, and you can see that the values that are in some way sensitive are encrypted using SOPS.
That's a nicely generated name, and here I have the selection of the KKP preset that I can provide. For Google, to be able to provision something, you need to create the service account first; so here you effectively provide the service account, which is the base64-encoded value of the JSON, and after that you will be able to create the cluster.
It takes some time, so I've already created a cluster in advance. This is an example of the user cluster that was created. Just to recap the concept: the control plane components are running on the seed cluster. You can separate things, like deciding to have a seed cluster per region or per cloud provider, so that, for example, all of your AWS clusters will be running with their control plane components on a single seed cluster. That is for sure possible as well.
Okay, this is the machine type, n1-standard-2. Anyway, right now on this cluster you can decide to scale the replicas of this MachineDeployment, so effectively you will get more workers.
You can also set up the autoscaler add-on again for the user cluster, or you can set up the add-on for GitOps, so that your applications will be delivered here in an automated way. But let me do another quick demo. I have prepared a pull request on top of my repository, and inside this pull request I have some commits. The first commit creates the cluster template; as you can see, right now I don't have any cluster template and I'm only able to create clusters on Google.
In my pull request I'm also adding support for AWS, so I'm defining more datacenters under the Seed resource.
I'm enabling the deployment in the US and also in France, and next to that I'm also adding some more regions for Google, so it should enable Google in Finland and India. I just wanted to demonstrate the GitOps concept: let's consider that somebody did the review, and I will just blindly merge it for now, but for sure this should follow the regular process that you are used to.
Okay, right now the reconciliation has actually already happened. As you can see, this commit is already matching my merge commit, which I made just now, and it's already reconciled, meaning the change was already applied on my cluster by Flux. Here at the top is just another example.
There are three MachineDeployments that were created automatically and, because I have configured the autoscaler, it has already done some work and scaled the nodes across the regions. So effectively, right now I have seven workers available in my cluster to run the workload, and I can try to see the change.
I was creating the cluster template, so right now you can see that the cluster template is available over here, and I can quickly provision a cluster out of this template. You can see the details of the configuration, also which options I have enabled and how the cluster should look. We can try it out, so right now another cluster will be started, but it will take some time; maybe it will be super quick, but we don't have to wait for it anyway.
So here comes the supporting step: really try it out. If you are interested in what you have just seen, try it on your own. The wizard at start.kubermatic.com is a public endpoint that you can use, and with it you can try to build the very same infrastructure using the bunch of CNCF tools that were already mentioned as well.
If you have any questions, there is a community Slack called start-kubernetes where you can ask them, or feel free to reach us through the general contact form and so on, or reach me directly, or anybody you have in your network from Kubermatic; they will for sure be very happy to help you with your questions as well.
This is one of the last slides, where I just wanted to give kudos to the people who participated in the project. First of all it started as my idea, and from Sascha I received the support, and we decided to get a bunch of very experienced engineers from the company, who participated in the development on both the API and the UI part.
I would like to thank Marco, Sascha, Martin, and Sebastian, and also the ladies who are helping us with a couple of UI and UX things. So we are at the very end of the webinar; I hope you have enjoyed it.
It was a pleasure talking to you, at least this way, virtually. Here you can see a couple of links: to the project itself, to the demo repositories on GitLab and GitHub, and also the link to the documentation.