A
Hello, everyone, welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Annie, I'm a CNCF ambassador as well as a senior product marketing manager at Camunda, and I will be your host tonight. Every week we bring a new set of presenters to showcase how to work with cloud native technologies.
A
If
you
want
the
early
bird
great
and
as
always,
this
is
an
official
live
stream
of
the
cncf
and
as
such
it
is
subject
to
the
cncf
code
of
conduct.
So
please
do
not
add
anything
with
the
chat
or
questions
that
would
be
in
violation
of
that
code
of
content.
Basically,
please
please
be
respectful
of
your
fellow
participants
as
well
as
presenters
so
I'll
hand.
It
over
to
nick
to
kick
off
today's
presentation.
B
Thank you, and welcome, everyone. My name is Nick, I'm a developer advocate with Ondat, and I've been working with Kubernetes for approximately the last five, six years. The topic for today is going to be focused around building native pipelines within Kubernetes for stateful applications. So the idea is really to focus on the developer experience.
B
You know, from a real use-case application: we're going to start from developing the application and move from your local laptop to a Kubernetes production cluster. So maybe I can start sharing my screen and we can get started.
B
Okay, so this is what we are going to build today. Hopefully 45 minutes to one hour will be enough, but here is the idea. When I started to work with Kubernetes, I was kind of confused: how can I start developing my application and make it a good application for Kubernetes? What kind of Kubernetes concepts should I use? What kind of tools should I use to, you know,
B
manage the life cycle of my application, from my local laptop to the staging cluster to the production cluster? And what are the key concepts when it comes to deploying and building a stateful application? So, what is important there? Maybe let's start by defining what a stateful application is. A stateful application means that one or several components of that application are stateful, or in other words, they store some data to disk. It can be a database, it can be a caching solution.
B
Basically, anything that needs to persist between usage, or just to present some sort of data to maybe a front end. So you will need some sort of database. So the idea is, let's start by developing that application. It's already there, but we're going to start with the code and then go into how we put this application inside a container, using a Dockerfile, things like that, and then from Docker, how can we move into a more, you know,
B
highly available environment like Kubernetes. So this is the idea: we're going to start with an app that I call the Marvel app. Just to give you an idea, I think it's still open there. It's basically this app, where it's showing a bunch of characters on the screen from the Marvel APIs. So basically, I'm looking for the image, then I'm showing the name of the character, then the comic books where the character has appeared, and then
B
one, two, three, four: I've got six cards with this information. That's basically what the application does. It's a two-microservice-based application, where the first part is essentially what you've seen, the front end, which is using Python Flask, and the backend is a MongoDB database where all the semi-structured data, so all the JSON information from the Marvel API, is stored. And the idea, to build this application in Kubernetes and also show you what kind of basic, you know,
B
First
class
concept
from
communities
you
can
use,
have
decided
to
build
this
application
in
the
following
way.
So
we're
going
to
build
a
three-node
mongodb
inside
kubernetes,
which
is
basically
our
stateful
application,
we're
going
to
have
a
couple
of
pods
deployed
as
a
deployment
that
will
be
basically
the
the
front-end
application
and
then
we
need
to
populate
that
database
with
the
marvel
information.
B
So
for
this,
I've
created
a
kubernetes
job,
because
I
mean
there's
different
solution
on
how
to
do
that.
But
the
job
like
just
to
show
off
how
it's
working
it's
a
good
solution,
because
the
job
is
going
to
be
running
until
it
succeeds.
So
meaning
that
is
going
to
try
to
come
to
connect
to
the
marvel
apis.
Get
the
information
store
it
to
the
database
until
it
succeeds,
so,
basically,
until
the
at
least
until
the
cluster,
the
mongodb
cluster
is
available.
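The retry-until-success behaviour described here comes from the Job's backoff semantics. A minimal sketch of such a Job (the names and image below are illustrative, not taken from the talk's repository):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: marvel-init-db            # hypothetical name for the seeding Job
spec:
  backoffLimit: 10                # retries before giving up (Kubernetes' default is 6)
  template:
    spec:
      restartPolicy: Never        # let the Job controller create a fresh pod per retry
      containers:
        - name: init-db
          image: docker.io/example/marvel-init-db:latest   # illustrative image
          envFrom:
            - secretRef:
                name: mongodb-credentials                  # illustrative Secret
```

Each failed attempt (for example while the MongoDB replica set is still starting) counts against `backoffLimit`; once the database accepts connections, the pod exits cleanly and the Job is marked complete.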
B
Once the MongoDB cluster is available, the Job will succeed and populate the database, and then, as a result, the application will start working. That's the idea of running that application in Kubernetes, but for this we need a proper life cycle. So I'm going to start by showing you what tool I can use to facilitate the development of this application on your laptop. The expectation here is that, as I save code in my application, things happen automatically, like building my Marvel container or my Job container, for example.
B
As
soon
as
I
change
some
of
the
code
without
doing
anything
else,
sort
of
you
know
just
monitoring
the
file
system
when
a
file
that
I
monitor
is
changed
that,
ideally,
what
I
want
is
the
system
I'm
using
or
the
tool
I'm
using
to
build
the
container
and
basically
deploy
it
into
my
local
kubernetes
cluster
testing
environment
on
my
particular
laptop.
So
for
this
I'm
going
to
be
using
k3d,
which
is
a
you
know,
just
a
wrapper
around
k3s.
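As an aside, k3d clusters can be declared in a small config file rather than CLI flags. A sketch, assuming k3d v5's `Simple` config schema (the cluster name and node counts are arbitrary):

```yaml
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: dev          # local cluster name
servers: 1           # control-plane nodes
agents: 2            # worker nodes
```

Such a file would be consumed with `k3d cluster create --config k3d-dev.yaml`.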
B
So
but
you
can
think
about
it
as
just
as
k3s
so
k3s,
which
is
a
you
know,
the
the
rancher
sort
of
very
lightweight
kubernetes
distribution.
You
can
basically
install
anywhere
a
simple
binary,
so
this
is
basically
there
on
the
top.
This
would
be
the
pipeline
right,
so
you
start
you
know
committing
code
not
necessarily
committing,
but
just
saving
code,
to
your
laptop
of
course.
B
At
some
point
you
want
to
commit
it
to
git,
for
you
know
for
further
usage,
maybe
when
you
want
to
deploy
to
production,
but
for
the
moment
what
I
want
first
is
to
have
the
develo
the
right
developer,
experience
on
my
laptop,
so
I
want
to
save
code.
Then,
when
I
save
code,
the
docker
images
are
built
and
eventually
also
they
are
saved
into
the
remote
repository,
a
remote
container
repository
like
docker
hub
there
and
then
once
the
docker
image
has
been
built.
I
want
to
use
some
sort
of
tooling
to
build.
B
The
kubernetes
manifests
and
then
deploy
it
into
my
local
cluster,
but
remember
we
also
have
a
a
stateful
application
component,
which
is
our
mongodb,
and
this
is
where
you
may
want
to
add
a
couple
of
extra
features
so
that
it
represents
the
end,
the
you
know,
kind
of
the
end
environment,
the
the
production
environment.
You
may
want
to
test
earlier
in
the
code.
You
know
life
cycle.
B
So
basically,
if
you
want
to
run
smoke
test,
including
some
of
the
infrastructure
components
that
we
will
be
deploying
in
production,
you
can
do
it
on
your
laptop
because
it's
just
kubernetes
in
the
end
right.
It's
just
software,
so
for
mongodb
we're
going
to
be
using
on
that
which
is
a
solution
that
allows
you
to
use
local
storage
of
your
kubernetes
nodes
and
aggregate
this
as
a
pool
of
consumable
storage
for
your
persistent
volumes
and,
on
top
of
this
add
specific
features
such
as
replication
encryption.
B
All
those
kind
of
premium
features
you
would
want
to
have
in
production.
So
by
enabling
this
on
your
local
cluster.
You
also
have
an
idea
of
how
your
application
will
behave
in
production.
But
again
the
solution
is
kubernetes
native
meaning
that
is
just
using
storage
class
yaml,
so
you
can
control
it
as
part
of
the
manifest
generation.
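As a sketch, an Ondat-backed StorageClass with replication and encryption turned on might look like the following; the parameter keys are my assumption based on Ondat's CSI driver and should be checked against its documentation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ondat-replicated
provisioner: csi.storageos.com          # Ondat's CSI driver
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: ext4
  storageos.com/replicas: "2"           # assumed key: keep two replicas of each volume
  storageos.com/encryption: "true"      # assumed key: encrypt volumes at rest
```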
B
You
know
on
your
local
on
your
local
laptop,
using
customize
or
helm
or
whatever,
so
for
today
the
tools
we're
going
to
be
using
for
the
local,
let's
say,
laptop
development,
we're
going
to
be
using
so
of
course,
git.
We
you're
going
to
be
using
the
local.
You
know,
docker
that
is
running
on
your
laptop
we're
going
to
be
using
customize
we're
going
to
be
using
on
that
as
well
and
scaffold
that
you
can
see
on
the
top
here.
Actually,
the
documentation
is
just
there.
It's
it's
really
a
nice
tool.
B
It's
also
open
source
part
of
the
ecosystem.
It's
a
command
line
tool
that
facilitates
continuous
deploy
development
for
kubernetes
native
application.
So
basically
it
handles
multiple
phases.
You
know
in
the
life
cycle
of
your
application,
so
this
is
scaffold
that
is
going
to
help
automate
the
building
of
the
container.
The
deployment
of
the
container,
so
building
using
can
use
docker
can
use
to
build
here.
For
example,
we
see
what
is
supported,
so
you
can
use
docker
like
docker
file.
You
can
use
also
cloud
native,
build
packs.
B
You
can
use
custom
scripts
and
so
on.
So
this
is
for
the
build
phase,
we're
going
to
be
using
our
local
docker
socket
and
when
it
comes
to
deploying
so
again,
this
is
for
our
local
laptop-based
kubernetes
cluster.
You
can
use
coop,
ctl,
helm,
customize
or
docker,
so
we're
going
to
be
using
customize,
because
customers
help
you
decorate
your
base,
manifest
to
a
to
match
a
particular
environment.
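Those choices, custom build scripts for the build phase plus Kustomize for deploy, map onto a `skaffold.yaml` along these lines; the image names and paths below are illustrative, not the talk's actual repository layout:

```yaml
apiVersion: skaffold/v2beta29
kind: Config
build:
  local:
    push: true                          # push built images to the remote registry
  artifacts:
    - image: docker.io/example/marvel-app          # illustrative image name
      context: app                                 # directory holding the Dockerfile
      custom:
        buildCommand: ./build.sh                   # e.g. a wrapper around `docker buildx build`
    - image: docker.io/example/marvel-init-db
      context: marvel-init-db
      custom:
        buildCommand: ./build.sh
deploy:
  kustomize:
    paths:
      - overlays/dev                    # the dev overlay for the local cluster
```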
B
So
in
our
case,
I'm
gonna
have
a
dev
overlay
that
I'm
gonna
be
using
to
deploy
and
to
configure
my
manifest
on
my
dev
environment,
and
I
will
also
have
a
prod
overlay
that
is
going
to
be
used
to
deploy
in
the
production
cluster,
but
in
the
production
cluster
we
won't
be
using
directly
scaffold
or
direct
or
customize
directly
to
deploy
it
we're
going
to
be
using
tekton
and
flux
right,
but
this
is
the
second
part,
so
those
are
the
tools
for
the
the
local
cluster
deployment.
B
Now
the
second
step
as
a
developer.
Once
you
have
it
on
your
cluster.
Once
the
code
has
been
has
been
committed
to
gits
and
your
pull
request
has
been
validated
by
your
peers.
Then
it
comes.
You
know
it's
it's
about
time
to
deploy
either,
maybe
not
in
production,
but
let's
say
our
production,
which
is
probably
like
the
the
development
or
the
testing
or
staging
area
like
the
remote
kubernetes
cluster.
That
is
going
to
be
useful
for
that.
B
So,
in
our
case,
it's
going
to
be
gke
and
the
idea
is
we're
going
to
be
using
again
a
kubernetes
native
way
of
doing
things.
So
we're
going
to
be
triggering
a
pipeline
which
is
gonna,
be
the
same,
be
doing
the
same
kind
of
things
as
scaffold.
So
the
idea
is,
we
need
to
build
the
container.
B
We
need
to
build
the
manifest
using
customize
and
then
we
need
to
deploy
those
manifests
with
the
right
images
into
our
staging
slash
production,
cluster.
Okay.
So
how
are
we
going
to
do
this
so
we're
going
to
be
using
tecton
and
tecton
for
those
who
are
not
familiar
with
the
the
solution?
It's
also
part
of
the
cncf.
B
You
know
ecosystem
it's
a
kubernetes
native
pipeline
software,
meaning
that
every
task
you're
going
to
be
creating
within
tecton
correspond
to
a
pod
or
a
container,
so
an
action
or
a
command
that
is
run
into
a
container
and
when
you
have
multiple
tasks
to
run
as
part
of
your
pipeline,
those
tasks
will
be
sequentially
executed
by
by
your
tectonic
solution
within
kubernetes
as
multiple
containers,
so
the
so
the
whole
solution.
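The sequential execution of tasks is expressed with `runAfter` in the Pipeline definition. A skeleton of the three-task pipeline discussed here (the task names are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: marvel-app-pipeline
spec:
  workspaces:
    - name: shared                      # backed by a PVC when the pipeline runs
  tasks:
    - name: build-image                 # Kaniko builds and pushes the container
      taskRef:
        name: kaniko-build
      workspaces:
        - name: source
          workspace: shared
    - name: generate-manifests          # Kustomize renders manifests with the new digest
      runAfter: [build-image]
      taskRef:
        name: kustomize-build
      workspaces:
        - name: source
          workspace: shared
    - name: push-manifests              # git pushes the rendered manifests upstream
      runAfter: [generate-manifests]
      taskRef:
        name: git-cli
      workspaces:
        - name: source
          workspace: shared
```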
B
The
pipeline
itself
is
completely
running
in
kubernetes.
So
again
the
idea
is
to
produce
the
right
money,
the
right
images,
so
this
time
we're
not
gonna
be
using
docker
because
we
are
running
inside
kubernetes.
The
pipeline
is
using
kubernetes.
It's
not
a
good
idea
to
mount
the
docker.
You
know
sockets
in
production,
right,
that's
bad
from
a
security
perspective,
so
a
typical
you
know
way
to
build
container
in
kubernetes.
B
There
are
several
of
them
sure
for
sure,
but
kaneko,
which
is
basically
building
container
without
using
the
docker
socket,
is
a
good
solution
to
do
so.
So
we're
going
to
have
a
task
within
tecton
where
the
goal
is
going
to
be
build
the
container
with
kaneko
we're
going
to
have
a
second
task
which
is
going
to
be
using
customize
to
generate
the
manifests
as
well
that
we're
going
to
push
into
a
particular
repository.
B
So
this
is
exactly
that
particular
you
know
github
repository
there
I
have
a
directory
which
is
called
target
and
the
manifest
I've
tested
multiple
times
so
it
should
be
working.
This
is
basically
the
the
result.
What
we
should
see
in
the
end
is
just
when
we
will
be
running
it
live.
We
will
see
a
different
image.
You
know
digest
here
and
the
last
part
once
we
have
our
manifests
that
are
deployed
into
this
repository,
so
we
will
have
of
course.
B
Of
course,
the
image
also
is
going
to
be
picked
up
by
docker
hub
by
using
flux
right,
so
the
application
manifests
are
going
to
be
deployed
into
the
repository.
I've
just
shown
you
and
then
flux,
which
is
a
git
ups
solution,
is
gonna,
monitor
that
repository,
monitor
for
changes
on
that
repository
and
reconcile,
and
you
know
like
any
github
solution.
The
goal
is
to
reconcile
the
states
of
the
cluster
with
the
intent
which
is
stored
in
git,
so
our
intent
is
to
deploy
the
manifests
that
are
stored
when
we
are
just
so.
B
You
showed
you
before,
and
the
state
of
the
cluster
that
needs
to
be
reconciled.
Well,
it's
deploy
the
manifest
that
or
the
new
manifest
or
the
new
object
that
corresponds
to
the
manifest
that
have
been.
You
know
changed
on
the
git
repository.
So
as
soon
as
the
container
image
will
change,
we
will
have
our
front-end
container
that
will
be
replaced.
I
won't
change
like
the
the
the
mongodb
portion
because
it's
a
bit
longer
to
deploy,
but
the
application
is
already
deployed
in
the
cluster.
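On the Flux side, that watch-and-reconcile loop is declared with a GitRepository source plus a Kustomization pointing at the target directory. A sketch (the URL and intervals are illustrative):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: marvel-manifests
  namespace: flux-system
spec:
  interval: 1m                          # how often to poll the repository
  url: https://github.com/example/marvel-app-manifests   # illustrative URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: marvel-app
  namespace: flux-system
spec:
  interval: 1m
  sourceRef:
    kind: GitRepository
    name: marvel-manifests
  path: ./target                        # the directory the pipeline pushes manifests into
  prune: true                           # delete cluster objects removed from Git
```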
B
So
what
we
will
do
is
do
some
changes
to
the
application
and
show
that
trigger
the
pipeline.
That
will
update
the
image
digest
into
the
application,
manifest
repository
and,
as
a
result,
flux
will
see
that
change
and
will
replace
the
front
end
container
with
the
new
code
right.
So
the
first
step
is
really
to
change.
Some
of
the
code
on
the
laptop
show
show
you,
the
developer.
Experience
with
scaffold
see
check
the
result
on
the
local
cluster.
B
Then
we're
going
to
be
triggering,
let's
say
the
staging
slash
production
pipeline,
and
we
will
double
check
that
this
time
it's
our
githubs
pipeline
will
pick
it
up
and
deploy
this
into
the
production
cluster.
So
if
I
don't
know,
if
there
are
still
is
there
any
question
at
this
stage
before
we
jump
into
into
the
weeds.
A
To you there; happy to hear from everyone else which location they are tuning in from, but I might want to ask a question of you as well. So, what would you say are the benefits of running stateful applications in Kubernetes?
B
Yeah
sure
so,
basically
you
know
it's
always
the
same.
The
same
thing
when
it
comes
to
running
things
in
kubernetes,
you
can
just
leverage
the
basic
feature
and
characteristics
of
kubernetes,
which
is
all
about
scale,
being
cloud
agnostic
being
highly
available,
highly
distributed,
which
is
a
perfect
fit
for
for
any
application,
including
including
stateful
application.
Because
now
I
would
say
we
have
all
the
tools
to
to
manage
the
stateful
application
and
as
an
example
here
I
didn't
mention
it.
But
the
way
the
mongodb
cluster
is
managed
by
manifests,
which
is
a
kubernetes.
B
You
know
by
ammo,
is
by
using
the
mongodb
operator.
So
now
we
can,
with
the
operating
operator
framework
in
kubernetes,
we
can
encapsulate
the
knowledge
that
is
required
to
deploy
application
on
top
of
the
stateful
sets.
So
we're
going
to
be
using
state
rule
sets
because
that's
a
stateful
application,
but
also
we
need
to
make
sure
that
mongodb
is
properly
installed
with
the
right
permission.
The
right
database
size.
All
of
that,
and
this
is
encapsulated
into
a
custom
resource
that
will
be
managed
by
by
the
mongodb
operator.
Yeah.
A
B
So it won't be a blue/green or canary deployment at this stage. What I'm going to show you today happens before this, right. I could add canary deployment as part of it, maybe by using Istio on top of that, but that's already a lot for today; that would be too much. So today it's just, like, basically replacing the pod in production. So it's like kubectl, you know, just replacing and deleting the pod. Don't do that in production. You're absolutely right!
B
You
should
not
delete
your
existing
containers
right.
You
should
do
like
blue,
green
or
canary
deployment,
but
yeah
for
today,
I'm
just
gonna
replace
the
the
front-end
container.
So
in
reality,
if
that
was
my
production
cluster,
I
would
be
a
bad
engineer,
because
I
would
cause
some
sort
of
disruption.
B
Great
okay,
so
let's
get
started
so
on
the
left.
Here.
This
is
my
production
environment,
so
you
can
see
just
the
the
timestamp
I've
got.
My
the
application
has
been
deployed
like
92
minutes
ago,
so
with
the
mongodb
being
there
already.
This
is
production,
but
the
idea
in
the
end,
when
once
flux
is
gonna,
pick
up
our
changes.
What
we
expect
is
this
value
here
right
to
be
like
a
couple
of
seconds,
a
couple
of
minutes
as
we
we
change
our
code
in
the
development
environment.
B
I have the MongoDB operator, so that when the MongoDB CRDs are pushed into, I mean, ingested by, Kubernetes, the MongoDB operator is going to react based on the custom resource and deploy the MongoDB cluster on the left, in production, yeah. I didn't mention it, but we're also going to be using some policy as code, to verify that the parameters we set for our application are aligned with our compliance system. So, for example, I'm going to show you a couple of rules.
B
We
want
the
database
to
be,
I
think,
inferior
less
than
10
gig.
We
want
a
special
user
to
be
created
for
managing
the
the
mongodb
database.
Things
like
that,
and
so
this
is
why
I
we're
going
to
be
using
kiverno.
Okay
vernon,
I'm
not
sure
how
to
pronounce
it.
If
anyone
in
the
audience
know
if
it's
kyvern
or
kiverno,
please
shout
out,
and
so
we
will
have
the
admission
controller
there,
which
is
set
for
audit,
so
we're
not
going
to
prevent
the
application
from
being
deployed.
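As a sketch, an audit-mode Kyverno rule enforcing the "database volumes under 10 gig" idea could be written against PersistentVolumeClaims; the policy below is my illustration, not the one used in the demo:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: limit-database-volume-size
spec:
  validationFailureAction: audit        # report violations instead of blocking admission
  rules:
    - name: pvc-under-10gi
      match:
        any:
          - resources:
              kinds:
                - PersistentVolumeClaim
      validate:
        message: "Database volumes must request 10Gi or less."
        pattern:
          spec:
            resources:
              requests:
                storage: "<=10Gi"       # Kyverno compares resource quantities
```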
B
If
it's
not
conformant
we're
going
to
be
generating
a
report,
I'm
also
going
to
show
you
how
to
use
the
cli
as
part
of
the
pipeline.
If
you
want
to
also
you
know,
fail
the
past
the
pipeline
as
a
command
line,
as
opposed
to
an
admission
controller
like
before
deploying
the
solution
in
in
the
cluster.
B
We
also
have
the
mongodb
operator
that
is
there
to
react
based
on
the
the
cl
the
custom
resource
we
have
techdon
that
is
installed
as
part
of
the
cluster
as
well,
and,
of
course
we
have
the
on
that
solution.
That
is,
that
will
be
leveraging
the
the
local
storage
to
create.
You
know
the
various
pvc
and
also
add
the
extra
features
like
encryption
and
replication
all
right.
So
now,
let's
start
with
the
application
itself,
so
the
application
itself.
As
I
said,
the
idea
is,
we
start
with
a
pretty
empty
environment.
B
Where
I
have
my
application.
I've
got
my
python
script.
This
is
my
front-end.
This
is
a
flask
application.
I'm
not
going
to
go
too
deep
in
the
code,
just
explaining
you
to
you
what
it
does
so
essentially
the
front-end
application
is
just
the
job
is
just
to
connect
to
the
mongodb
cluster
that
is
going
to
be
deployed
in
kubernetes,
connect
to
that
particular
mongodb
cluster
and
then
get
the
the
different
information
I
pulled
from
the
api
and
then
populate
the
different
card
with
it.
B
Like
all
the
the
comic
books,
the
comic
card
will
be
populated
with
the
json
information.
I
have
and
render
into
an
html
page
right,
that's
basically
the
code,
it's
not
super
simple
and,
of
course
you
want
to
have
the
docker
file
in
the
appropriate
directory.
So
in
the
app
directory
I
have
my
docker
file.
That's
scaffold
will
use
because
remember
scaffold
is
going
to
be
using
docker
to
build
the
application.
So
therefore
I
need
a
simple
docker
file
and
I'm
going
to
be
using
g
unicorn
as
the
web
server.
B
So
I've
got
an
extra
configuration
for
for
unico
g
unicorn,
the
requirements,
the
dependency
for
my
application,
flask.
That
is
there
as
well
and
as
an
environment
and
then
so.
This
is
my
application.
It's
basically
encapsulated
within
that
folder,
the
app
folder
I've
got
my
code,
I'm
going
to
be
running
into
a
kubernetes
job
that
is
located
into
this
directory,
so
marvel
init
db.
The
role
of
the
code
there
is
going
to
be
to
populate
the
effectively
is
going
to
be
to
populate
the
the
database
with
the
information.
B
The
role
of
that
code
is
to
connect
to
the
api,
the
marvel
api
and
populate
the
database,
so
you
will
find
things
here
like
username
bongodb
password
the
replica
set
name
inside
mongodb
the
functions
to
get
the
you
know
to
to
to
call
the
marvel
api
to
do
to
realize
all
the
requests
and
to
store
this
into
a
results,
kind
of
dictionary
and
then
store
that
dictionary
into
mongodb,
using
the
client
library
and
simply
storing
this.
B
As
you
know,
json
payload,
into
into
a
mongodb
document
right,
so
nothing
too
fancy
there
the
application
and
again
python,
and
for
this
I
also
need
to
have
a
docker
file
to
be
able
to
build
that
particular
container.
So,
essentially,
my
application
is
two
micro
services,
not
really
micro
services,
because
I
mean
it's
quite
simple,
but
there
are
two
containers
that
will
run
in
kubernetes
the
first.
B
The
initial
init
db
will
be
run
as
a
job,
so
we'll
be
run
multiple
times
until
it's
successful
and
the
application
will
be
run
as
a
deployment.
So
I
think,
in
test
we
will
run
like
two
or
three
front
and
in
production
we
will
have
like.
Maybe
four
or
five
different
front
end
just
to
address
the
potential
you
know
load
on
on
on
our
application,
and
so
that
is
for
the
application,
then
for
scaffold
scaffold
again.
It's
super
easy
to
configure
it's
quite
intuitive.
B
If
you
go
to
the
documentation,
there
is
a
couple
of
things
I
want
to
highlight
here,
so
I
want
to
build
two
artifacts,
as
I
mentioned
before,
I'm
just
specifying
a
context
which
is
the
name
of
the
directory,
so
the
app
directory,
which
is
where
I'm
going
to
store
my
flask
application,
the
the
command
I
need
to
build
the
container.
This
is
a
build.sh
which
is
there
I'm
using
build
x
because
I'm
running
on
mac
m1,
which
is
a
non-based
processor
cpu.
B
So
I
need
to
use
docker
build
x
to
to
build
my
container.
If
I
want
to
build,
you
know
cross-platform,
including
x86,
so
this
is
why
I'm
using
a
custom
script.
Typically,
if
you
don't
use
an
arm
based,
you
know
laptop
you.
Maybe
you
won't
have
to
use
build
x
right.
So
this
is
why
I'm
using
a
custom
script,
which
is
good
right?
It's
I
mean
this
means
that
scaffold
is
quite
extensible.
B
You
can
just
specify
your
script
that
will
be
used
to
build
your
container
so
same
thing
for
the
so,
for
that
was
for
the
flask
for
the
marvel.
The
init
db
container
same
thing,
I'm
using
the
same
script
local
push
through,
which
means
that
local
means
that
I'm
gonna
be
using
the
local
docker
socket
and
push
means
that
I'm
gonna
push
it
into
my
docker
registry.
B
And
now,
if
I
show
you
the
overlay,
the
dev
overlay,
this
is
where
all
the
magic
happen
to
move
from
a
docker
container
based
environment
to
a
real
kubernetes
environment.
Where
you
need
those
different
manifests,
so
I've
got
the
the
base
manifest
there,
which
is
like
the
naked
application
and
my
customization.
B
Essentially,
there
are
a
couple
of
things
I
want
to
provide
some
create
some
config
using
customize,
so
customize
can
dynamically
generate
things
like
secret
password
credentials,
and
this
is
exactly
what
I'm
going
I'm
going
to
do
here,
so
I'm
going
to
be
using
customize
to
create
all
those
secrets,
I'm
going
to
also
populate
the
different.
B
You
know:
config
map
for
the
environment,
variable
to
connect
to
my
mongodb,
I'm
going
to
create
here
a
number.
I
want
to
specify
three
replica
for
my
environment,
the
number
of
pods
for
the
front
end
in
terms
of
my
database
again,
this
is
encapsulated
into
my
custom
resource,
which
is
now
a
you
know,
a
first
class
citizen
in
in
kubernetes.
As
soon
as
I
installed
the
mongodb
operator,
I
can
start
using
this
platform
resolve
and
I'm
going
to
specify
the
volume.
A
There's also an audience question, but we can figure out the font size first, of course. Okay, is it better now? Yeah.
B
A
Yeah, so yes, we got a confirmation; thanks, it looks better, people can see it better now. So, for MongoDB: "When I started my MongoDB container and tried to use PyMongo for GridFS, I get a timeout error for secondary databases," a viewer asks.
B
Oh
okay,
so
timeout
probably
means
you
have
to
double
check.
You
know
I
don't
know
like
like
this
out
of
the
blue,
I'm
not
sure,
but
typically
the
error
you
may
face
is
maybe
you
your
your
mongodb.
Your
database
is
not
open
up
and
running
properly,
so
you
want
to
check.
Look
first,
use
a
container
that
with
an
image
that
is
the
client
and
try
to
connect
from
a
con
with
a
before
using
the
python
library.
Try
to
just
run
the
client
or
it's
called
sh.
B
The
the
shell
container
image
in
your
cluster
and
from
the
cluster
itself
try
to
connect
to
the
database
to
check.
If
it's
working,
if
it's
working,
then
you
may
have
an
issue
with
the
way
you
connect
to
the
mongodb
depending
on.
If
it's
a
cluster,
if
it's
a
seed
cluster,
there
are
different
ways
to
connect
to
the
to
the
database.
B
Okay
yeah,
so
here
what
I
wanted
to
mention
is
in
terms
of
the
storage
for
the
database.
This
is
where
you
specify
the
volume
claim
template,
so
the
type
of
storage
I'm
going
to
use.
So
in
that
particular
class
case,
I'm
going
to
be
using
the
on
that
story.
Class
where
I
have
defined
all
this
extra
feature
encryption.
You
know
replication
all
that,
and
then
I'm
gonna
also
specify
the
size
for
my
data
volume
and
the
size
for
my
logs
volume
right
and,
of
course,
the
storage
class.
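With the MongoDB Community operator, those storage settings live in the custom resource's StatefulSet overrides. A sketch (the sizes, names, and version are illustrative):

```yaml
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: marvel-mongodb
spec:
  members: 3                            # three-node replica set
  type: ReplicaSet
  version: "5.0.5"
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data-volume           # the operator's data volume
          spec:
            storageClassName: ondat-replicated   # illustrative StorageClass name
            resources:
              requests:
                storage: 5Gi
        - metadata:
            name: logs-volume           # the operator's logs volume
          spec:
            storageClassName: ondat-replicated
            resources:
              requests:
                storage: 1Gi
```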
B
Here, this is development, my local cluster for development, so maybe I don't need replicas; I don't want to enable encryption if it's local, on my, you know, local cluster on my laptop, and I don't want to have any replicas. The only thing you need to change, if you want to enable a particular feature, is just go there, change it, and that's it, save, right.
B
Scaffold
will
take
care
of
the
rest
when
you
redeploy
your
your
application,
so
with
scaffold
there's
a
multiple
thing:
you
can
multiple
mode
where
you
can
run.
So,
let's
go
back
to
the
scaffold
part
which
is
there,
so
you
can
use
a
scaffold,
build
which
is
going
to
be
building
your
images.
You
you,
you
can
use
scaffold
run,
which
is
going
to
run,
run
and
deploy
the
enviro.
You
know
the
different
manifest
and
build
the
image
in
your
cluster,
or
you
can
use
scaffold
in
depth
mode,
which
is
probably
the
best
mode.
B
So it's going to be building the images and, as you can see, a new namespace is now created here. It's going to start deploying my application; seven seconds, you can see on the top right. And here you can see that a Job has started. The Job will probably fail, because, you can see, the MongoDB is a three-node cluster, so it won't be ready for like a couple of seconds slash minutes, so the Job will have to run multiple times. But it doesn't matter; in Kubernetes,
B
A
job
will
be
around
until
it
succeeds
right-
or
at
least
I
think
by
default
is
10
time
or
until
it
succeeds
and
the
the
front
end
is
already
deployed.
So
that's
fine,
it's
just
our
flask
application.
But
now,
if
we
go
back
to
scaffold,
the
interesting
part
is
that
now
it's
also
displaying
the
logs
live
of
your
container,
so
the
two
container,
the
init
db
and
the
flask
front
end.
The
container
logs
are
displayed
here
in
what
what
you
see
on
the
screen
here
is
the
container
locks.
B
Now,
on
top
of
that,
I
can
start
also
changing
my
code
and
because
it's
located
into
a
directory
that
I'm
monitoring,
if
I
modify
the
code
there
and
I
just
hit
the
save
button,
the
entire
application
that
is
related
to
these
changes
is
going
to
be,
although
the
container
are
going
to
be
redeployed.
So
in
that
case,
I'm
going
to
modify
the
the
page.html
and
then
we're
going
to
be
changing
that
code
and
reflect
the
code
in
the
html
page
live.
B
So
here
you
can
see
that
the
job
is
trying
to
add
data
into
the
mongodb,
but
because
the
mongodb
cluster
has
not
been
deployed.
Yet
it
cannot
succeed
right.
So
if
we
can
see
now,
the
job
must
have
failed.
Once
you
can
see
error.
So
now
we
have
the
second
instance
of
the
job
that
would
be
running,
but
now
you
can
see
my
cluster,
my
mongodb
cluster
is
up
and
running,
so
this
particular
job
should
succeed.
B
What we can do now is just do a quick kubectl port-forward for my application, which is running on port 8080, and go back to our localhost:8080. So now, this is my development cluster, my development application. Now let's say I want to change some of the code. I'd like to use a different syntax for "comic"; I don't want to use "comic", I want to replace it with "comics" everywhere, right. So I'm finding all the instances in my code to replace with "comics". I'm going to just kill my port-forward here.
B
I'm
gonna
go
back
here
and
so
the
logs,
so
you
can
see
here.
This
is
representing
the
connection
I've
just
initiated
from
my
browser.
So
now,
let's
save
so
what
I'm
expecting
is
the
container
to
be
rebuilt
and
redeployed
in
the
cluster
so
before
that,
just
to
prove
that
I'm
not
lying
so,
let's
check
the
front
end
is
like
three,
let's
say:
340.,
it's
going
to
be
like
four
minutes
or
something
like
this.
B
My new deployment has been deployed live in my development cluster. So again we're going to try the port-forward, and I should see here the new code that is deployed, right. So I've got the right syntax, right. So that gives you an idea of at least the capability for local development. And when you've finished testing everything, what you can do then is just go back to your skaffold
B
Dev
process
hit
ctrl
c,
and
by
doing
so
it's
going
to
deep
delete
your
application
right
and
you're
going
to
see
here.
The
dev
name
space
is
being
terminated
right
now,
if
you
want
to
that,
I
still
have
pvcs,
so
you
may
want
to
also
have
a
script
to
delete
pvcs
and
stuff
like
this.
That
has
been
provisioned
by
the
operator,
not
by
not
by
scaffold.
So
now,
I'm
back
again
into
a
clean
development
environment.
B
The first component is going to be Tekton. So Tekton, as I said, we're going to be running a couple of tasks. Tekton has a concept of a pipeline, and a pipeline is composed of one or multiple tasks, and those tasks make use of resources.
B
So
I
have
three
main
tasks.
I
want
to
realize
the
first
one
I
want
to
build
my
docker
image,
the
same
way
scaffold
did
it
so
in
that
particular
case,
I'm
using
a
container
which
is
kaneko
and
I'm
running
a
couple
of
commands,
which
is
to
build
the
image
I'm
using
canonical
executor
and
then
I'm
just
using
the
docker
file
the
path
to
dockerfile.
I
specified
somewhere
else
when
defining
the
variables
for
tecton
to
use
the
destination
the
digest
file.
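A Tekton Task wrapping the Kaniko executor along those lines; the parameter, workspace, and result names are illustrative:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: kaniko-build
spec:
  params:
    - name: IMAGE                       # full image reference to push
  workspaces:
    - name: source                      # holds the checked-out application code
  results:
    - name: IMAGE_DIGEST                # digest of the pushed image, for later tasks
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=$(workspaces.source.path)/app/Dockerfile
        - --context=$(workspaces.source.path)/app
        - --destination=$(params.IMAGE)
        - --digest-file=$(results.IMAGE_DIGEST.path)
```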
B
All
of
that
this,
this
is
going
to
be
used
by
tecton
to
produce
my
image
once
I've
got
this
image
that
has
been
deployed
into
the
you
know,
the
docker
registry.
What
I
want
to
do
is
also
use
customize
to
replace
into
my
manifest
to
generate
my
manifest
with
a
new
image
right,
the
new
image
that
has
been
set
by
my
previous
task,
which
was
building
the
container
using
kaneko.
B
So
the
job
of
my
customize
task
is
going
to
be
to
use
customize
to
edit
the
image
within
the
particular
manifests
and
then
to
generate
those
manifest
into
a
special
directory
and
that
directory.
Then
I'm
gonna
be
using
a
workspace
within
tecton,
which
is
basically
a
pvc.
It's
just
like
a
directory
as
part
I'm
mounting
in
inside
the
container.
That
is
going
to
run
that
particular
task,
and
once
I've
got
the
manifest
that
I've
been
stored
into
that
particular
directory.
B
If
you
go
to
techton
hub,
they
are
predefined
tasks
and
if
you
look
for
git
cli,
there
is
some
documentation
and
you
can
just
create
a
task.
You
just
name
it
git
dash
cli
and
the
only
thing
you
have
to
do
is
create
a
variable
called
git
script
and
then
specify
your
git
action.
Not
action
you'll
get
a
command
to
into
that
particular
variable.
So
if
we
go
back
to
here
as
part
of
the
git
script,
this
is
what
I'm
doing
right
so
remember.
B
I've got locally the manifests that have been created by the previous task. So now I'm doing git init within that particular directory, I'm adding the origin, which is where I want to upload my manifests, I'm doing cd target, which is where I have my manifests, then add, commit, push, and then the upstream repository will be updated with my manifests. And then, finally, here's what's going to happen once Tekton is done.
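Wired into the pipeline, the Tekton Hub git-cli task takes that script roughly as below; the repository URL, branch, and workspace names are placeholders, and the task also accepts GIT_USER_NAME and GIT_USER_EMAIL parameters for the commit identity.

```yaml
# Pipeline task referencing the Tekton Hub git-cli task; pushes the rendered
# manifests from the shared workspace to the GitOps repository
- name: push-manifests
  taskRef:
    name: git-cli
  workspaces:
    - name: source
      workspace: shared          # same PVC the kustomize task wrote into
  params:
    - name: GIT_SCRIPT
      value: |
        git init
        git remote add origin https://github.com/example/app-manifests.git
        cd target
        git add .
        git commit -m "update image"
        git push origin HEAD:main
```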
B
This is the last part of the Tekton pipeline, right: once Tekton has created those manifests and pushed them upstream into the repository, we will have Flux, that is, the Flux Kustomization, pick up that particular repository that I have defined as part of the prod Kustomization. So it's going to monitor this Kustomization using that special source.
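The Flux side described here boils down to two objects, roughly as follows; the URL, path, and intervals are illustrative.

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: app-manifests
  namespace: flux-system
spec:
  interval: 1m                  # how often Flux polls the repo
  url: https://github.com/example/app-manifests
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: prod
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: app-manifests
  path: ./target                # the directory the pipeline pushed to
  prune: true
```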
B
What cluster is this? The flux-pipelines one, yeah. You can see the Kustomizations there, and I should have GitRepositories as well, right? So I've defined these GitRepositories, with the manifests as part of the repository that Flux needs to monitor. So as soon as you see Tekton updating the upstream repo, Flux is going to pick it up, right.
B
So what we're going to do there, we're going to do a flux get kustomizations -w to watch: once it picks up the manifests that have been updated and monitors the changes, I should see a new line there, right. So now what we're going to do is trigger the pipeline, right. I've shown you this particular pipeline and what it's going to do; again, because it's a Kubernetes-native solution,
B
I can use kubectl to trigger my pipeline, right. For this, I'm going to create an object based on the marvel-app-run manifest. This is the one that contains, you know, all the different tasks, the high-level ones, and this marvel-app-run will make use of the corresponding marvel-app pipeline, the different tasks, etc. So let's trigger this and create it. So it's created; now Tekton gives you the ability to monitor live what's happening. So again, the first part is building the container.
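A PipelineRun along these lines, with guessed names based on what is shown on screen, could be created with kubectl create -f:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: marvel-app-run-   # lets kubectl create a unique run each time
spec:
  pipelineRef:
    name: marvel-app              # the pipeline holding the build/kustomize/push tasks
  params:
    - name: IMAGE
      value: registry.example.com/marvel-app:latest   # placeholder
  workspaces:
    - name: shared
      volumeClaimTemplate:        # the PVC-backed workspace mentioned earlier
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
```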
B
So this is using Kaniko. Second step, we're going to be using Kustomize to generate the manifests, and third step, we're going to be using the git image, the git task, sorry, the git-cli task, to push the manifests into the remote git repository. And finally, Flux is going to pick up those manifests. And the last step, really, is what I wanted to show you: when it's deploying, at the same time I'm going to have Kyverno monitoring the different objects that are going to be pushed into the cluster, and Kyverno,
B
this is also a Kubernetes policy engine, meaning that I can specify my policies as YAML, excuse me, as opposed to, for example, OPA Gatekeeper, which basically makes use of Rego. Rego is a different language; for sure it's probably more flexible, but here, for more native policies, a simpler way to build a policy, you can use just YAML and create your own rules. So, a couple of examples here: here I've got a rule that says that I need to have an admin with at least all those permissions.
B
So it is monitoring the MongoDBCommunity custom resource. You specify the kind of resource you want to match to apply the rule, then you can type the message shown if it fails, and then you specify the pattern. For my encryption rule, for example: when you're in production, you want to have encryption enabled for your storage class. If the provider, like Ondat, is providing this feature, you may want to have that pattern.
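As an illustration only, a Kyverno validation rule of this shape might look as follows; the matched kind and the pattern keys are assumptions (the demo matches the MongoDBCommunity custom resource, and the exact encryption field depends on the storage provider).

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-encrypted-storage
spec:
  validationFailureAction: enforce    # "audit" reports without blocking
  rules:
    - name: check-encryption
      match:
        any:
          - resources:
              kinds:
                - StorageClass        # illustrative; the demo targets a custom resource
      validate:
        message: "Volumes must have encryption enabled in production."
        pattern:
          parameters:
            encryption: "true"        # key name depends on the CSI driver
```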
B
These are the reports; so, kubectl get clusterpolicyreport.
B
So we have here two passes, right, and we also have the ClusterPolicyReport. Of the policies I've used, I've got four: two are cluster-scoped and two are namespaced policy reports, so I've got two passes in the cluster policy report and two passes in the namespaced one. And now I can also do this.
B
So it's going to tell you which ones have passed, and if there was one that failed, it's going to tell you why it failed. And just to give you an example of something that is non-conformant:
B
I believe, okay. So the results I had previously, the report I showed you, live inside the cluster, because I'm using Kyverno within the cluster as an admission controller. But you can also choose to stop and fail your pipeline if the manifests are not matching those requirements. So for this, you can also use...
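One common way to do that, not necessarily the presenter's exact setup, is the Kyverno CLI, which can run as an extra Tekton step and exits non-zero when a resource fails a policy:

```yaml
# Illustrative Tekton step: validate the rendered manifests against the
# policies before deployment; a failing rule aborts the TaskRun
- name: policy-check
  image: ghcr.io/kyverno/kyverno-cli:latest   # illustrative tag
  workingDir: $(workspaces.output.path)
  script: |
    kyverno apply ./policies/ --resource ./target/manifests.yaml
```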
B
Yeah, so you can see here, in my non-conformant one, where, on purpose, I've created a size bigger than 10 gig and I've disabled encryption; with all of that, you can see that I've got three fails and one pass, and it's going to tell you why it failed. So, as a result, you can choose to fail the pipeline if you want, right. So this is the last part I wanted to show you, in terms of, you know, also adding a policy-as-code element to it.
B
So now, just quickly getting back to our pipeline: the Tekton pipeline has finished. Now, what I want to see, and you can see it here, is that I've got another instance of the Flux reconciliation, so Flux has picked up the new manifests upstream on my remote git repository. You know, four minutes, this is when Flux picked it up, and as a result I should also see now...
B
Let me check, I just want to check that, yeah: so I have removed and terminated the pod, the port-forward there. Let's go back to Tekton and then use a kubectl port-forward from there. So now this is running; this is my production environment. So let's go back to localhost and, hopefully, you can see: I've picked up my image with the change and I've updated my production environment with it, right.
B
So I am now in a production environment where my application is running, but I have encryption enabled, replication enabled, I've got more containers at the front end, and, yeah, the only difference is, as was mentioned previously, I didn't do a canary deployment. I just, you know, deleted and replaced the container using Flux, basically updating the application.
A
Great, great demo, by the way, thank you so much. While we wait for the audience questions, hopefully coming in, essentially now's the time for you to ask your questions. Well, you could have asked them throughout the whole webinar as well, but now is your time, audience. So let's get those questions in, and there's the first one here: where can I get this example on GitHub?
B
Okay, we can post it. I can give you the link and then you can post it. Where is it located, like on the webpage where we put all the information related to that talk? I'll make sure to post all the different links to the repositories there.
A
A really nice demo, I agree with Carlos. But yeah, keep the questions coming in; we have about seven minutes for questions, so there's plenty of time, so type away. But while we wait for the typing to start, I have a few questions as well. So: there are a lot of moving parts with CI/CD pipelines. How do you select the right tools?
B
Yeah, so I would say, basically, to find the right tools, you don't have to automate everything. I would say, depending on your use case and how you want to improve your current processes, you can focus on some of the areas I've mentioned today. So maybe, for you, you want to be more agile when it comes to testing your own application on your local laptop; then you can pick up Skaffold, or try, you know, other tools that have the same kind of qualities, to deploy on your laptop.
B
If what you want to enhance is maybe providing more security, more compliance, then focus on the policy-as-code part. One at a time, use those building blocks, you know, in an isolated fashion, and once you're happy with them, then you can start combining them together. I mean, this demo took me a while; I didn't build it at once.
A
Makes a lot of sense. So there were a few, well, a lot of questions now, so let's get through them. Carlos continued: which page will you add the GitHub example to?
B
So, what page? I don't know. How can we communicate this to the attendees?
A
Would you recommend... I mean, yeah, I can do one. If you have a page that you can share right now, you can post it to the private chat that we have here on the production side and we'll share it with attendees via chat, or, if you don't have it ready right now, there's a Slack channel where you can send it later on as well.
B
Yeah, so there are multiple solutions. So I'm going to link in the chat here my GitHub repository; this is where you will find all the repos I've used today, with their names. So if you watch the video on demand, you can pick up the names of the different repositories; this is where you will find all the code. And if you want to connect on Slack, you can also connect on the Ondat Slack, which is, let me make sure that I'm not mistaken, ondat.slack.com, yeah.
B
So if you go to our Slack channel, which I'm just going to post here, you can join the ondat.slack.com channel, where you can find me, and I will make sure to post the links there. If you want to talk to me, you can also find me on the CNCF Slack; I'm there. So my name, again, is Nick, Nicolas Vermande. Reach out to me there if you have any other questions, yeah.
A
Yeah, that's good. And then we have what, three, four, five questions to go through in three minutes, so let's be quick about it. Perfect, we have the next one: Tekton is an alternative CI/CD technology, correct, being cloud native to Kubernetes? Thanks beforehand, they ask. Yeah.
B
It is. Tekton is a CI/CD solution. What is specific to Tekton is that, as opposed to, you know, GitHub Actions or CircleCI or anything else, it is running itself inside Kubernetes. So every action you do in Tekton is a container, right. That's the only difference, but it is effectively a CI/CD solution.
A
Perfect. So, do you recommend taking the CI/CD pipeline images used in the tasks, copying the images, scanning them, and hosting them yourself?
B
Yeah, the best thing is just to update your remote repository, right? So whether it's Docker Hub or, you know, a GitHub registry, just store your images there, yeah, do that. If it's on your laptop, you can deploy your local images there as well, but I guess it's better just to keep your images on a public repo somewhere, right, public registries, right.
A
B
Yeah, exactly. Multi-cluster would mean that you would have different clusters configured to listen to that particular repository where you have your manifests. So you would have maybe different overlays for different clusters, and then, for every cluster, you would have Flux, or it can be Argo, whatever, monitoring this particular repository. Yeah, definitely, I would do this.
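In the multi-cluster shape described above, each cluster would run its own Flux Kustomization pointing at its overlay; names and paths are illustrative.

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: app-manifests
  path: ./overlays/cluster-a    # cluster-b would point at ./overlays/cluster-b
  prune: true
```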
B
Yes, meaning that when you define Flux, you tell Flux, you know, the interval time for monitoring, like every 10 seconds or every 10 minutes; sorry, that is the default, but you can change it if you want. Perfect.
A
And then, with one minute left, or a bit less, the last question of today. There's a question: I always have difficulty setting up a graceful exit for stateful applications, mostly the DBMS type. How would you suggest killing a stateful pod?
B
To kill a stateful pod... normally, if you use an operator, you can scale down to zero, right? If you want to delete the pod, you can scale to zero, and this is the best way to delete your pod. All right, don't delete it manually; if you trigger the operator to delete the StatefulSet, then you make sure that it's properly done. Yeah, perfect.
A
That was it for today. We are right on time with the one-hour mark. So thank you, everyone, for joining the latest episode of Cloud Native Live. It was great to have a session about Kubernetes-native pipelines for stateful applications, and thank you for such a great speaker as well. We also really loved the interaction and questions from the audience. And, as always, we bring you the latest cloud native code every Wednesday, so next week we will have another session coming up. So thanks for joining today, and see you next week.