From YouTube: Rob @ Inshur about their Kubernetes usage
More info at https://docs.google.com/presentation/d/1XIisabbPBoLkwd1nFaQZE9Y38_QCs9vhXQziVvatFyg/edit
A: Yeah, so I actually found you through DevOps Weekly and through your article. Let's start with the basics: could you briefly describe the company you're working at? I know you're in the insurance industry, but what's the size of the company, how many of you are engineers, and how is your engineering department structured?
B: Yes, the company I work for is Inshur. It's a mobile phone application product that's used to sell insurance to drivers on rideshare-type apps. We have a partnership with Uber, so that's where our focus is. Our application is actually recommended to drivers when they sign up: there's a link in the app that says, you know, would you like insurance? The size of the company is about 60 to 70 people at the moment.
B: Well, we were just about to rapidly expand, actually, but obviously you can see everything that's going on at the moment. The company's just over two years old, and in terms of the engineering department I think we're about 30 to 40 people. We've just had many people join to aid our replatforming effort.
B: Fairly smooth. I'm the only DevOps person, and I try to follow DevOps practices: I integrate with the development teams and work quite closely with them, and across the company I try to build systems that basically enable teams to do the work themselves. My goal is basically to empower the teams to get the software that they're writing into production as quickly as possible.
B: Teams are sort of split between the mobile app development, the kind of legacy (well, our current existing) platform, and then the replatforming effort, plus some feature and maintenance type teams. But they do change sometimes, depending on whether there's a lot of features being worked on or something like that; things shift around a little bit, because it's still kind of a small company. That's how it works.
B: So basically, the way that I've done things is with three tiers of monitoring. There's the infrastructure tier, which is basically the cloud platform; that's mainly handled by GCP (it used to be another platform), and I'm really happy with it. It surfaces the events that I care about. Then there's the application tier of monitoring, which is Prometheus and friends scraping metrics from my running applications. That's performance monitoring, I suppose, and also kind of red/green monitoring for service availability. But we wouldn't really alert on either of those two tiers of monitoring; we mainly alert on our third tier, which is our external, end-to-end monitoring.
B: So basically, yeah, we have Prometheus that scrapes metrics off the applications, and that's used for performance monitoring and especially for red/green, so we know our services are alive. But we wouldn't really use that for alerting on anything, because the clusters handle failure in a nice way.
B: The third tier of monitoring is our external monitoring, and that's end-to-end monitoring: we have checks on each one of our APIs. That's the kind of monitoring that would wake us up in the middle of the night if there's a problem, because that means there's actually an issue with our service. The end-to-end monitoring is configurable by the developers as well, so it just gets automatically deployed alongside our applications.
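The interview doesn't show the format of those developer-configured checks; as a purely hypothetical sketch, an end-to-end check shipped alongside an application might look something like this (all field names and URLs are invented, not Inshur's actual format):

```yaml
# Hypothetical end-to-end check definition, deployed alongside the app.
checks:
  - name: quote-api-health            # illustrative service name
    url: https://api.example.com/quotes/healthz
    method: GET
    expect_status: 200
    interval: 60s
    alert: pager                      # only this tier wakes someone at night
```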
A: Okay, let's get back to more general questions before diving into the deep specifics. You wrote me that you basically started working with Kubernetes around four or five years ago, and I understand from what you said that you're working on GCP. Do you use just plain Kubernetes, or do you have experience with other flavors like Rancher or OpenShift as well?
A: Okay, that's perfectly fine. Actually, I don't know how much I believe that, but we are mostly focusing on managed Kubernetes as well. So that was just to understand your background. Now, you also wrote, and that sounds really interesting, that you like small-sized clusters. So what I'd like to ask is: how many clusters do you have at the moment, and how do you organize those clusters?
B: For our replatforming project, at the moment we've only deployed our development environment. We're going to deploy a test environment in a couple of weeks, and our production environment a few weeks after that. But we will have 30 clusters in total, and that's one cluster per service per environment: one cluster for dev, one cluster for test, one for production, for each of our services. So it's roughly around 30 clusters.
B: Partially it's to do with how it enables our teams to work very independently of each other. If they need to add a database, or some sort of service to bolt onto the side of their application, they can do that in complete isolation.
B: So we have two nodes, because we want to test concurrency for applications, to make sure that when we run more than one instance of the app everything still works as it should. But we don't really care about high availability in development; we try to cut cost as much as possible, so we're using preemptible nodes.

A: And, as you said, do you use the autoscaler to scale down the nodes at night?

B: Yes, and we do the same for our databases.
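The exact commands aren't shown in the interview; on GKE, a cheap preemptible dev node pool can be sketched roughly like this (cluster and pool names are made up):

```shell
# Preemptible nodes are much cheaper but can be reclaimed at any time,
# which is acceptable for a development environment.
gcloud container node-pools create dev-pool \
  --cluster=dev-cluster \
  --preemptible \
  --num-nodes=2 \
  --enable-autoscaling --min-nodes=0 --max-nodes=2
```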
A: Nice. And there's also Vault in the mix; HashiCorp Vault?

B: Yep. We also have a shared project, and access to it is via service accounts, basically GCP service accounts. We use Terraform to configure all of the child projects with access to this centralized project that has GCR, the container registry, in it. All of our application build pipelines push to that one place, and then all of the projects across every environment have access to that one repository.
B: The pipelines pull their containers using Vault. Vault is connected to every project by a service account, and our build pipelines have a secret set in them that allows them to instantiate short-lived credentials; I think it's about 10 or 20 minutes, something short like that. That allows the build pipelines to have access to that centralized repository.
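Vault's GCP secrets engine supports exactly this pattern of short-lived tokens. A hypothetical sketch of what a build step could run (the roleset name is an assumption, not Inshur's actual configuration):

```shell
# Ask Vault for a short-lived GCP OAuth token bound to a roleset that
# only grants pull access to the shared registry project.
TOKEN=$(vault read -field=token gcp/roleset/gcr-pull/token)

# Authenticate docker against GCR for this build only; the token
# expires on its own shortly afterwards.
docker login -u oauth2accesstoken -p "$TOKEN" https://gcr.io
```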
B: They're not deployment pipelines as such; they just build the images.

A: Yeah, that was my understanding.

B: So they just run tests and build the images, but they don't actually do the deployment. We actually have a service that links our build pipelines to our infrastructure-as-code repository.
B: Basically, what it does is it uses a webhook, so that when a build succeeds for an application pipeline, it triggers the webhook, which then checks out the infrastructure-as-code repository and updates the image tag for the development environment. It means that developers don't need to go and bump versions in the infrastructure whenever they want to deploy an application; we have an automatic way for that to happen in development. Test and production are different, obviously, because they're a little bit more finely controlled.
B: So this tool checks to see if more than one system pod is running on the same machine, and if it is, then it kills it. That basically allows the cluster to rebalance those system services, and then our services start properly. So that's one thing. We're actually using MongoDB Cloud Atlas, I think, for our databases, so every one of our projects has a shared VPC that connects the Cloud Atlas platform to that project.
A: So MongoDB is for objects, or whatever needs to get stored, and that's a managed service?

B: That's right, yeah.
B: So yeah, I mean, we don't really have to do that, because we don't have shared namespaces or anything. We're not sharing our resources across a lot of different teams; there's not a lot of things going on at once. So we try to keep things as simple as possible until there's a reason to add metadata or anything like that.
B: At the moment we are actually using the default namespace, but I think I'll probably change that, because from previous experience at different companies, at my last company basically, it tends to be a good idea to leave the default namespace empty and then monitor it: if anything does appear in that namespace, it could be an indication of an issue.
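With kube-state-metrics in place, that "default namespace should stay empty" check can be expressed as a Prometheus alert; a hypothetical rule fragment:

```yaml
groups:
  - name: namespace-hygiene
    rules:
      - alert: DefaultNamespaceNotEmpty
        # kube-state-metrics exposes one kube_pod_info series per pod.
        expr: count(kube_pod_info{namespace="default"}) > 0
        for: 5m
        annotations:
          summary: "Something is running in the default namespace"
```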
B: That was a decision we made: we're using kubectl because we're using Kustomize. In our infrastructure-as-code repository we have a tree structure that maps to our projects, where we have the region name, the environment, and the deployment, where the deployment is like the project or the service.
B
Those
are
the
kind
of
three
key
pieces
of
information
that
allow
our
tooling
to
map
a
deployment
into
GCP
and
then
within
the
within
that
kind
of
deployment
directory.
We
have
two
directories:
we
have
an
infrastructure
directory
which
contains
our
terraform,
and
then
we
have
an
application
directory
which
contains
our
customized
or
contains
a
subset
of
directories
that
have
the
each
directories
a
kubernetes
application
and
within
that
to
customize
file.
B: That kustomization file then references a modules directory, which lives outside of that big directory tree and contains versioned module definitions for our applications. So we basically only need to define an application in one place, and then within the definition of our infrastructure we just reference that module.

A: I see.
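To make the layout concrete, here's a hypothetical sketch of the tree being described: region/environment/service deployment directories, with each application's kustomization referencing a versioned module outside the tree (all names are invented):

```shell
set -eu
ROOT=/tmp/iac-layout
rm -rf "$ROOT"

# Deployment tree: region / environment / service, split into an
# infrastructure (Terraform) half and an application (Kustomize) half.
mkdir -p "$ROOT/deployments/europe-west2/dev/quote-service/infrastructure"
mkdir -p "$ROOT/deployments/europe-west2/dev/quote-service/application/quote-app"

# Versioned module definitions live outside the deployment tree, so dev
# can pin a newer version than test or production.
mkdir -p "$ROOT/modules/quote-app/v1"

cat > "$ROOT/deployments/europe-west2/dev/quote-service/application/quote-app/kustomization.yaml" <<'EOF'
resources:
  - ../../../../../../modules/quote-app/v1
EOF

find "$ROOT" -name kustomization.yaml
```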
B: So we have, yeah, basically this deployment directory that has a region, then the environment, and then our services. Within a service directory there's an infrastructure directory and an application directory. The infrastructure directory, let me just see if I can open it... I think I can, yeah. So this basically is a module that deploys our service pattern, and then these are just the particulars about this particular service.
B
This
here,
basically,
is
where,
where
we
can
figure
out
into
end
monitoring,
so
there's
not
much
here
at
the
moment,
this
service
doesn't
have
many
dependencies
are
basically
just
one,
so
the
yeah
this
place
is
deploys.
Why
I
showed
you
in
that
diagram,
which
is
yes,
it
deploys
a
instances
of
atlas,
MongoDB,
ESS
and
in
a
cluster
and
then
the
end-to-end
monitoring
within
this
directory.
So
yeah
we
have
our
application
directory
within
yep
so
that
this
also
maps
to
the
cluster
within
the
target
project,
so
it
with
more
than
one
cluster.
B
This
is
this
is
just
a
building
to
rebuilding
works
based
off
of
this
directory
structure.
So
you
can
see
in
this
in
this
cluster,
basically
just
kind
of
decide.
You
know,
which
is
just
what
a
single
customization
file
the
laser
just
Maps,
this
module
directory
a
reversion,
the
modules
as
well,
because
you
know
we
might
want
to
be
have
version
one
deployed
and
in
test
and
production.
B: It just makes it really easy to work on the modules, and because they're so tightly coupled to our infrastructure anyway, it makes more sense than having them in separate repositories, just because you can see changes through PRs happening in a much easier way. Then, basically, these are the manifests for our actual application, and of course Kustomize allows us to have a main module and then our overrides for what is specific to this deployment.
B: So it's easier to reason about what's going on from the top-level directory. One of the other things that might not have been obvious is that there's a lot of similar configuration that you'd have in every single project, like the backend configuration where the state is stored, the provider setup, some IAM stuff. So we basically have this skeleton directory, and our tooling copies, or rather symlinks, those files into the infrastructure directory before it runs.
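A tiny sketch of that skeleton mechanism: shared boilerplate files kept in one place and symlinked into a deployment's infrastructure directory just before Terraform runs there (paths and file names are illustrative):

```shell
set -eu
BASE=/tmp/skeleton-demo
rm -rf "$BASE"

# One copy of the boilerplate every project needs.
mkdir -p "$BASE/skeleton"
cat > "$BASE/skeleton/backend.tf" <<'EOF'
# shared backend/provider/IAM boilerplate (illustrative)
EOF

# Link it into a deployment before running terraform in that directory.
mkdir -p "$BASE/deployments/dev/quote-service/infrastructure"
ln -sf "$BASE/skeleton/backend.tf" \
  "$BASE/deployments/dev/quote-service/infrastructure/backend.tf"

ls -l "$BASE/deployments/dev/quote-service/infrastructure/"
```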
B: Yeah, so we have a bootstrap directory, which has some Terraform and stuff in it, and this is basically what we use to set everything up. It creates a new GCP project under our GCP organization via the API. We actually create the Terraform state bucket outside of our normal application Terraform, because if someone does a terraform destroy, we don't want them to destroy the state bucket.
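One common way to protect a bootstrap-managed state bucket is Terraform's prevent_destroy lifecycle flag; a hypothetical fragment (bucket name and location are invented):

```hcl
resource "google_storage_bucket" "tf_state" {
  name     = "example-org-terraform-state" # illustrative name
  location = "EU"

  versioning {
    enabled = true
  }

  # Refuse to delete this bucket even during `terraform destroy`.
  lifecycle {
    prevent_destroy = true
  }
}
```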
B: ...by the developers. I basically wrote a hello-world app that showed how Kubernetes integrates with GCP, mainly around things like automatically provisioning SSL certificates and using static IPs, and then they've just taken that and extended it to encompass all the functionality they need for their applications.

A: Cool.
B: Yeah, that's one thing I've been wondering about: basically whether to make the cluster API private or not. We do have a VPN at work, so I'm considering at least changing our cluster API so that it's only accessible via an IP whitelist. But to be honest, that kind of work of locking everything down I haven't really got to yet when it comes to the cluster.
B
In
GAE
I
mean
some,
we
have
like
secure,
be
turned
on
and
look
all
that
sort
of
kind
of
security
stuff
and
only
the
features
that
we
need
a
turned
on,
but
we
definitely
need
to
go
through
a
phase
of
kind
of
you
know,
checking
things
and
we'll
do
that
before
we
go
before
we
deploy
our
test
environment.
Imagine
I
kind
of
clear
space
with
was
getting
the
platform
up
from
running
so
that
our
developers,
kids
iterate
on
services.
B: There's a couple of tools that we're looking at using. GCR itself does security scanning of the containers, but I don't know how effective that is, especially since it's just at build time. We're looking at Dependabot as well, since we're on GitHub, and possibly Snyk, I think. So yeah, we're basically at the stage where we're looking at what tools we're going to integrate into the application build pipelines.
A: Cool. Actually, we are getting short on time, so I don't know if you have five more minutes? Okay, thank you. I have basically two more topics. One is a minor question around monitoring, especially about cost management. I know that you are pretty cost-aware and try to keep your spending down, but do you do any kind of monitoring around your costs?
B: Our costs? That's actually something I've got a ticket to do: to set up some billing alerts around each project's cost. We don't really have... well, we did a cost analysis of what we think our costs will be, but because we are not really sure about our volumes of data, we haven't really tuned our applications yet, which means we don't strictly know what size our instances are going to be.
B: So we haven't actually set up any billing alerts yet. I think what we'll do is, once we're in production and we've been running for a month or so, we'll end up doing that. But, you know, Terraform supports configuring GCP billing alerts, so that's handy.
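For reference, a hypothetical sketch of a GCP billing alert in Terraform using the google_billing_budget resource (all IDs and amounts are placeholders):

```hcl
resource "google_billing_budget" "dev_project" {
  billing_account = "000000-000000-000000" # placeholder
  display_name    = "dev project monthly budget"

  budget_filter {
    projects = ["projects/123456789"] # placeholder project number
  }

  amount {
    specified_amount {
      currency_code = "GBP"
      units         = "500"
    }
  }

  # Notify at 50%, 90%, and 100% of the budget.
  threshold_rules { threshold_percent = 0.5 }
  threshold_rules { threshold_percent = 0.9 }
  threshold_rules { threshold_percent = 1.0 }
}
```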
A: I think I can share my screen for this part. So, we are currently working pretty heavily on improving GitLab's Terraform integration, and that's the reason why we're having this discussion today: we want to improve our Kubernetes integration as well. I'll quickly describe our current state and what's coming for Terraform; feel free to stop me anytime and say that some of our ideas are crazy, or that you like them. So this is the epic for Terraform, and it's basically what we are focusing on right now.
A: There are these two issues, which are epics as well. One is to provide the ability to review Terraform plans in the merge request; the merge request is the equivalent of the pull request in GitHub. The idea in the first iteration is basically to just add a summary of the Terraform plan to the MR, so you get something like what Atlantis does, if you know it, but you don't have to set up anything: you simply use GitLab CI and the Terraform template that we provide.
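A minimal sketch of wiring that up (the include path is GitLab's bundled template; the TF_ROOT value is illustrative, and template names may differ between GitLab versions):

```yaml
# .gitlab-ci.yml: run terraform plan in CI and surface the plan
# summary in the merge request via GitLab's bundled Terraform template.
include:
  - template: Terraform/Base.gitlab-ci.yml

variables:
  TF_ROOT: ${CI_PROJECT_DIR}/infrastructure # illustrative path
```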
A: Actually, one of the core features of GitLab is the CI/CD that it has integrated, and as a result... I don't know what you use to run Terraform? I've seen the Terraform Cloud logo once on one of your slides, so I can imagine you use that for something. Okay. So the idea here is that basically in the CI you can run Terraform using Docker and so on.
A: Definitely not currently; you would have to do it roughly the same way you'd do it with GitHub, where you have to have a runner and a bit of manual integration in terms of wiring in Terraform. But at GitLab we are just rolling out integrated Vault support, together with HashiCorp.
A: It's not that we're overtaking them; it's an integrated effort of the two companies. Currently, what you have: let me show one of my projects to give you an idea of how, for example, a GitLab settings page looks. What we have right now is environment variables that you can set for your CI pipelines. This is not a Terraform project, so there isn't much to be shown around that.
A: But basically, one way that you can do it now is that you have your Terraform-related environment variables set up here, and then you just run Terraform the usual way, and it connects and so on. Later, the plan is to switch from these override settings towards...

B: Actually, that's pretty cool. Yeah, that sounds really good.

A: And then the plan output is stored as an artifact; it's a special artifact that we can read in the merge request to show you this widget in the merge request.

B: That's really cool!
A
The
second
thing
that
we
are
working
on
is
the
get
lab
manager
for
State,
which
would
mean
that
if
you
ever
tried
to
off
from
cloud
I
really
loved
a
very
sleek
way
of
setting
up
the
stage,
because
whenever
I
use
state
with
s3
first,
we
have
create
estate
buckets
DynamoDB
table
and
then
in
the
second
run,
you
can
import
your
current
local
state
into
that.
And
it's
it's
not
the
best
use
experience,
I
guess
on
the.
A: Okay, I won't look into it right now. So basically, the idea here is that with GitLab, if you use it to manage your Terraform state, then you can choose the object storage backend that you want to use. If it's on GitLab.com, then we have the object storage set up, and your state in the end would be stored in object storage, like S3 or GCP, it depends. But on top of that you wouldn't use the S3 backend, for example; you'd use the HTTP backend, which talks to the GitLab API.
A: So we have a feature called Auto DevOps, which is primarily aimed at people who are starting to use Kubernetes and want to focus on, as you said as well, making it very simple for the developers to just get things out. Most companies don't even have a single DevOps-type person like you, so it's got to be a good fit for them, because what we do is, once you set up a cluster... do you know what a buildpack is?
A: It was popularized by Heroku. Basically, that's how Heroku tries to guess the type and language of a project and how it can automatically deploy it. Using something a bit like that, we try to figure out as well what kind of project you are running, and then we run all kinds of security testing and everything on your project and deploy it to the cluster in the end.
B: I suppose one of the things that I've found is... we were actually using Bitbucket for the kind of legacy platform, which, I don't know if you've ever used it before, but I didn't really get on with it very well. We built basically a wrapper for Terraform and kubectl, which means that we can run the wrapper in pipelines and developers can also run it locally.
B: So having the ability to run exactly the same process in the pipeline and locally means that there's never anything unexpected happening when developers push their stuff through to production. That's really important for us, because having that consistency across it all means less running into problems, basically.
A: Okay, thank you very much for the fruitful talk, and let's keep in touch.

B: Thanks, Viktor.

A: Okay, just after I stopped the previous recording I asked one more question, which was: how often do developers open a pull request on that infrastructure repo that he presented during the video? His answer was pretty interesting, I think, so I'll try to share it. When developers want to try out something, they actually have full access to the dev clusters, so they usually run kubectl...
A: They can do everything, basically, without running Terraform, just using kubectl directly. Once they are happy with what they have, they open the pull request with the YAML file changes to get everything integrated into the test environment. He also said that when this happens they usually have to sit down together, to agree on how that setup will actually be included in the testing wrapper.