From YouTube: [Kubernetes, IaC] Clusters, environment scope, Terraform - User interview (Internal, SRE)
Description
Talking with Graeme about the environments, the usage of clusters per environment, cluster apps, Terraform, and the user journey.
A
Hi Graeme, good afternoon.
B
...sense. And obviously there are limitations with the way that works at the moment, because it's all done from, I think, the Rails app itself; it actually does the kubectl calls and everything. So they're moving to having all of the GitLab managed apps done by CI jobs, which I'm definitely for, and I think that makes perfect sense, and the tooling and what they've done so far looks good. The problem is when they moved it to doing it via GitLab CI.
B
The way you kind of differentiate different clusters got changed a little bit, I guess you could say. So before, when I added a Kubernetes cluster to a GitLab group (mm-hmm), on that cluster's screen I could choose the apps I wanted to install and I could choose the settings. I could say, you know, I want to install these apps, or I want this to be configured in such a way. It was on the settings screens for the cluster. It didn't matter.
B
Let's say at the group level: it didn't matter which projects were using that cluster, it didn't matter what environment scopes, like GitLab environment scopes, were being used. The settings page was on that cluster and related to that cluster. Yes, so with the CI approach, that kind of becomes a little bit blurry, because our example, for GitLab.com, is: we want to have, say, five or six Kubernetes clusters, and they may have different purposes.
B
One might run all our staging, one might run all our production; that workflow aligns with the environments: production, staging, what-have-you. But then there are also times we want to have a GitLab, sorry, a Kubernetes cluster that is running production workloads but is a different cluster, maybe for some specific projects or for security reasons. So we're doing things like, with [unclear] for security, and things like that.
B
Environment scope, yes, but actually, the problem was that environment scopes, for GitLab, don't necessarily map, to me, to one particular cluster. So in the current model, I can add the cluster, I can go to the apps screen, I can click the install button on that cluster, and I know it's that cluster. Whereas with the new approach, it's our environment scope; like, a CI job runs per environment.
B
Alright, [the apps] stay tied to an environment, but an environment, to me, is not a single cluster, and so there's actually no way for me to say: these are three Kubernetes clusters, one running one app in production, one running another app in production, and one running a specific extra-security app that has extra compliance reasons.
A
So I understand the use case, and you have multiple clusters per environment, and that was the reason I scheduled the meeting with you, because I think it's probably a more technical reason we decided to... In my understanding, we only allow one cluster per environment because otherwise GitLab wouldn't know, at least at the project level, or even at the group level, which cluster to deploy to, if it's not defined in the YAML file. Because I guess you can define which cluster you're gonna use, is that...?
B
It depends on, I guess, how the cluster management project is configured at the moment, because there's that functionality, in alpha, which is called the cluster management project. Yes, I've been trying to utilize it, but yeah, what happens is: if I have, let's say, three clusters, and each one of those is set to use the one cluster management repo, because I would like one repo to manage all my clusters, I can't get it to actually run against all three of my clusters, because you...
B
So, for me, I think the concept of an environment scope should not be tied to clusters. I don't think they match one to one. From an end-user app perspective, where you might have a git repo that runs a particular end-user app that you want to deploy some code to, it makes sense for environments; you have, like, a production environment or what-have-you. But from, like, a cluster management perspective...
B
You know, production is more than one cluster to us, especially for security and compliance reasons, where we have different security levels and stuff. So to me, environment scopes are for things that are deployed on top of it. For me, managing my clusters, I don't think environment scopes should matter at all. They don't matter to me; I don't really follow that model. I follow: these are the clusters I...
B
...have, that I've added to GitLab, and I want to manage each one of those independently and know what I'm putting on each one. And then, if other git repos under my group want to deploy on top of them, they can use environment scopes to mark which one of those clusters they want their environments to match to. That's fine, but for me, as a cluster admin, I need to just understand directly the clusters I have and what I want on them, right.
B
[Environment scopes are] more for people deploying, like, via Auto DevOps, the people deploying things on top. But for me, yes, I think the thing is: I have multiple clusters that are production. It's never going to be one cluster for production for me, because we have security limitations. Some of them need to be PCI compliant, some of them have to be, you know, SOX compliant, and so what production is for us is never just one-to-one; we have...
B
You know, it could be six. And then staging: we might have all of our stage environments running on one cluster, but once again we may have a few different staging ones as well. So I manage those clusters, and then other people use, you know, GitLab CI and Auto DevOps to deploy on top of that; what their environment scopes are and what they map to is kind of more up to them, a little bit.
B
It's kind of tricky. So essentially, what we do at the moment: if I have a group, let's say it's, like, my GitLab engineering group or whatever, and I add, like, six clusters in there, we go into the settings screen for each one, right, and I can click the buttons for each one, and that doesn't care about environment scopes at all. The current workflow that is in the app has no kind of concept of being mapped to environments.
B
We must not even be using the environment scopes in the current setup; maybe that's why. But to me, let's see: so we might have just one cluster that's in production that we... I guess, I'm not sure, but my understanding, and this could be wrong, is that, especially with the CI, we've been using this to manage our clusters, and some of them are using, like, GitLab Auto DevOps, and some of them aren't using GitLab or Auto DevOps, they're just deploying stuff manually. And I...
B
...think the thing that, maybe, maybe this has always been this way; I guess it has always been this way. But I think the thing that really tricked us, or tricked me, was when we were trying to essentially say: we want, you know, like, a production cluster, and we want to have different GitLab managed apps deploying to what they think is the production...
B
...environment scope, but they're different clusters. Like, I guess it is basically that limitation of one cluster per environment scope, and that, for us, is becoming trickier to work around, you know, to basically try and do... if we want to move more stuff to GitLab Auto DevOps, because we have multiple clusters that are in production.
B
I guess what I could do is: if I put one cluster at the group level, and then, if one particular project wants to use a different cluster, I could put a different cluster at the project level. That's potentially a way to do it, but the problem, once again, is: if I want the same cluster management repo for both of those two clusters, when I run the cluster management CI job, I don't know which one it'll actually pick, because both of those clusters are in scope for production, if that makes sense.
B
Well, that's okay: when I create the clusters, I can bind them all to the one management repo. But it was more a case of I wanted to be able to have multiple clusters that matched production, so that any of the GitLab projects that are using it can, you know, choose which cluster is production, or whatever, based off their use case. I also wanted to...
B
I refer to my clusters, you know, by their name, or an identifier that tells me what those clusters are. Whereas the CI only knows of environments; CI knows, like, you know, you set an environment for a CI job and that's how it knows which cluster to go to. But to me, I want to be able to say: I would like to run it against this cluster, or this cluster; that's not necessarily tied to environments. Ideally, there'd be multiple clusters in a production environment scope, and how...
B
That is, yeah, that is a tricky, tricky setup, because it would essentially mean someone will have to pick; like, you know, you'd be like: there are multiple clusters in this environment scope, how do you, you know, pick which one to actually apply to? One thing I haven't really looked at, and, maybe, thinking about it now, what would be worth exploring: I know we can do wildcard environment scopes, and I could maybe start seeing if I can make wildcards work.
B
Maybe that gets me what I want and it'll all work, but I think that's the tricky part at the moment: at least from what I can tell, we've had to actually create entirely different groups with different clusters, so that they don't overwrite each other, so that each one can have its own production-scoped production cluster, if that makes sense.
B
I think it's based off, like... I think so, but, once again, I haven't had a close look at it. So that's also... I think the production, like, the master branch maps to production; that's, I think, a hard mapping, or at least I think that's expected. I could be wrong, I don't know, and if I'm wrong, it's definitely... I just haven't looked at the documentation well enough; it's not clear. But I think that would be, that would probably be okay.
B
That's a good question. So the big thing we do is a lot of these: so now, obviously, we'll get a request to deploy a new app, or, you know, just to migrate an existing app off old infrastructure to Kubernetes, and things like that. So we use Terraform to spin up our GKE clusters: we have a git repo where we commit Terraform code, and that launches a CI job to run Terraform itself, and that does things like creates...
B
It'll essentially, see, so it does, like, a... it's probably a bit too small for you to read, but it does, like, a validate, then it does a terraform plan, and then it basically does a terraform apply, and then there might be some extra bits we need on the end of it that will do bits and pieces. We also use the GitLab Terraform module.
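[Editor's note] The validate / plan / apply flow described above could be sketched as a GitLab CI pipeline roughly like the following. This is a minimal illustration, not the team's actual configuration; the image, stage names, and the manual gate on apply are assumptions:

```yaml
# Hypothetical .gitlab-ci.yml for a Terraform-driven cluster repo.
stages:
  - validate
  - plan
  - apply

image:
  name: hashicorp/terraform:light
  entrypoint: [""]

before_script:
  - terraform init

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -out=plan.tfplan
  artifacts:
    paths:
      - plan.tfplan    # hand the saved plan to the apply job

apply:
  stage: apply
  script:
    - terraform apply -input=false plan.tfplan
  when: manual         # gate the apply behind a manual click
  only:
    - master
```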
B
In fact, we made some additions to it to actually improve some of the Kubernetes integration stuff, and there's some more work I want to do on that. Actually, I want to do some more work on that Terraform module to get that managed apps stuff, like what we're talking about now, because it's currently not supported. I'll...
B
Then, when we, like, push a branch or run a CI pipeline, this does a terraform apply, and that terraform apply sets up, like, the networking, creates the subnetwork. This makes the actual GKE container cluster, which takes a long time, like six minutes, and then that sets everything up. And so, excuse me, I think this, that's a warning. So this kind of sets it all up, and there's part of this Terraform code as well... One of the things that I'm working on right now is to automate adding it to GitLab.
B
So once the cluster is all created and it's running the Kubernetes cluster, normally we would go into GitLab via the web UI and we'd add the cluster, and I'd cut and paste all the information and everything. I'm using this GitLab group cluster resource, though, which just got added recently, which we pushed upstream. Now, basically, in the Terraform code as well, when we create the cluster, we can also specify: hey, add it to GitLab, and so it'll appear in GitLab, in the UI, straight away. Oh, okay.
B
This is, yeah, so this is the tricky part at the moment, because we only have the one cluster per production environment scope. Everyone's actual app code lives in a different group; they're very separated, and we have to constantly hunt around from time to time, and if there is a cluster that we want to share across two groups, it's tricky. What I'm trying to kind of improve somehow is if we can bring it back to every single cluster that we have inside of one group, and...
B
...all of our application, our users' code, also in that one group, but then we were able to just use Auto DevOps and get more fine-grained control over which clusters are going to be used by which Auto DevOps processes and so forth. And I think I've been looking at it from trying to remove the impact on the cluster management side, but perhaps this is just more of an Auto DevOps configuration thing; I'm not sure, because I could definitely see, now that I think about it...
B
No, no, no, that's okay, you know, it's fine. Essentially, what I've done is I've written a script which will automate clicking those buttons, and, if you're interested in how it works, I can send you a link to the code; I'm pretty sure it's somewhere you can get to it.
B
Wouldn't what we're doing in this git repo with the Terraform probably make sense to come into a cluster management project as well, if it scopes out? So that the cluster management project could create the cluster, add it to GitLab like we're currently doing, and then also run, like, the CI jobs for the GitLab managed apps, via CI, to install those on top and configure them; you know, like, all these: Prometheus, GitLab Runner, Crossplane, cert-manager, whatever we need. I say: okay, this is the cluster I want.
B
These are the apps I want installed on top of it, these are the settings I want. I mean, that would be great; that would be, you know, more or less what we're doing here, but just, you know, if it was combined, and with, you know, a bit of a better user interface in the GitLab product around that.
B
The vision: like, perhaps you have another button or something here that's, like, a GitLab... you know, perhaps a cluster, maybe the word is custom, or a GitOps-backed repository, or a GitOps-backed provisioner, maybe that's the word, but, like, a custom provisioner, where you basically, you know, have access to creating your own Terraform code, or some kind of custom provisioning around that. That's...
B
Possibly another way I could see it done, and perhaps this is also a good way of looking at it: if I have a cluster management repo already, I would love to be able to just commit a new file to that repo and say: actually, I would like to create a new cluster, and here are the settings. Commit that to git, and then, you know, when I commit that to git, it kicks off a pipeline to run Terraform, creates the cluster, adds it to GitLab, installs the apps, and everything.
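[Editor's note] The workflow being wished for here would amount to committing a declarative cluster definition and letting a pipeline reconcile it. The file format below is purely hypothetical, since no such feature exists as described:

```yaml
# clusters/prod-pci.yaml -- hypothetical declarative cluster spec
# committed to the cluster management repo; a CI pipeline would run
# Terraform, register the cluster with GitLab, and install the apps.
name: prod-pci
provider: gke
region: us-central1
node_count: 3
environment_scope: production
managed_apps:
  - helm
  - ingress
  - prometheus
  - gitlab-runner
  - cert-manager
```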
B
So that is really more of a GitOps workflow, which I also think would actually be incredibly valuable. You do get a bit of a chicken-and-egg problem: I don't know how you would set up a cluster management project without a cluster already; I'm not sure, maybe that isn't too difficult. But that is also another way, which is how we currently do it, more or less, like...
B
Yeah, I would be interested to understand, you know, when I'm creating this, does that mean I can click a button and get a cluster management repo already? That's a pretty cool idea; like, I think that would work well. If you've got, like, a category: here's a cluster management repo for you; you can push, sorry, I push a file to it and create the cluster, this is how I want it to look, and these are the settings, and then it just comes up, and, you know, it's added, and that would be great.
B
We haven't done much yet, mostly because we're still playing around with this, quite early. The instance-wide ones: because we're using GitLab.com, we use GitLab.com, and not a self-managed GitLab instance, to manage our deployments and management of Kubernetes clusters and stuff. Because we're using GitLab.com, we can't have instance-wide clusters; that would mean every user on GitLab.com would have access to it, which is obviously not what we want. So we don't touch instance-wide clusters at all.
B
As I mentioned before, basically because for a lot of the stuff we're using Auto DevOps, and we're not quite sure how that works with custom environment scopes that are not the standard environment scopes of production or what-have-you, we have more or less gone down to creating new groups every... like, creating a new group, putting some clusters in there for staging and production, creating another new group with staging and production, and they're very separated, and we often have to duplicate permissions and stuff across.
B
Kind of how we're doing it at the moment: we don't really use project clusters very much, because most of the clusters, not always, but most of the clusters we want, we would like to have managed at the one level, because it's hard for us to go: where is this cluster? Is it in this project, or this project, or is it in this group? If we keep them all at least at the group level, we know there are a few different groups we can look at to find where that cluster is.
B
Putting them at the project level, we just... I think the tricky part is, in the GitLab view, if I'm in a group, like, I might be an admin of this group, I can't at a glance see what cluster is in every single project, you know; there's no tree, there's no way for me to look top-down and see everything: what are the clusters in my instance, what are the clusters in my groups, what are the clusters in my projects, all in one view? Yes.
B
I think, especially with us doing more work with Crossplane, I think there's potential there for us to get a full view of all our cloud resources; with Terraform it's the same, I guess, Crossplane or Terraform, but it is interesting. So, you know, if we create... we might create a Kubernetes cluster, but we also might create a database, a cloud database instance, a Cloud SQL instance, right, for some data storage as part of that: the app that's running on that cluster might need some data storage, it might have some other bits and pieces.
B
You know, if we're creating more in Terraform, it's gonna be nice to be able to get a view, like a visual representation or just an overview, of the infrastructure that was created by Terraform in those CI jobs, which often is just stored in the Terraform state file. So Terraform just has a state file saying: these are the objects I created, this is where they live, some details about them. Being able to, especially if you're running Terraform from the cluster management project, not only...
B
...just, you know, run Terraform and create some things, but then have a holistic view, take a step back and go: okay, well, what has this Terraform, you know, if I've run this cluster management project, for me, now, you know, it's made all these things; what's a clear view of everything that it's done, so that I can understand, you know, what am I doing, and what are other people in my group doing, and what is the infrastructure we now have running on this account, for this cloud provider, or what-have-you, right?
B
So I think the plan will tell you, you know, what it's going to do, what it's going to change, and if you run it again the plan will be empty, because everything is already created. It's the actual Terraform state file, and where you store that, that is a running list of what it currently manages and has created, and I think that state file is the most interesting part, right, because that is the current state of what is running and is managed.
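[Editor's note] Since the state file is just JSON, a small script can already produce the "running list" described here. A minimal sketch, assuming the version-4 state layout (a top-level `resources` array whose entries carry `type`, `name`, and `mode` keys):

```python
import json

def summarize_state(state: dict) -> list[str]:
    """Return 'type.name' for every managed resource in a Terraform
    state document, ignoring data sources (mode == "data")."""
    return [
        f'{r["type"]}.{r["name"]}'
        for r in state.get("resources", [])
        if r.get("mode") == "managed"
    ]

if __name__ == "__main__":
    # Tiny stand-in for a real terraform.tfstate file.
    raw = """
    {
      "version": 4,
      "resources": [
        {"mode": "managed", "type": "google_container_cluster",
         "name": "prod"},
        {"mode": "managed", "type": "google_sql_database_instance",
         "name": "app_db"},
        {"mode": "data", "type": "google_project", "name": "current"}
      ]
    }
    """
    print(summarize_state(json.loads(raw)))
```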
B
...would be useful at a very high level. I know, I understand: the more information we put on this page, the longer it will take to load, so obviously, I understand, we're kind of going to be careful. But I think some very good high-level pieces, like maybe even just the API URL, so the URL to hit the Kubernetes endpoint, would probably be useful. I think maybe the Kubernetes version it's running is also very useful; it would be very nice to see at a glance:
B
Oh, look, my staging environment or my production environment is running a different version from my other environments. Yeah, the API URL would be nice, because, you know, at the moment I'm like: what's this cluster again? How do I access this cluster? I have to click on here, and then I find it, yeah.
B
That would also be nice to see at a glance, so I could see, you know, what cluster is here, what clusters are actually managed by GitLab, maybe what version of Kubernetes they're running. And it would also probably be good, and don't overload the screen, but being able to see quickly what GitLab managed apps are installed.
B
So if I could see, like, somewhere over here, okay, there's a little symbol for Helm and the Ingress and, you know, JupyterHub, like, just a few symbols of what apps are installed on it. I do understand that's gonna grow quite a lot, sorry, I don't know if that's gonna be feasible to do, but I think that would also be useful, so I could look and say: okay, these are the clusters I have; what am I installing and managing on them, you know, what versions of that kind of thing are running? Yes.
A
Okay, can you see my screen? Sure. Okay, so I've done a redesign for the listing... so, yeah, this is maybe not the most final one, so bear with me. Okay, this is the most final one, okay. So, in my understanding, you don't really care about the cluster size and the total memory, and that's...
B
The resource contention, like total cores, total memory and stuff, is useful. I'm not sure if the cluster size is necessarily useful to me, simply because, especially the way I think you can use Kubernetes, is, you know, I don't have physical, real nodes that I'm worried about; it really is more of, like, how much resource... I try and think of it in a more abstract sense, right.
B
I think, with resources, you have to be careful of getting into the technical nitty-gritty of Kubernetes: there are two actual ways to look at contention and resource allocation in Kubernetes. When you create the pods, when you create the resources in Kubernetes, you can tell it how much to give: you can say, this pod only needs a gig of memory, that's all it gets, that's all it's given. Away it goes, and the scheduler goes: okay, can I find a node with one gig of free memory?
B
There are two problems with that. One: the Kubernetes scheduler has to treat it as what's called best-effort, yes, and guess where to fit it, because it's like: I don't know how much it's gonna use, so I'm just gonna try and put it here, and if it uses too much memory or CPU, it will kill the pod, and actually kill your application, because: you didn't tell me how much it was gonna use, I put you here, you're using too much, and I'm running out of memory.
B
So if a particular physical node in Kubernetes is hitting its memory limit, the kubelet, which is the piece of software that runs on that node, talking to the Kubernetes control plane and managing the pods, will look at all of the apps that are running on there, and if it finds an app that was never given any information about how much memory or CPU it was allowed to use, that was just left to best-effort, it will go: you're a best-effort pod, and I'm running out of memory.
B
Actually, best practice in general: for every pod, you should always tell Kubernetes how much memory and CPU to give it. Kubernetes will limit that, like, it won't allow it to use any more than that, but it also allows it to make good decisions, because it can go: okay, this box has six gig of memory free; you've given me a pod that says it can only use three; dude, I can definitely run it on that box, and I definitely know it's not gonna use more than that.
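[Editor's note] The requests and limits being described here are set per container in the pod spec; a minimal example (all names and values illustrative):

```yaml
# Pod that tells the scheduler what it needs (requests) and the
# kubelet what it may never exceed (limits); without these it runs
# as best-effort and is first in line to be killed under memory
# pressure.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest
      resources:
        requests:
          memory: "1Gi"
          cpu: "500m"
        limits:
          memory: "1Gi"
          cpu: "1"
```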
B
In short, they should; I think it's not always the case that people do, and it's something I've raised, just because, when you're talking about total memory and total cores, if you report, you've got to be careful, because if you report what's currently being used, that's not necessarily accurate; it would not necessarily be an accurate representation. This is a topic...
B
...actually, I can go into great detail on, having done a lot of it in the past, but all I'm saying is: it's good for me to know this information. I would actually enhance it to show more of the total resources: so, the amount of CPU or memory that's currently being used is this amount, and then the actual amount of CPU and memory that, if you add up the pods, every pod that's defined limits on what it's allowed to use, they could use in total.
B
Both of those statistics are important to me. I want to know, you know, what people have defined and said their pods are going to use. But if people have said, you know, their pods are going to use, like, 10 gig of memory in total, but currently there's, like, 20 gig being used, I want to actually find some pods and say: hey, look, I found these, developer; I found this pod, you haven't set memory limits on this pod, so, you know, we're using, I don't know, you know, like this.
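[Editor's note] That audit, summing declared limits and flagging pods that never set any, is straightforward to sketch. The helper below is hypothetical; in practice the pod data would come from the Kubernetes API rather than a hand-built list:

```python
def audit_pods(pods: list[dict]) -> tuple[int, list[str]]:
    """Given pods as {"name": str, "memory_limit_gib": int or None},
    return (total GiB of declared limits, names of pods missing limits)."""
    total = sum(p["memory_limit_gib"] or 0 for p in pods)
    missing = [p["name"] for p in pods if p["memory_limit_gib"] is None]
    return total, missing

if __name__ == "__main__":
    # Illustrative pod data; a real tool would query the cluster.
    pods = [
        {"name": "web", "memory_limit_gib": 4},
        {"name": "worker", "memory_limit_gib": 6},
        {"name": "legacy-batch", "memory_limit_gib": None},  # no limit set
    ]
    total, missing = audit_pods(pods)
    print(f"declared limits: {total} GiB; pods without limits: {missing}")
```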
A
Okay, so if the developer doesn't define those limits, then what would a good user experience be? So your job, I guess, is to ensure the infrastructure is covering the needs of the application. So what would be a nice solution in this case, when a pod, when an application, is about to be killed because the developer didn't set the limits? Yeah.
B
So, that's good, so there are two ways to look at that. I can definitely spin up more nodes; like, because, sometimes, obviously, fixing these things and investigating them can take time, I can certainly go: you know what, I'm going to just, you know, add extra capacity to handle that, more nodes.
B
I'll expand my cluster to take that on, and then, in the meantime, we'll go and investigate, see if we can figure out and understand what the needs of that pod are, and lock that down definitively. And then, you know, you just scale down the cluster, or maybe the cluster needs to stay that size, but we're just making sure that we, you know, understand the use case. That's probably more often than not.
B
You know, if it causes the box to run out of memory, it will get killed, simply because nothing in Kubernetes got told how much memory it needs; but it may need that much memory, and it could be time for me to actually expand the cluster; I'm not sure. And especially in the case of misbehaving applications, and we do see it from time to time, where an application just is leaking memory and...
B
...dying. Because, you know, we can definitively say: well, this is the amount of memory we think it should use, and if it's dying because it's running out of memory, that's a problem; especially if it's, like, a new version of an app that all of a sudden starts dying because it's using more memory, we can kind of roll [back].
B
So we're using Prometheus monitoring, which is set up in our monitoring infrastructure, to monitor basically how much of the cluster's resources we're using overall. And so, at the moment, it is basically: when we get the alert that the cluster is running out of resources, we have to kind of make a bit of a judgment call. We don't get much visibility into... like, we can see how much the cluster is using, but, so, you know, we still have to kind of do manual investigation into why.
B
Is this a misbehaving app? Is it, like, a user... you know, we're running GitLab.com services in Kubernetes; is it someone outside of GitLab trying to do bad stuff, or doing the wrong thing? Or is it just that the app needs more resources? Maybe it just needs more resources because it's, you know, got more features or something, right.
B
Sorry, I think, all right: I think, definitely, on the kind of cluster management details... I think we already do show, as you say, bits and pieces in the monitoring there. I think there is monitoring... I have to admit, I haven't actually done much with the monitoring part of GitLab at all; I mean, I'm not very familiar with it.
B
So I may not be the best one to answer, but I do think, you know, being able to identify... I think there are two ways, right: being able to look at it for the particular cluster, and the monitoring of that cluster; and then, once again, even something like a pane on the left-hand side of the screen that's for monitoring, where I get, like, a bird's-eye view of all my clusters and all my infrastructure, that would also kind of be nice, or quite useful, as well, I think, right.