From YouTube: Configure stage basics
Description
Intro to Configure stage by Orchestration PM
B: All right, so we are chatting about the Configure stage and all of the categories and features that we work on. Well, not all of them; we're going to try to cover them at a high level. So we are in the categories page here, and as I was saying, the Ops section, which we see here, covers everything that has to do with operations, support, and infrastructure. So things like monitoring, things like infrastructure setup, like setting up a Kubernetes cluster, and serverless, which is also a type of infrastructure.
B: So the first one is Auto DevOps and Kubernetes, well, on the same page. So really this group name is temporary; I think it's not very good, because it's only talking about two of the categories that it would cover. It's just that we haven't been able to come up with a better name. I think that, in the interest of having something shorthand, we'll probably call it the Kubernetes group, and the reason is that the DevOps features that we work on are tightly coupled with Kubernetes.
B: So Auto DevOps has some security components to it, and then it has some operational components to it, and those are the ones that we kind of focus on the most. I don't know how you think about Auto DevOps, but Auto DevOps, all it is, is basically a CI template, right? And I'm going to show you the template, so hopefully that will click. So this is Auto DevOps: it's basically a template that we provide out of the box if the user doesn't have a CI file, as you'll see.
B
So,
as
you
know,
when
you
want
to
use
good
lab
CI,
you
have
to
put
a
file
on
your
repo
called
get
lucky.
I
ya
know.
So
when
you
don't
have
that,
we
will,
by
default,
use
Auto
DevOps
on
your
on
your
reap
and
think
that
the
main
purposes
of
auto,
auto
DevOps
to
fold
well,
first,
its
kind
of
provide
modern,
CI
workflows
with
the
box.
So
one
goal
there
is
that
everyone
wants
to
take
advantage
of
modern
DevOps
practices.
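The same template can also be pulled in explicitly. As a minimal sketch (the template's exact name and contents have varied across GitLab versions, so treat this as illustrative), a project's `.gitlab-ci.yml` could be nothing more than:

```yaml
# .gitlab-ci.yml
# Opt in to GitLab's Auto DevOps pipeline explicitly instead of
# relying on the no-CI-file default.
include:
  - template: Auto-DevOps.gitlab-ci.yml
```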
B: The way we do a build is that you either provide us with a Dockerfile, which is basically a definition of a container image and the dependencies that you may have with that container. If you have that, we'll use it to build your project. If you don't have that, we'll use this thing called Heroku buildpacks, and you can think about Heroku buildpacks as kind of a Dockerfile that covers more ground.
B: So generally, here's what that looks like: when users have a project like this minimal Ruby app, they will have a file called a Dockerfile, and this Dockerfile basically specifies an image and some configuration settings, like what port to expose, what dependencies you need to use, and things like that. And this image, ruby:alpine, is basically the Linux operating system with some dependencies on top; the Ruby one will have Ruby-specific dependencies, and then you have other images that will have other dependencies.
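A minimal sketch of the kind of Dockerfile being described; the tag, port, and commands below are illustrative, not taken from the demo project:

```dockerfile
# Illustrative Dockerfile for a minimal Ruby app
FROM ruby:alpine            # Linux base image with Ruby dependencies on top

WORKDIR /app
COPY Gemfile ./
RUN bundle install          # install the app's own dependencies
COPY . .

EXPOSE 5000                 # which port to expose (example value)
CMD ["bundle", "exec", "ruby", "app.rb"]
```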
B: So not only that, but also the image that it will use. As I said, the image is based on a certain flavor of Linux, so it will be an operating system, in some cases with some dependencies on it; like the Ruby one will have some Ruby components installed on top of the operating system. And the Dockerfile is separate from the application code: the application code, for example, is here, and that is code specific to my app, okay.
B: So, as I was saying, if you have that Dockerfile, we'll use that, and that's going to be kind of the fastest way to build your app. If not, we'll use these Heroku buildpacks, and basically, as I was saying, you can think about buildpacks as a common way to build applications of a certain language. So if you see here, we have buildpacks for different languages.
B: Well, not we, but Heroku maintains all of these buildpacks that support a number of languages, so we'll use these out of the box. And if you're a user who wants to use a buildpack that is not on this list, you can specify your own: the community has built a number of buildpacks, and basically you can specify a custom buildpack if you want to build an app, or a language, that's outside of this list. Yeah.
B: Correct, that's exactly right! Okay, so that is kind of part of the build process, and Auto DevOps also has other components. Building is kind of the crucial one where we start: if we know how to build it, we'll move on; if we don't know how to build it, currently it will fail. However, we're working on a way to make it smarter, where it won't fail; it will simply not run, or it will run, but it won't result in a failure.
B: So it kind of provides a bad user experience right now, because Auto DevOps is enabled by default, and currently, when it's enabled by default and you see a failure, it's kind of confusing: you don't know what's failing. "I haven't done anything, why is it showing me a failure?" And the reason it fails is that we didn't know how to build your project, most of the time, and/or you didn't have CI configuration in place, like a runner or things like that, and that's confusing.
B: So we want to make that smarter, and we want to make that kind of a better experience. But yes, so then we move on to auto testing. We do Code Quality if you're on a higher-tier plan. We do all the security features, like SAST, which stands for Static Application Security Testing, and then we do things like License Management, container scanning, Review Apps, all of those things. So all of that happens automatically after we build.
B: And then, if you have a Kubernetes cluster configured on your project, we will deploy it automatically into that cluster, and we provide the ability to monitor what's going on in your cluster with Prometheus, which is an open-source monitoring system, and it is cloud native. So it's basically designed to run with containerized workloads, and it works very well with Kubernetes. It has a very robust Helm chart that you can use, and that's what we use to deploy it into your cluster.
B: And then you will see all of the stats from your cluster, like CPU, memory, things like that. That's what we call auto monitoring, and that's basically it; that's Auto DevOps at a very high level. Of course, there's a lot of nuance and detail in how to make it work in different scenarios. So, for example, when you need to initialize or migrate a database, you need to do different configuration; if you have, like, custom components that you want to use and you don't want to use the ones out of the box...
B
There's
custom
configuration
for
that
as
well
and
most
of
the
configuration
for
Auto
DevOps.
It's
done
through
environment
variables,
so
in
odd
in
the
auto
devops
talks,
you'll
see
that
we
have
a
whole
section
that
talks
about
the
variables
and
you
use
different
variables
for
different
things.
These.
B
So
here
this
is
a
project
that
I
have
with
all
the
DevOps,
so
you
go
if
I
go
into
the
settings
and
see
ICD
here,
I
will
see
environment
variables,
variables
and
here
is
where
I
use
each
one
of
those
variables
two
two
four
four,
some
some
purpose.
So
let's
go
just
over
a
couple.
So
Auto
DevOps
uses
a
domain
Naumann
creature
that
can
be
quite
complex,
so
you
have
a
base
domain
and
based
on
that
base
domain,
we
will
auto-populate
a
domain
for
your
app
based
on
your
project,
ID
and
a
number
of
things.
B: But if you're running a production app, you may want to use a custom hostname, like example.com, and the way to configure that is through an environment variable. So that's one configuration point, and just like that one, we have others: if you want to use a custom chart, we would use this one; if you want to use a custom buildpack, you would use this one; and that's basically the gist of it. There's configuration for the database, for security, and for job management.
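For illustration, these are the kinds of CI/CD variables being pointed at; the names below follow the Auto DevOps documentation of roughly this era, but check the docs for your GitLab version, and the values are made up:

```yaml
# Illustrative Auto DevOps configuration via CI/CD variables
# (set under Settings > CI/CD > Variables, or in .gitlab-ci.yml):
variables:
  AUTO_DEVOPS_DOMAIN: example.com       # base domain apps get deployed under
  AUTO_DEVOPS_CHART: my-org/my-chart    # custom Helm chart instead of the default
  BUILDPACK_URL: https://github.com/heroku/heroku-buildpack-ruby  # custom buildpack
```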
B: Yeah, let me know if at any point something doesn't make sense, or you want to ask me something; please feel free to interrupt me. But that's the gist of Auto DevOps, and I think the vision we have for Auto DevOps is that, you know, there's kind of an explosion of projects right now. Everything used to be kind of a monolith, a single project, and now everything is moving to be a service, right?
B: So let's say, if you had a single app, you may be breaking it into small services that each serve their own purpose, and then you interconnect them all through APIs. So getting started with DevOps, as I said, with all of these modern practices: it's hard, it takes time, it takes knowledge. So if you have a CI expert on every team, maybe that's not a problem, but the reality is that people want to get up and running and they're not necessarily experts, so Auto DevOps is kind of...
B: Exactly right, so that's part of it. And actually, here in the vision, we have good snippets that talk about the increase in the number of software products and the need to provide some sort of workflow, and how much time and investment it takes. So it's trying to reduce that, kind of take the friction out, and give you modern workflows out of the box.

A: What's...
B: Testing has had some level of automation, it has had for a while, and there are different types of testing. But yes, the gist of it is: you're building this gigantic thing, there's a lot of changes coming in, you need to coordinate changes across groups, making sure that one doesn't break another, and it was very convoluted and heavy, and it didn't allow companies to move fast.
B: No, no, no, so it doesn't solve that. That is solved by the people that are designing their own apps and how the services intercommunicate with one another; that's all outside of it, like how all those services are designed. I would say that the biggest thing that Auto DevOps solves is the high barrier to entry that people have to implement modern CI workflows.
B: Well, so yeah, some of them... you know, a lot of them don't know, or if they do know, they may not know what the best practice is. Yes, you write out your CI file manually. So basically, when you're creating a CI job in GitLab, you have to write the YAML by hand, and you have to write all the stages; you have to write, basically, what you want to happen in each stage, dependencies that you may have across stages, and things like that.
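A hand-written file of the kind being described might look like this; the stage names, job names, and commands are illustrative:

```yaml
# Illustrative hand-written .gitlab-ci.yml with explicit stages
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - docker build -t my-app .   # how to produce the artifact

test-job:
  stage: test
  script:
    - bundle exec rspec          # what happens in the test stage

deploy-job:
  stage: deploy
  script:
    - ./deploy.sh                # runs only after earlier stages pass
```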
B: So there are a couple of things that I think we are very advanced at, at GitLab, and one of those is best practices, because not only do we see what's going on with users, but we see what's going on internally, and we kind of connect all of those dots; and also our expertise with containerized workflows. So, you know, you may see that everybody wants to work with containers, and it's a very modern way to make sure that your environments are homogeneous and they run anywhere.
B: So with that, and frameworks like Kubernetes, which is, you know, a project that allows you to orchestrate all of your containers, there's definitely a ramping-up period where people are not experts in either containers or Kubernetes, and Auto DevOps gives you very good practices to get started with both. That's... it's solving that problem too: everybody wants to kind of take part in the container revolution, but they don't know where to start.
B: Let's move on to the next one, then. The next one is what we call Kubernetes configuration. Everybody, both internally and externally, refers to this as the Kubernetes integration, and it's really less of an integration with Kubernetes and more of a way to manage Kubernetes. So it should be... I'll update it to be called Kubernetes management.
B
So
it's
a
way
that
you
can
manage
and
interact
with
your
clusters,
starting
with,
as
you
might
might
have
seen,
adding
a
cluster
which
you
can
do
either
manually
by
entering
the
cluster
details
or
you
can
do
by
adding
it
from
GK,
which
is
G.
K
is
the
google
kubernetes
engine
and
it's
part
of
the
Google
Cloud,
which
is
one
of
the
hyper
clouds,
so
that
three
big
ones
are
AWS,
Google
and
Azure,
and
our
goal
is
that
you're
going
to
be
able
to
easily
add
a
kubernetes
cluster
to
any
of
those
top
three?
B
At
least
there
are
others
on
the
list
like
digitalocean
and
some
others
that
may
not
be
as
big,
but
we've
also
had
some
appetite
for
write.
One
and
number
one
would
be
starting
from
out
of
your
your
your
cluster
and
then,
when
you
add
your
cluster
you're,
giving
gitlab
kind
of
access
to
what
we
call
a
cluster
admin
account
and
what
that
allows
us
to
do.
Yes,
you.
B: So there are two different workflows that we have right now. When you go to add a cluster, you can add it right on GKE. So here we said, add your cluster on GKE, and all you have to do is click here to sign in with Google, you select the account you want to use, and that's it: you just give your cluster a name, and then you select which project in Google you want to create it in, what zone...
B
How
many
notes
and
things
like
that,
and
just
with
a
single
click,
it
will
create
it
for
you.
So
this
is
great,
because
if
you
were
to
do
it
manually,
it's
kind
of
more
clicks.
You
have
to
go
outside
of
your
lab
and
change
contacts
and
things
like
that.
So
it's
a
great
experience,
but
we
only
provided
for
one
cloud
right
now,
so
we
want
to
make
sure
that
we
provided
for
other
clouds
as
well.
That
are
very.
B: ...the existing cluster, and that's kind of a more convoluted process, because when you create it, by default, you may not have the account you need, and you have to do a certain number of steps in your cluster. So yeah, it's not as easy. So we want to make sure that it's easy, or as easy as possible, on each one of those clouds.
B: So when I add a cluster, I'm giving basically a couple of details: I'm giving GitLab the API URL, so basically the URL for that cluster, the credentials that you need to log into that cluster, and so on and so forth. So GitLab has access to some variables that are coming from that cluster, and GitLab can use...
B: ...it can use those variables in its CI jobs. So even outside of Auto DevOps, I could write a CI job using the variables coming from my cluster, without having to specify them manually.
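As a sketch of what that enables: a job can use the cluster-provided variables directly. Variable names like `KUBE_NAMESPACE` follow GitLab's Kubernetes integration docs; the job name and image are illustrative:

```yaml
# Illustrative CI job using variables injected by the GitLab
# Kubernetes integration (no manual credential setup in the job).
inspect-cluster:
  stage: deploy
  image: bitnami/kubectl:latest    # any image with kubectl available
  script:
    # KUBECONFIG and KUBE_NAMESPACE come from the cluster integration
    - kubectl -n "$KUBE_NAMESPACE" get pods
  environment:
    name: production
```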
So that's a great feature of our Kubernetes integration, and that in itself is valuable. But then we have kind of other integration points with Kubernetes that make GitLab very easy to use with your Kubernetes clusters. One of them is deploy boards, and I'm sorry that we don't have an image here.
B
Let
me
see
if
I
have
one
up
and
running
your
mind.
So
basically,
when
you
deploy
anything
to
kubernetes
you're
making
use
of
what
we
call
pods
and
pods
are
kind
of
logical
units
where
that
have
containers
running
inside
of
them.
Those
containers
may
it
may
share
some
common
components
and
then,
when
you
make
a
deployment
I'm
sorry,
you
will
see
those
things
running
here
in
gitlab.
B
So
here
this
is
what
we
call
a
deploy
board
so
for
this
particular
deployment,
I
see
that
I
have
20
pods
running,
and
so
it's
great
that
you
can
see
the
status
of
those
pots
right
here
like
if
my
rollout
was
not
complete.
I
could
see
how
many
parts
are
in
process
and
while
that
deployment
is
happening,
you
can
see
this
thing
being
completed
until
the
specified
number
of
pods
is
deployed.
B
So
that's
great
another
very
nifty
feature
of
the
kubernetes
integration
is
that
when
you
hover
on
each
one
of
those
parts
she
tells
you
kind
of
what
the
pod
name
is
the
status.
You
can
click
on
that
pod
and
you
can
see
the
logs
so
basically
in
real
time,
what's
going
on
inside
that
pod,
so
that's
really
great
because
otherwise
you
have
to
you
know,
go
to
Google
or
you
know,
to
whichever
provider
you're
using
and
then
locate
at
the
pod
and
then
drill
down
and
locate
the
logs
that
are
relevant.
B
So
you
know
we're
adding
a
lot
of
value
by
doing
things
like
that.
So
that's
one
another
one
is
that
well
so
here
you
see
a
bunch
of
links
on
the
upper
right-hand
side
of
each
environment.
So
one
is
that
you
can
open
the
live
environment,
so
you
don't
have
to
locate
like
the
URL,
try
to
remember
what
the
URL
is
and
blah
blah
blah.
So
here
when
I
click
on
it,
it's
just
gonna.
B: ...take me to that live environment, and here you see kind of what the URL is, and it's doing all of that automatically for me. So that's one; another one is the monitoring that I was mentioning. So here, if you use Prometheus, you can see the performance data for your cluster: here I'm seeing error rate, latency, and throughput, and things like that. So there's a lot of good monitoring data that we're providing with the Kubernetes integration.
B
Otherwise
you
would
have
to
either
go
directly
to
Prometheus
or
use
a
dashboarding
to
like
Ravana
to
basically
see
all
of
this
stuff.
So
the
fact
that
you
can
see
it
right
within
gitlab
is
also
very,
very
valuable,
and
then
you
can
customize
it
to
see
whatever
metrics
are
meaningful
to
you
right.
So
that's
very
cool.
So.
B: So the Kubernetes dashboard is not installed on your cluster by default, and you can choose to install the Kubernetes dashboard, but then customizing it is not as easy as you can in Prometheus. So yeah, maybe we don't have to go through the whole thing, because it's going to take us some time, but yes, that's the gist of it: it's a lot easier to customize in tools like Prometheus, because the Kubernetes dashboard has, like, a set list of metrics, and that's it.
B: So now, we also have, as part of this integration, the web terminals. So when you click on Web Terminal, what this is doing is giving you access to a container that's running inside one of your pods; so this is for the production environment here, for example. If you're troubleshooting a problem in production, you don't have to, like, locate the VM where that container is running, and things like that, which takes time and can be very cumbersome.
B
Yes,
that's
me,
okay,
so
here
you're
inside
of
the
container,
so
you
can
like
if
you
wanted
to
see
what's
going
on
in
your
doctor
thing
in
your
doctor
file.
So
maybe
somebody
changed
it
and
they
didn't
do
it
right
or
they
didn't
get
into
the
specification.
So
you
could
vote
right
here,
see
a
doctor
file
and
then
you
can
see
the
contents
and
you
see-
oh,
maybe
there's
not
the
right
port
or
whatever,
and
you
can
fix
it
all
right
here,
instead
of
like
locating
the
VM
and
going
to
determines
like
that.
So.
B: So those are, I think, at a high level, the main things about the Kubernetes integration. We have more technical-detail things, like the ability to do canary deployments, which you can do basically by configuring more environment variables. But I think that this is kind of the... and, well, another very useful thing is that when you integrate with a Kubernetes cluster, you can configure your CI file to deploy review apps on your branches.
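A review-app job of the kind being described is usually wired up with GitLab's dynamic environments. This is a sketch; the job names, scripts, and URL pattern are illustrative, while `$CI_COMMIT_REF_NAME`/`$CI_COMMIT_REF_SLUG` are standard GitLab CI variables:

```yaml
# Illustrative review-app jobs: one throwaway environment per branch.
deploy-review:
  stage: deploy
  script:
    - ./deploy-review.sh            # hypothetical deploy script
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://$CI_COMMIT_REF_SLUG.review.example.com
    on_stop: stop-review
  only:
    - branches
  except:
    - master

stop-review:
  stage: deploy
  script:
    - ./teardown-review.sh          # hypothetical teardown script
  when: manual
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
```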
B: So I don't know if you've had a chance to interact with review apps, but review apps give you a full running environment of your app that you can test with, and that's a really great feature; if you visit any MR today, it'll have, like... let me show you; things are going to be easier if I show it. So, for example, here's a project that I know for sure has review apps going: this is gitlab.com, and we go to merge requests, and... well, these are building; this one is built.
B: So when you click on it, you'll notice that the URL is specific to this app: it has kind of the branch name, then "about," then the GitLab review-app domain, and so on. So this is a full running GitLab website that is not production, right? It was spun up only for that branch, and it's great, because you can test and you can see how things are looking and feeling in production, or how they will look and feel in production, once you deploy this branch.
B: That would be very cool. I think that when we talk about costs, things are so specific to the provider that you're using to run your compute on; the cost of one hour of VM time, let's say for a certain size of VM, is going to be different between each one of the clouds. So we should talk, and we should think... in my opinion, of course, this is something that we have to discuss.
B: So we have to talk about percentages, about resources, like VMs, cluster pods, things like that. And then, if I tell you that you can cut it by 50 percent, it probably means that it's going to be, like, 50 percent less cost. I mean, it may not; you're paying more for your Kubernetes masters, but, for example, in the case of GCP, you pay per cluster, per node: if you cut half the nodes, you're probably going to save half the money, yeah.
B: But that's how we have to think about cost. That's actually a good segue! Here we have one category that is planned, so we haven't done anything in that category yet, and this is something that we have been discussing with the monitoring group: cluster cost optimization. And this is... you could imagine that in my Kubernetes page I have clusters here, so when I click inside this cluster, I could have a tab that says something like Cost, and it could tell me, hey, you've been underutilizing your resources by 50 percent.
B: I don't know if this may or may not be doable with only the data that we have, but it would be great that we tell you, hey, you could save 50 percent, and with a single click we're going to go out to the cloud of your choice and cut down your cluster by three nodes; and then, you know, in one month we'll come back and say, hey, by doing this you saved this over the month. Well, not in terms of money, but in terms of resources.
B: Well, so... yeah, not within GitLab. So within GitLab, we can talk monitoring in terms of, like, memory and processor use, right? So here we see the CPU and the memory used, but that's about it. So we see that the average request is 1.16 and the capacity is 7.3. So you...
B: So right now, cluster cost optimization, and anything that has to do with Kubernetes management, falls under Configure. However, this particular category we talked about with the monitoring and health PM, and they were open to doing it; then, for a certain set of circumstances, they had to push it back, but I think that whoever gets to it first is going to... it's going to...
B: So one thing that I failed to mention, actually, was application management. So I think that you've already experienced this, but installing a Helm chart on your cluster is kind of a process: you have to go to the command line, you have to have a YAML file for that particular Helm chart, you have to apply it to a cluster, you may have to create a service account, you have to create a role binding. So it's... it's a complicated process. If you're new to Kubernetes, this may be kind of a high barrier to entry.
B
So
yeah
what
would
be
the
term
so,
let's
think
about
the
App
Store.
So
let's
do
an
analogy
with
the
App
Store,
so
the
App
Store
gives
you
an
easy
way
to
install
applications
on
your
phone.
So
you
can
think
about
the
App.
Store
is
kind
of
delay.
It
that's
managing
all
the
packages.
Let's
say
each
package.
Isn't
that
so
helm
is
the
same
thing,
so
it
helm,
you
would
say,
is
the
App
Store,
so
helm
is
the
App
Store
and
a
helmet
art
is
an
app
so
a
helmet
chart
yeah.
B: No, so Helm doesn't determine which apps run; the user determines which apps run, but Helm is basically... you can think about it like the app store. So when we go to charts here: Helm is the application that manages all of the charts, and you can think about each chart as an application. So you can think about Helm as the app store: Helm is a program that allows users to install apps on their cluster, and those apps are called Helm charts.
B: For example, if I was to search, oh yeah, for GitLab, let's say: GitLab has a Helm chart, so you can deploy GitLab onto a Kubernetes cluster using Helm, and that would be the equivalent of seeing GitLab on the App Store; instead of saying "the GitLab app," it's just called the GitLab Helm chart, right? Okay, so a great point about our Kubernetes integration is that it allows you to install Helm first, and Helm is the package manager, so...
B: ...basically, what manages all of the installation of these apps; and with a single click you can install apps onto your cluster. So, for example, if I was to go to the GitLab chart and say, how do I install it... and let's not say GitLab, because GitLab is kind of a... it's a complex chart. So let's say something smaller that was designed for Kubernetes. Where's Prometheus?
B: So you have installation instructions here on the right. So let's see: introduction, prerequisites, installing the chart. So it says, to install the chart with the release name "my-release," you have to issue this command, helm install, with the name my-release, and then the command deploys Prometheus on the Kubernetes cluster in the default configuration; the configuration section lists the parameters that can be configured. So if I go here, I can see, like, what kind of values I have to give... and that link's not working.
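The command being read out is the Helm 2-era invocation from the chart's README. It assumes a Helm 2 client already configured against a cluster, and the `--set` line is just an illustration of overriding one of the listed parameters:

```shell
# Install the stable/prometheus chart with the release name "my-release"
helm install --name my-release stable/prometheus

# Parameters from the chart's configuration section can be overridden, e.g.:
helm install --name my-release \
  --set server.persistentVolume.enabled=false \
  stable/prometheus
```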
B: Obviously. Let's go to the Prometheus repo; I think it's going to be easier for me to explain kind of the complexity that we're solving. So, Prometheus... install. So here they say there are various ways of installing Prometheus: building from source, you can build binaries, Docker images... I don't know that they're talking about the chart here.
B: So, as I was saying, with GitLab you can do this with a single click. So here, with Prometheus, you just click Install, and that will deploy Prometheus onto your cluster. And then, not only that, but once you've installed a certain Helm chart, a certain application, you can uninstall it with a single click. So if you were to uninstall Prometheus from your cluster, not only do you have to get rid of Prometheus, but also any of the resources that it created.
B: So if you created a role, a role binding, a service account, you have to get rid of all those things too, so doing that with a single click is a huge advantage. And you see, kind of, this pattern is very popular with people that want to be a Kubernetes manager; other companies that do this, like Rancher, aim to do it with a single click. So it's very valid, and that was the last thing that I wanted to point out about the Kubernetes management category.
A: I have a question; it's user-specific, but just to get an idea of our users. I know most developers don't really give up the terminal. So now we're talking about a single click for installing and uninstalling: do you think they are shifting towards the UI, if it has become less troublesome, or do they stay on the command line?
B: I think that's a good question. I think that it depends on the use case. There may be some companies in which the main persona that is managing the Kubernetes applications is an operator; a developer may be doing it if they are developing an application and need a particular chart installed on their cluster.
B: I would say that developers may be using this if they're using, like, the default configuration for each chart. So one of the things that we have planned, which basically we could use an internal customer for (like the GitLab infrastructure team could use it), is the ability to customize the Helm chart before you install it. So when you customize it, you can pass it, let's say, a configuration file with certain configuration values, and then you install the Helm chart.
B: So a lot of operators with more advanced production use cases will need to do that. A lot of developers that are developing and just need, let's say, an instance of Prometheus to test against will most likely run it locally; they probably won't stand up a cluster in GitLab. The people that are doing this, I'd say, are people that are experimenting with their apps, or, like, testing prior to deploying into production.
B: All right, so that's the Kubernetes management, and then we have a couple more that are minimal that I want to touch on. So the first one is ChatOps, and here you can think about ChatOps as, like, the ability to execute actions on your infrastructure via chat. So, for example, let's say that you wanted to turn a feature flag on or off: you could do that via ChatOps, and really, all that it is, it boils down to running a CI job.
B: So the current scope of our ChatOps offering is that you can run a command in Slack, and if you have configured your CI job properly, it will go into GitLab, it will run a CI job, and it will spit back the results into your chat client: it will say either "I have successfully done this" or "I have not successfully done this." And the way that you specify a chat job, let's say, if you only want to run it through chat...
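A sketch of what such a job could look like; `only: [chat]` restricts a job to pipelines triggered by a ChatOps command per GitLab's CI documentation, while the job name and script are illustrative:

```yaml
# Illustrative ChatOps-only job: runs when triggered from chat
# and nowhere else (not on pushes, schedules, etc.).
toggle-feature-flag:
  only: [chat]
  script:
    - ./toggle-flag.sh "$FLAG_NAME"   # hypothetical script; its output is echoed back to chat
```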
B: The thing is, as an operator, I may want to give you access to run a job, but not access to the project or the job itself, where everything is defined. But yeah, running things via chat is actually quite common in the software development world, and you can see examples of this today: like, if you go to our production channel, which I'm not a member of any longer, I guess, you see...
B: So here, this person ran a ChatOps command, a "feature set ... true" command. So he's basically setting a flag that was false; he's enabling it by setting it to true, and then the ChatOps bot tells you if something was successful or not successful. So yeah, it's great for you even if it's not an access problem: maybe you just want to quickly do something in Slack, not having to find the project and the job and run it; you're doing it with a single command.
B: We have an issue about it somewhere, I'm sure; yeah, I didn't find it on the first page, but I'm sure it's there somewhere. Okay, and that's the gist of ChatOps. So the main obstacle that we faced with ChatOps is that we would love to provide out-of-the-box ChatOps functionality for everyone, but I would say that the main obstacle is that everyone's workflows are different; everyone's permission model is different.
B: It's an issue already today, but there's also Mattermost, which is an open-source chat client that ships with GitLab, so we should have, like, support for that out of the box. So I think that we can only do what I would say is most requested by users today. Yeah, sure, we're doing Kubernetes very heavily today, but tomorrow something else might come along that's not Kubernetes, and Kubernetes will no longer be relevant; but I think that's one of those things that we'll have to worry about then.
B: Right, so that's it for ChatOps, and then we have a couple more that I wanted to chat about. I know that we're kind of running out of time and I want to be respectful of your time. My next meeting is at two, so we could go for an additional half hour if you want, or we could table it and continue later today or tomorrow.
B: Not at all, not at all; I think this is time very well spent, and I really want you to have as much knowledge as you can, because, you know, you have a big responsibility of enabling the workforce on the stage, which is quite complex, so we need to be on the same page. For sure, it's a great investment of my time; when you put time on my calendar, you should never be concerned that it's not a good use of time. It always is. Okay, thank you.