From YouTube: Configure stage vision part 1
Description
Orchestration PM goes through the Configure stage vision
A
You think that that pod is going to be enough to serve your application. But let's say on days like Black Friday, you would increase your replica count to ten times as much, so you'd have ten replicas. So that's one concept, and another concept is called the horizontal pod autoscaler. So let me send you some links here that I think will be useful, so you can have some reading that I think will be interesting.
A
So the second one is the horizontal pod autoscaler, and that's kind of a fancy name to say: basically, you can set some parameters for Kubernetes, where your scaling will be done automatically for you. So, for example, you tell Kubernetes: when my CPU load reaches 80%, I want you to scale up. And then you set the minimum number of replicas and the maximum number of replicas, and the scaling will happen automatically within that range.
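The parameters described above map onto a HorizontalPodAutoscaler manifest. A minimal sketch, assuming a Deployment as the target; the names and thresholds are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # the workload being scaled
  minReplicas: 1            # minimum number of replicas
  maxReplicas: 10           # maximum number of replicas
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # scale up when CPU load reaches 80%
```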
A
But yes, what you were talking about just now is something that we're doing manually by setting CI variables. When you set those variables and you say "I want my replica count to be 10", then you will see 10. And I can probably show you an example of this in action, so you can tie it all together.
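Setting the replica count through CI variables can be sketched like this in a project's `.gitlab-ci.yml`, assuming the Auto DevOps convention of per-environment replica variables; treat the exact variable name as illustrative:

```yaml
variables:
  PRODUCTION_REPLICAS: "10"   # illustrative: replica count for the production environment
```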
A
So what that means is that the production environment, when it deploys, will automatically deploy 20 replicas. And if I go to Operations and then Environments, all my environments should display here. So here you see the production environment, and you see that I have 20 replicas in production. So these are all the pods.
A
I can have, you know, one for test, one for staging, and one for production. Right now we're not telling you which cluster is relevant here, but we do have an issue that will address that. I think Dylan is currently working on it, so it'll show you what cluster we're talking about: when you have pipelines that make a deployment, it'll show you which cluster the deployment was made to. So yeah, there are improvements coming up.
B
So do we have issues also when you're not using the cluster? For testing, for example: everything is in production, and there is nothing going on in development at the moment. Do we allow you to auto turn off a cluster, or will you still have them up and running, even though your environment is not in use?
A
If there are no deployments to your cluster, your cluster is still up and running, and we have no control over the decommissioning of your cluster. That's kind of the thing we were talking about earlier: we can interact with your cluster and its current configuration, because we have the credentials for that cluster, but we don't have the credentials for your cloud provider. So to kill the cluster, you would have to go into your cloud provider and kill it.
B
Can I ask a question which might seem harsh for the ops people? Our job is to change (to not put it harshly) the job scope of the ops person into more high-level tasks, rather than these nitty-gritty configurations they were doing before Auto DevOps. So are they gonna be unemployed after Auto DevOps is successfully adopted, or not?
A
Not really. I think that all the customization and the provisioning may or may not happen inside GitLab. There are a number of advanced use cases, and this even happens internally here, where the GitLab integration does not handle it. And I think that, as the Kubernetes project evolves, it's unrealistic for us to say we're going to cover every single use case of how you can provision a cluster, especially because it not only has to do with Kubernetes.
A
It has to do with the cloud provider as well. So no, I don't think that's gonna be the case. And there's another kind of mode that people are using when they deal with clusters. We have one customer, one very large retail customer, that uses ephemeral clusters. What that means is that they create a cluster just in time as part of their CI pipeline. They deploy stuff to it, they test it, they allow users to interact with it, and then, when the pipeline finishes, it kills the cluster.
A
So this is all part of that sort of workflow. You will always need to have an operator in the middle to say: what's the capacity, what are the limits, what's the access for this cluster? So no, I don't think that we're going to eliminate the job of the operator anytime soon. Oh cool.
A
I think that the most relevant thing for us to talk about is going to be serverless, since that's a category that we have at a minimal maturity. So it's out there, it's available, users can interact with it. And then maybe, if there's time, chaos engineering is planned, and I guess runbooks is minimal, so we should talk about runbooks as well, yeah. So, serverless: let's see, let me see if I have a project, maybe.
B
I want to have a bit of background, so the vision and a bit of explanation for each. That's very useful for me before I start doing actual work. I prefer to have, as much as I can, a complete understanding, because in my understanding the UI work is not that massive in our stage group; it's more about the experience of how users do things and what they expect. Yeah.
A
I guess that's a fair characterization. I think that there's a lot of stuff coming up that may require the experience to be more crisp and clear, and there may be a lot of UI changes that accompany that. Specifically on the serverless space, I think that's a useful thing to talk about. So yeah, maybe let's start there. Okay, so serverless! Let's talk about serverless. Where to start... Are you familiar with what serverless is, kind of, in principle?
A
Well, so you've gotten a bit further than that. I think that serverless is a misnomer, right, because serverless kind of means that there are no servers, and that's not correct: everything still runs on servers. I think that serverless at a high level means that there is little to no management of the infrastructure where you're running your workloads, and one great example of this is AWS Lambda. AWS Lambda is a service offering that allows you to, for example, upload a function or an application.
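The "upload a function" idea can be made concrete with a minimal sketch. The `(event, context)` signature follows Lambda's handler convention; the event shape and the local invocation at the bottom are illustrative assumptions, not part of any platform API:

```python
import json

def handler(event, context):
    """Return a greeting for the name carried in the event payload (hypothetical event shape)."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for illustration; on the platform, the provider calls handler() per request.
print(handler({"name": "dev"}, None))
```

The point is that this is all the code you ship: capacity, machine size, and scaling are the platform's problem.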
A
So here are a couple of huge benefits that that offers. First of all, when you don't have to worry about infrastructure, you have more time to focus on the application that you're writing, and it could be whatever: a function, a microservice, a monolith, it doesn't matter. And developers love that, because application developers want to focus on writing great apps and not necessarily have to worry about, you know, how much capacity do I need, what machine size should I stand up.
A
What kind of resources do I have to create, that sort of thing. So that's one huge benefit, and the other very big benefit is that it's elastic. So if you have no requests, you don't pay for anything, right, and as your traffic starts growing, there's nothing that you need to do. Let's say today you only have ten users: the service will scale up just enough to serve those ten users. If tomorrow you have ten million, the service will scale up to serve ten million users, and you have to do nothing.
A
So that's a huge benefit, the fact that it's elastic. And one thing that I should add about being elastic is that it scales down all the way to zero, right: when you're not using it, you don't get charged for anything. And the third benefit is, of course, cost. So before, I used to stand up a couple of virtual machines, one for my application, one for my database, maybe I had a load balancer, and I was paying for those machines a hundred percent of the time, whether I had traffic or not. If I had zero traffic, I paid a hundred percent of my bill. If I had 80 percent traffic, I paid a hundred percent of my bill. That's a huge thing, the cost. So that's, at a high level, what serverless is. And then let's talk about GitLab Serverless specifically. GitLab Serverless is based on Kubernetes, and Kubernetes has this project called Knative.
A
So Knative has two main components, but the main one that we concern ourselves with is the serving component. It has another component that's called eventing. Let me go over those two concepts. The eventing component is a specification that has a bunch of event sources that can determine when a service will process a workload. So it's just kind of based on events: what event sources can I use with my serverless service?
A
That's the first component, and the second component is the serving component, and that's kind of the big one. The serving component is great because Kubernetes cannot natively scale down to zero. We have the concepts that we talked about, like the horizontal pod autoscaler, where you can set some parameters to scale up and down, and also the pods, where you can create some replicas, but Knative has this serving component.
A
It is, much like I was saying earlier, request driven. So without you having to configure much, the service will scale up and down to zero. This is useful, let's say, when you're using it as a hosted service: you only pay per request. And different providers have this service: in Google's case it's called Cloud Run, in IBM's case it's called hosted Knative service, and so on and so forth, right.
A
So not really: they are considered serverless technologies, but Knative is a middleware, if you can think about it that way, and it enables a couple of things. What you pass to Knative is a container. What you pass to Lambda is anything: it could be just code, let's say a function.
A
What you pass to Knative is a container, and that container could contain either a function or a full app, right. So the concepts are different, but the high-level idea is the same: you're only allocating resources to workloads that demand it. If you have no workloads, then no resources are being allocated.
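Passing a container to Knative boils down to a small manifest. A minimal sketch of a Knative Service, with a hypothetical name and image:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                 # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/hello:latest   # any container: a function or a full app
```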
A
It also deals with your networking, so it uses what's called a service mesh. A service mesh is basically what's in charge of the interaction between all the components in your cluster: networking, storage, security, things like that. The mesh is in charge of all of those things.
A
So when you pass a container to Knative, Knative will create what's called a Knative Service, and the Knative Service will do all of those things that I just mentioned. So Knative is different in a couple of ways. We call it serverless, but let's say you want to add Knative to your Kubernetes cluster: your Kubernetes cluster is running all the time, so it's not scaling down to zero. So let's say I'm a customer and I want to use GitLab Serverless.
A
I have to provision it either on a cluster that I'm creating in some cloud, that's kind of option number one, or I can use a hosted service. If I use a hosted service, then I am really not paying for anything when there's no traffic. So that's a concept that, well, I guess is not super important, but it's important to keep in mind: serverless doesn't mean zero cost a lot of the time.
A
I don't know. All right, so there's a couple of reasons why we chose to start with Knative here at GitLab, and the first one is, you know, because of our deep Kubernetes integration, right. We are making big bets on Kubernetes. We want to be the best tool if you want to use Kubernetes; we want people to think of GitLab as easy to use.
A
So there's a couple of things that happen in GitLab that you don't get out of the box with Knative. The first one is that you don't have to provide a container in order to deploy something; we are doing all the legwork of that for you. So let's say you only concern yourself with writing some code, say a function, and then GitLab CI will take that code and, through a couple of pipelines, will build your Knative service.
A
So that's benefit number one. Benefit number two is that once you deploy a service, you get all this telemetry coming back to you. So, for example, you get the number of pods that are being used for your Knative service. Let's say if you have no traffic, you're gonna see no pods; if you have a ton of traffic, you're gonna see a bunch of pods. So that gives you a good grasp of it.
A
That visibility, I think, is useful. Let's say you're running your own Kubernetes cluster and you see that your service is so popular that it's using 99% of your CPU: you probably want to increase the size of your cluster. So it's good for you to plan around the infrastructure that you're using. That's number one, and then the other thing that we're doing for users is showing them the number of invocations that are happening.
A
In the function details, let's open up this image in a different tab, so we can zoom in on it a little bit. So here we're showing you, as I said, the number of pods in use: I see that I have a test container and I have one pod that is currently being used. Here I have the invocations, so I see that I had 0 here, and then it spiked to 12, and then it went back down.
A
So right now our serverless offering is labeled alpha, right. There's a couple of things that we know we want before we can call it even beta, or just GA, generally available, and some of those have to do with the fact that Knative is a new project. Knative has been around for about a year, maybe a year and a half. That's number one, and number two, there's a number of things that you can do in Knative manually that we want to provide for you automatically.
A
One of those things is the need to provide a domain. So when you install Knative onto your cluster, before you are able to install it, you need to provide a domain that your functions will be served out of. So you see here, well, this install button should be grayed out, it's not, but here I have to provide a domain, and here is where I would enter it.
A
Exactly, that's the DNS. And then when I install Knative, part of the installation of Knative will install that service mesh that I was mentioning earlier, called Istio, and Istio installs an ingress controller. An ingress controller you can think about as the path for your Kubernetes cluster to have traffic to and from the outside world, the internet.
A
So here you have a Kubernetes cluster in the back, and this gate is the ingress controller. The ingress controller basically allows traffic to come in and come out, and then you can do a bunch of advanced configurations that put conditions on that traffic, like which traffic can come in and which traffic can come out. But that's what an ingress is. And of course, when you have that gate, you get an IP address that is kind of the address for that gate that is allowing the traffic to flow through to your cluster.
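The "gate with conditions on traffic" idea translates into an Ingress resource. A minimal sketch, with a hypothetical host, service name, and port:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress             # hypothetical name
spec:
  rules:
    - host: app.example.com       # only traffic for this host passes the gate
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello       # the in-cluster service behind the gate
                port:
                  number: 80
```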
A
There are different companies, or rather there are different flavors of ingress. One of them is from this company called NGINX; that's the one that we use when you install Ingress onto your cluster, like here, if I were to click install on this button, that would be NGINX, right. And that's here, so, like, when you look for the NGINX ingress.
A
As long as you have set up your DNS endpoints correctly, then yes. So here, if I'm an operator, the next step that I would have to do is map that domain, and hopefully I own that domain (I'm not sure that that's the case, but I think I do). I would map that domain to that IP address, so I would have to have an entry. So yes, you can get that info here: I would go to Manage, I would go to DNS, and I would map a custom resource record to that specific IP.
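Mapping the domain to the ingress IP is just a wildcard A record at your DNS provider. A hypothetical zone entry (the domain and IP are examples, not values from the demo):

```
*.example.com.   300   IN   A   203.0.113.10
```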
A
A lot of people don't want to allow external traffic into their cluster and never install an ingress into the cluster. You know, their cluster may just be dealing with internal workloads, so it's just receiving the interactions from the Kubernetes API and it never needs to talk to the outside world.
A
Yeah, absolutely. There are a lot of clusters that are part of a bigger deal, and they don't need to have an ingress installed on them. And that last link that I sent you on Slack, I think it's very helpful for those high-level concepts, like pods, nodes, ingress. All of those things are described there, I think in good detail, so check that out when you have a chance.
A
Yeah, so what we want to do is provide you with a domain automatically, so that you don't have to worry about mapping IPs to domains. We want to provide you one automatically, and if you want to customize it, sure, you can go ahead and provide your own domain and do the mapping, but we want to provide a domain automatically for you.
A
Well, you can use any domain provider. In my case I'm using Google Domains, but I could be using any other provider that I want. I could purchase an address in GoDaddy and map that to my ingress IP, or any other domain provider, it doesn't matter. Okay, so.
A
So it is not served over HTTPS by default, and if you want to serve it over HTTPS, you can do that, but it's a manual process, just like the domain. So here you see this last topic, called enabling TLS for Knative services. This is in order to serve your traffic over HTTPS, and these are all the things that you have to do in order to make that happen. So it's a bunch of manual work that you have to do to make that happen.
A
So of course we'd like to do that automatically, so you don't have to go through all this stuff. And, you know, in reality Kubernetes operators probably feel comfortable doing this, but we want to make it easy for everyone. If you want to get going with serverless and you're an application developer who is not an expert in Kubernetes, we want to make it easy for you as well. So those are kind of the main things that we're talking about, and we have a...
A
Here, so this is the epic where we're tracking all the things that we want to do before we can call serverless viable. So here's automatic domain, which I was just talking about. The other one is automatic SSL/TLS, which I was talking about. This one is about invocation insights and logs: right now, if something fails in my Knative service and I want to search for logs, we don't provide logs in GitLab.
A
So, you know, you would have to go into your cluster and start digging to get those logs, and that's painful. Support more runtimes, so this is an interesting one. Serverless uses this concept of runtimes: if I write my function, let's say in Node.js, then I need to have a runtime that will know how to build a container for my function, and we describe this in the serverless docs.
A
So when you're building your function, you can specify what the runtime is. There are kind of two templates that you need to provide in your project: one is the `.gitlab-ci.yml` and the other one is the `serverless.yml`, and there you describe which runtime you want to use to build your function. So here it's using the Node.js runtime, and currently we only have a couple of runtimes.
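The `serverless.yml` template mentioned above can be sketched roughly as follows; the exact schema has varied across GitLab versions, so treat the field names, the function name, and the runtime URL as illustrative assumptions:

```yaml
# serverless.yml (illustrative sketch)
service: my-functions
provider:
  name: triggermesh             # assumed provider name
functions:
  echo:                         # hypothetical function name
    handler: echo
    source: ./echo
    runtime: https://gitlab.com/gitlab-org/serverless/runtimes/nodejs   # the Node.js runtime
```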
B
Can we do discovery of what preferences or settings the user would want around these automations, if any? So, do we go out and ask?
A
You know, there's not a lot of choice for things like SSL: for SSL, you need a certificate installed in your cluster and you get HTTPS, so I guess for that one we can probably feel good saying, you know, we know the path forward. Then, when it comes to the subdomain, the user really has no interaction with the subdomain, so they don't have to go outside of GitLab to manage it. So what we think is, we'll provide you something.
A
That's like, say, a gitlab serverless domain, and then we append the name of your project or function at the beginning, and that's what we give you. I don't think that we would get a lot of actionable information from doing research for things like automatically providing you with that subdomain, right.
A
So you see here, the domain that I provided was this part; everything else is built from things like the name of my function, my project ID, and my environment. So that wouldn't change. What would change is that the user doesn't have to provide us with a domain before they can install Knative. And, you know, buying a domain means you're spending money, and using a custom domain means you have to go out of GitLab and configure it, stuff like that.
B
I know we only have four minutes, but, and maybe that's also a silly question, I'm gonna ask it anyway: we have this serverless graph showing you the traffic when it goes up and when it goes down. What's the difference between this view and the monitor view in Auto DevOps, the view of your containers, I guess, yeah?
A
Yeah, so regardless of whether you're using Auto DevOps or not, we'll show you that traffic there if you have it configured. The difference is, if you have a function-based project, this view that we use today is not showing you invocations; it's just showing you kind of high-level metrics that have to do with the state and the usage of your cluster, and also traffic, like HTTP errors.
A
All separate, yeah. So each service is deployed independent of each other, and you care about one and I may care about another, so showing all of them, I don't think that provides a lot of value. I do agree that we are showing you monitoring-specific details in multiple places, right: this one shows me cluster health, which is great, I mean it's useful; then, when I go to metrics, it shows me another set of metrics; then, when I go to serverless, it shows me another set of metrics. And maybe that's okay. Dylan also thinks that, you know, we should show you all metrics in a single place, but I think that with different metrics, context matters. If I'm an operator, I care about, you know, CPU and memory; I don't care about how many times your function, the developer's, has been called. And then as a developer, I do care.