Description
This is the monthly direction review between Configure, Release, and Monitor PMs for August 2021. In this episode, we discuss what we are doing in Kubernetes Management.
A
So, can you... yeah? You can shoot. Okay, so in the Configure group we own these eight categories, but actually we don't even consider some of these: ChatOps, Serverless, and Cluster cost management are basically totally out of focus.
A
We don't do anything in those, for various reasons. The other ones are kind of in focus, but we are actively always trying to focus on only a single category, and the others are in maintenance mode. I might run user interviews, I'm open to customer calls, but we don't dedicate development time to those except for bug fixes and maintenance.
A
So right now our focus is a hundred percent on Kubernetes management, and we are actually preparing what we call here delivery management, or I would call it deployment management. That's what we are preparing as we speak, together with the Release team, and I find that super exciting, but I will try to focus on the current roadmap.
A
Basically, since the release of 14.0, which was almost two months ago, we have been focusing only on Kubernetes management. Even before that, we were focusing on cluster management and infrastructure as code together, and a little bit on secrets management as well. This very strict focus on a single category has held for the past two months and will probably continue for the next three months, I guess, but that's really a wild guess, because it really depends on the teams and how they deliver.
A
What I want them to do, and this is the next topic: what do I want to ship in this time frame? Basically, how I approached the whole area: we restarted the Kubernetes integration of GitLab a year ago, and a few months ago a reasonable scope started to form, to the point where we could say, okay, now this is a nice chunk of features.
A
What I started to call a nice category iteration here: something that lets us put this category aside for a little bit and focus on other categories as well, once the scope started to form.
A
The first thing that I did is I collected basically all the ideas that I had at the time and created what you can see: something like the opportunity canvases, but really in a format that helps you have an overview of the different opportunities side by side, and have an argument about why you would prioritize one above the other.
A
It was very interesting to see. I had three calls around this sheet that I'm just sharing with you, and there was one person who said that he personally is not interested in the elements, but a hundred percent agrees that they should be there. That was very interesting: he said they know that they have a very special setup and that's why they don't care about that one, but this is really very similar to the opportunity canvases.
A
There's a huge amount of comments on many of these slides. I wanted to scroll down here; on many of these slides there are a lot of comments as well. So this was a collection, and then out of this I created a maturity plan, which I shared yesterday in the Configure channel as well.
A
I would consider our category complete if we can ship these sets of features, use cases, themes, topics, whatever you call them. And I'm not sure where I should start, because I don't know how much you know about the current category, like the whole agent thing and so on.
B
If it's possible, given we have at least two new members on the team, maybe just describe the vision of Kubernetes management, where we came from with the previous integration, what we have today, and then jump into why these.
A
I should start a bit earlier. We had the Kubernetes integration I think from GitLab 11.0, so quite a long time, or it might even be 9.0, a long, long time ago, and that integration uses cluster certificates. That basically means that you have an API for your Kubernetes cluster, the Kube API, and you get a token and a certificate to authenticate with that API, and you basically tell GitLab: look, GitLab...
A
Of course, there are kinds of workarounds around this. The most typical one is that they install a runner fleet within the VPC, and then they just need local access, but in that case they lose all the integrations that GitLab would offer for Kubernetes, because only deployment happens through the runner and nothing else can reach the Kube API there.
A
Basically, we said: let's build an agent that sits within the user's cluster, and the agent reaches out to GitLab to set up the connection. So from this time on, the cluster does not need to be opened up for GitLab to reach its API; instead we have an active component within the cluster, and that active component, by definition, can reach the API, and it reaches out to GitLab, which is, in a sense, a bastion host or something like that.
A
The first team who saw this is actually our Container Security team, who added, for example, the Cilium network security policy integration to the agent. That means that if the cluster runs Cilium, then you can configure container network security policies from GitLab in the cluster, and if there is an alert, the alert is surfaced into GitLab and an issue is created automatically.
A
So this is really the power: the agent is just a basic building block for any type of integration. And one thing that I told Kevin for a long, long time is that I think Configure, Monitor, and Release should work together, because even the monitoring features within GitLab are built around the legacy integration, or even without the legacy integration in the case of, sorry, Prometheus. And this enables new possibilities for monitoring your cluster as well, because GitLab lives within that cluster; we can run queries directly in that cluster.
A
The cluster can reach out with alerts to GitLab and so on. So I think for the monitoring side this is an important change, and there is a new set of opportunities here. For deployments, again, this provides new opportunities: before, you could only deploy to a cluster using GitLab CI/CD, which meant push-based deployment, and there is a huge hype around pull-based deployments.
A
Part of that hype is reasonable: it's much better for platform engineers to define the basic tooling they want to run in a specific cluster, and this was a use case that we could not support at all with the previous approach. We can support it with the agent-based approach, so this is what it started to be about.
A
The agent is really just an active component within the cluster. That's the basis, the foundational layer of every GitLab-to-Kubernetes integration. That's the idea. In itself it's featureless, and we have already built a couple of features on top of the agent. One such feature is pull-based deployment, which means that you have a repository where you describe your Kubernetes resources.
A
This allows automatic syncing as well. So if somebody changes something manually in the cluster, the agent will realize "oh, my definition differs from what's in Git, so let's get back to what is in Git," and this way the cluster is more immutable. It's more locked down, all the changes are better monitored and followed, and it's usually more compliant. Besides pull-based deployments, we have push-based deployments as well, with what we call the CI/CD tunnel. This was released.
A
It's totally up to you where these things run, and this way you can do push-based deployments as well, from GitLab CI/CD into the cluster. You can run any of your favorite tools, you can also use Auto DevOps, or whatever you want. And besides this, there are other integrations that were provided by other teams, mostly the Security team.
B
They need to be able to understand what's going on in their Kubernetes cluster. They need to potentially do things like manage the cost of running that cluster. They need to do things like enable teams to have different namespaces for their different applications, etc., etc. There are lots of jobs to be done.
B
What you're saying is the agent is a single building block. By itself it has no specific functionality other than that it knows how to talk to and interact with both the cluster and GitLab, and GitLab in this case is like a control plane, or the knobs and switches that you can do things to the cluster with. So yeah, this is what Kubernetes management is all about. Is that accurate?
A
Yeah, exactly. I will get into more details around some of the jobs that Kevin just mentioned, around access management and so on, but yeah. So basically, Kubernetes is a container orchestration tool, so to say, which means that it's a server in itself where you put Docker containers and it just scales them up, scales them down, manages access rights and networking around those containers, and many, many other things, and all this management is actually done declaratively.
A
This is one very special aspect of Kubernetes: it doesn't have a UI. It has an API, and people can build UIs on top of Kubernetes. It's often even said that Kubernetes is meant to be a platform for others to build their own platforms for their own companies.
A
...plane. I think the main selling point is that we are integrated with your source code management tool, because with GitLab, whether you are happy about this or not, the vision is to have a single app for the whole DevSecOps lifecycle. But the reality today is that GitLab is an SCM and CI tool.
A
So I think this is the biggest selling point. Another one, which makes it very strong, is that if you are a self-managed GitLab user, that means there is somebody in your organization who installs and maintains GitLab itself, and usually that person is a platform engineer, which means that the job of that person is to maintain your Kubernetes cluster as well.
A
So we are already in very good hands. There was somebody who picked us as their tool for source code management and CI/CD, and that person is the one who manages their clusters too. So we are in a relatively good position here to speak to them, to show them: look guys, we have more to offer for you, specifically you who are managing GitLab for your staff members.
A
Yeah, so as I said, Kubernetes just provides you a control plane; it just provides you an API, and there is a stock tool to run commands against this API. That's called kubectl ("kube cattle", kube-ctl).
A
Yeah, that's one thing. There is a tool called Helm that's often described as the application package manager for Kubernetes. Sorry.
D
A
Yeah, in the end, everything here is hydrated into YAML and the YAML is applied to the Kube API. So that's the end result of all these tools; the only question is the level of abstraction they provide on top of it.
A
If we go even further from these basic tools that really just apply something to the cluster, among the most famous tools these days are Argo CD and Flux CD. These are what are often called GitOps tools.
A
They basically do pull-based deployment, plus a little bit more automation or visualization around those deployments. But the basic idea is that you can provide them a Git repo and they just make sure that your cluster is in sync with that repo, and the Git repo can contain a Helm chart, pure Kubernetes YAML files, or Kustomize things (that's another tool in this set of deployment tools for Kubernetes).
D
A
Yeah, so that's kind of correct. Basically AWS, GCP, DigitalOcean, all the hyperscale clouds provide a managed Kubernetes service. GCP is considered to be the most mature in this, but AWS is very, very strong already as well.
A
And after that, that's a Kubernetes cluster with a Kube API, and what's inside the cluster almost doesn't matter, whether you are on AWS or on GCP. Why do I say it almost doesn't matter? Because AWS, GCP, all of these hyperscale clouds have some level of integration with their built-in IAM system, and that's where they can provide integration out of the box, and they can provide integration in monitoring as well.
A
So, for instance, Google Stackdriver, I think, has relatively good integration with the GKE clusters that run within GCP. For AWS, CloudTrail, I guess, doesn't have such a good integration, but you can integrate CloudTrail as well with the pods that run in your Kubernetes cluster.
D
Okay, so I think, if I'm understanding it: GCP, AWS, these cloud services are kind of like the infrastructure that can host the Kubernetes cluster, whereas the agent that we install can help provide a better interface with all the different things that you would want to do within the Kubernetes cluster.
A
Correct, yeah, that's the idea, exactly. In terms of monitoring, I think that GCP has really good integration with Stackdriver, and I think there are a few people who use just that to monitor their clusters. So there it's a big question what GitLab can offer, and actually we heard that even there GitLab can offer a lot of things, because the developer is in GitLab. And I really like to differentiate three personas.
A
When I speak about all this area, I like to speak about the platform engineer, the application operator, and the software developer, and the only reason why I speak about the software developer is because everybody speaks about the software developer in GitLab, and I want to make clear that I don't care about him; he is not my persona. So I have two more: application operator and platform engineer. And a very typical use case...
A
...for whatever reason they don't want to provide access to the GCP project, and in this case they are really happy and looking forward to having really good metrics and monitoring within GitLab around the applications that were deployed into a cluster, and only around those applications.
C
The application operator, is this person usually like the lead engineer in the team?
A
No, it's not, actually. I mean, yes and no.
A
Sorry, I would rather agree, because if we speak about a small company, then I would a hundred percent agree. Probably the guy who takes care of the networking, builds up the infrastructure, and so on and so forth, that's a single person, that's the lead engineer, and he does all the platform work and the application operations as well. In an enterprise setup there might be a dedicated, what you might call DevOps, person in the team who is the application operator.
A
It might not be the lead engineer. I would say that it might happen that he is a lead engineer, but often he is not at all.
E
Hey Viktor, I wanted to understand a little bit more about the three personas that you just talked about. Could you elaborate a little bit more on how the software developer is not somebody that you're focused on, of course?
A
So actually I was the one who added here the two personas just mentioned, Priyanka: the application operator and the platform engineer. And when I speak about personas, I really mean the persona as it's meant in UX. So I know that a single person might act as multiple personas, and especially in the case of an application operator, it might happen that that person is a software developer.
A
So the reason why I don't consider the software developer a persona here is because I think the software developer persona is the one who focuses on creating the business application: writing the code, making sure that tests pass, perhaps doing performance tests. But they don't do any day-one or day-two operations, not even day zero, really. That's the reason why I differentiate between these personas: from an ops perspective, they don't exist.
A
Specifically, there's a concrete question here: in the dev environment it might happen that everybody can access everything, even secrets, while in staging and production environments everybody can access only the namespace dedicated to their team, and they only have read access, and not to secrets, just to everything else in that namespace, for example. So authorization rights are one central aspect of this whole topic, and of the things we have already developed around the basic pull-based and push-based deployments.
A
The first versions of these are already shipped, and one aspect of that is the CI/CD tunnel that I mentioned. That's for the push-based one, and today it can only use the same service account with which you install the agent, this component here, into the cluster, which means that whatever this component is allowed to do, your deployment can do as well. This is pretty insecure.
A
As a result, we are currently developing features to make this CI/CD tunnel more scalable, in the sense of being able to set it up at the group level on one hand, so you don't need one agent per project, just one agent per group, but as well to make it possible to say that this user, who started the CI job, has these access rights in my cluster, and to impersonate that user.
B
This is actually... when talking to customers about Release, they constantly talk about separation of duties: at a large organization, the person who builds something should not be the person that deploys it. With this setup you have a single choke point to control access, which makes access control much simpler. You don't have to figure out all the different types of roles that people need to belong to at different group levels.
A
Yeah, one more note here, which I'm not sure whether I'm happy about or not: you might have heard already that GitLab's roles are very limited and not customizable, and so on and so forth.
A
So we try to get around this, and we actually have an issue, in the lovable phase, which would sync up the GitLab users and their access rights to the cluster, to make this mapping between GitLab access rights and Kubernetes access rights. But right now we make it more flexible, in the sense that it's up to the platform engineer to set everything up in Kubernetes using Kubernetes RBAC, and they can restrict it way better than is possible within GitLab alone.
A
Because in a sense it's a weird thing that we escape the limitations of GitLab: we don't solve RBAC within GitLab, we just delegate it to the cluster.
A
I think this is the right thing to do, but at the same time it's a little bit weird. So this is what we are working on on the CI/CD tunnel aspect, and very similarly we have an issue for pull-based deployments as well. Today, when you do pull-based deployment, you use the service account of the agent: whatever the agent is allowed to do, if you put that manifest in your repository, it will do it.
A
Okay, even in the future this will stay. So what we want to do... but that's a good point, because we might be able to make it more dynamic. The idea we have is to say that this is how you describe which project you want to deploy, this would be the manifest project, and this is the project where you have the Kubernetes resource manifests, the YAML files that describe what you want
A
Kubernetes to do. And the idea would be to impersonate a specific name with specific groups, and then you can use Kubernetes RBAC to regulate the access rights of this name, these groups, and so on. But it doesn't include the name of the latest commit, or who made that commit, or stuff like that. That's a good point, though; it might be feasible. I will add notes about this.
B
One point of clarification, Wil and Alana: KAS means Kubernetes Agent Server.
B
And based on what you're describing, Viktor, not only is it like a single interface, but it's potentially a way to help users transition between cloud providers, because your interface remains the same no matter what.
A
Yeah, I don't think that this level of cloud-neutral approach is actually valuable for many users; based on the interviews, they don't plan to move, or are not looking into this. But yeah.
A
Otherwise, it's correct. Another reason why I want to ship this is because our users are looking into these integrations: they would like good Prometheus integration, Grafana, whatever tracing tool, or they use just ELK for metrics, and even if we had a Prometheus integration, you wouldn't have the rest with it. We can tell them: use your favorite tool, let it be a CLI tool.
A
Whatever you want, you can use it from your local computer, because we give you the kubeconfig that connects you, using your GitLab username, to the cluster. And that's why we think it's actually pretty valuable: it gets us ahead of all those features, it makes them available to our users without us developing the features. They just have to use these third-party tools that they likely already use today.
A
The next item to reach completeness is actually to have some management UI around the agent. Even today, to start using the Kubernetes agent you have to run GraphQL queries. It's definitely a poor user experience, there's nothing else to say about that. We want to improve this: not just the registration flow, so that it's much more pleasant to get started, but the troubleshooting of the agent as well. So we want to show you the status of the connection.
A
We have an idea of something like a status page, which many tools provide, where we could show you not just what you set up, but as well that you don't have container network security policies because you are not an Ultimate customer. This way we can even upsell features in a sense, and not just show that the CI/CD tunnel doesn't work because you did not set it up. So these are...
B
I have a basic question: today we have a goal of increasing user adoption of our agent, and the reason today that anybody would want to use the agent is primarily pull-based deployment. Is that right, or is it not true? Do they also want to use the CI tunnel for push-based? Is it those two things?
A
Actually, there are three things that I've heard on user calls that they want around the Kubernetes integration: one is pull-based, the other is push-based, and the third is metrics and logs.
A
Let me turn the discussion to metrics, the resource dashboard, and log tailing. I like to speak about these three together, probably because I'm not knowledgeable enough about the space to speak about them separately, and because most of the time I hear about them together from our users. I consider this from a troubleshooting point of view, primarily; that's what I hear about. So it's not about incident management but troubleshooting, and we don't have a good idea yet how to do this with the agent.
A
The reason is that today GitLab offers a few views around all of these. There is the deployments/environments menu, where sometimes you might be able to see Kubernetes-related resources; most of the time we just see errors, and sometimes you shouldn't even see them, because they're not relevant. So it's a really poor user experience that we have today, and very basic in its functionality.
A
But again, all of this depends today on the certificate-based cluster integration that I call legacy, and that we slowly want to phase out as the agent becomes able to provide these features. Our ideas around this: one is, okay, what if we just change the backend, and instead of the certificate-based integration it uses the direct connection through KAS to the cluster and does all the queries.
A
So this is one idea, but the other idea was: couldn't we make something faster, something that might be even more awesome, simply because our Kubernetes dashboard built into GitLab is super limited and there are Kubernetes dashboards out there that are way more powerful? There is actually even an official Kubernetes dashboard. What if we could provide a UI that ships together with GitLab but is, in a sense, outside of GitLab, the way the Web IDE or the GraphQL editor are outside of GitLab?
A
So what if we shipped a Kubernetes dashboard in a very similar fashion, just beside GitLab: you can choose which agent you want for your connection, and then it shows you the resources available there, and we just reuse an already existing, really powerful dashboarding tool out there. So these are the ideas that we are looking into: whether to change the backend, or to take an alternative approach that we might be able to ship way faster, because we just reuse an open source tool.
A
That's it. And actually, as I said, to me especially log tailing and the dashboard go hand in hand. I can hardly imagine log tailing without seeing the resources and without saying "this is the pod whose logs I want to see." Metrics is a little bit different, but I usually speak about these three together.
A
Yeah, I mean, you can't separate metrics, because as a dashboard, the dashboard shows you the resources that were deployed in Kubernetes. Every resource has a kind: you can have a Deployment kind, a Pod, a ReplicaSet, a Job, an Ingress. All these are just resources, and that's what a Kubernetes dashboard should show: give me all the pods in that namespace, or all the config maps, or whatever. For metrics,
A
it's very different, because there you want to see the nodes, for example, that your cluster is composed of, and you want to see the CPU utilization. Of course, you might want to see the error rates for a specific pod or a specific deployment as well, but those are already somewhat different aspects.
B
Yeah, that's interesting, because the words can mean different things. By metrics we're specifically talking about metrics associated with your cluster, not necessarily, in this case, the application metrics. Because in a pure monitoring world, what you oftentimes want to do is start from an application metric, for example: the time it takes to run a certain process gets longer.
A
I know that I'm not an expert on metrics, so I don't know. The application metrics are definitely valuable, and actually that's what our users are asking for. It's just that, in my understanding, these are different tools, for me at least.
B
Got it, okay.
A
Okay, it totally depends on you. Okay.
A
So I think it's very important for us to know about each other: what we plan to do and how we want to do it. And as I said, whenever I speak about the resource dashboard, I very quickly turn to logs as well, because if I see the pods, I want to see their logs too. Metrics are a little bit outside of that, for me; at least that's how I approach the topic. But yeah, so we will...
B
Well, one thing to note, I see at least: when it comes to metrics and logs, the open question to me right now is where do we put these things? Previously we had Prometheus and the ELK stack that you could deploy to your cluster to be the database collecting these things. We've changed direction, in that we don't necessarily want to run that today. So what are the options, actually?
A
That's one of the options, exactly. One of the options is that you use a cluster management project: you install Prometheus and the Elastic stack using the cluster management project, and then the option there is that we change the backend that we have today and provide you a dashboard that uses the agent for the connection, instead of directly accessing the Kube API.
B
Perhaps. I'm leaning in, I would say, from a product direction, that that's not necessarily where we want to go in the future. I think our intention, or my personal opinion at this point, is that we want to have more of a managed solution of things like Prometheus and whatever log and tracing tools, in a single package, versus something that exists separately. But that doesn't preclude it from being a solution here, and then we also know, of course, users love monitoring vendors.
A
Just one minor note here: I'm very careful to always speak about log tailing only, and not log search. One of the reasons is that I know just one thing about log search: that it's super hard. You have to store the logs, index them, etc.