From YouTube: Scalability Team Demo 2021-05-27
A
First, to throw some stuff out there: what I've been doing for the past few days is trying to get data from Prometheus into Sisense, because lots of people gather data there to do many things with it. One of those things is showing the stuff on the handbook. For now, we've gotten around to showing things using the screenshots that Grafana makes, then uploading those as an artifact and showing that on the website.
A
But now we want to do more clever things, for error budgets, to show them to stage groups and so on. So we need to bite the bullet and get this information available to Sisense. The problem with that is that we don't allow just anything to access Prometheus. When we access Prometheus as humans, we get authenticated through many complicated things involving Google and one of these other authentication things that I forgot the name of.
A
Another way around this is using a runner, like a computer in the same network, and for us that's going to be a runner on ops.gitlab.net. So we won't... let me start sharing my screen.
A
Okay, so what I've been doing is building a way for people to define stuff that they want to get out of Prometheus, and uploading that as an artifact from a runner, or into a bucket. That could look like this. This is a file inside the runbooks repository. The reason I'm going with the runbooks repository is that we can reuse queries that we've already defined, for example for the error budget dashboards, and we don't need to copy those.
A
Yeah, monorepo, it works, so yeah.
A
A definition for the queries could look like this. It's a jsonnet file: you define a name, and then you define the params that you want to pass on to Prometheus. I'm currently building validation for that, so I don't need to support all types of queries in the beginning, that kind of stuff, but we do provide feedback to people writing these queries.
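A rough sketch of what such a definition and its validation feedback could look like; the field names, supported-type set, and the PromQL below are illustrative assumptions, not the actual jsonnet schema:

```python
# Illustrative sketch only: the real definitions are jsonnet files in the
# runbooks repository; field names and the example query are assumptions.
SUPPORTED_TYPES = {"instant"}  # start small: only instant queries at first

def validate(definition):
    """Return human-readable errors so query authors get feedback."""
    errors = []
    if not definition.get("name"):
        errors.append("every query needs a name")
    if definition.get("type", "instant") not in SUPPORTED_TYPES:
        errors.append(f"unsupported query type: {definition.get('type')}")
    if "query" not in definition.get("params", {}):
        errors.append("params must include a PromQL 'query' string")
    return errors

definition = {
    "name": "error_budget_availability",
    "type": "instant",
    "params": {"query": 'avg(slo_observation_status{env="gprd"})'},
}
print(validate(definition))  # → []
```

The point of validating up front is exactly what's described above: unsupported query types get rejected with a clear message instead of failing later in the pipeline.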
E
Oh, I just thought you were doing something really cool, like really clever, with, I don't know, piping to promtool. But yeah, that's the validator; I'm glad to see it's being used.
E
Prometheus reads it, but I saw a very cool tool the other day that you give a bunch of rules, right, so you say like the maximum cardinality. It's basically like a linter, but it has to talk to your Prometheus backend, and it's got, you know: this is the maximum cardinality.
E
It also has some nice things, like: if you are doing four scrapes a minute, don't be doing stuff with 30-second range intervals. Which is really smart, right, because it's pointless, it's meaningless. It had a whole bunch of those; I'll find it and...
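The scrape-versus-range check described here can be sketched as follows; the minimum-sample threshold is an assumption for illustration, not the actual tool's rule:

```python
# Sketch of the kind of check such a linter performs: a range selector
# needs to cover enough scrapes to be meaningful. The minimum-sample
# threshold here is an illustrative assumption.
def range_interval_ok(scrape_interval_s, range_interval_s, min_samples=4):
    """True if the range vector spans at least `min_samples` scrapes."""
    return range_interval_s >= scrape_interval_s * min_samples

# Four scrapes a minute (a 15s interval) with a 30s range: only ~2 samples,
# so rate() or increase() over it is close to meaningless.
print(range_interval_ok(15, 30))   # → False
print(range_interval_ok(15, 120))  # → True
```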
E
But for me, I mean, people steal our Prometheus data through Grafana anyway, through the query bridge, yeah. So, you know, there's nothing that private in Prometheus, or hopefully not.
A
Yeah, well, Grafana is the public way to access it, but even there, there are restrictions, I think, if you use the public dashboards.
E
Yeah, but you can craft... you can craft these requests. That's how those people who are doing bad things do it. There's an ajax API that'll give you Prometheus queries, and you can pretty much send anything to that, and so on. You know, there's an ajax interface, if you want, and when you load a graph, it goes there and provides the query, and it gets piped and proxied through to Prometheus, and there's no sort of request-forgery protection on that.
G
I'll tell you what: on at least one occasion we leaked a huge amount of potentially very sensitive data into Prometheus labels, because of some bad decisions that were made in parsing database query logs. So that is very sensitive data that we had no way of redacting, potentially.
E
Yeah, but we can redact it, if it's important enough we can redact it, Matt, but no...
E
But, for what it's worth, the data that they were scraping was all stuff that was exposed by node exporter, not our own custom metrics, so it probably wasn't that stuff, Matt. Luckily, that's good, and, you know, just by total fluke we've managed to get away with it.
E
So my take is, I think it's much more important to stop Grafana and the data proxy, or at least limit that somehow, maybe with Trickster or something like that (there was a proposal at one stage to do that), because that's the easiest way to do it: anyone can do that at the moment. Yeah, I would say that's a bigger risk than being able to run a job in our CI that has access; you know, that still requires another thing to be broken and then for there to be updates. Yeah.
C
So, yeah, the moment we have an explicit bridge like this, that sounds much better. I think what comes to mind for me is that we are using tools that are designed for a certain level of trust, and because we're GitLab we're saying everything has to be open and transparent, and then it turns out that you run into holes like this. Which makes sense, because the people who made Grafana probably weren't thinking of us being as open with it as we are.
E
Yeah, so the solution there, I think, and it would have to be a thing where we find the time, but the solution there would be to set up a Prometheus that scrapes from Thanos, but only scrapes recording rules and not raw metrics.
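A minimal sketch of what that could look like via Prometheus federation, assuming the upstream exposes Prometheus's `/federate` endpoint; the target address and the metric-name pattern (recording rules conventionally contain colons) are illustrative assumptions:

```yaml
scrape_configs:
  - job_name: 'federate-recording-rules'
    honor_labels: true
    metrics_path: '/federate'
    params:
      # Only pull recording rules (colon-containing names by convention),
      # never raw metrics; the pattern here is an illustrative assumption.
      'match[]':
        - '{__name__=~".+:.+"}'
    static_configs:
      - targets: ['prometheus.example.internal:9090']
```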
E
You want that anyway, because then, if there's bad traffic that takes that Prometheus down, you don't have a...
A
But Andrew, this whole thing, like the part... because the way we're going to do it now, because the data team doesn't have access to ops.gitlab.net, is to upload it to a bucket. This was part of what I was going to show.
E
Taking the other route will take longer, so I think you should keep on this track, but I do think we should open an issue about being able to access Thanos from a secure runner, you know, one that you need protected-branch access to basically push a CI job to. We can start that ball rolling, because, you know, even if there is leaked information, you'd need to be able to push to that branch. And because... yeah, but it would...
E
Sorry, thanks, man. Yeah, because what I was going to say is: then we can start doing much better validation around people putting, you know, cardinality bombs into Prometheus, and we can start looking at one of those validation tools, which you can't do at the moment.
G
Yeah, totally; I was gonna make the point about cardinality as well. And I guess the other aspect is: using a runner that's not secured against malicious abuse could potentially be disastrous, as we know that people actively look for ways to blind us, to hinder our observability while committing denial-of-service attacks, so I wouldn't want to give them another vector for doing that. But what Andrew's describing doesn't sound like it would be, you know, a viable abuse vector. Does that sort of make sense? I'm kind of thinking about it in an adversarial context.
C
Thanks for sharing that; at least to me, that gives a sort of why, of why someone would want to do this. So, if they're trying to hijack the CI minutes to do crypto mining and we have a hard time finding out what they're doing, then they actually have something to gain from messing with our observability, yeah.
E
Sorry, I've just... sorry.
C
No, it's not an accident, because it's the easiest thing to do, and it's also, I think, the most natural thing to do, and you get this... I mean, you can then pretend it's a Prometheus server at some point.
A
It's not that... like, yeah, because I broke things just before the call, I don't think I can, maybe...
A
So I'm not 100% sure I'll be able to show it, yeah. I broke it enough to not be able to show that, but I can show it in the ops runner that already uploaded some stuff. So that's Prometheus, and in this file, where we allow stage groups... everybody that wants to query Prometheus and reuse it somewhere else can write queries in here.
A
They can write... because, for example, for error budgets and for mean time between failures, we will have multiple queries that we want to gather together. So for every, let's say, goal you have, you would create a file in here with one or more queries in it, like this.
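As a rough sketch, a per-goal file grouping multiple queries might be shaped like this; the field names are assumptions based on the description, and the query strings are placeholders:

```python
# Hypothetical shape of a per-goal file with one or more queries; the
# actual jsonnet fields may differ, and the PromQL is left as a placeholder.
error_budget_goal = {
    "name": "stage_group_error_budgets",
    "queries": [
        {"name": "budget_spent", "params": {"query": "..."}},
        {"name": "budget_remaining", "params": {"query": "..."}},
    ],
}
print([q["name"] for q in error_budget_goal["queries"]])
# → ['budget_spent', 'budget_remaining']
```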
A
So the responses that I get back from Prometheus are wrapped in an envelope.
E
I've got to run, bye-bye. No, it's not...
E
My son has probably got COVID and we've been keeping him in a room on his own with the window open for airflow, and the wind's come up and just ripped the window off. So I'm gonna sort that out. Bye.
C
Yeah, but it's not what I thought it would be for a moment, because, like, if you curl the Prometheus... I guess, well, if you do a Prometheus query...
A
That's this format. So everything that is inside this body, except for the status field... no, including the status field, is the response from Prometheus, okay. So this is like the raw thing, and this is something that I added: if it's not a 200 and there's nothing included, like if there's a 500 or whatever, then there should be an error inside this body object as well. But yeah, who knows; this success will also be false.
A
If, for example, there was a timeout and there was no response at all, then this will be empty, yeah, because we don't have one, obviously.
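The envelope being described might be sketched like this; the field names are assumptions inferred from the discussion (the raw Prometheus response lives in a body, plus a success flag that is false on errors or timeouts), not the actual implementation:

```python
# Sketch of the response envelope; field names are assumptions based on
# the description above, not the real code.
def wrap_response(http_status, prometheus_body=None, error=None):
    body = dict(prometheus_body or {})  # raw Prometheus response, if any
    if error:
        body["error"] = error  # e.g. a 500 carries an error in the body
    return {
        "status": http_status,
        "success": http_status == 200 and bool(prometheus_body),
        "body": body,
    }

# Normal case: the raw Prometheus payload is wrapped as-is.
ok = wrap_response(200, {"status": "success", "data": {"result": []}})
print(ok["success"])  # → True

# Timeout: no response at all, so the body is empty apart from the error.
timeout = wrap_response(504, error="query timed out")
print(timeout["success"], timeout["body"])  # → False {'error': 'query timed out'}
```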
A
So, yeah, that's where I'm at right now. I think you have the next item, Jacob.
C
Well, it's a bit of a joke item... a joke or not: it's been three months since we moved Workhorse into the main repo, which I'm not even sure if it matters, but it means that if we have to do a security release, it's now unlikely we have to do a separate Workhorse release for it anymore.
C
Knock on wood, that means breaking Workhorse is behind us. But something that is broken is things like dependency scanning and security scanning, and that's because Workhorse is in a subdirectory of the main repository, and all these scanner tools expect to find the metadata files for the language project at the top level.
C
And I think that will just automatically make dependency scanning and security scanning work, which would be a good outcome. But, yeah, I need to get some more feedback from the other Workhorse maintainers. It's a little thing I'm trying, and I just wanted to share.