From YouTube: LogQL v2 Public Design Call 2 2020-06-29
A
All right, do you want me to share my screen, or do you want to do it like last time and do it on your own?
A
That's not... kind of... oh, I'm sorry. It looks very nice.
A
So there were a couple of things I wanted to talk about. Maybe to start I'm just going to go through what we discussed last time and what we agreed on. So the first thing we discussed and agreed on was that we will use pipes for log operations, and we're going to use functions for metrics. So the first function that transforms logs into metrics will be...
A
The first operator will be a function, and we already have this for rate, avg_over_time, and the currently supported operators.
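For reference, the existing behavior being referred to looks something like this sketch, where the stream selector and range are placeholder values:

```logql
# Current behavior: a function such as rate turns a selected
# log stream into a metric over a range.
rate({job="myapp"} |= "error" [5m])
```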
A
We were also talking about wanting to be able to support this use case. Do you see my screen correctly, by the way? Yeah? So we were talking about the use case of being able to do multiple levels of parsing, and I spotted something by rewriting this. So maybe we can talk about this, which is in this specific case that we wrote.
A
So the syntax is like this, and I'm just going to tell you again that unwrap allows you to select which label becomes the new log line.
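As a sketch of what is being described (this syntax was still under discussion at this point, and the selector and label names here are made up), unwrap promotes one extracted label to become the new log line:

```logql
# Hypothetical sketch: after json extraction, `unwrap message`
# makes the extracted `message` label the new log line, which a
# second parser could then process again.
{job="myapp"} | json | unwrap message | logfmt
```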
A
And in this case, if I unwrap the message to be able to extract new labels, then I cannot really... well, technically, I could do another... actually, that's wrong! I'm actually wrong here! I could do another unwrap here if I want to. Yeah, I could do that. Yeah.
A
Yeah, okay. We were also talking about filters, so it sounds like everyone was agreeing that we want to support filtering on extracted labels, and we pretty much stopped there; we didn't really have a new syntax for that. And that's pretty much what we discussed last time. So there's one thing that I wanted to start discussing before looking at the filtering, which was something we didn't discuss: the regex one, for instance, has explicit label names when you extract, but the logfmt or the JSON parser won't have... well, could have non-explicit ones.
A
So you could technically extract a lot of labels, and maybe you don't want to. And we didn't talk about how we can limit the labels that we want to extract in the scratch pads that we had. So I'm talking about this kind of example here, where logfmt is basically extracting all the possible keys of the logfmt line.
A
So I think we were talking about... maybe... that's more for another time.
A
Yeah, that's something else. So yeah, I wanted to ask: what's your feeling about this? Should we allow selecting only a couple of keys to be extracted in case there are too many of them? Ed definitely suggested that we're going to return an error...
A
...if there are too many. So "too many" could be, like, if there are 100 keys in the logfmt line, then we're going to return an error, because we know 100 keys multiplied by, let's say, an existing 50 streams could add up to 5,000 streams. So we're going to have a limit at something like a thousand, maybe, or something configurable.
A
So that's the first part, but I also think we should be able to select which ones we want to extract. What do you think about this?
A
We haven't decided on that. No? Okay, yeah. But this is something that we talked about: renaming is something that we should do, because...
A
Be
clash
there
could
be
clash
between
names
all
right.
What
I'm
suggesting
here
is
maybe
there's
like
two
way
to
solve
two
two
different
one
way
to
solve
two
different
problems.
One
way
is
like
picking
an
implicit
set
of
label
and
also
renaming
them.
At
the
same
time,
right
we
could,
we
could
say.
A
I could do something like this, right? So this would mean that I'm selecting flap, and although this is a bad example, because with JSON maybe the syntax is going to allow you to select a more complex property from the document, it would work for logfmt, right?
A
So
if
you
have
too
many
properties
and
and
some
of
them
are
clashing,
maybe
we
could
have
a
way
of
renaming
and
also
just
selecting
which
one
so
this
will
select
just
two
of
them
and
rename
them.
But
you
could
add
one
without
renaming
like
this,
so
I
just
want
to
bring
this
subject
see
if
you
guys
agree
with
at
least
having
something
like
that
and
then,
if
this,
if
it's
the
case,
how
the
syntax
should
look
like.
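A rough sketch of the idea under discussion, with hypothetical label names (none of this syntax was agreed on): select only some keys at extraction time, renaming where names clash:

```logql
# Hypothetical: extract only three keys from the logfmt line,
# renaming two of them and keeping `path` as-is.
{job="myapp"} | logfmt latency=duration, code=status, path
```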
A
I mean, why is too many a problem? It's a problem in the case where you don't have a vector aggregation on top of it, so you may end up with a thousand streams. It's also...
B
So yeah, you know, like, literally every message could be its own stream, or it could all be one stream, and we wouldn't really be able to tell the difference. And then for metrics I feel like, you know, we have a powerful system for doing aggregations here, so, like, looking at raw time series is never a good thing. People are trained that way anyway.
A
Yeah, I mean... we could do that.
F
I would be in favor of still giving it a parameter to resolve the clashes, like to allow renaming within it. I'd find it weird if renaming was its own little keyword, mostly because if it's in a separate step, to me this would be difficult once we have, like, a chain of logfmt and JSON parsing and unwrapping; then it becomes unclear which one it pertains to.
F
I don't understand the use case for combining some... I also don't see how the extraction would be different, because they're all keyword-based, right? It's like you just give it a different key.
A
I mean, if selecting just a subset of labels is not really important, then maybe we can introduce this feature later.
A
Another problem with being able to relabel, for instance, is the JSON... well, actually, extraction is also a problem, because we haven't really talked about the JSON. How does the extraction work for JSON if there is a multi-depth document? If it's only like this, will it extract only the first level of depth?
E
So would that work with arrays? I mean... no, I think we said last time that arrays are kind of out of scope.
E
That... like, nested arrays, I think, is something we talked...
A
...about, yeah. Yeah, I think nested arrays are... yeah, a bit complex. But if you have a nested property...
A
Okay, so using unwrap with its description for nested properties.
G
Yeah, I can maybe give one of them right now. The challenge with treating it as a chained set of unwraps, to unpick values further down, is that you can't unpick two different branches, right?
A
Yeah, so the alternative syntax is allowing some sort of, you know, XPath or jq language, where you can write exactly how you access this property. But then again, that means that you select each property explicitly, right, which logfmt seems to differ from.
A
So there are, like, two different behaviors in the way the logfmt parser, the regex parser, and also the JSON parser will work.
F
Yeah, so I agree too. Like, what's highlighted right now should be the receiver, right? And flip.flop seems like an access pattern, which in logfmt we don't have, right? logfmt is single-level key-value, right?
F
Flip... all right, everyone have fun here. Where is this clashing now? What is this one up here? Can you scroll up? Yeah. That was...
A
That was the old way of saying which label will become the value for the metric. We finally decided that the metric will always be the log line, so the log line itself needs to be a parsable float. So that's... and you're going to use unwrap for that. Okay, cool.
F
Cool, but then the interesting thing, in my eyes, would be this sort of pattern, right, where you're really saying: I am accessing multiple paths in a JSON tree, and they're going to land in these top-level fields.
A
Well, I thought that's what I was saying: for the JSON, if we go down the road of using a language to extract, we don't really have a choice. You need to be explicit about each property you need to extract.
A
Yeah, yeah. Let's sum up the issue.
B
I mean, I explicitly would say I don't want this use case of, like, "everything but this label", okay? This seems brittle, and it's not clear what we'd use it for, and it's not clear what... when you remove some labels, what happens. You're saying we merge these streams together again, I guess, aren't you?
F
This is more... this is optional. Like, do you see the huge message down here, like where it says "message" or something, or "reason"? Or, like, let's say there's another field called "detail" or "source". Yeah, maybe "source". And then this would be a stream value; this would end up... and then, if this has, like, high cardinality, in the data structure in the Loki response this will be...
B
Honestly, David, I don't think that's a big deal. That's an optimization. Okay, for sure we should optimize the way we return responses, if repetition in the JSON response is a problem, but we shouldn't build a language around optimizing the wire transport.
F
Yeah... yes, that's a good point.
A
The source is not a valid label value, right? So if you leave it like that, you're going to have an error. I don't think it's a valid label value. Leave it like what? Be specific. If you select source as a label for a stream, then you're going to end up with this big JSON blob here as a label value, which is probably not a valid label value.
B
That's my expectation of how that JSON would work: not that it would unpack it to be "source" and then a JSON blob, because, as we said above, that would require you to have multiple JSON extractions in a single line, which won't work if you want to extract things from different forks. Okay.
F
Just a second. In this label rename, right... the logfmt here... so with this label rename, this line here, this "foo equals bar" bit, won't make any sense anymore, because we can't map onto foo anymore, right? Why can't we map onto foo? Because foo is already a stream label here.
F
Okay, yeah. I mean, if we have this sort of safety net there, then I'm happy with it being a separate step.
E
But I mean, that's something a query planner can optimize away later, right?
A
If we were doing a vector aggregation on top of all of this, then if you aggregate by foo only, you don't need to attach all the other labels, yeah. But I don't know if that's what...
B
All right, so have we reached a rough consensus that having a separate step for this is more desirable than integrating it, or are we...
B
What is its purpose? The purpose is to rename, maybe to join labels, maybe to format labels. You know, should it be label_fmt or label_format, which takes, like, string substitutions and assigns them to other labels?
A
Yeah, I'm wondering why you cannot rename also for the JSON. Like, why do we... if the name has dot notation and you want to remove that dot, why can't we just reuse the discovered label name to copy it?
B
Because these are all suggestions, right? These are all reasonable suggestions. You could think of, you know... label_rename is relatively obvious, and then is it equals or arrows? Like, how do we know which way around it is? label_merge supports more than label_rename, in that label_merge and label_join would potentially allow you to construct labels...
B
You
know
out
of
multiple
labels,
and
we
do
this
occasionally
in
the
metrics
world,
although
it's
very
you
know
very
occasional,
probably
done
it
three
or
four
times
label
thumped
would
be
like
you
know,
you
might
label
merge
on
steroids
like
in
that
you
could
string
format
a
label.
B
...would be, like, you know, the horrible label_join syntax of Prometheus, right? You tell me what that does; I've no idea. Does it put flip and foo separated by a space into flap, or does it put flap and flip into foo? This is fun. Then there's label_join and label_replace, which do basically the same thing, and actually a little bit of me quite likes label_fmt, which would allow you to do, like...
B
Yeah, I think that's... yeah.
B
No
one
we
can
all
agree:
the
label
join,
syntax
function
in
prometheus
is
bad
but
hold
on.
I.
E
Merge doesn't seem to make any sense to me compared to label_fmt.
B
Well, we don't know how many streams there are, right? There could be 100 streams here; there could be one. Yeah, but we're going to take the foo and flap labels and put them into flip, and then we're going to do a, you know, a sum by flip, right, which will effectively drop the foo and flap ones.
A
So, just to recap: we want to introduce only label_format, and for JSON all the label names will be extracted with dot notation, and we accept...
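The recap might be sketched like this, with hypothetical label names: nested JSON keys come out with dot notation, and label_format then renames them afterwards:

```logql
# Hypothetical sketch of the recap: `json` extracts nested keys
# with dot notation (e.g. request.latency); `label_format` then
# maps the dotted name onto a plain label.
{job="myapp"} | json | label_format latency=request.latency
```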
B
Are we going to go with label_format like this? Are we going to go with these arrows, or are we going to go with equals? Oh dear, I don't know. Yeah.
G
No, I'm sorry. What's the... sorry, I'm...
A
...just going to... yeah, so I use the operator filter. Here's an example; let's take that back into the document.
D
I agree. I'd really like to have our label matching syntax, but the part that I'm not sure about is, like, this greater-than or less-than; introducing that construct into the existing label matcher feels sort of just as weird and dangerous to me as adding another keyword. And this is because we want to only...
G
Yeah, let's just hear that out, because...
B
You might be able to do something like unwrap latency.
A
...that you use will transform the log line into a metric. That's the last thing we said last time.
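Putting the pieces together, the direction being described might look like this sketch (label names hypothetical): unwrap makes the extracted value the sample, and a range function aggregates it:

```logql
# Hypothetical: unwrap promotes the extracted `latency` label to
# the sample value; avg_over_time then aggregates it over 5m.
avg_over_time({job="myapp"} | json | unwrap latency [5m])
```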
E
I'm not entirely sure we actually landed on it. I think we still have some notes up there that suggested using the value function. Let's...
E
So even without latency, just JSON... I don't know, but we have to say which one. Yeah, exactly.
E
...a problem, I think, as well, because, like, it's kind of similar to charting a histogram on a heat map, or calculating the p99 or something, right? Like, the p99 is something you would probably do first, and then you look at it on the heat map or something. And you can do the same thing here, right? Like, you first see that there are these values, or how many values there are of this kind.
G
Really explicit. Let's just... there's another con here, which is that this one here...
A
Well, it's also easy to, you know, build the query as you go, because it's limited to a thousand results, right? So the first time you get way more, and there's nothing that you want, but as soon as you enter "latency bigger than 10" you get another thousand, which is all the ones that you want. The "value one" here is going to be returned so many times if you have a high-throughput log.
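The build-as-you-go workflow described here might look like this, with hypothetical names:

```logql
# Step 1: look at the raw extracted lines (capped at 1000 results).
{job="myapp"} | json
# Step 2: narrow to just the entries you care about.
{job="myapp"} | json | latency > 10
```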
B
The thing is, I just really don't like this implicit conversion, right? Because what happens if latency contains a string? What happens if it contains an "s" on the end of it, so it doesn't parse, right? Yeah, in this case the filter just silently drops it, probably, right? Yeah.
A
Yeah, but the other problem that I have is you're saying that if latency is not a float, then we should fail. I don't agree with that. I think it's a bad idea to do that. The reason why is you cannot force everyone, for a single stream, to have the same labels all the time. Someone's not going to have the latency in the log at one point for the same stream, and then you're going to fail them just because the latency doesn't exist there or is different.
B
Right, but in that case... I mean, the thing I don't like is silently dropping stuff, because I'm going to write an SLA query, and there's going to be a failing... there's going to be a log entry that says "this request failed", and it's not going to say the latency, and I'm therefore just going to quietly exclude all failing requests from my latency metric. Yeah, but...
B
Well, the alternative is, you know... we do this in Prometheus, right? If the query is invalid, we don't just silently return no results; we tell them the query is invalid, right? And, like, the great example here is joins, right? Joins in Prometheus sometimes work and sometimes don't work, right? They sometimes don't work...
B
If
you
get
a
sample
set
where
you
do
like
food
times
bar
and
foo
and
bars
labels
happen
to
match,
then
it
will
work.
But
if
you
do
a
few
times
bar
and
the
label
don't
match
they
want
you,
then
then
prometheus
will
return
an
error.
It
doesn't
just
silently
only
do
the
ones
that
do
work
I've.
This
is
all
you
have
to
do
like
join
left
group
left
or
whatever.
I
forget
it,
but
we
have
this
in
the
kubernetes
mixing
all
the
time
where
it
works
on
my
machine.
D
...a middle ground here, where you can return a result with an error message, right, or with additional data. So I think the trouble with log lines is that it's going to be impossible to guarantee that you're not going to have inconsistencies in a log line, and there's nothing they can do about that, right? Like, the option would be that the query will always fail.
B
Well,
there's
two
right:
you're
stating
there's
nothing:
they
can
do
about
it
right.
Well,
I
wonder
whether
like,
if
we
accept
that
the
crew
should
fail
when
when
it
can't
do
the
translation,
whether
we
can
then
provide
them
with
the
tools
such
that
they
can
opt
into
dropping
stuff
that
doesn't
match,
for
instance,
like
you,
could
do
a
filter
here.
B
I can't do "not" on this keyboard... there it is. You know, you've given them the tools to filter out the bad values, but they have to opt into that process.
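One possible shape of the opt-in filtering being described, purely as a sketch (names and syntax hypothetical): drop entries whose latency field is not a parsable number before unwrapping, so the query does not fail on them:

```logql
# Hypothetical: keep only entries where `latency` looks numeric,
# then unwrap; malformed lines are explicitly filtered out by the
# user, rather than silently dropped by the engine.
{job="myapp"} | json | latency =~ "[0-9.]+" | unwrap latency
```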
B
...no UIs ever show the warnings; Grafana's UI doesn't show the warnings. This is, by design, a bad API, to show you values like that. And then another great example is the DynamoDB API, right? The DynamoDB API: if you send it a write... if you send it a batch of 100 writes, right, it will return success, and then in the body of the success message it will say...
A
Since we already have a way to, like, you know, filter with the label to solve this problem, could we have this thing that controls the behavior of all of this, whether it's very strict or not strict? And then, if someone really wants to not be very strict...
E
Right, where we say, like, by default a partial failure is a failure, but you can optionally say, for this one query, "I'm accepting a partial failure; show me the result." That's...
B
Because I feel like that's a reasonable compromise, in that there's an extra field that says "I allow partial failures". But I would give it a long, hard think about having the semantics of functions change based on an extra field, because you'll have people share queries, and the queries will work for some people and not for others.
B
And I feel like the defaults have to be sensible, i.e. we have to not default to partial failure, in which case we should really only be building for the defaults, in my opinion. Give it some thought. I think we're really close; I think this is the last thing left to decide before we're done with the language.
A
Yeah, so don't you agree that having "value" here, and then on top of it an average with a duration, is a bit much? Like, you need the...
B
You were saying make "value" implicit; I refer you back to "no magic". Like, we could have said rate is implicit, but we said rate is explicit. Like, we have to be explicit here.
A
Yeah, but rate is an aggregation over a range, not a transformation of the line. Sorry.
B
Okay, bytes and bytes_over_time seem like sum_over_time.
B
And
and
we're
we're
suggesting
our
value
well,
I've
got
to
run
I'm
five
minutes
late
for
the
next
one.
Sorry
yeah.