From YouTube: Loki Community Call 2020-07-02
A
First, on the list there is the LogQL v2 stuff. We had another sort of stakeholders group of myself, David Kaltschmidt, Cyril, Frederic, sorry, Franz I think is maybe his last name, and Tom Wilkie, and we've been progressing through a couple of the docs. I don't know if anybody has trouble with permissions on those docs, let us know.
C
Yeah, you can ask for permission. They are not up to date with the last meeting, although it's not that far from what we discussed, but I still need to update them. Yeah, do you want any other sort of feedback from those?

C
Currently we're looking at basically the filter, how we're going to achieve filtering on extracted labels. So that's where we are, we're discussing this, which I think will be the last part, but yeah. That's where...
A
...we are, yeah. There's a big question too on how certain types of error handling would work. So if you are applying a regex filter to your logs and the filter doesn't match, right, or you're doing JSON extraction and it's not a valid JSON doc, etc., what do we do with those kinds of things?

A
There's not an obvious great answer for that, right? Ignoring the line means the results are somewhat partial, or maybe not what was expected, and erroring the query is difficult, because the ability for people to, you know, change the logs isn't really there, right? It's like, what do you do if it errors? You'd have to find a way to remove the stream that's erroring, but it's just one log line, so, you know, returning results with warnings instead of errors is possible too.
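A minimal sketch of one of the options being weighed here, not a decided behavior: instead of dropping a line whose JSON doesn't parse, the extraction step could tag it with an error label so partial results stay visible to the user. The `__error__` label name and the types below are illustrative assumptions, not the actual Loki implementation.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// labels is a simple label set; in Loki these would be stream/extracted labels.
type labels map[string]string

// extractJSON tries to flatten a JSON log line into labels. Rather than
// dropping lines that fail to parse, it marks them with an error label so
// the query can surface a warning instead of silently losing data.
// The "__error__" name here is an illustrative choice, not a confirmed API.
func extractJSON(line string, lbls labels) labels {
	var fields map[string]interface{}
	if err := json.Unmarshal([]byte(line), &fields); err != nil {
		lbls["__error__"] = "JSONParserErr"
		return lbls
	}
	for k, v := range fields {
		lbls[k] = fmt.Sprint(v)
	}
	return lbls
}

func main() {
	good := extractJSON(`{"level":"error","msg":"boom"}`, labels{"app": "api"})
	bad := extractJSON(`not json at all`, labels{"app": "api"})
	fmt.Println(good) // map[app:api level:error msg:boom]
	fmt.Println(bad)  // map[__error__:JSONParserErr app:api]
}
```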
A
It is an interesting problem, but I'm excited, though. We're moving a little bit slower, I guess we're moving a little bit slower on everything. Except, Owen, your alerting looks like it's coming along pretty well. I listed the PR that you opened; there's a sort of open discussion around the best way to handle changes that we need in some Prometheus packages inside of Cortex and Loki.

A
We want to try to make upstream changes if we can, but it's historically a little difficult to make upstream changes in Prometheus that aren't strictly for the Prometheus project, right? So making changes to benefit effectively Loki has been difficult, but we still want to see if we can at least head down that road. Do you want to continue from there, Owen? Sorry, I'm kind of...
B
You've kind of laid it out pretty well. There's a couple of changes that we would need to some of the upstream Prometheus packages to support LogQL-based alerting, and then there's a lot of great work in the Cortex ruler that really enables a lot of functionality that we need as well, so basically we're trying to find the least invasive changes we can do for that.

B
So right now there's a PR up that has probably the most naive way forward, which is a series of pretty invasive changes, but we'll be trying to figure out better ways to do that, both in terms of code changes, but also in terms of, you know, getting community buy-in to see if we can get some upstream refactorings.
C
Should you say that you've been running this for a while in dev?
B
Oh yeah, yeah, we've been running it both as individual nodes and across restarts and ring resharding. It takes a lot from the Cortex ruler, which basically uses the ring module to schedule work in a horizontally scalable way.
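To illustrate the idea behind that, a sketch only, not the Cortex ring implementation: each ruler hashes rule groups onto a token ring and only evaluates the groups it owns, so adding replicas automatically reshards the work. The instance names, token counts, and hashing scheme below are assumptions for illustration.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// ring maps tokens to ruler instances; a tiny stand-in for the real ring
// module, just to show how hashing distributes rule groups across rulers.
type ring struct {
	tokens []uint32
	owner  map[uint32]string
}

func hash(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

func newRing(instances []string, tokensPer int) *ring {
	r := &ring{owner: map[uint32]string{}}
	for _, inst := range instances {
		for i := 0; i < tokensPer; i++ {
			t := hash(fmt.Sprintf("%s-%d", inst, i))
			r.tokens = append(r.tokens, t)
			r.owner[t] = inst
		}
	}
	sort.Slice(r.tokens, func(i, j int) bool { return r.tokens[i] < r.tokens[j] })
	return r
}

// ownerOf returns the instance owning the first token at or after the rule
// group's hash (wrapping around), i.e. plain consistent hashing.
func (r *ring) ownerOf(ruleGroup string) string {
	h := hash(ruleGroup)
	i := sort.Search(len(r.tokens), func(i int) bool { return r.tokens[i] >= h })
	if i == len(r.tokens) {
		i = 0
	}
	return r.owner[r.tokens[i]]
}

func main() {
	r := newRing([]string{"ruler-0", "ruler-1", "ruler-2"}, 64)
	for _, g := range []string{"tenant-a/alerts", "tenant-b/alerts", "tenant-c/recording"} {
		fmt.Println(g, "=>", r.ownerOf(g))
	}
}
```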
A
Nice, thanks Owen. The BoltDB shipper is also coming along, but slower. We backed up a little bit, thinking through how some of the future pieces of this are going to work when we have to support deletes.

A
Sandeep is working, and we're working through, changing the behavior such that we basically only ever upload immutable files. That simplifies some of the design quite a bit, but it also means that, for, you know, a 10-ingester cluster, you might be uploading maybe about a thousand files a day, so it will increase the need for a compactor.
A
I think it's a little bit to be determined how fast that is, because BoltDB index files can't be merged, so we basically iterate all of them: we download them and then iterate, asking each one. So I don't know if it's faster to, you know, make one giant BoltDB file and ask it, or to keep iterating; there's going to be some trade-offs there, so we'll see. But I think initially it's not going to be a huge requirement to have a compactor, though it definitely will be in the long run.
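As a rough sketch of what "download and iterate each file" amounts to, assuming the index is a simple key/value bucket; the bucket name, file names, and key layout below are illustrative, not the real shipper schema. This uses the bbolt library that BoltDB index files are built on.

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	bolt "go.etcd.io/bbolt"
)

// queryIndexFiles opens each downloaded BoltDB index file in turn and scans
// the keys under a prefix, since separate BoltDB files cannot be merged and
// have to be asked one by one.
func queryIndexFiles(paths []string, prefix []byte) (map[string][]byte, error) {
	results := map[string][]byte{}
	for _, path := range paths {
		db, err := bolt.Open(path, 0600, &bolt.Options{ReadOnly: true})
		if err != nil {
			return nil, err
		}
		err = db.View(func(tx *bolt.Tx) error {
			b := tx.Bucket([]byte("index")) // hypothetical bucket name
			if b == nil {
				return nil
			}
			c := b.Cursor()
			for k, v := c.Seek(prefix); k != nil && bytes.HasPrefix(k, prefix); k, v = c.Next() {
				// Later files can overwrite earlier entries for the same key.
				results[string(k)] = append([]byte(nil), v...)
			}
			return nil
		})
		db.Close()
		if err != nil {
			return nil, err
		}
	}
	return results, nil
}

func main() {
	res, err := queryIndexFiles([]string{"ingester-0.db", "ingester-1.db"}, []byte("fake:chunk:"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(res), "entries found")
}
```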
A
But I think we're still hoping to have that PR fleshed out with some more tests and things, you know, next week, and getting stuff running in our environments in the next week or two, to where I'm still hopeful that towards the end of July we'll be in a... I don't think it'll be quite production-ready, but maybe we'll be running it in production.

D
Yes, yes, that's totally fine, and you can write to me with any issues you see.
D
Yeah, just to raise awareness, because this is basically the spot where we chimed in from our side. We're currently running a cluster with the BoltDB shipper, and memberlist is basically our setup, but we want to stress test it and find out if we can transfer this into an in-cluster situation for replacing our current store, the not-so-beloved Elastic. But this is future music.

D
If you fail to resolve the members through a headless service, which is possible in any Kubernetes situation, there is no kind of ordering in when your service will give you the endpoints. And today we also landed the PR for having the same DNS provider mechanism for SRV records as we have for the memcached clusters, and this is also nice because you can list things on your headless service side and you just give one dns+ name as one memberlist member, and then it resolves to all the members that are on your DNS side.
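A small sketch of what that discovery amounts to, assuming a Kubernetes headless service exposing a gossip port via SRV records; the service name and port below are hypothetical examples, not a real deployment, and this is not the actual Loki/Cortex DNS provider code.

```go
package main

import (
	"fmt"
	"log"
	"net"
)

// resolveGossipMembers looks up the SRV records behind a headless service
// and returns host:port join addresses for memberlist. This mirrors the
// idea of a "dns+"-style provider: one DNS name expands to every member.
func resolveGossipMembers(service string) ([]string, error) {
	// For a headless service, each pod shows up as its own SRV target.
	_, srvs, err := net.LookupSRV("", "", service)
	if err != nil {
		return nil, err
	}
	members := make([]string, 0, len(srvs))
	for _, srv := range srvs {
		members = append(members, fmt.Sprintf("%s:%d", srv.Target, srv.Port))
	}
	return members, nil
}

func main() {
	members, err := resolveGossipMembers("_gossip._tcp.loki-memberlist.loki.svc.cluster.local")
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range members {
		fmt.Println("join:", m)
	}
}
```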
D
So this is also something to consider if you run Loki close to Thanos and/or Cortex: from a high-level perspective, if you don't have to manage two different discovery mechanisms, it makes SRE work easier and, let's say, less cognitive load, at least in the end. So this is where we're working, and yeah, I will open the PR to transfer the Cortex bits into Loki, just updating the dependency here and working any breaking changes out, and I'm currently really happy about that.

D
Because memberlist is fine for a small cluster with two replicas per component, except the table manager, which we don't use currently; probably good that we don't use it with the BoltDB shipper, as far as I understand.
A
Nice, yeah. Moving a little bit farther down the list, I feel like there's a spot coming up real soon here for a 1.6 release, and it probably would be good to consider having the two things you just listed, getting a re-vendored Cortex in, and, I mean, there's not a lot of huge features, but there's been a few, Cyril.

A
So there's fixes for that that went in last week. You know, the alerting and BoltDB and LogQL stuff won't make it, but I think we're at a good point here. I forget when 1.5 was, it was in May, so we're more than a month now, so I can work with you on that to see what it'd be.

C
For 1.6, I want to get rid of the capability that we added also. It was, yeah...
A
Yeah, that ended up being kind of a nightmare, unsurprisingly. So in 1.5...

A
So in Loki we basically made the user non-root in the Docker image, but we wanted to maintain the ability to run on port 80. Internally, we run services on port 80; it's kind of a nice operational simplicity, so that if you need to figure out what port something is running on, the answer is 80, rather than trying to remember that it's 3000 for this or 3100 for Loki.
A
So the trouble with that is that binding to a port less than 1024 isn't possible without being a root user, unless you add a Linux capability. So we went that route, but then it basically broke the pod security policy that's optional in the Helm chart, because we didn't add the specific support for that capability. And I understand people's feedback, right? Most people aren't doing what we're doing, and so it's annoying that they have to approve the capability.

A
So I believe at this point we're just going to move everything to port 3100 by default and remove the capability completely.
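For context, a tiny sketch of the underlying constraint, assuming a non-root user without the CAP_NET_BIND_SERVICE capability; the ports are just examples. Binding below 1024 fails with a permission error, while 3100 works.

```go
package main

import (
	"fmt"
	"net"
)

func tryListen(addr string) {
	// As an unprivileged user without the CAP_NET_BIND_SERVICE capability,
	// ports below 1024 return "permission denied"; higher ports are fine.
	ln, err := net.Listen("tcp", addr)
	if err != nil {
		fmt.Printf("listen %s: %v\n", addr, err)
		return
	}
	fmt.Printf("listen %s: ok\n", addr)
	ln.Close()
}

func main() {
	tryListen(":80")   // typically fails for non-root without the capability
	tryListen(":3100") // Loki's default HTTP port, no privilege needed
}
```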
C
Do you also have someone who joined your logging team? I think someone reached out to me two weeks ago.
D
Yes, yes, there are two of us working on that stuff currently, and we look forward to expanding this. Basically, the team has a size of seven, but currently two are more or less dedicated to the topic. I try to do my best not to do all the work on my own, and to leave hints to my managers that I am just overflowing with work and need people. One last comment on the memberlist: there is also a documentation section on that.

D
So if someone else would like to try that on a non-OpenShift cluster, I would be really, really happy, because I definitely have a bias here on how things work. I don't expect anything special here, except that you use just a third Service resource for the gossip port, but maybe you can really squeeze things down for your setup onto the same headless service, like with your gRPC port; it depends on your use case. I'm definitely biased here.
D
That's what I want to say: if you want to try it out, give me feedback on whether you can work that out on non-OpenShift.
A
We've talked about this before. I mean, I think it's just inertia that keeps us running Consul, because we have it and we're running it, and someone would have to go through and... but in the long run, you know, we're going to want to direct people to memberlist, not Consul, because it'll be easier.

A
Cool, thanks Perry. I did another note in there: we did finally move Loki to Go 1.14. We were waiting for Cortex, which was kind of waiting for something else, but so far I don't think we've seen anything significant in that change, and that is in master now. And then, yeah, I guess, Owen, you added this, so this is something that came about recently, like last week...
A
...or this week. There's a bit of a gap right now in terms of getting logs out of things like Lambdas or functions, ephemeral-type services, where, you know, Loki has a hard requirement on streams having ordered data, and it also has a hard requirement on not making infinite streams, and so this clashes a bit with things like Lambdas that may have only a UID or something; it depends a lot on how you're configuring and running them.
A
So we're working on a solution to that. It's a couple of pieces, and this will probably be an evolution over time, but for now, we already previously approved the design doc for having Promtail implement the same push API that Loki does. There are a few sort of advantages to this.
A
You know, you could basically stick a Promtail in between other Promtails to do a little bit of federation, or to work around different network architectures and things, but it would work out here as well, where a Lambda or many Lambdas can push to Promtail, and then Promtail can assign the timestamps so that the series doesn't end up being out of order.
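As a rough sketch of that idea, not the actual Promtail push API surface; the path, payload shape, port, and forwarding behavior below are assumptions: a small receiver that accepts pushed log lines and stamps them with its own clock before forwarding, so many short-lived senders can share one ordered stream.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

// pushPayload is a hypothetical request body: raw lines plus labels.
// The real push API uses Loki's own format; this is only illustrative.
type pushPayload struct {
	Labels map[string]string `json:"labels"`
	Lines  []string          `json:"lines"`
}

func handlePush(w http.ResponseWriter, r *http.Request) {
	var p pushPayload
	if err := json.NewDecoder(r.Body).Decode(&p); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// Assign the receive time instead of trusting sender clocks, so entries
	// from many ephemeral senders stay in order within one stream.
	now := time.Now()
	for _, line := range p.Lines {
		fmt.Printf("%s %v %s\n", now.Format(time.RFC3339Nano), p.Labels, line)
		// A real relay would batch these and push them on to Loki here.
	}
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/push", handlePush)
	log.Fatal(http.ListenAndServe(":3500", nil))
}
```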
A
That label ends up not being terribly useful, right? You don't really need to know which Promtail is the one that received your Lambda logs and sent them, but it does work around the out-of-order problem. So that is in progress; we'll give some more details, I'm sure we'll write a blog post about that, but if anybody out there is struggling with this problem, we're hoping to have a little bit of help with that. Anything else, Owen, that I missed on that?

A
Sorry, my dogs are rather rambunctious today. What are you guys doing? It's hot out. Yeah, we've got about six minutes or so left. Anything else? Anybody? Questions, comments, feedback?
C
I actually finished the PR, but I need to fix all the tests, because it actually splits the code base in two: now we have a sample iterator, not just an entry iterator, and so it's taking a bit more time than I expected. But I think I should be able to test that today in a cluster, and then the PR should be ready for review tomorrow.
A
Yeah, I mean, this helps save us from... like, my general understanding is we don't have to ship the log line all around for metrics queries now. So, you know, once you've read the line and done whatever filtering you need, it just turns into a sample at that point, and then we can handle it more effectively as a metric through the rest of the system, where previously we were still sort of moving it around and treating it as a log line, which isn't really necessary and is kind of wasteful.
C
Yeah, so the LogQL engine before was just requesting an entry iterator and then using that to do everything it was required to do. Now it can request both an entry iterator and a sample iterator, and the nice benefit of that is that when we process chunks, we don't need to do the allocation of the log line; we can just read the buffer directly and transform it into a sample. Allocations for metric queries are currently the bottleneck, so that should reduce the allocations by a big factor.
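A minimal sketch of the split being described; the interface and type names are guesses for illustration, not Loki's actual iterator package. The point is that a metric query can ask for numeric samples directly instead of materializing each log line as a string first.

```go
package main

import "fmt"

// Entry is a timestamped log line; Sample is a timestamped numeric value.
type Entry struct {
	TimestampNs int64
	Line        string // allocating this string is the cost metric queries want to avoid
}

type Sample struct {
	TimestampNs int64
	Value       float64
}

// EntryIterator streams full log lines, e.g. for {app="api"} |= "error".
type EntryIterator interface {
	Next() bool
	At() Entry
}

// SampleIterator streams only samples, e.g. for rate({app="api"}[1m]):
// the chunk buffer can be read and counted without a string per line.
type SampleIterator interface {
	Next() bool
	At() Sample
}

// countSamples shows the metric path: no log line ever leaves the iterator.
func countSamples(it SampleIterator) float64 {
	var total float64
	for it.Next() {
		total += it.At().Value
	}
	return total
}

// sliceSampleIterator is a trivial in-memory implementation for the demo.
type sliceSampleIterator struct {
	samples []Sample
	i       int
}

func (s *sliceSampleIterator) Next() bool { s.i++; return s.i <= len(s.samples) }
func (s *sliceSampleIterator) At() Sample { return s.samples[s.i-1] }

func main() {
	it := &sliceSampleIterator{samples: []Sample{{1, 1}, {2, 1}, {3, 1}}}
	fmt.Println("count:", countSamples(it)) // count: 3
}
```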
C
Yeah, that goes along also with Owen's work on the ruler.

C
So this is also going to improve the instant query evaluation, which is used by the ruler.
A
Yeah, I'm excited. I think we're definitely moving in the right direction for pretty reasonable performance on metric queries and LogQL v2-type extractions and ruling, and sort of having a much more capable and performant solution for metrics from logs. I mean, we don't...

A
Yeah, thanks everybody. Thanks to the people outside of Grafana Labs for showing up; we're hoping to get more and more outreach on this. Awesome, thanks everybody.