From YouTube: Loki Community Call 2020-05-07
B: Right. So, apologies, I don't have a well-prepared set of things to talk about, but the first thing that comes to mind is Cyril. We want to talk about your latest work.
A: Yeah, I'll just show it publicly, so I'm gonna pass the link, yeah.
A: Yeah, so I wrote a new design doc for LogQL v2. We call it v2 now, and I'm guessing there will be more than a v2; we could probably have a v3 and a v4. The v2 only focuses on extracting labels and sample values from the log line. If you want to see where we want to go in the future, there's another design doc that has a way broader scope, but that one isn't approved yet; we haven't got consensus on the whole doc.
A: For this smaller doc, however, we should have consensus by either the end of this week or next week. So if you have anything to say, feel free to look at this new doc. It should be easier to review because it's really focused on a specific use case. And if you have any questions or concerns, anything, you know, leave it on the doc and I will reply.
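[Editor's note: for readers following along, a hypothetical sketch of the kind of query the LogQL v2 doc is about, extracting labels and a sample value from the log line at query time. The parser and unwrap syntax here reflect the design direction, not a feature that had shipped at the time of this call:]

```logql
# Parse each line with logfmt, promote extracted fields to labels,
# then compute a per-status rate over the extracted labels.
sum by (status) (
  rate({app="nginx"} | logfmt | status >= 500 [5m])
)

# Extract a numeric field and aggregate over its value.
quantile_over_time(0.99,
  {app="nginx"} | logfmt | unwrap duration_ms [1m]
)
```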
B: Yep. The other sort of exciting new feature, which we'll be talking about in that demo as well and which we're running a little bit more now, is the boltdb shipper. That's probably the name we'll stick with. It replaces a separate index store like Cassandra or Bigtable by using BoltDB files and then shipping them to the object store.
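[Editor's note: a minimal sketch of what a boltdb-shipper schema and storage config looked like around this time; the start date, directories, and object store are placeholders:]

```yaml
schema_config:
  configs:
    - from: 2020-05-15          # placeholder start date
      store: boltdb-shipper     # BoltDB index files shipped to object storage
      object_store: filesystem  # could be gcs, s3, etc.
      schema: v11
      index:
        prefix: index_
        period: 24h             # boltdb-shipper expects 24h index periods

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/index
    cache_location: /loki/index_cache
    shared_store: filesystem
  filesystem:
    directory: /loki/chunks
```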
B: I mean, so far so good, but we're gonna have to get some volume on it to really see. What I think is going to happen is that we'll run it in parallel in our ops environment for a week or two, and then the intent is to replace our current ops environment with the boltdb shipper. The big to-dos there will probably be around deletes.
B: I think we need to, maybe... I know, Sandeep, you have a doc up for this somewhere, on some of the challenges with synchronizing deletes when you have a non-deterministic index.
C: Yeah, so the plan is to do those changes when we introduce deletes in Loki, so right now we are not going to change anything related to that.
A: That would be interesting: do we have some sort of documentation on how to get started with the boltdb shipper?
B: There is a sample config in the docs, and some basic background stuff. I was able to sort of figure it out. I mean, the docs in general could use a rework.
B: Let's put that on the list here too. But yeah, I'm probably not the right person to say if they're good or bad, because, you know, I'm a little too close to the project to judge. But I have it set up now: I have a Raspberry Pi that's writing with the boltdb shipper, and then I can rsync the chunks directory to another computer and, with a different Loki, read the same logs, which is kind of cool.
B: It is in how the store works for tracking... let me think about how to say this right. Basically, you can't schema-change your way through multiple boltdb shipper configs. Like, if you decided you're using an object store like GCS, currently you could change the schema to say: I want the new object store to be...
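[Editor's note: the "schema change" being referred to is the usual pattern of appending a new period to schema_config, as in this sketch with placeholder dates. The point being made is that you cannot do this across multiple boltdb-shipper configs pointing at different object stores:]

```yaml
schema_config:
  configs:
    - from: 2020-01-01      # old index store
      store: bigtable
      object_store: gcs
      schema: v10
      index:
        prefix: index_
        period: 168h
    - from: 2020-05-15      # everything from this date on uses the new store
      store: boltdb-shipper
      object_store: gcs
      schema: v11
      index:
        prefix: index_
        period: 24h
```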
B: Anyway, that's real bad. So we wonder if we should... so, I want to do a 1.5 release, maybe next week. We've got a fair amount of stuff. The boltdb shipper will still be experimental, but I think it's probably close enough that we can do that and have people start using it as a beta feature to give us feedback. So it's not unreasonable, and it gave us something a little newer to talk about in our presentation, too.
E: Like an empty regex? Yep, it's an empty contains, or anything that selects empty. Dot star, or just an empty contains. Oh yeah, okay.
B: Okay, yeah. So, 1.5.
G: I don't think I have a public design doc out yet, but that should be coming very soon, regarding alerting based on logs. The tentative plan right now is to use the ruler component from upstream Cortex where we can, and extend it a little bit to handle LogQL and logs. That's something that we've seen, I'm not sure if contention is the right word, but a couple of different opinions about. I should have that out soon.
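[Editor's note: since the design doc wasn't public yet, this is only an illustrative sketch of what a Cortex-style ruler rule with a LogQL expression could look like, in the familiar Prometheus rule-group format:]

```yaml
groups:
  - name: example-log-alerts
    rules:
      - alert: HighLogErrorRate
        # A LogQL metric query in place of PromQL
        expr: sum(rate({app="foo"} |= "error" [5m])) > 10
        for: 10m
        labels:
          severity: warning
```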
G: Then I guess the only other thing from my end is that I've been continuing to work on a bunch of sharding optimizations, in my free time, for a few months now. There's a PR up for that. It has a little bit of work left, but I think we're pretty close. I don't know, Cyril, what's your opinion?
A: Yeah, I'm waiting for this new sharding. This already exists in Cortex, and Owen is bringing that to Loki in a different way now. I'm waiting for these changes because I'm expecting actually a very big improvement in terms of performance from this one. So far we are only able to split queries by time, which can be tricky with Loki, and being able to split by shard will help us parallelize way better.
G: Yeah, so as Cyril said, right now we split by time in the query frontend component, which is like one dimension of parallelization we can get. When we say sharding, it actually refers to the Cortex schema, where we allocate series, or log streams in this case, to a number of shards. So it allows us to remap queries into more parallelizable forms, which can then run sub-aggregations and sub-queries on subsets of the log streams.
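[Editor's note: conceptually, the remapping looks something like the following. This is illustrative pseudo-notation; the shard selection is handled internally by the query engine, not written by users:]

```
sum(rate({app="foo"} |= "error" [1m]))

is rewritten by the frontend into

sum(
    sum(rate( shard 0 of 16: {app="foo"} |= "error" [1m] ))
  + sum(rate( shard 1 of 16: {app="foo"} |= "error" [1m] ))
  + ...   # one sub-query per shard, executed in parallel
)
```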
G: So it'll give us another dimension to parallelize by. This is particularly attractive for Loki, because one of the bottlenecks now is that we're running up against a problem parallelizing by time: you can't split everything by, you know, five seconds, ten seconds, etcetera, because you'll end up pulling the same chunks on different queries. So there's a lot of work duplication there. This will allow us to increase the times that we split by on that dimension, but then parallelize by another dimension as well.
A: Yeah. So in one of our clusters, I think we are doing a 15-minute time split, so if you do an hour query you'll get four queries of 15 minutes. For some other clusters we have, like, five minutes. And we realized, you know, it's good, but if we go lower then we're gonna run into the trouble that Owen explained. That's why sharding will be really helpful here; it allows us to split by 16 without having a very small time range.
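[Editor's note: the time split described here is simple to picture in code. A minimal Go sketch, a hypothetical helper rather than Loki's actual implementation:]

```go
package main

import (
	"fmt"
	"time"
)

// splitByInterval breaks [start, end) into sub-ranges of at most
// `interval` each, the way the query frontend splits a long query.
func splitByInterval(start, end time.Time, interval time.Duration) [][2]time.Time {
	var ranges [][2]time.Time
	for s := start; s.Before(end); s = s.Add(interval) {
		e := s.Add(interval)
		if e.After(end) {
			e = end
		}
		ranges = append(ranges, [2]time.Time{s, e})
	}
	return ranges
}

func main() {
	end := time.Now()
	start := end.Add(-1 * time.Hour)
	// A one-hour query with a 15-minute split yields four sub-queries.
	for _, r := range splitByInterval(start, end, 15*time.Minute) {
		fmt.Println(r[0].Format(time.RFC3339), "->", r[1].Format(time.RFC3339))
	}
}
```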
A
And
there's
a
question:
is
the
query,
font
and
using
nokia
the
same
package
in
context
yeah
it
is,
it
is
using
the
same
package,
it's
actually
using
the
same
content
package
and
walker
package,
the
only
difference
and
also
the
middleware,
but
the
only
difference
is
for
some
of
the
path
we
have
a
different
code
base,
but
otherwise
you
know
it's
very
like
very
much
the
same.
A: So I've actually been working for a while on this, and I tried really hard to use the Cortex results cache, until recently I realized that it was the problem. So I'm actually starting a new middleware for that, specifically for Loki. The caching part will be very specific to Loki, and I'm not going to reuse Cortex. Actually, I'm going to use the same idea of... I don't remember the name of that... extents, I think. Yeah, yeah, exactly.
A
This
is
going
to
be
exactly
the
same
for
the
cache,
but
it's
going
to
be
different
because
in
loki
you
can't
really
cache
a
wall
time
range
or
wall
split.
If
you
have
hit
the
limit,
because
it
means
that
you
have
a
partial
result
right,
so
I'm
working
on.
B: Thanks. Do you want to talk about your protobuf optimizations, Cyril?
A: Yeah, I mean, it's in review. I think it's good to go; I like it, actually. So far I've been testing this only in dev, and it shows better CPU usage but an increase in memory. However, this increase in memory is not coming from allocations, because we do allocate way less. When we look at the garbage collection, it seems the garbage collector is somehow collecting less often than before, but we do have higher memory.
A
So
I
still
need
to
put
pressure
on
loki
memory
to
see
that
to
see
if
this
will
be
okay,
because
I
was
able
to
reach
something
like
in
a
cluster.
I
was
reaching
like
17
gigabyte
of
memory
used
and
at
some
point
it
was
plateauing
plateau.
I
was
doing
a
plateau,
but
I
still
need
to
look
at.
If,
if
I
have
less
memory,
will
it
be?
Will
it
garbage
collect
everything
in
term
of
improvement?
A
This
improvement,
basically
it's
using
custom,
protobuf
type,
and
it's
the
same
idea
that
so
there's
like
two
optimization
when
I
use
this
custom
type,
the
first
one
is
the
same:
that
cortex
is
doing
the
euro
string.
A: Yeah, the unsafe string. So basically there's a buffer: when we unmarshal an object or a stream, we pass it a buffer, and this buffer will be thrown away once the unmarshalling is done. The generated code will always make a copy of the string instead of doing, you know, the hacky way of creating a string from the pointer reference. So this is one of the optimizations, and it does create fewer allocations. The similar optimization is that we use Timestamp in the Loki stream and in the entries, and Timestamp in protobuf has a bug in terms of allocations.
A: Every time you ask for its size, it will create an allocation. So this was causing a lot of memory traffic, and now all of this is being improved. The only real side effect I've seen so far is the increase in memory, and I wasn't really expecting that.
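[Editor's note: the "hacky way of creating a string from the pointer reference" is the zero-copy conversion Cortex calls yoloString. A sketch; it is only safe if the byte buffer is never reused or mutated while the string is still alive, which is exactly why the buffer lifetime mentioned above matters:]

```go
package main

import (
	"fmt"
	"unsafe"
)

// yoloString reinterprets a byte slice as a string without copying.
// Generated protobuf code would allocate and copy instead; this
// skips the allocation at the cost of aliasing the buffer.
func yoloString(b []byte) string {
	return *(*string)(unsafe.Pointer(&b))
}

func main() {
	buf := []byte("log line")
	s := yoloString(buf) // no copy, no allocation
	fmt.Println(s)
}
```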
A: So I still need to test this under pressure, basically. I want to put a limit on Loki, because my limit was way too high to verify that there were no memory issues. I want to see if the garbage collection will trigger often, but it seems that this change has made the garbage collection trigger way less often.
B: We covered what's kind of new. The only other couple of things I can think of that are new in the release: I refactored how the limits are being applied. We've been tightening limits in our environments. Our defaults in some cases, I think, are way too generous, specifically active streams; I think the default was like 20,000.
B: This is a lot for Loki, and specifically it's a lot when people have very low volume. This is kind of the new hard area that we're seeing. It's actually easier to deal with people that have very high volume, but, you know, we just recently started doing free trials, and people sign up and they like to go configure labels with dynamic values, like, you know, an IP address or something, and then just create tons and tons of streams that log relatively slowly or not at all, because of that cardinality. And then it ends up flushing tons and tons of tiny chunks, which is sort of bad for the overall health of the system in general.
A: I realized something yesterday, and I was wondering if anyone has feedback about this (I think we already talked about it together): should we remove the instance label? I have a feeling that in all the analysis and usage I've done of Loki, I never used the instance label. I still think it's very interesting to be able to know which instance is logging, but it doesn't seem to be useful as a label.
B: It would, because every time pods restart... I mean, I have used it on occasion, when something particular happened and I knew, you know, when. But if you have a time range, you can accomplish the same thing. Yeah, well...
B: ...trouble. So we're kind of moving, we're definitely moving, more in the direction of guidance for configuring, you know, fewer labels, static labels only. Dynamic labels are definitely a big problem, mostly just because people don't have the volume, right? Like, if you're trying to fill a one-megabyte chunk and it's compressed at five or eight x, you need five to ten megabytes of uncompressed log data.
B: You know, we had default limits of an hour, and most people don't log that quickly.
B: So that will help complete that circle. But yeah, anywhere you can, remove labels that change values frequently; instance is definitely one, yeah.
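[Editor's note: a sketch of the "fewer, static labels" guidance in Promtail scrape-config terms; the label names and path are placeholders:]

```yaml
scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs            # a small, fixed set of static labels
          env: prod
          __path__: /var/log/*.log
    # Avoid labels whose values change frequently (instance, pod IP,
    # request ID, ...), since each new value creates a new stream.
```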
A: The only problem I see with removing this label is that it's still useful to have it at some point, so you can find the information. So I don't know if there is any way to add it as part of the log line instead. Okay, I will still have it; I just don't want to index on it. It doesn't seem to be useful to index on the instance label, but I still need to be able to see which instance.
G: I think it kind of underlies another problem, which is that it's fairly difficult, or it requires a lot of knowledge, to use Loki really effectively, right? These are things that we're figuring out building and running Loki internally, and Loki generally runs, you know, on behalf of others as a multi-tenanted system. So I do definitely think this is an example of something that's helpful to have, and it's something that a lot of people will likely index on, right, which is indicative of introductory patterns.
B: Yeah, so just to close that loop: there's one metric now, loki_discarded_samples_total I think, which covers all of the reasons that something is hitting a limit. Previously it was also shared with the Cortex ingester discarded-samples metric, but that one doesn't exist anymore. So it makes it a little easier in a multi-tenant environment to keep track of who's having trouble.
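[Editor's note: e.g. something like the following against that metric; the exact label names (tenant, reason) are assumptions:]

```promql
# Discard rate per tenant and per limit reason
sum by (tenant, reason) (rate(loki_discarded_samples_total[5m]))
```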
B
You
know
who's
having
trouble,
and
we
should
probably
I
gotta
circle
back
and
probably
reapply
the
defaults
that
we're
using
now
into
the
defaults
into
like
the
case
on
it
and
help
charts,
so
that
people
are
at
least
have
a
fighting
chance.
I
mean
all
of
this
is
kind
of
circumstantial
right.
Like
you
can
you
know
you
can
run
loki
just
fine
with
tons
of
labels.
A: Yeah, I was asking whether he's on the call, if he wants to give a status update.
H: There is no update; I'm just waiting on the feedback, I mean the review, and also on the last measurements.
B: Yep, and I will... this week is a tough week; next week I should have a chance to get to some of your reviews and get you back on track. So apologies there, but thanks. We've got lots more for you if you've got time; I'm queuing up stuff for you to work on. Yeah, I really appreciate it.
A: I actually created two issues last week which I think should be done at some point. One is being able to scrape object storage from Promtail. Let's say I have data in S3 or in GCS or whatever object storage we can think of, and I want to be able to scrape this data and send it into Loki. This seems to be, you know, not too difficult to integrate into Promtail, and I created an issue for that.
A: There seems to be at least one person interested in this, but I definitely see the other use case too. And there's another issue I created, for Promtail: a replace stage. There are a lot of people that have issues with the current pipeline stages. They want to be able to, let's say, remove data by matching, like a password or an IP in a URL; they want to be able to remove it. So I proposed that we add a stage that allows you to replace, instead of just matching.
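[Editor's note: at the time of this call the replace stage was only a proposal; this is a sketch of the shape it could take in a Promtail pipeline, with the regex as a placeholder. The captured group is what gets replaced:]

```yaml
pipeline_stages:
  - replace:
      # Mask the captured secret, turning "password=hunter2"
      # into "password=****" before the line is shipped.
      expression: 'password=(\S+)'
      replace: '****'
```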
B: Yeah, we get that request a lot, for sure; being able to strip sensitive data would be nice. All right, we're out of time, so thanks everybody, and see you in three or four weeks. Every month we do this, I think. Yeah.