From YouTube: Loki Community Meeting 2021-02-04
C
Not very many. I didn't put it on here, but what's going to be a better time for this? We want to try to do the same day and then move it later.
B
You're currently on a Friday, and historically that's a bad choice of day, but just looking at our calendar, we would have Thursday afternoon free, like 17:00 UTC, which is a similar time to all the other calls. Yeah, and…
F
Hey guys, just to let you know, Ivana and I jumped into another room that was linked in the calendar, and there were a bunch of people trying to access that one as well.
A
Okay, that's good! I don't know how to get to that one. Can you join that one and give them the link here?
D
It's not changed, or maybe that went wrong.
B
But where did you find that link? Because I just checked both my own entry and the official one on the community calendar, and they're all pointing to the call you're currently on.
G
So what I did some time ago is copy the event to my calendar, and then I clicked on the event in that calendar. So if this one got updated, mine didn't.
B
Okay, perfect, thank you. Maybe give me the other link, because then I can sit in the other call and direct people if they join the old one. And now I'm going to shut up. If you can just… perfect, thank you. So I'm going to drop for a bit.
C
Thanks, Richie. I started adding people to the attendees; feel free to remove yourself if you don't want to be there. For everybody that joined late: we do record these calls, and if you don't want to be recorded you can drop and watch the recording, or, you know, hide your camera or join under a pseudonym. I think that's usually the intro that Richie gives. All right, I have a rough agenda, but I encourage anyone to add stuff that they want to talk about and sort the ordering around a little bit.
C
First was to talk about some design docs that are up and, you know, see if anybody has any questions or discussion. All of them should be available for comment, at least, so that anybody can add feedback, opinions, etc., until that gets out of hand or we don't want any more opinions, maybe, but usually that's not the case. So for the first one, I'll let Owen talk about this, because I know you've done all the work on this doc, Owen. But we've mentioned this before: removing the constraint on ordering.
A
This comes from people ingesting logs via AWS Lambdas or that sort of thing, where they don't want to create a unique per-invocation label, because that runs afoul of some of our best practices. So the idea here is to allow ingestion of samples, of lines, that are not strictly in order. Conveniently, most of our system already functions by kind of reading data out of your object storage that may be overlapping and that sort of thing, so we really only have to solve it in our ingester components.
A
There's a doc out there now; please give it a look and give some feedback. The only kind of issue here is that, due to some internal prioritizations, we probably won't start working on this for a few months.
A
It's probably one of our most requested features, though, historically, at least over the past year and a half. It should enable people using Fluentd, Fluent Bit, that sort of thing: clients that don't take our ordering constraint into account. Basically, under load those systems can kind of batch requests to Loki independently of other batches, and then you can end up in a frustrating situation where Loki won't actually accept some of your logs. All right, you may continue here.
C
No, thanks. I'll just mention, though, that the number one priority for Loki, what we will be working on this quarter, is in the next section there: custom retention per stream and the ability to delete logs. These basically go hand in hand, because offering longer retentions usually means compliance requirements on the ability to remove things, so those will most likely be developed in parallel, and that's probably going to be our big Q1 goal. Internally, Loki's team is not all that big.
C
We did just recently add Kavi a couple months ago, and Danny, who's on the call, is now internally at Grafana on the Loki squad, and that brings us to something like five or six. But, as Owen mentioned, he's getting loaned out to another team, so we'll probably wait for him to come back to do the out-of-order work. Good news, though: we're still hiring a bunch.
C
So we should be able to increase our bandwidth a little bit. I'll hand this over to you, Danny, because these next two were both submitted by you, in terms of extending the language: both JSON and adding a distinct filter.
F
Thanks, Ed. Yeah, so for this first one there's a PR up for it. This is going to enable expressions in the JSON pipeline. So instead of taking a log line that's in JSON and extracting all the labels, you will now be able to extract the particular labels that you want, based on an expression, and you'll be able to access elements inside of an array. I will share my screen just to show you what the syntax would look like.
F
So you'll be able to do something like this. With this being the log line, there's a field in here called response, and then inside of that there's a field called status, which will be 200.
F
You can now do something like this, with a similar syntax to our label_format: you'll have the target label name, equals, and then the expression. If you're interested in all the different use cases, there's a bunch of tests in here that show all the different scenarios this works in, but basically you can use this to access arrays, fields with UTF-8 names, etc.
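A minimal sketch of the syntax being shown on screen, since the screen share isn't captured here; the stream selector and field names are invented for illustration:

    # Given a log line like: {"response": {"status": 200}, "servers": ["10.0.0.1", "10.0.0.2"]}
    # extract only the labels you ask for, instead of flattening the whole document:
    {job="myapp"} | json status="response.status", first_server="servers[0]"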
F
Sure, yeah. So when you need to access elements inside of an array in JSON, that's currently not possible using the json pipeline. This will enable that. Also, if you've got fields that are irregularly named and you don't want those to all be converted to underscores, so this…
F
This tries to introduce a JMESPath-like syntax, but we haven't implemented all of JMESPath, just because it's probably too complicated for what most users need. This also means that when you parse a big JSON document like this, you won't get a whole bunch of extra labels added at runtime; you would just get the one that you care about.
A
Cool, and that last part actually helps work around some recent limits that we added into Loki, which are configurable per tenant but will throw errors if you try to return too many series. So this is one way to kind of work around that: reduce the label set that you're actively working with.
C
Yeah, I'll extend just a little on the thought process for not just including JMESPath or jq syntax completely. Primarily, that's because we're concerned with two problems. One is sort of embedding a language within a language: the capabilities of JMESPath include, you know, addition, subtraction, multiple-path parsing; there's a lot you can do, and we're not sure yet if that's the way we would solve that problem. And the bigger concern there is that it's going to be hard to make that fast.
C
What we're wading into is basically what we know we can do without affecting performance, solving a problem that we know we have, and then we'll keep revisiting this as time goes on to see what other sorts of syntax we might want to include inside the parser language. Maybe it's a good time to mention that Cyril's done a lot of work on the JSON extraction lately, so it's gotten a lot smarter now in terms of only extracting elements that are included in a sum by operation.
F
Yeah, thanks, that's a great point. And yeah, I really tried with this to keep the performance at the same level. There are some benchmarks in the PR if you're interested to have a look, but it's roughly on par. It's kind of interesting that just extracting one label is slower than extracting all of them, but there are a few interesting implementation details there. But yeah, that's it. Cool, and then moving on to the next one.
F
So the use case for this has slightly changed, and it's going to be renamed a little bit, but effectively what this PR is going to introduce is a way to reduce a bunch of log streams down by filtering on particular labels.
F
So if you have a bunch of log lines that are all pretty much the same except for, you know, one or two fields inside of them, and those are labels, then what you can do is deduplicate, although we're changing that terminology now to "distinct", and that will reduce the label set down. It's probably best demonstrated with the documentation that I wrote here.
F
So if you have a bunch of log lines like this, where, imagine, there was some content that was the same for each of them but the only thing that differed was these organization IDs, and actually what you wanted is just one line for each of those organization IDs: you would take that stream, run it through logfmt so you get access to this org ID as a label, and then run distinct on that org ID.
F
At the same time, the log lines will be identical except for, say, the cluster name, and so if we want to identify one change set, then we will get a distinct list of the change sets by effectively filtering out that cluster ID, and then you get a unique, a distinct list.
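A sketch of the proposed usage as described here; this was still an open PR at the time, so the final form may differ, and the selector and label name are invented:

    # Lines are identical except for org_id; keep one line per distinct org_id value:
    {job="myapp"} | logfmt | distinct org_id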
C
Yeah, there are a couple of discussions in there, I think, that we added at the bottom when we were talking about this. One of the things that's not…
C
We've talked about whether we should, you know, continue that line and include something like count, or, I actually don't know what to name it, that's probably the hard part here, or whether this should be kept outside of the result, as something that would be extended in the API to return additional metadata.
C
So I'm not sure yet. I think I like the, you know, double underscore, which I think Owen and I were still trying to sort out a little bit.
C
I mention this only because, as Owen said earlier, the out-of-order stuff might not happen in the immediate future. We really want it, and we'll try our best.
C
It is probably, you know, the most requested or most common problem, the out-of-order stuff. The workarounds that we have now are: put promtail in the middle and re-timestamp the messages. This is not ideal, so we'd like to see that through for sure.
H
The out-of-order stuff is something we could jump in on with some experiments or stuff like that. We're really having bad issues with that: because of internal restrictions we cannot have too many parallel streams, because we would overload the internal object store. Our resources are fine, but we are also limited in time. If we have some time we might jump in with some experiments and just try to get something running. We've got plenty of CPU which we can spare, but yeah, we're limited in streams.
A
Historically, we kind of went down this path with something called lambda-promtail, which we did for a trial. I'll put a link in the call under the out-of-order item, so I'd say take a look at that and see if something like that could fit your use case.
A
If you're interested in making changes to Loki, I'd say probably take a look at the out-of-order design doc to familiarize yourself with some of the internal concepts and kind of what you need to account for.
A
But the former is kind of: how can you work around the ordering constraint, right?
C
Yeah, the lambda-promtail stuff: it works, it's just a little ugly to rewrite the timestamp, right? But promtail can expose the same push API that Loki does, and you could basically send many streams to it and use the pipeline tools to remove labels and then rewrite the timestamp. So you can combine many streams into one and force ordering that way. It just means that for the logs that end up in Loki, you know, the timestamp will differ by…
C
…however much latency there was in that pipeline, or, you know, based on when they're ingested. So oftentimes it's close enough, I suppose, but it's not a great solution. So, what we are doing…
H
Yeah, yeah, we already talked about it. I'm totally sure this is not as easy as it looks, but I think it's just a little bit of a problem.
A
Yeah, I'd say definitely take a look at the out-of-order design doc then, because I tried to explain and address some of the nuances there. And yeah, I remember we talked about it a couple weeks ago.
C
The only other thing that I have on the list here is just to mention a 2.1 release. We're just sort of waiting to be confident that the write-ahead log work we've done has stabilized. It's been, let's say, at least one internal release cycle since we've made any changes to it, and we're in the process of promoting it to all of our environments.
C
To do that, two… sorry, 2.2. The part that sucks about waiting is that we've made a ton of improvements to query performance, specifically around… like, we've noticed people write some sort of non-optimal queries where they will have a line_format and then wrap that in a metric query, and that line_format operation doesn't do anything once you turn it into a metric, but we were consuming CPU to do the rewrite of the log line.
C
So there are optimizations now that Cyril found for not doing those kinds of things. Additionally, there are hints on queries for, like, the json operator, where we only need to parse out JSON keys and values that are actually used in the query somewhere. So if you have any kind of grouping operation on the query, you really only need the keys that are included in the group, so that also brought a bunch of performance improvements. I want to figure out the right way to do this.
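A hedged illustration of the query shape being described; the selector and label names are invented:

    # The line_format stage below is dead work: its output is discarded as soon
    # as the query is turned into a metric, so newer versions can skip the rewrite.
    # With the grouping hint, the json parser only needs to extract the
    # "status" key, not every key in the document.
    sum by (status) (
      rate({job="myapp"} | json | line_format "{{.msg}}" [5m])
    )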
C
I mentioned that we do internal releases, so there are images that we push that are prefixed with the lowercase letter k. The catch is that, you know, as we debug this stuff we find problems, so I need a way to communicate to people what would be a stable version of that. But it would be a nice way to provide something to people that want it more quickly than what our, you know, published release cadence is. We don't build binaries from those, but we do build Docker images. I'm not sure, I mean…
C
Maybe we just need to document it, you know, in the docs, and just regularly publish: you know, after we've pushed something to prod and not had trouble with it for a week, or once we're happy with it, it can be made available. Not sure if that's something people are interested in either, but…
B
Yeah, maybe we can get the feeling of the room. What if we just push literally those tags, like we just make those tags public, and we provide documentation about what those mean? It's a choose-your-own-adventure kind of thing, or choose your own experiment.
A
Yeah, are our Docker images fine, or should we be building…?
A
Fine. So it sounds like this won't be very difficult for us to do; all the images are already out there. Kind of a perk of working in open source: we don't have to keep those private, except…
C
Yeah, that's the catch, right. Previously I would have said that you're fine if you always stay one k-release behind; so, I think we just did k41, so it would be safe to use k40. But we have scrapped releases in the past for a number of reasons, so the tag existed but we ended up never promoting it, and that's where it's… I don't have a hard-and-fast set of rules that would say this is when it's okay to use one of those.
C
Yeah, it's definitely, you know, more stable than master, which is generally stable; our master is continuously deployed, and, you know, it's usually the case that we flush out query bugs or performance bugs there, but in general it should build and run. But yeah, Cyril breaks master. So, all right, I won't think about that, but this sounds… I just feel it's useful; I know there are a number of people that are… It's just, I don't know, I'm hesitant to sort of increase the cadence of releases.
E
Yeah, it's useful. We can help with that: I can set up a pipeline on our side to help test it, run our CI with that tag, and hopefully report whatever bugs we can find. We've been using master from time to time, when a bug fix is released, so if we can help there, we will gladly do it.
B
Maybe the question then is: instead of hot-fixing any k-release, should we just increase the counter? Like, if k41 has a problem, just push out k42 and be done with it. Or is there more attached to that number? I honestly don't know.
A
I'm not sure if there's a right answer around that. I find the number helpful personally, because it kind of links to, you know, which week I'm on, right, so I can deduce a sense of time from that. When I say we hot-fix them: you would generate a new build, so you still have both versions of, you know, k41, for example; you'll just see that one kind of came after the other, we'll have another commit on top, that sort of thing. I think what we could do, maybe, is… because usually we catch those bugs before pushing to prod, like we catch those bugs in our own environment when we try it out, and that's when we recut and create a new tag. I think what we could do is, when we are confident enough, maybe in the next week, we could re-tag the k41 SHA to k41, right; we could promote one of them to be the k41, and the SHA will never be used by anyone outside of the company. So it's just one week behind, but I think it's good enough. Good idea, Cyril.
B
So, and again, feel free to pipe up, but my gut is that the more weight we add on our end, the less… Like, my understanding, and I jumped into this halfway through, I tried to sit in the other call to see if anyone would still join, is that our intention is to just have a lightweight way for other people to also use what we use and give feedback earlier.
B
Why not make it the simplest thing we can possibly make it, and not have blessed versions or any more process on our end? Just do the absolute bare minimum which signals what we are doing, and let automation and numbering schemes and such do the rest, because you're basically loading work onto yourselves as of right now, and I wouldn't do that. Yeah.
I
So, what happened this week, for instance, on k41, is I did a lot of performance optimization, and I had to recut k41, I think, five or six times, because it introduced a lot of regressions in terms of features and broke the API multiple times.
C
Yeah, it's not much operational burden for us to do what Cyril said, which is just make another Docker tag that links, say, k41 back to whatever hashed version we were happy with. That's a single command we can run; we could embed it in the tool that we use for automating the release work anyway. So I think that's a pretty reasonable step. I like the idea, then, because that does give a clear signal to people that, you know, this is a…
C
I don't remember if it was, I think, because of timing, but we have had tags in the past that were broken, and it's hard to communicate that; like, curating a doc doesn't sound like the right way, because who's going to read it, you know?
B
I hate to cut this short, I mean, we can also time-box this discussion, but the way I understood what was said is that it doesn't matter: he doesn't expect any stability guarantees or anything, at least not any hard ones, and that is what releases are there for. So you're kind of taking the usefulness of this current system away if you put too much effort and validation on our end.
I
No, the problem, the problem is that we actually push images that have bugs, and we know that, so I don't want anyone to use them, because those people who would technically give me feedback, or create an issue, would waste time. I don't want to waste anyone's time, right? So I don't want anyone to use such a tag.
I
There's a possibility that something slips into production for us; it's just that, you know, during the day we could catch, like, five different bugs and recut the SHAs five times, and so, I think, you know, there are like five SHAs that no one wants to use, because we know, you know, we know they are faulty.
A
I think that, right now, if you want to test out our internal releases, they are, you know, the kxx branches in GitHub, and that's where we ultimately build our releases out of. Every now and then we'll choose a particularly successful one that has good timing for kind of promoting. Beyond that, we'll see. I find that we don't… actually, I think Cyril cuts those more often than I do; I do it every now and then, but it would seem, probably, you know, one or two standard deviations out.
A
Okay, so one of the big pieces of our next release will be the write-ahead log. Notably, if you're running Kubernetes, which is kind of the environment that we target (Loki isn't built to only run there, but a lot of our examples do), this includes migrating from Deployments to StatefulSets, which kind of tacks on a lot more operational burden, unfortunately. But the advantage there is that we get a write-ahead…
A
…log. So it'll give us some additional persistence guarantees, which complement the replication factor that we already have in Loki. Some distinctions that we have, I guess, like philosophical distinctions from some other write-ahead logs, are that we really wanted to make it as easy as possible to operate Loki.
A
So, in the unlikely event that you do run into file corruptions, we'll just continue along and you won't actually need any operator intervention. The other one is that we added a backpressure capability into the Loki ingester, so that if you have a particularly large write-ahead log, you shouldn't be bounded by the available memory when it loads.
A
We realized that that's not a particularly attractive resolution path for many people; it certainly wasn't for us. But I think our next release, the 2.2 release, should have that, and, as Ed said earlier, we're in the process of rolling it out to all of our environments right now. Did I miss anything there, Ed?
C
No, sounds good to me. We're kind of trying to optimize the write-ahead log to be as sane as possible for operations; we're always trying to avoid cascading failures, or situations where you have maybe multiple things replaying WALs, and, you know, we don't want out-of-memory crashes and things. So I think we've reached a pretty good compromise there, as much as we've been beating it up.
C
Good work, Owen and team. Well, that's what I got on the agenda, besides going back to the internal releases topic, so I don't know if anybody else has anything they want to talk about. I don't want to single you out, Ivana, but I heard a little rumor that Loki might be getting pagination in the Grafana API.
G
Then you have better info than I have. What we are planning for the next release, or next releases, is to add an alerting back-end for Loki, so you could do alerting on it, but I haven't heard about pagination, so I will have to check.
G
Wait, I know what you mean now. We are investing time to figure out what would be the best way to request more logs, so if you hit your limit, how to do that. I think we are not set in stone on the pagination, but the UX team is working on the best way to do it. So yeah, you are right, we are working on that.
C
That story is funny to me, because that limit has existed for all of time in Loki, right: you get a thousand logs, or two thousand, whatever you set the limit to. We've been using it every day for a couple of years now, and I've just relearned, I guess, how I do querying: I approach querying by just removing log lines that I don't care about, and there's some sort of question for me as to whether I would even use a paging feature now if it existed.
C
Sometimes, if I'm doing, you know, a look at… I don't know, like, we recently added a log line for the push size, for push requests into Loki, and I was looking to kind of page through a bunch of those just to see what they were. But it does also fall into that category of probably one of the things people give us the most feedback on, which is: how do I go to the next page, right?
C
So I think we have to have it, but I would also encourage everyone to just get really good at adding filter expressions to remove lines that they don't need; you can work around this pretty well that way too. And the alerting one, I'd forgotten about that. That's so…
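For example, a sketch of that approach (the selector and match strings are invented), using line filters to narrow the result set instead of paging:

    # Narrow the result set with line filters rather than paging through it:
    {job="myapp"} |= "push request" != "debug" |~ "status=5.."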
G
You can currently hack it by having Loki as a Prometheus data source, but Loki definitely deserves a back-end and a proper way to set up alerts in Grafana.
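A minimal sketch of the kind of metric query such an alert would evaluate; the selector, filter, and threshold are all invented for illustration:

    # Alert when the error rate over the last 5 minutes exceeds a threshold:
    sum(rate({job="myapp"} |= "error" [5m])) > 10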
C
That's all I have. I'll open the floor to anybody, even if it's not Loki related.
C
I mean, related to this: one thing we don't do a very good job of is the dashboards that we use to operate Loki. They exist, they're in JSON in the GitHub repo, but they're too heavily opinionated for our infra; for example, the gateway they reference isn't something that we open source, mostly because it's just handling our internal auth. It's not very complicated, but it does add latency to those dashboards. I know Reinhardt actually adapted that, as well as the canaries dashboard, inside of his demo, which is nice, but…
C
Oh, you just did? All right, check this out. I would say one thing to be aware of is that running Loki in a distributed fashion on your local computer is probably going to perform worse than the single binary would. So, if you don't have an environment to run this on multiple computers: it will work, right, I've set it up and run it myself and it's fine, but for the local scenario the single binary is going to work better in most cases.
C
Yeah, data retention policies based on labels; someone else came up with, well, labels and time slices, which is an interesting one, but we will be adding better retention policies.
A
Which will probably come after our, yeah, stream deletion and retention work.
C
All right, I feel like I'm just kind of talking now, trying to give you the hint. All right, I'm going to end this, then, unless anybody has any last comments or thoughts. Thanks for joining, everybody; love seeing people from the community here, tell your friends. Also, a month from now it's going to be much later in the day, so that it's more accommodating to a few more time zones.
C
So if you did the thing where you had a calendar invite: I don't know if Richie already updated this to be correct, but it looks like it.
C
Thank you, Richie. So we'll see everybody in a month. Thanks for coming.