From YouTube: Loki Community Meeting 2022-02-03
A: Here we go. All right, hey everyone! Welcome to this week's, what is it, February 3rd, Loki community call, and I should probably share my screen.
A: Okay, let's see what we have today. 2.5: so, our next minor release is planned. I don't think we have an exact date for it, but it should cover a few things.
Some very noticeable improvements in Promtail: Kafka support. I think this is our first external release with Kafka in it. Is that correct? I'm gonna go with yes, I don't know for sure. I've done this before, where I did a presentation with the release and then announced something that was actually released in a prior release, so it would be on brand for me anyway to double up here. So, Kafka support, very exciting. Thanks, Cyril, for getting that in.
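(For reference, here's a minimal sketch of the new Promtail Kafka target; the broker address, topic, and labels are placeholder values, and the full option set lives in the Promtail scrape_configs docs.)

```yaml
# Sketch: Promtail consuming log lines from Kafka. Broker/topic are placeholders.
scrape_configs:
  - job_name: kafka
    kafka:
      brokers: ["kafka-1:9092"]   # hypothetical broker address
      topics: ["app-logs"]        # hypothetical topic name
      group_id: promtail          # consumer group for this Promtail instance
      labels:
        job: kafka
```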
There's also the Docker, what is it, service discovery, and I want to say... Karsten?

B: Karsten's not here, is he? No. I can talk about this a little bit, because Prometheus added service discovery for the local Docker daemon some time ago, and actually I tried this at one point. I just didn't get the PR in. The idea is that you can run a static config, use Docker name-based service discovery, and do what we usually do with Kubernetes, which is use that to generate the path to the file locally so that you can tail the files.
B: So that's pretty fascinating, in the sense that you can do it without having to read the files from disk. I think you still have to have file logging enabled; if I read the Docker daemon's API right, the logs API still has to write to disk in order to work. But anyway, it's a way to sort of simplify that process: you don't have to deal with file mount permissions and other things in your container. So that's pretty slick.
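(A sketch of what that looks like in Promtail, using the Docker service discovery that mirrors the Prometheus docker_sd mechanism; the socket path and relabel rule follow the documented defaults, but treat this as illustrative.)

```yaml
# Sketch: discover containers from the local Docker daemon and read their
# logs via the Docker logs API, no file mounts required.
scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      # Docker reports container names with a leading slash; strip it.
      - source_labels: ["__meta_docker_container_name"]
        regex: "/(.*)"
        target_label: container
```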
B: Okay, v12 and the binary stuff... well, "gelf," "golf," I don't know how you say it, but the Graylog Extended Log Format: Cyril added this. Thanks, Cyril.
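(A sketch of the GELF target, assuming the documented Promtail gelf scrape config; 12201/udp is the conventional GELF port.)

```yaml
# Sketch: Promtail listening for Graylog Extended Log Format (GELF)
# messages over UDP.
scrape_configs:
  - job_name: gelf
    gelf:
      listen_address: "0.0.0.0:12201"  # conventional GELF UDP port
      use_incoming_timestamp: true     # keep the timestamp from the GELF message
      labels:
        job: gelf
```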
C: Briefly, yeah. So if you are an enterprise customer, and I'm not sure, actually, I think you do need to be an enterprise customer, then you can run it from there, and it will pull the logs using the API from your HTTP services, whatever services you're running on the platform, and it will send them to Loki directly. So it's kind of still early; we're collaborating with Cloudflare now, since this PR, on this track, so there's going to be more to come out of this.
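(This description matches Promtail's Cloudflare target, which pulls from the Cloudflare Logpull API, an API limited to Cloudflare Enterprise zones. Assuming that's the feature being discussed, a sketch, with the token and zone ID as placeholders:)

```yaml
# Sketch: Promtail pulling zone logs from the Cloudflare Logpull API and
# pushing them to Loki. api_token and zone_id are placeholders.
scrape_configs:
  - job_name: cloudflare
    cloudflare:
      api_token: REDACTED_TOKEN
      zone_id: YOUR_ZONE_ID
      fields_type: default   # which set of log fields to pull
      labels:
        job: cloudflare
```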
B: Right. There's a really sweet community contribution, in two forms, I think from two separate people. One introduced rate limiting sort of globally, so you can configure Promtail to say: only send at most, I don't know, bytes, whatever kind of bytes per second, to Loki. You know, Loki currently has server-side rate limiting, which is mostly intended for protecting the cluster from being tipped over, and when one tenant is rate limited, it affects every Promtail sending. So this is a way to do client-side rate limiting, which is kind of nice.
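(For what it's worth, the instance-wide limit ended up in Promtail's limits config and is expressed in log lines per second rather than bytes; a sketch, with illustrative numbers:)

```yaml
# Sketch: client-side, instance-wide rate limiting in Promtail.
limits_config:
  readline_rate_enabled: true
  readline_rate: 10000      # sustained lines/second across the whole instance
  readline_burst: 20000     # short-term burst allowance
  readline_rate_drop: true  # drop over-limit lines instead of blocking readers
```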
B
If
you
have
clients
that
have
sort
of
get
really
noisy
and
you
don't
want
to
send
them
to
loki
and
then
it
also
was
added
in
a
separate
pr
as
a
pipeline
stage
to
where
you
can
use
like
stream,
matchers
and
things
to
only
rate
limit
on
certain
behaviors
and
matchers
and
things
so
you
can
do
sort
of
for
the
whole
prom
tail
instance,
or
to
do
a
set
of
matchers
in
the
pipeline
stage,
pretty
sweet.
Thank
you.
The
loki
community
for
being
awesome.
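(The pipeline-stage variant looks roughly like this; the job name, path, and numbers are made up for illustration:)

```yaml
# Sketch: rate limiting only the streams a given pipeline matches,
# via the `limit` stage.
scrape_configs:
  - job_name: noisy-app    # hypothetical job
    static_configs:
      - targets: [localhost]
        labels:
          job: noisy-app
          __path__: /var/log/noisy-app/*.log   # hypothetical path
    pipeline_stages:
      - limit:
          rate: 10    # lines per second for these streams
          burst: 20
          drop: true  # drop the excess rather than backing off
```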
A: Yeah, now that I think about it, there have been a couple of other community PRs that will make it into this as well, which are pretty big. There's a selection of PRs on the ingester side that change how we do locking internally, and that resulted in some very noticeable improvements in the tail latencies of pushing logs through Loki, where we now see much more consistent and lower latencies.
B: So yeah, I think that, depending on your sort of operating model, you may see more or less benefit from it. It's pretty neat.
A: So, backing up a bit, we are also introducing the next schema version in Loki. This will be v12, and it's largely done to enable higher scale when you're using S3 as a backend, which functions a little bit differently compared to some other object storage backends, and we've tested this fairly rigorously on S3.
A
They
can
start
to
see
rate
limitings
and
that
sort
of
thing
from
s3,
based
on
how
we
stored
chunks
in
the
key
structure
of
chunks
in
s3,
and
so
this
should
really
allow
us
to
blow
past
those
prior
limitations
and
and
play
much
more
nicely
with
the
s3
apis,
and
so
it
should
be
fairly
exciting
for
anyone
running
an
s3,
it
should
be
a
drop
in
configuration
change.
You
can
just
start
a
new
schema
version
with
b12
at
some
date
in
the
future
and
then
once
loki,
you
know
once
that
date
comes
low.
A
People
start
writing.
In
that
format,
any
chunks
past
that
point,
so
that
was
a
largely
done
by
jordan.
Callum
who
are
here
on
this
call.
So
congratulations
and
thanks
for
the
thanks
for
the
prs.
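(A sketch of what the drop-in change looks like: you append a new period to schema_config with a `from` date in the future, and data written after that date uses v12, while older periods keep reading with their original schema. Dates, stores, and prefixes here are placeholders.)

```yaml
# Sketch: rolling from v11 to v12 by adding a new schema period.
schema_config:
  configs:
    - from: "2020-07-01"        # existing period, unchanged
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: loki_index_
        period: 24h
    - from: "2022-03-01"        # pick a date in the future when deploying
      store: boltdb-shipper
      object_store: s3
      schema: v12               # new S3-friendly chunk key structure
      index:
        prefix: loki_index_
        period: 24h
```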
A: Yeah, the PR is not merged yet, but some background here: I wanted to make the binary operations in LogQL, think something like a + b, run with a higher degree of parallelism. Previously they would run each leg serially, and this kind of led us down a rabbit hole where we fixed a couple of other bugs in the process and exposed some other things we could do.
A
But
the
you
know
tl
dr
version
is
we're
seeing
a
roughly
a
10x
increase
on
chartable
binary
operations
compared
to
prior,
largely
because
there
was
a
bug
which
prevented
them
from
being
started
previously,
but
now
they
are
now
started
and
we
are
running
each
leg
in
parallel,
so
that
should
be
very
exciting
and
we
should
get
some
better
numbers
as
we
as
we
run
this
over
longer
periods
of
time,
but
again,
drop-in
replacement
here,
nothing
that
users
actually
have
to
do
it
should
they
should
just
see
much
faster
queries
for
these
sorts
of
operations.
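(A hypothetical example of the kind of query this affects; the `app` label is made up. Each leg of the division is a full metric query, and with the fix both legs are sharded and executed in parallel instead of serially.)

```logql
# Error ratio expressed as a binary operation over two legs.
sum(rate({app="myapp"} |= "error" [5m]))
  /
sum(rate({app="myapp"} [5m]))
```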
B: I don't know, I'll just put it in here that we can give an update, since we've been testing and playing around with it. Yeah, I think, good.
B: Yeah. So the 30-second background story is that you all win, and we're going to make the histogram better. The community, lots of users of Loki, have been unhappy with the sort of partial histogram that comes back.
B
I
feel
like
I've
on
this
call
in
the
past
tried
to
argue
that
it's
fine,
but
I
agree
I
I
I
give
up.
We
should
make
it
better.
So
we
did,
I
should
say
the
the
grafana
ui
folks
have
now
implemented,
and
the
way
it's
implemented
in
loki
is
by
issuing
a
range
query
on
your
logs
query.
So
when
you
submit
you
run
two
queries,
you
get
your
logs
result
back
and
then
you
get
a
visualization
of
that
query
back
as
range
query.
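(Roughly, and with a made-up selector: if the logs panel runs the first query below, the histogram comes from a companion metric range query of about this shape, grouped by level so it can be stacked per log level. The exact query Grafana issues may differ.)

```logql
# The logs query the user typed:
{app="myapp"} |= "error"

# The companion range query that produces the logs-volume histogram:
sum by (level) (count_over_time({app="myapp"} |= "error" [$__interval]))
```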
B
The
concern
was
that
that
increases
the
amount
of
work
that
you
do
on
a
query,
so
we've
been
kind
of
towing
the
waters
and
our
infrastructure
with
what
that
looks
like,
and
we
are
hedging
things
a
little
bit
by
sort
of
limiting
how
long
those
queries
will
run.
We've
had
a
couple
conversations
about
whether
or
not
we
should
include
those
knobs
in
loki
itself.
Right
now.
We
control
the
timeout
of
those
queries
with
a
gateway
component
that
we
run
the
current
plan
is
that
we're
not
going
to
do
that?
B
We
don't
want
to
introduce
the
complexity
of
it.
If
most
people
don't
want
it
and
don't
need
it
which
is
sort
of
the
feedback
we've
heard
so
far,
so
it's
not
really
anything
changing
in
loki
here.
You
know
we
did.
The
the
grafana
end
is
really
what
changes
right
now,
it's
available
in
8.3.x.
B
If
you
enable
the
feature
flag
for
it,
so
you
can
go
check
it
out
now.
The
version
that's
enabled
in
grafana
has
a
hardcoded
10
second
timeout,
which
will
be
removed
in
most
likely
be
removed
so
that
the
queries
just
run
for,
however
long
they
take,
but
so
far
we're
seeing
that
the
performance
impact
is.
B
It
depends
it's
not
that
significant
though
it
varies
between
maybe
40,
to
50
more
work
on
average
across
the
tenants
that
we've
enabled
it
for
so
you
know
if
you
ran
the
query,
normally
we're
doing
another
like
40
work,
to
do
the
range
query
over
the
full
range
and
that's
not
too
bad.
So
originally
we
were
worried.
It
would
be
like
two
three
times
the
amount
of
work
right,
because
you
know
you
can
imagine
running
a
logs
query
over
a
big
set
of
data.
A: And we should open this up; that's the end of our agenda for the day. That's it, that's all we got, short and sweet this week. So: anyone, questions, concerns, things you're happy about?
C: Yeah, I just want to point out that for 2.5, this was just, you know, the top of the list; there's a lot more in it.
B: Hedged requests are in there too; we haven't talked about that. Yeah.
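(Hedged requests here means racing duplicate object-store requests against slow ones. A sketch of the knobs, assuming the `hedging` block documented for Loki's storage config; the values are illustrative only:)

```yaml
# Sketch: hedged object storage requests in Loki.
storage_config:
  hedging:
    at: 250ms          # issue a second request if the first exceeds this
    up_to: 3           # at most this many total attempts per request
    max_per_second: 5  # global cap so hedging can't amplify load
```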
B: Yeah, I don't remember if I talked about that, maybe. Regardless, though: the folks at Red Hat, on OpenShift, built an operator for Loki, and it's a general-purpose Kubernetes operator; it's not specific to OpenShift. It has some configuration to enable more features for OpenShift, because that's what their use case is, but they contributed it upstream to make it part of the Loki project, so that any Kubernetes environment can take advantage of a Loki operator.
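(A sketch of the operator's LokiStack custom resource, with the apiVersion and size names as published in the loki-operator docs of this era; the Secret and storage class are placeholders you'd point at your own object storage.)

```yaml
# Sketch: a small LokiStack managed by the Loki operator.
apiVersion: loki.grafana.com/v1beta1
kind: LokiStack
metadata:
  name: lokistack-dev
spec:
  size: 1x.extra-small          # t-shirt-sized deployment preset
  storage:
    secret:
      name: loki-objectstorage  # hypothetical Secret with S3 credentials
      type: s3
  storageClassName: standard    # hypothetical StorageClass
```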
B: We talked about Helm a lot last time, and we'll bring that back up again, but we're still trying to figure out how to do the best that we can with Helm. We're kind of hopeful that maybe the operator can replace some of the Helm charts that we have, swapping them for a Helm chart that deploys the operator, kind of thing.
B
But
very
excited,
thank
you.
Thank
you.
Perry
perry's
not
here
for
all
your
hard
work
on
the
oh
speaking
of
shoot.
We
have
new
loki
team
members,
perry.
I
can't
pronounce
your
last
name.
It's
very
greek
perry's
been
helping
us
out
a
bunch
from
the
red
hat
side
on
loki
for
years
now
been
a
big
contributor,
both
for
largely
for
helping
us
sort
of
with
the
perspective
of
how
they
use
and
other
people
use
loki
has
been
extremely
helpful.
Javi.
B
Congratulations,
kavi
welcome
to
the
loki
team
and
karsten
who's,
not
here
also
so
I
mentioned
before
that
within
grafana.
Thanks
to
the
I
would
say,
success
of
loki
we've
been
able
to
grow
the
project
and
grow
people,
so
this
call
is
bigger
and
most
of
the
people
on
this
call
are
now
part
of
the
loki
squad.
So,
looking
forward
to
what
we
can
accomplish
in
the
in
this
coming
year,
it's
going
to
be
an
exciting
one
for
loki.