From YouTube: Loki Community Meeting 2022-09-01
A: So yeah, I hope everyone who is here has access to the doc; we also shared it.
A: Before jumping into the call: we also changed the time. Usually the meeting happens in a U.S. time zone, and we changed it to 12:00 UTC so that APAC folks can join.
A: I think maybe let's jump into the agenda. Before that, does anyone want to add anything?
E: I mean, I think this was... I was the facilitator, but I don't have any specific feedback. I asked the folks who filed the proposal to come so we could chat with the rest of the Loki team.
C: You can see all the stages that were executed for this line, and you can see how the log line was transformed at each stage, or you can see at which stage the line was filtered out. For example, for this line we can see that after the line_format stage we transformed our log line into this, and previously it was just JSON. And on this line, when we applied the JSON parser, we can see which labels were extracted, with which values, and so on.
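For reference, a LogQL pipeline of the kind being stepped through here might look like the following; the selector, parser, and format string are illustrative rather than the exact query from the demo:

```logql
{job="analyze"}
  | json
  | line_format "{{.level}}: {{.message}}"
  | level = "error"
```

Each stage maps to one row in the analyzer's output: `json` extracts labels from the JSON body, `line_format` rewrites the line from those labels, and the final label filter is where a non-matching line would show up as filtered out.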
C: So it might give you some insight into how the query was actually executed. It's also pretty useful when you want to help somebody with some LogQL expression: you can create or modify your LogQL query (for example, we can change it to this) and, if we want to share it with somebody else, we can just give them this link.
C: Yeah, and if you have any suggestion or any improvement, you can use this link to edit the page yourself, or you can just suggest changes to the tool. Do you have any questions?
D: Yeah, I'm just wondering: this looks really cool, very useful. Does this analyzer also include any Grafana-specific functions, or is it just pure LogQL?
C: Yeah, just pure LogQL. We also have one limitation here: you cannot change the stream selector. We have a predefined selector with the value job="analyze", and you cannot change it, so you can just play with the LogQL functions and adjust the rest of the query. Also, you cannot use metric queries here, because we don't actually have a timestamp.
C: We don't have an actual timestamp for the lines, and we don't have functionality to somehow assign a timestamp to a log line; for now, it's just an initial version.
A: I mean, it doesn't have to be short per se, but all I want is: if I use this for some specific log lines and a specific LogQL query, I should be able to share it with someone, and when they come to the same page they should be able to see the same thing.
G: If someone's trying to solve a use case around LogQL, you can insert their log line and the query and share that, and the link embeds all that information. I'm sure there's some limit to the number of log lines you can put in there, because it all becomes query parameters, I believe, but I think that's going to be super helpful.
C: Yeah, and one thing I want to mention: with the next version, for example when Loki 2.7 is released, we will also release a LogQL analyzer API server with the same version. So just as we have versioned documentation pages, we also have a versioned API server for this analyzer. This documentation version is compatible with Loki 2.6, and in the same way it's compatible with the LogQL analyzer API server 2.6.
C: So once we release 2.7, we will also release documentation 2.7 and the LogQL analyzer API server 2.7, and it will understand all the new functions, if we add any in 2.7. So it's a versioned API. And maybe an important thing that I forgot to mention: to run the query against the log lines, we use the API server. We send a request, the analyzer produces this debug information and returns it, and we just display it.
E: Maybe walk us through why, like what the use case is for y'all, and then maybe what outstanding questions you have left on your proposed implementation.
D: Sure. Ryan and Jose, do you want to take this?
F: Sure. Jose wrote this proposal, so I guess I could let him lead and fill in any gaps.
H: So, my name is Jose; I'm with Simon and Ryan. We are part of the observability team at Canonical, and we are writing a Loki operator.
H: We also saw that there were some requests from members of the community for a way to specify in Loki: okay, you need to start deleting chunks when the used space reaches, for instance, I don't know, 80 percent of the available space. And we also saw that there are some hacks the community is doing in order to solve that, like putting in a cron job that runs a script every 10 minutes, for instance, which deletes chunks, and after that you have to rebuild the indices.
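For illustration only, the kind of workaround being described might look like the following Kubernetes CronJob; the paths, image, and age window are hypothetical, and, as noted in the discussion, deleting chunk files this way leaves the index stale and is not recommended:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: loki-chunk-cleanup        # hypothetical name
spec:
  schedule: "*/10 * * * *"        # every 10 minutes, as described above
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: cleanup
              image: busybox
              # Delete chunk files older than 7 days (hypothetical window).
              # This leaves dangling index entries, hence the index rebuilds
              # mentioned in the discussion.
              command: ["sh", "-c", "find /loki/chunks -type f -mtime +7 -delete"]
              volumeMounts:
                - name: loki-data
                  mountPath: /loki
          restartPolicy: OnFailure
          volumes:
            - name: loki-data
              persistentVolumeClaim:
                claimName: loki-data   # hypothetical PVC name
```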
H: So we started thinking: okay, what happens if we try to include this functionality in Loki? That's why we were thinking about this possibility, started discussing it in the Loki community Slack with the other members, and created the issue with our proposal. Obviously our proposal is just a sort of draft, because we can read the Loki code base, modify it, and share the solution with the community.
D: Sure. I guess many of you might already have seen the issue and had a look at the proposal, but the idea is that we would use the same or similar facilities as already exist for the time-based retention: try to deduce what point in time we need to delete up to in order to fit within the space requirements, and then use a time-based delete, just as we would in the existing retention functionality.
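As a configuration-level sketch of what that proposal might look like, reusing the existing compactor retention machinery (the threshold flag below is hypothetical; the actual name was still under discussion on the issue):

```yaml
compactor:
  working_directory: /loki/compactor
  retention_enabled: true
  # Hypothetical flag from the proposal: when the filesystem backing the
  # chunk store exceeds this usage, deduce a cutoff time and run the
  # existing time-based retention to delete the oldest data until usage
  # falls back under the threshold.
  max_disk_usage_percent: 80
```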
D: And I kind of get why this hasn't been prioritized, or why it's not part of Loki. I mean, if you have S3 storage, you don't really care about the disk filling up; in that sense you'd probably rather say "I want seven days or 14 days" and that's it. That makes perfect sense when you're on a cloud provider. In our use case, the whole point is for people to run this on their own managed infrastructure, and then you don't always have that privilege. Hence why we think it's useful to have it in the product.
F: I think, to be very explicit: the issues on the tracker have been open for quite a while, and while using block-based storage is potentially a smaller use case, we are trying to strike a balance. We do not want to directly read the file system and worry about what the files or the PVC actually look like, or potentially corrupt indexes the way the cron jobs seem to.
F: So instead, if we take a relatively simple use case: if you're running on block storage, then you can set a percentage, and if it is set, then we can start another timer when Loki starts, which will check the usage percentage of whatever file system the chunk storage is on. That hopefully makes it contained enough that if they change to TSDB or change the on-disk format, we don't really care, since we'd use the same functions that Loki calls internally.
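A minimal Go sketch of that "timer that checks the chunk directory's filesystem" idea (this is not Loki's actual code; the directory, threshold, and check interval are assumptions, and syscall.Statfs as used here is Linux-specific):

```go
package main

import (
	"fmt"
	"syscall"
	"time"
)

// diskUsagePercent returns how full the filesystem backing path is, in percent.
func diskUsagePercent(path string) (float64, error) {
	var fs syscall.Statfs_t
	if err := syscall.Statfs(path, &fs); err != nil {
		return 0, err
	}
	total := fs.Blocks * uint64(fs.Bsize)
	avail := fs.Bavail * uint64(fs.Bsize)
	return 100 * float64(total-avail) / float64(total), nil
}

func main() {
	const (
		chunkDir  = "/loki/chunks" // hypothetical chunk directory
		threshold = 80.0           // hypothetical cleanup threshold (percent)
	)
	for range time.Tick(10 * time.Minute) {
		used, err := diskUsagePercent(chunkDir)
		if err != nil {
			fmt.Println("statfs failed:", err)
			continue
		}
		if used > threshold {
			// Here the proposal would hand off to Loki's existing
			// time-based retention machinery to delete the oldest data.
			fmt.Printf("disk %.1f%% full, would trigger cleanup\n", used)
		}
	}
}
```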
F: And I think the goal partly is to solve a community need while also keeping it simple enough that it doesn't add a dramatic maintenance burden to the Loki maintainers, or a huge new set of use cases which, frankly, would be better served by an object store, which is what Grafana Labs [inaudible] anyway.
G: Sorry, but with the usage reporting that we added in 2.6, something like 75% of the Lokis that are reporting back to us run on a filesystem store. I think most of the community users just run a relatively simple single binary and back it with a disk, and everybody that's doing that would appreciate having a bound on how much disk it can use. There have been issues open for a long time on this, but you're right; just for us, in terms of how we prioritize work and what we build, it's never been enough to make the top of the list. But it's awesome that you're able to take a look at this; we'll do everything we can to help get you moving forward on it. I really appreciate that.
G: Yeah, no, absolutely. At least, I don't know, I don't speak for everybody on the team, but unless anybody here has opinions (I mean, most of the team is here), I think this would be a big help for anybody, operationally speaking, using Loki on a filesystem store specifically; there are a lot of users of that. I would love to merge this; this would be great.
G: Yeah, that's interesting. I think you could handle that the other way, up front, too, with rate limits. If you have rate limits set for how much data a cluster can receive, that effectively limits how fast you would have to delete, as a consideration too. But yeah, right: if somebody wakes up in the morning and the only thing they have logs of is one service, that might be a bad experience.
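For reference, the ingestion rate limits being referred to live in Loki's limits_config; the numbers below are illustrative:

```yaml
limits_config:
  # Caps how fast a tenant can push data, which in turn bounds how
  # quickly the disk can fill up.
  ingestion_rate_mb: 4
  ingestion_burst_size_mb: 6
```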
E: And since we have Sandeep on here: Sandeep, anything you wanted to say? It seemed like, when I was reading through the proposal, the main thing there was maybe some back and forth on was whether to use the delete APIs or some of the other, like, inbuilt cleanup functionality. It seems like the proposal's gotten aligned on that.
J: Yeah, so I didn't prefer the delete API here, because it's hard to know what range of data you want to delete; you would anyway have to iterate through the index to find out what the oldest chunks are. So it's better to iterate through the index and delete the chunks then and there, because the delete APIs would delete the chunks only after two hours by default.
J: So you would need to tune that config to delete the chunks instantly, which is why I prefer not using the delete API and instead having a separate code path for reducing the usage based on how much disk you are using.
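The two-hour default being discussed appears to correspond to the compactor's retention delete delay, which exists to give cached indexes time to catch up; a sketch of the relevant config:

```yaml
compactor:
  retention_enabled: true
  # Chunks are deleted this long after being marked for deletion;
  # 2h is the default.
  retention_delete_delay: 2h
```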
F: Let's say the chunks were directly deleted, you know, with the cron job or something else. Then it would be possible to have LogQL queries with labels that actually had no records attached to them. And I understand that the delete API would have to wait two hours, but I also wonder: if we're submitting the PR, wouldn't we just be able to invoke the functions which delete things immediately?
J: Like, then you are not honoring the config that retains things for queriers.
F: Yeah, no, that is certainly true, and I haven't read the code base in depth, but going through it, it looked like it would mark them and then another job went through. So if we invoke the function which deletes all of the marked chunks, then we keep coherency between the table and the chunks, and we don't have to, like... Maybe there is a very good technical reason not to do this.
J: Well, thinking more about it: why we have that config is because we want to let the updated index get propagated everywhere. Because if the index is cached and your chunks are not there, then the queries would fail, since they would try to download things that are nonexistent. So probably my approach would be to make this issue visible to the user. But let's take it on the issue and I'll think more about what we can do.
J: But yeah, two hours of delay for reclaiming the disk space might kill Loki, because it wouldn't have enough space to write.
K: Another reason not to go through the delete API: when going through the delete API, you need to mark things as deleted first and then delete them, so you kind of need to iterate through the index and the chunks twice. Whereas if you have direct access, as part of a compactor run that cleans up the disk, you can just delete the index and the chunks directly. I mean, we are talking about Loki on local file systems here.
K: We don't have the problem of propagating the updated index to other instances the way we do in a multi-node setup, so I don't think we have this problem with the caching and the redistribution of the index. But still, in terms of performance and efficiency, I think having a separate cleanup component in the compactor which explicitly runs just for cleaning up chunks and indexes is, yeah...
J: Probably efficiency won't be a problem here, because deletes are expensive when you are using content-based deletes, and here it won't be content-based; it would just go through the chunks and keep deleting them. But, as I said, it's hard to find out what range you want to use for deletion.
D: But that's a good point that you're making, Christian, in that we jointly decided that we wanted this to only be something for a single-node setup, right? If you want to run it in microservices mode and kind of split things out, then we'll probably just write a warning to the logs and ignore the settings.
E: Yeah, I was going to say what Danny said; maybe, just since we have a couple of other items, hopefully this unblocked things. It was great; I appreciate the Canonical folks for jumping on the call, and it's great to get to put some faces to some GitHub handles. Hopefully we have enough that you all can get started, at least on what an initial implementation could look like, and we can keep going back and forth async.
A: All right, thanks folks. Yeah, I think let's move on to the next item then; I think it's from Salva, the operational guide.
I: Sure, hi everyone, this is Salva. So yeah, I will be speaking about the operational guide we have put together about autoscaling queriers. So, as you may know already...
I: For those who don't know about KEDA: in a nutshell, it is a solution for Kubernetes that allows you to autoscale based on different events and metrics from different sources, such as Prometheus. So in the case of the queriers, for the autoscaling we look at a metric that exposes in-flight requests, which is the number of queries that are queued in the scheduler plus those running in the querier workers.
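A sketch of what such a KEDA setup looks like; the metric name, query, threshold, and replica bounds here are illustrative, and the actual guide in the Loki docs has the recommended values:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: querier
spec:
  scaleTargetRef:
    name: querier              # the querier Deployment
  minReplicaCount: 10          # illustrative bounds; the guide explains
  maxReplicaCount: 50          # how to estimate these
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus:9090      # hypothetical address
        metricName: querier_inflight_requests      # arbitrary, user-chosen name
        # In-flight requests: queries queued in the scheduler plus those
        # running in the querier workers (illustrative query).
        query: sum(max_over_time(cortex_query_scheduler_inflight_requests{quantile="0.75"}[2m]))
        threshold: "6"
```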
I: Okay, yeah, I was about to share my screen, but if that's being shared already, that's great. So as of today, the guide lives in the "next" version of the Loki documentation, in the operations section. This guide contains recommendations about which metric to use for autoscaling and how to tune it, how to estimate the minimum and maximum number of querier replicas, and, at the bottom, some alerting rules to let you know when you may want to consider increasing the maximum number of replicas; pretty much, in general, how to configure KEDA to autoscale the queriers. That's pretty much it. I just wanted to raise awareness in case someone might be interested in doing that, and I hope you find it useful. Other than that, we will soon publish a blog post about how we are using KEDA internally and the challenges we faced while configuring this, so stay tuned. As always, we would love to hear feedback from the community and answer any questions you may have. That's all from my side. Thank you, and if you have any question, feel free to reach out.
A: All right, cool. So the next one is about TSDB. Sandeep, you wanna...
J: So Owen talked about us working on adding support for the TSDB index store in Loki in the last community call, so I'll just add a follow-up to touch base on why we are doing it. Loki is a high-density time series database, but instead of numbers we store logs. So what would be a better alternative than TSDB, which is a highly optimized and compact index store built for indexing time series data?
J: We are internally testing it, and it is running in one of our largest internal deployments, which is doing 250 Mbps of ingestion and about 600 queries per second. Most of those queries are coming from Loki canaries, plus it monitors all our internal clusters, so manual queries get added on top of that as well. Our focus right now is improving the stability and reliability of TSDB, and the implementation is not the vanilla TSDB that comes with Prometheus.
J: We have forked the code and added some additional things to it. The biggest example is that we are tracking how much data we are storing in each stream. This helps us predict how much data a query would touch, and that helps us optimize the queries as well: it will dynamically shard the queries based on how much data they would be touching. And the internal test results are looking promising.
J: We are seeing a 40% drop in overall CPU usage across all the services (overall, not in each service), and the index size is 50% smaller compared to the BoltDB shipper index size with the same data. And because of the drop in CPU usage, we have more headroom for processing queries, so we are seeing close to a 4x speedup in some of the intensive queries. We'll publish more details in docs and blogs soon, when we are getting closer to making it production ready, or releasing it in beta. It is still highly experimental; if you want, you can try it and share feedback with us.
J: But yeah, it's suggested not to use it in production yet, because it's still highly experimental.
J: Under the hood it's the same as BoltDB shipper; instead of BoltDB files, we store TSDB files. So the config is the same: just use tsdb as the index type instead of boltdb-shipper. I'll update the documentation of the config at least, and share it in the community Slack, yeah.
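A sketch of the corresponding schema config; the start date and schema version are illustrative, and, as said above, this was experimental at the time:

```yaml
schema_config:
  configs:
    - from: 2022-09-01        # illustrative start date for the new period
      store: tsdb             # instead of boltdb-shipper
      object_store: filesystem
      schema: v12
      index:
        prefix: index_
        period: 24h
```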
G: I also wanted to share something I was looking at this morning. Yeah, this link here.
G: I'll take those, yes. So we haven't really had a chance to see what the sort of upper stream limit is. For the BoltDB index, we largely sort of set an active stream limit of maybe a hundred to two hundred thousand, which, combined with the one-hour idle timeout that we run, will let you put between one and maybe two million streams per day into a BoltDB index.
G: We use a 24-hour table period on the index, so all of the streams for a 24-hour period go into one index file, which gets uploaded, and we keep one per day. And that sort of upper limit kind of relates to this graph here. So this is the Ops cluster that we mentioned, and this is the last 30 days of the CPU usage by the queriers.
G: This cluster gets a lot of traffic from the Loki canaries that we run; there are probably a thousand of them running against this cluster now, a lot of them, and they actually kind of call out a worst-case query for BoltDB, where one of the queries they run sort of forces 24-hour table scans.
G: So that's why you kind of see this ramping effect: as the number of streams in the index gets bigger over a 24-hour period, the amount of CPU required increases, and as a result the query latency might increase. That's kind of why we limit it. You could put more rows, more data, into BoltDB, but you're going to be fighting this battle, and so there are some big spikes here too.
G: So let me figure out how to make it not show... there we go. The red and blue lines here are the 24-hour stream counts for both of these clusters. One of these clusters is running TSDB and one of them is running BoltDB, and I'm not sure why the stream counts aren't exactly the same, but they're generally close. In periods where the stream counts are lower, that ramping effect is less noticeable, and then the stream counts get closer to, like, 2 million.
G: This correlates roughly well: here we're in the two-plus million streams per 24 hours range, and you can see the churn, or the sort of increasing CPU requirement, on the queriers go up. What's exciting here is this yellow line: this is the TSDB cluster, and the query performance is much more consistent.
G: The spikes there are not surprising; that's likely just query load that this cluster sees as a result of humans using it too. The canaries are a pretty consistently flat source of query load, which is the flat part of that line, and then you see spikes on top of the green line too; that's likely people using the cluster. But what's really exciting here is that it seems to all but eliminate that 24-hour, time-based performance penalty. And this is kind of expected, because we don't have to do full row scans, which is what happens with BoltDB, a NoSQL key-value store. Like I said, the canaries are the worst at this with the query they run; it ends up doing a full row scan, which requires a bunch of CPU.
G: So this is a big reason, one of many I guess, why we wanted to replace the index type: to have something more purpose-built, but also to be able to push the number of streams in a 24-hour period higher and not see this behavior that we're seeing. Anyway, the signs right now look good that this is doing exactly what we hoped it would, so I just wanted to share that, because I looked at it this morning and it was exciting.
L: Yeah, sure. Me and, most predominantly, Trevor, who's not here today, have been unifying the Helm charts, specifically for whether you want to launch the single-binary version or the scalable version. We moved them back into the Loki repo, for one to make it easier to test and to make it more visible, and this will be the go-to Helm chart you should be using if you want to self-host Loki. It will default to the scalable version, and it's all on main now. The next steps are really testing (Trevor is testing the migration from the old Helm charts to the new one), then wrapping up the docs and making it a little easier to use, so you get more of an out-of-the-box experience without modifying too many things, yeah.
L: We didn't release it yet, so if you want to try it you have to go into the Loki repo, into the Helm folder under production, and apply it yourself. We're going to unify the release process as well, to release together with Loki, so there are some outstanding PRs. It should just be a nicer and simpler experience, because, like, I must have looked at four Helm charts and didn't know which one to pick, and now you get one pick, Christian.
L: Yes, yes, yeah, thanks. So we changed the GitHub repository, but we did not change the Helm repository; the Helm repository will stay the same, and the chart will be called loki. So you will basically have an upgrade path from the single-binary one, because I think that's what the original loki chart was, and we will deprecate the loki-simple-scalable one.
L: So there will be only loki, and the loki-distributed one will be mostly community-driven, because we feel the scalable version is the one you should be using, since it's easier to maintain and fairly scalable. I don't know if anybody hits the scale where they need to run the microservices version.
K: Yeah, another question, because I looked at it yesterday: there's also the loki-stack Helm chart. Is that still used? Will you update that with the newest version, or...
L: So I'm not really sure. What we can do is switch the Loki bit of it to our version; I think that's what we should be doing if we want to support it, but it should also be community-driven.
L: Oh yes, yeah, the other ones are actually empty, like if you click on any of these.
L: They actually didn't have a big deprecation note. Okay, great; well, that actually answers the question from before about loki-stack as well, I guess.
A: Yeah, does anyone have any questions about the Helm chart? We have 13 minutes left, probably, yeah. If you don't have any questions, then we can go on to any open Q&A, if you don't have any questions in general. Also, one thing I think I didn't say correctly at the beginning, about the timing: we changed from the U.S. time zone to the EU one, but it will alternate; so, for example, next time it will be U.S. again, and then the time after that it will be EU. So we're alternating between the time zones. We'll keep the calendar updated, so subscribe to the calendar; if anything changes, you'll be notified. I just wanted to restate that.
A: All right, if nothing else, then we can wrap it up. Thanks everyone for joining.
A: Yeah, thanks folks, thanks everyone for joining, and see you next time then. Take care, yeah. Thank you. Bye.