From YouTube: Loki Community Call 2020-10-01
A: Thanks, Richie. All right, so yeah, I'll touch on that again. I don't think we've done a good job at, one, communicating the existence of this call, and two, communicating the intent of this call. Really, what we want is an opportunity to engage with the community, as well as provide the community some information and feedback on what's new, what's being worked on, problems we're trying to solve, and new feature ideas. So everybody is welcome. Participation is certainly not required, but encouraged.
A: So there is a document, which is probably how most people got here, in the sort of ephemeral Google Meet chat, and there's a rough agenda on there. Feel free to add agenda items at the end if there's something you want to talk about. We have an hour, so we've got lots of time to dig into some of this stuff.
A: So I will kick it off, talking a little bit about 1.7.0. The plan is to release that today or tomorrow; most of my plans around this always get pushed a little, so it might be early next week. This release is mostly to get out all of the changes that have been done for alerting and the BoltDB shipper, which maybe we're going to start calling "object store Loki", or "single store Loki".
A: I don't know, still playing with that. But there's been a lot of work done on both of these things, and we're not quite at a "these are production ready" state, but they're very close. We're using them: we're running the BoltDB shipper in an ops cluster now that does, you know, a few terabytes of logs a day, and we're looking for feedback from people running this stuff. The documentation for alerting I think is pretty good; Owen did a really good job there. For the BoltDB shipper,
A: I need to go back through it again. The only thing I'm sort of concerned about with that is that we haven't had a lot of opportunity to test upgrade paths from existing instances. I've got a few instances running around, and we have a few that we need to upgrade, but my only word of caution there is: don't, you know, go straight to prod. Hopefully there's an environment you can play around with first. I don't anticipate problems; you know, we don't have any.
A
There
may
be
a
couple
known
issues
in
the
upgrade
path,
specifically
like
with
1.6.0
there's
a
new
rpc,
endpoint
or
grpc
endpoint.
That
requires
in
a
microservices
mode,
an
order
of
operations.
You
need
to
upgrade
adjusters
first,
if
you
don't,
you
just
have
a
brief
querying
you'll
get
some
errors
at
query
time,
but
so
look
for
the
1.7.0
release.
Soon,
we
probably
won't
do
a
lot
of
press
around
it.
A: We are just going to look for people that are here, people in the Slack, that are using or interested in these features, who can play around with them and give us some feedback. The one thing, excuse me.
A: Excuse me. The one part of alerting that we would like to have is a UI page. It probably will be exactly what the Prometheus UI looks like; it might exactly be the Prometheus UI. So that when you load alerts, you can see what's loaded. That's missing right now; you can use the API to query this stuff, and cortex-tool, but I'm jumping ahead.
A: I think the, so, what would people expect in 1.7.0 in terms of: where can they find docs, and what is the recommended way of working with it, like cortex-tool?
B: Yes, so right now I think cortex-tool is probably the most effective route. It's referenced in the docs, and it was originally written to interact with Cortex, but because we implement a lot of the same APIs it works just as well with Loki. There are some examples there, and then there's also a GitHub Action, which Grafana Labs vends, that uses it, so it should make it easy to integrate with your CI pipelines as well. There's also an example for that too.
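As a rough sketch of what that looks like from the command line: the address and tenant values below are placeholders, not anything stated on the call, so check them against your own deployment and cortextool's own help output.

```shell
# Point cortextool at a Loki ruler (illustrative address and tenant).
export CORTEX_ADDRESS=http://loki:3100
export CORTEX_TENANT_ID=fake

# Lint rule files locally, then upload them and list what is loaded.
cortextool rules lint ./rules/*.yaml
cortextool rules load ./rules/loki-alerts.yaml
cortextool rules list
```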
B: I heard you mention that we have plans for a very slimmed-down UI on top, which we will vend in Loki, which will probably just be, like, server-side rendering and that sort of thing. And then there's also an independent effort to bring an alerting UI to Grafana Cloud as well.
A: Yeah, definitely. If you are using Grafana Cloud and you want to play around with alerting through that UI, just find us somehow: you can email support, or find us in the public Slack. Ed Welch, or Owen, and I forget now which handle you're under in there; I feel like I should know that. It's probably just "owen".
A: Just Owen. And we're going to do sort of a trial type thing for people that are interested. All right, thanks Owen, and now I'm going to turn it over to Cyril to talk about the LogQL v2 stuff. I did link to the design doc, which is effectively formalized now, so people can go look at what the syntax is going to look like, but it probably would be more fun for Cyril just to demo it a bunch.
C: Yeah, the design doc is not fully up to date; I haven't had the time to update it fully with the last decisions that we made. But it's already pretty good: if you read it, you can already get a good idea of what this is about. And so, I have a preview, which I actually just finished today, so I'm going to share my screen. But it works.
C: So it's almost finished, but. All right, is it good enough?
C: Yeah, okay. So currently I'm just looking at some logs of Loki itself, actually, in another environment, and this is something that you are probably used to. I'm doing a simple filter here, and I'm looking for a specific log line that always has the same format. What's cool about this format is that it has a ton of information about the execution of each query that someone ran in Grafana.
C: You probably recognize the logfmt format here. And so, what we can do is type logfmt, and this is going to automatically parse each line and then add all those properties as labels. So now we can see that we have all the properties automatically added as labels, plus the ones that Promtail, or whatever agent you're using, already added. And so we can start filtering on those new properties.
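A minimal version of what is being typed on screen might look like this; the stream selector is a stand-in, not the one used in the demo.

```logql
{job="loki-ops/querier"}
  |= "metrics.go"   # keep only the per-query stats lines
  | logfmt          # parse key=value pairs into labels
```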
C: So I can start to look at the ones, for instance, that have the throughput. All right, so let's filter on the throughput. It's in megabytes: lower than 100 megabytes.
C: And so that's going to filter to only those, right, and I can add something else. I also want to see the ones at, where is the latency, yeah, there is a latency somewhere else. I think it's duration, yeah, it's in the duration. So, duration is in a specific format: it's the Go duration format.
C: So, if you're using Go, you should be familiar with it. We're going to want, at some point, to support many different types of format, but we support that one for now. So I can add another one. I can say: and the duration is bigger than or equal to 200 milliseconds, right, and I can run that query. So this time this is a combination of the two, and I can see that these are only the queries that are slower than 200 milliseconds.
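The combined filter described above could be written as follows; the label names (`duration`, `throughput_mb`) are taken from the demo narration and may not match your own log lines.

```logql
{job="loki-ops/querier"} |= "metrics.go"
  | logfmt
  | throughput_mb < 100   # megabytes, per the demo
  | duration >= 200ms     # Go duration syntax
```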
C: So this one was very close: 204 milliseconds. What's kind of cool is, this query, there's a ton of information in the line that I may not need; the throughput is kind of the thing that I would like to see. But I can technically reformat the line: it's called line_format. So, as you can see, everything that you do on logs will be done through pipe operations.
C: This time I'm just showing the query. All right, so I could add things here: I'm going to put the latency at the beginning. So, latency was, oops, there's one more that you can see. So this is like a Go template format; I think it's pretty popular, since it's the same one that you have in Hugo. And it was duration, and there's the duration. I think I can put whatever I want in there.
C: So there's a ton of other features that you're going to be able to use. We can relabel labels, if you don't like them the way they are. I actually never tried this one, because I finished this concept today, so I'm going to try it. Let's say latency: I'm going to try to relabel that and see. I think it's label_format, and it's latency.
C: Let's say we call it foo, so latency is actually renamed now. Hopefully that's going to work. And, foo, yeah: that renamed the latency label, and the latency label disappeared. And what you can do with those is, you can actually have multiple of them, and you can use a template instead: if you don't want to rename, you can just create a new label. I think I can create, let's say, bar. This is a very bad example of a name, I'm sorry.
C: But let's say this one could be namespace; I'm going to put in the namespace. This is a very bad example; I'll have better examples later, right. And so I should have a new label named bar somewhere, and it's the result of the template, as you can see: bar, and the namespace. So you can combine all of them like this. So that's kind of cool with the logs, but this also opens up a lot of new capability with the metrics that Loki can do. So, I have a tab somewhere.
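The rename-and-template combination from the demo, roughly; `foo` and `bar` are the throwaway names used on screen, and the label names are assumptions.

```logql
{job="loki-ops/querier"} |= "metrics.go"
  | logfmt
  | label_format foo=latency              # rename: latency becomes foo
  | label_format bar=`{{ .namespace }}`   # new label built from a template
```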
C: So this is the one. This one is very interesting. I'm doing the same thing: I'm using the same logs, the ones that come from the metrics.go line, doing a logfmt, looking at the duration that is above 100 milliseconds, and this time I'm using the unwrap operator, which tells Loki to use this label as the metric and then remove the label. I unwrap the throughput in megabytes, and I'm doing an average.
C: So, the average of the throughput, and I can aggregate by the query, because the query is extracted from the log line itself via logfmt. So I can see the average throughput for, I think I have, I don't know how many of them there are, but something like 30 different types of queries in that specific period. Yeah, I think it's kind of cool, but that's pretty much it for the demo.
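The metric query being described might look like the following; again, the selector and the label names are stand-ins, not the exact query from the screen.

```logql
avg_over_time(
  {job="loki-ops/querier"} |= "metrics.go"
    | logfmt
    | duration > 100ms
    | unwrap throughput_mb   # use this extracted value as the sample
  [1m]
) by (query)                 # group by the query extracted from the line
```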
A: That's awesome, yeah. This is the sort of next level for Loki querying. It closes the loop on a lot of the trouble that we have now, which is the message we tell people: that you should not use many labels at ingestion, because that blows up the index. But there are a lot of operations, specifically metric operations, that you can't do if you don't add labels to your logs. So now you don't have to, and we can go further down that road of having a small index but a really powerful query-time approach. I'll show you this, because I've been playing around with it too.
C: Yeah, I should mention that the image is available, if people want to play with the preview. It's still not the final one: there are some missing features, and probably some bugs too, because this is the first time I'm running it.
A: Yeah, definitely. If you want, you could probably paste that image tag in the doc, with that caveat.
A: So another big sticking point, I think, with Loki adoption is JSON logs. Historically, the experience here has been, you know, a little rough around the edges. One big improvement, yeah, this is an older version of Grafana, but in newer versions of Grafana, is the ability to filter on logfmt or JSON by selecting certain values in the UI; that has helped quite a bit.
A: This is an example of a pretty flat JSON object, so it's still fairly readable, but you know, as they get really big, they're sort of hard to work with. Cyril didn't demo this, so I'll just give you a quick background on what this data actually is.
A: This is another fun one; sort of, expect a conference talk out of this someday, maybe. If you go out and spend 50 or 60 bucks, you can buy an antenna and a Raspberry Pi and run software on it that will listen to broadcast location messages from aircraft.
A: So these are all the planes that are flying around my house right now, and I am storing their location information, or the information that they send, in Loki, in basically this format. But now, with LogQL v2, I can start to do some pretty fun stuff, right? So if I parse that, and it's maybe more fun to build the query up as Cyril did: similar to logfmt, there's a stage called json that will turn all of the keys and values into labels.
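A sketch with the json stage; the field names here (`gs` for ground speed) are typical of dump1090-style ADS-B output, not confirmed from the screen, and the selector is a placeholder.

```logql
{job="adsb"}
  | json      # every key/value pair becomes a label; nested keys are flattened with _
  | gs > 400  # keep only the fast planes (ground speed)
```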
A: It is a little simplistic, especially if you have very big nested objects: it makes an entirely flat set of labels. So if you have nested objects, the nested keys get joined into a single label name; we're using underscore to do the separation, mainly because, with the lexer that we use for parsing, there would be collisions with dot and some of the other language use cases. So I think that was the problem. Or, actually, specifically, the collision is that a label in Prometheus can't have a dot in the name. So it's not a lexer problem.
A: It's a Prometheus label-name restriction, so to make labels at query time, or extraction time, that are compatible, we have to sort of use the existing rules. And so, exact same thing that Cyril just showed: now I can look at, in this case, some altitude information, and I believe there's speed information in here too, ground speed. So we can filter where the altitude in feet is greater than, like, a hundred. It might be more interesting to look for really fast planes.
A: See, just a few. And then I could also do filtering on altitude, stuff like that. I think there's another interesting idea here, right: there's latitude and longitude, so you should be able to, and I don't have this query prepared, but you could actually do some geospatial stuff, right? I can write a query where the latitude is greater than this but less than this, and the longitude is greater than this and less than this. And then that would tell me all the information about airplanes that are in a certain area.
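That bounding-box idea could be expressed as chained label filters; the coordinates here are arbitrary, and the `lat`/`lon` field names are assumptions about the ADS-B payload.

```logql
{job="adsb"}
  | json
  | lat > 44.9 | lat < 45.1     # latitude band of the box
  | lon > -93.4 | lon < -93.1   # longitude band of the box
```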
A: Basically, I defined a box with latitude and longitude. And the same sort of ability you now have with json, as with logfmt, is to rebuild the line into something that's a little bit more human-friendly. So in this case, this is the owner information for the planes. The way this information is captured is sort of repeated, which is why we see the same thing basically every second: I'm capturing the current information from the Raspberry Pi.
A: If anyone is interested in how that actually works, you can check out FlightAware; it has a project called PiAware. There are a lot of other ways to get into this that don't require working with FlightAware, but if you do, and you send them the information you capture, they give you some subscription information or whatever. I don't know if I really need to be endorsing FlightAware, but I think the project is kind of neat, the information is kind of fun, and it gave us an opportunity to play around with some JSON logs.
A: All right, once you're back from your coffee, do you want to talk about the write-ahead log?
B: There's a PR for it now. Oh yeah, so, yes, we are introducing a write-ahead log into Loki. That's the PR, with the current iteration of the design doc. It's largely based on prior work already done in Prometheus and Cortex.
B: And the idea here is to solidify some of the benefits that we already have, where you have tunable knobs, particularly around the number of replicas you can use when you're writing series, at the ingester level. This will help us persist data even across system restarts and that sort of thing: reschedulings, if you're using StatefulSets in Kubernetes, that sort of thing. It'll also really help our single-binary use case, which you generally don't run with another layer of redundancy.
A: We need to keep some amount of data in memory to make storage efficient, and so this will close the loop there, such that we can keep data in memory a little longer, to be able to make some bigger chunks, which helps on the query side and the storage side, but without the risk of your cloud provider rebooting your nodes all day and not telling you why. Not that that's ever happened.
B: Sure. So, right now it's going to be checkpointed at a configurable interval. I believe upstream Cortex uses 30 minutes by default; I think we're probably going to try something a little bit more aggressive, because it's much easier to hit higher write throughput with log streams than it is with metrics, which depend on scrape intervals in Prometheus.
B: The latter will be called checkpointing, and the two can be combined to get the benefit of both: speed when you're replaying, and guaranteed correctness. There's a lot more information in the design doc, which will hopefully help.
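The configuration shape wasn't final at the time of this call; as a purely illustrative sketch (the key names are assumptions, not the merged PR), a checkpointed ingester WAL might be configured along these lines:

```yaml
ingester:
  wal:
    enabled: true
    dir: /loki/wal            # on-disk location for WAL segments and checkpoints
    checkpoint_duration: 5m   # more aggressive than Cortex's 30m default, per the discussion
```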
A: Okay, the last thing that I have on this list here: someone from the community approached, I think, both Grafana and also Loki about centralizing the Grafana product, or project, Helm charts into one central repo.
A: It should also make it easier for us to add community contributors to help maintain them, and so that is the direction that we plan to go in. It's just going to be a question of when we can get the pieces moving; things are a little on the busy side for the next month, so it might be a little slower. But in the meantime, there's Reinhard, I don't remember his last name, who has created a really nice microservices version of the Loki chart. This has been a community ask since the beginning of time.
A: The issue might have been closed by the stale bot, or we might have just closed it, but I'm sure there's an issue for this, and I'm sure it's well requested. Especially as people start playing around more with the parallelization of querying, the ability to scale things in the microservices fashion is helpful. So that is in planning and should happen. I can't guarantee you when, but at least a month from now we should have a better update. And that was the last thing on my list.
E: Hi, hey Eddie, I'm Roberto, and I want to ask you: what will be the approach for Loki? Do you want to maintain the approach of using Tanka to deploy it, or are you focusing on the Helm charts? What will the roadmap be: do you want to keep both, or?
A: We will keep both. Internally within Grafana we use Tanka and Jsonnet everywhere. You know, a year or two ago, I guess when I started, I had no opinions on the subject; they were both new to me at that point. After sort of maintaining Helm, or helping the community maintain Helm, and also maintaining the Jsonnet, I firmly believe that the maintenance burden is lower with Jsonnet and the capabilities are better, if you have that infrastructure. And this is the big caveat, right: getting started with Jsonnet is harder than with Helm. I think what Helm does really well is that it has a really nice out-of-the-box experience, and with the single binary with Loki, I think that's led to a lot of early adoption and a lot of ease of use. The problem is, it's templating YAML, right? So we merge PRs every week where someone wants to extend the current Helm chart to do something that wasn't possible before. In Jsonnet, you just don't have to do that: the way Jsonnet composability works, you can pretty much always manipulate the existing JSON in a way that suits your needs. So it has such a lower maintenance burden for us; we just make changes to it as we add features, basically.

So we'll support both, but we are going to be leveraging the community heavily to help support the Helm pieces of it, because they're the ones that are going to be using it a lot more than we are. But, you know, the reality is that Helm is the most popular tooling around this, I would say. We're trying to encourage people to use Jsonnet and Tanka, but it's not going to be for everybody, and not everyone's using it. So I think we probably need both. The goal there is just to try to keep the time that we spend supporting Helm lower than the time we spend adding features and working on the product itself.
E: Yes, yes, and one question, I think for Owen, about the ruler. Do you have any plans for an operator, or something like that, to fetch the rules and apply them, so the rules get applied GitOps-style using a CD tool of that kind? Or are there no plans for this?

B: That's a really good question.
B: Right now we use the cortex rules action, a GitHub Action built on top of the cortex-tool binary, which is the CLI we use to interact with the Loki ruler as well. If you're familiar with GitHub Actions, it can be composed into your CI pipelines, and so we do that to do things like linting rules to make sure that they're valid LogQL, diffing them against what's deployed in our current environments, and then, ultimately, deploying them to the different environments when merged.
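A hedged sketch of wiring that action into CI: the action name matches what Grafana Labs publishes for Cortex rules, but the input names here are illustrative from memory and should be checked against the action's README.

```yaml
# .github/workflows/rules.yml (illustrative)
on: [push]
jobs:
  lint-rules:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Lint rule files
        uses: grafana/cortex-rules-action@master
        with:
          action: lint        # other actions diff/sync against a live environment
          rules_dir: ./rules/
```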
B: So you can do that right now, that way. Doing it with an operator would be another interesting choice. I don't think that, realistically, we'll be focusing on that quickly, just due to the opportunity cost of time, but I also think it could be a valuable effort; someone else would probably need to focus on it.
F: I have a question. At the beginning, or before the beginning, of this call, someone mentioned that they weren't sure if they were allowed to join here. So, asking all the outside attendees.
G: It was me who asked that question. I'm not entirely sure, but, for one, there wasn't much notice. Now, that's possibly my fault for not watching the Slack channel often enough, but the message, the message wasn't.
A: I can comment that the notification is not good; it has not historically been very good for this call. I did get a tweet out today, but only about an hour before it started, and nobody follows me on Twitter, so that's probably not the most helpful. But I'm going to make a better effort in the future to give this much better visibility and try to grow the number of people attending.
A
The
slack
channel
is
good,
but
I
don't
know
how
much
that,
like,
I
said,
people
pay
attention
to
it.
So
we'll
work
to
prove
that
I
don't
know,
graham
if
your
mic
or
your
headset
is
this
working
now.
G: I still don't hear you, sorry; I missed most of what you actually said. My microphone seems to cut out randomly. I don't know why.
G: But one thing I would like to ask about is: I feel very much that how I am attempting to use Loki in the organization where I work is the poor relation. We don't have a Kubernetes infrastructure which is ready yet, and even when we do have Kubernetes ready, we have issues with the whole process of checking the build of a particular container and so on, and patching, so that might take a while. And this could just be a failure of documentation.
G: But I got a slight impression from what's been said today that running either in monolith mode, or even in microservices mode, but on VMs, is kind of being moved away from; it's not a high priority. Is that fair, or am I, I mean, I do understand the attraction of Kubernetes, sorry.
A: Yeah, no, absolutely, that's something we should fix, the communication on that. There is absolutely no reason not to run Loki outside of Kubernetes.
A: I think we just find ourselves having the conversations more around Kubernetes, and I think that's because we don't offer any tooling or support for the non-Kubernetes environments, right? Like, we don't have Debian packages, we don't have Fedora packages; we don't have the things people would be using to run outside of Kubernetes, so they can't complain about them, I guess. Like, there can't be problems with them. That came out the wrong way.
A: So I personally have, you know, Loki running on Raspberry Pis all around me right now, with systemd jobs that I set up to do that; they're all in monolith mode. It's absolutely fine to run that way, and we should absolutely improve the documentation to communicate that. I think through some of the presentations and talks we do, we try to do a better job there; I don't think that's translated its way down into the documents themselves.
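For the monolith-on-a-VM case being discussed, a minimal systemd unit is all it takes; the paths below are common single-binary conventions, not anything prescribed on the call.

```ini
# /etc/systemd/system/loki.service
[Unit]
Description=Loki log aggregation (single binary)
After=network.target

[Service]
ExecStart=/usr/local/bin/loki -config.file=/etc/loki/config.yaml
Restart=on-failure
User=loki

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload`, `systemctl enable --now loki` starts it on boot.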
A: I think we probably make an assumption that too many people are using it. In fact, feedback we've had at webinars around this is that it's not necessarily even the majority of people using Kubernetes at this point. So I think the bigger, harder question there is: do we want to go down the road of building packages for different operating systems, or is there another way? What's the right way to have a distribution? Right now there's the binary: you just download it, you need a config file, but you're on the hook for making it run. So, the short answer to your question is: no, no one should have any concerns about running monolith mode.
G: Sorry, yeah. It isn't even entirely clear that it's possible to do certain things running in VMs. The package thing actually isn't too much of an issue, because, I love Go: no more downloading something and discovering that you've got to update 1500 Java dependencies, and it still doesn't work because you've got the wrong JRE or something like that. You know, installing the binary is pretty straightforward; setting up a systemd unit file, equally straightforward. But then it seems like, what is, I'm not even sure this is possible, how to debug this: I had some issues with the network interface, the ens-style naming rather than eth1.
A: Yeah, yeah, the default for the ring config assumes eth0, I believe, and yeah, that's not obvious. The memberlist docs are not terribly obvious yet, if you're trying to do that. You can run clustered monolith mode, like, you can run a few of them with a ring, but there are very much no docs for that; it's mostly an effort for the adventurous to figure out.
G: Yes, it was an adventure getting it running in monolith mode, and I'd hate to see people discouraged, as you said, by "oh rats, I've got to set up a Kubernetes infrastructure first."
A: I don't know how to solve our docs problem, specifically, other than by writing docs, I guess. It's just one of those things that needs to be made a higher priority; it's been on my list for a while and I've not been successful. We do have some growth coming to the Loki team, although everybody says it's mean to make the new people just go write docs. But maybe that'll free us up to go write some, sometimes.
F: So, I strongly believe we should have new folks write docs, but we are also aggressively hiring more docs and content people. And also, if anyone on this call would be willing to start scratching their own itches:
F: We definitely have people who are professionals at creating docs who would love to help with a community effort to improve this. Like, you can rely on professional tech writers to copy-edit what you produce, and such. So if you're interested in this, we can definitely support you with professional help, and again, we are hiring even more people to expand. Of course, content and docs are absolutely essential to our community strategy.
G: Ask me that again in a month, when I've got our environment happily running; certainly I'd be interested. I noticed that there were a couple of spelling mistakes in the documentation that I just wanted to correct, because I'm a bit of a grammar nazi.
A: All right, thanks everybody, appreciate everybody joining. We will do it better. Tell your friends, you know, tell your enemies; we'll get everybody we can to show up at the Loki call.