From YouTube: Loki Community Meeting 2022-01-06
B
Welcome! Because, oh yeah, we like to talk to all the folks that are going to watch us on YouTube. That's really why we're doing this, for the clicks and the views and the likes. So welcome to 2022. The agenda for today is a bit sparse; all of us are kind of getting back into the swing of things from some PTO, but Trevor really wants to talk about Helm, so we're going to kick things off with that.
A
Yeah, I'm really just curious about the usage of our Helm charts: which ones are being used, and by how many people, and then specifically around the enterprise Helm chart offering. So now we have two Loki Helm charts, the full microservices one and the simple scalable one, and then we have one enterprise chart, which is based on the full microservices.
B
Let me make this conversation more complicated, great. Perry's not here, and this should definitely go on here, but just before the end of the year we upstreamed a Loki operator that Red Hat built. It's a general-purpose Kubernetes operator for Loki.
B
It's
the
api
is
designed
around
basically
specifying
a
size
like,
I
think,
the
size
that
there's
only
one
size
that
exists
right
now.
It's
like
called
something
like
x1
small
and
it
will
you
provide
that
as
the
input
to
the
you
know
the
custom
resource,
and
then
you
will
get
a
loki
deployment
in
microservices
mode
with
the
number
of
distributors
and
gestures
and
queries
for
that
that
size,
which
correlates
to
a
number
amount
of
volume
of
logs
per
day,
or
something
like
that.
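The custom resource being described would look roughly like this. This is a hypothetical sketch: the API group/version and the storage fields are assumptions based on the upstream Loki operator's published examples, not anything confirmed in the meeting.

```yaml
# Hypothetical LokiStack custom resource (field names illustrative).
apiVersion: loki.grafana.com/v1beta1
kind: LokiStack
metadata:
  name: loki
  namespace: loki-system
spec:
  # The single t-shirt size mentioned in the call; the operator derives the
  # number of distributors, ingesters, and queriers from it.
  size: 1x.small
  storage:
    secret:
      name: loki-object-storage   # assumed Secret holding object-store credentials
  storageClassName: standard
```

In this model, changing `size` would be the main scaling knob, which matches the later discussion about there being no per-component knobs.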
B
Red Hat's investment in this was to make it easy to run Loki in OpenShift. The operator is not OpenShift-specific, though; it has some flags you can provide that will allow it to get some additional features out of OpenShift, but it will run in any Kubernetes environment.
B
So thank you to Perry and the folks that put a tremendous amount of work into that and then were gracious enough to contribute it to the project, hoping for larger adoption and to build a community around it to support it. And now we have this question, and I don't know that we can answer it today, because we haven't talked about this a lot internally yet, but: how could we use this operator in conjunction with the Helm charts?
B
Also, right, a Helm chart can deploy an operator. We could remove most of the Helm charts and replace them with one that deploys the operator. Likewise, should the operator now be updated to support the simple scalable (SSD) deployment model? What's compelling about that is that the API for the operator is this size, like 1x.small; it could be 1x.ssd, right? And what's nice about operators is you could make a change from 1x.ssd to 1x.small, and it could actually do the work to convert from an SSD to a microservices deployment. These are things you can't do very easily in Helm, or jsonnet for that matter.
B
So I'm going to answer your question by creating more complexity, which is: I don't know what we should do with all of our Helm charts. We have a lot, and they're largely community-maintained. We just introduced the one that you're working on, Trevor, for SSD, and I think we've got to have a sit-down and decide where we want to put our efforts. The Helm charts are not high-maintenance per se, but, it being Helm, it is in some ways high-maintenance, because lots of people come along with a use case for something that we've not templated, and then there's a PR. So it's a matter of keeping track of emerging PRs, and as the folks at Grafana we don't get a lot of time, because we don't use Helm ourselves. So we're going to start discussions around whether or not the operator makes sense for us to use; we famously use the jsonnet deployment and all that.
A
So that was going to be my follow-up question: I would have no interest in creating a Helm chart that deploys the operator if we at Grafana Labs are not interested in starting to use the operator, right? Because then we've just increased the surface area of things to support, basically. Unless there's going to be considerable open-source community support of the operator, in which case maybe that would change the story a little bit.
B
The driver for upstreaming that into the Loki project was to allow building a bigger community around it. Getting it into the project makes it more accessible for keeping track of and merging PRs. The goal is to grow its adoption, and, this is my opinion, it would be nice, if we can make this work, to use the operator to basically deprecate all the existing Helm charts and jsonnet and replace them with very, very thin wrappers. Because we can't escape Helm, right? Obviously tons of people use it; it needs to exist; it's the number one way people deploy in Kubernetes. We personally likely won't switch to Helm because of our massive jsonnet infrastructure.
A
Yes, I guess that's where I would want community feedback: how would the community feel about that? Because if we went this direction, I would want to deprecate all our Helm charts and just say we have one Helm chart, it's the operator, it's the only thing we support, and the only thing we're going to contribute to going forward. How would the community feel about that?
C
What I've definitely recognized in a similar kind of Helm-to-operator conversion is that what were your Helm PRs suddenly become Go code, and that kind of thing has a very big barrier to entry, in the sense of: I want to add this label, I want to add this annotation, and so on. And you can imagine that for a lot of fields in a lot of object types in Kubernetes. So that's definitely a bit of a fear that I would have there.
B
Yeah, I share a similar question. I don't have much operator experience, but the API that it exposes is intentionally simple, and when I say API, it's basically the resource definition that is used by the controller to deploy it.
B
It's
intentionally
simple,
and
it
then
therefore
has
the
challenge
that
helm
has,
which
is
if
something
that
you
want
to
do
is
not
exposed
already
in
that
api.
You
have
to
pr
the
project
and
we
all
have
to
agree.
We
want
that,
and
this
is
what
helm
ends
up
you
know
creating.
Is
you
like?
I
want
to
change
the
security
context
on
the
distributors
and
nobody
templated
that
I
have
to
open
a
pr
for
it
and
and
template
it.
B
Jsonnet has no such requirement, because you can manipulate anything anywhere at any time; pros and cons to that, for sure. But the operator would be more Helm-like in that regard, and for our own sanity we would probably have to say no to some things people might want to do with it. They might want to expose things through the API that maybe we don't want to. I'm not really sure yet; like I said, my expertise here is limited.
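The "manipulate anything anywhere" property of jsonnet can be sketched as an overlay; the `loki.libsonnet` import and the field paths here are hypothetical, just to show how any field can be patched without upstream support:

```jsonnet
// Hypothetical Tanka/jsonnet overlay: patch a field (the distributor's
// security context) that the upstream library never exposed as a parameter.
local loki = import 'loki/loki.libsonnet';

loki {
  distributor_deployment+: {
    spec+: {
      template+: {
        spec+: {
          securityContext: { runAsNonRoot: true },
        },
      },
    },
  },
}
```

With Helm or the operator's API, the same change would first require the field to be templated or exposed upstream.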
C
So
definitely
feel
like
maybe
ses
have
a
have
quite
a
bit
of
a
better
insight,
because
I
think
like
when
I
think
of
some
stories
like
adapt
the
helm
shot
to
openshift,
which
is
kind
of
still
very
possible.
I
guess
without
too
much
go
or
programming
skills,
but
when
it
comes
to
an
observator,
I
think
that
will
be
requiring
way
more
insight.
I
guess
into
the
operator
itself,
and
so
I
think
like
it
would
be
good
getting.
B
I
can
say
that
the
reason
that
red
hat
built
an
operator
and
the
reason
openshift
uses
operators
is
it's
largely
driven
by
being
simple,
to
deploy
and
run
like
it's
designed
to
hide
a
lot
of
the
complexity.
So
if
it's
done
well
and
it
works,
it
should
be
easier.
You
know,
I
know,
opinions
vary
a
lot
on
operators
and
you
know
what
you
tend
to
think.
Maybe
trade
there
was
is,
it
might
be
easier
to
deploy
and
run,
but
you
removed
some
of
the
knobs.
B
That
would
be
easy
to
tune
in
something
like
jsonnet
now
like
say,
I
wanted
to
add
a
couple:
more
distributors
like
there's,
not
a
knob
for
that.
Currently
you
know
you'd
have
to
change
the
size
of
the
cluster
to
be
bigger,
so
you
know
we
run
like
our
job
is
to
run
loki,
so
we
tend
to
you,
know,
tune
it
at
that
level
and
that
operator
isn't
built
for
that
kind
of
control.
Right
now.
So
do
we
want
to
add
that?
B
B
These are the kinds of things we've got to figure out going forward. But, to Trevor's original point, we have a lot of Helm charts now, and that doesn't seem great. We've got to figure out what we want to do: either reduce that number or just leave them be. I mean, if they're working for people, we can just keep merging PRs.
A
Well, there are a couple of things that have come up. One is this sort of crossroads right now: do I go create another chart? Because we want to make SSD available to enterprise customers as well as open-source users. So that's the first question. The second one is that if you look at our Helm charts, the three or four that we have, they don't consistently follow the same patterns. For example, some of them take the config as a string, and some of them take the config as a struct. The ones that take the config as a struct reach into it to pull out values, like ports, to use in other places in the templates, whereas the ones that take the config as a string hard-code those ports in other places, or expose new top-level variables for those ports.
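The two patterns being contrasted can be sketched as `values.yaml` fragments; the key names are illustrative, not taken from any specific chart:

```yaml
# Pattern 1: config as an opaque string. Templates can't read into it, so a
# port needed elsewhere is hard-coded or duplicated as a top-level value.
loki:
  config: |
    server:
      http_listen_port: 3100
gateway:
  lokiPort: 3100        # duplicated by hand, must be kept in sync
---
# Pattern 2: config as a struct. Templates can reach into it, e.g.
# {{ .Values.loki.config.server.http_listen_port }}, so nothing is duplicated.
loki:
  config:
    server:
      http_listen_port: 3100
```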
A
It just seems as though having a consistent vision, whether it be Helm, operator, whatever it is, would be the first step in deciding how to improve consistency and user experience across all of these different things, yeah.
B
My personal opinion here is that I hope the operator can save us. The scenario I'd like, and this is my opinion based on not a lot of time spent on this yet, is that it lets us focus everyone's efforts on maintaining the operator, and you just choose your favorite wrapper to deploy it. I'm not sure yet how that's going to play out, because it's all very new, but we can chat about that too, Trevor, on what we actually do as internal next steps. I mean, I don't want to do something that would end up being wasted effort, but we also want people to be able to use the simple scalable deployment model, because it does make it easier to run Loki at moderate scale with a lot less complexity. So it may still be worth making a Helm chart even if we know we're not going to keep it forever, because I don't think the operator will be ready to cover all these use cases in the next month or two; it's going to take longer than that.
A
So
I
have
found
I
deploy
all
when
working
on
how
I
deploy
it
all
using
just
on
it
and
tonka,
because
there's
just
things
you
can't
do
in
helm
that
I
need
done
to
like
and
granted
like.
I
think
that
my
use
case
in
development
is
different,
because
I'm
spinning
up
a
whole
world
right.
You
know
with
a
grafana
and
some
like
log
generators
and
stuff
too,
but
but
yeah.
It's
like
I
mean
that's
sort
of
what
yeah
so
anyways.
B
Yeah
anybody
have
any
other
feedback
or
thoughts
or
opinions.
We
don't
have
a
lot
on
the
agenda,
so
we
can
talk
about
home
the
whole
time
if
we
want,
but.
C
Sorry, again, I was thinking about what other opinions are on that kind of simple scalable deployment. I'm definitely of the argument that if we don't run it ourselves somewhere we care about, in production, then I think it will not get the kind of love that it needs, so we will not discover the bugs. And obviously it will be operationally quite challenging for us if we have to support those different models in our SaaS, but I think that is probably the only way we get a similar quality no matter which model you use. I don't know if that has been discussed recently.
A
So
I
actually
have
like.
Oh,
I
am
I'm
in
the
process
of
developing
a
plan
for
that
exact
issue
christian
I
I'm
still
working
on
the
design
of
doc
and
I'm
unfortunate
a
little
behind
where
I'd
like
to
be.
But
yes,
I
would
like
for
us
to
have
a
simple
scalable
pod
in
prod
and
to
basically
t
traffic
off
to
it
so
using
something
like
I
think
in
the
gateway
or
something
there's
some
sort
of
like.
A
I
want
to
say
it's
like
an
istio
thing,
or
so
I
don't
know,
but
it
basically
can
like
replicate
traffic
and
then,
similarly,
we
have
the
query,
t
functionality
that
we
could
use
on
the
query
side,
and
so
the
idea
would
be
you
know,
maybe
picking,
maybe
going
by
by
tenant
or
something
to
to
control
the
load
to
some
reasonable
amount.
That
you
know
would
not
be
prohibitively
expensive
and
would
also
not
be
too
big.
A
But
that
way
we
can
actually
compare
query
results
of
the
two
different
environments
and
make
sure
that
that
we
we
yeah,
so
that
we
can
stand
by
our
statement
right
that
it's
production
ready.
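Traffic mirroring of that sort does exist in Istio; a sketch of what the write-path tee might look like (the host names and percentage are hypothetical):

```yaml
# Hypothetical Istio VirtualService mirroring a slice of write traffic to a
# simple scalable (SSD) Loki deployment for comparison against production.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: loki-write-mirror
spec:
  hosts:
    - loki-gateway
  http:
    - route:
        - destination:
            host: loki-distributor      # primary (existing) write path
      mirror:
        host: loki-ssd-write            # SSD deployment under test
      mirrorPercentage:
        value: 10.0                     # mirror roughly 10% of requests
```

Istio discards the mirrored responses (fire-and-forget), so the read-side comparison would still need something like the query-tee mentioned above.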
B
We run many multi-terabyte clusters, right? We offer a hosted service, whereas as an enterprise you probably don't have that many tenants, or you have different-size volumes. We spend a lot of time tweaking, optimizing, and playing around, and there are things we want configurability for; SSD was designed to remove all of that, because most people don't want or need it, or at least we think so. What Trevor is suggesting, Trevor's idea here that I like a lot, is that we're going to take some portion of real traffic, mirror it in terms of reads and writes, and try to run it and validate. We're just going to have to do it in a way that we can control our cost, because we're basically just doing it to validate other deployment modes. But the problem is there are a lot of ways to run Loki. We want to cover the most common things, the things that we support when we do enterprise contracts, and the things that we help other people run. But at the end of the day, as a community project, we are always going to rely on our community to help us find and fix bugs for use cases that are hard for us to recreate. I don't have a perfect solution for this, but I do like the idea of at least dedicating resources for some portion of traffic.
A
I think the other part to this, too, is like what I just said about leveraging the community: making it easier for people to start with SSD and then having them tell us the point at which they had to switch over, or something, right? Like, did you run SSD in a dev environment and it fell over there, and so then you were like, oh well, we'd better go to microservices before we go to prod? Or did you...?
B
I'll
put
a
link
to
the
where
the
operator
lives
in
loki
repo,
there's
some
other
links
to
documentation.
There
definitely
check
that
out
and
yeah
come
find
us
in
the
loki
slack
channel
or
on
community.grafano.com.
B
If
you
have
opinions
on
what
you'd
like
to
see
about
operators
or
home
or
any
feedback
is
well
appreciated,
danny
is
not
here.
So
the
next
thing
is
the
timing
of
this
call.
B
We've
talked
about
it
a
little
bit
before,
but
I
think
what
we
briefly
discussed
today
is
maybe
running
two
calls
every
first
week
of
the
month
and
have
one
in
a
time
zone,
that's
more
accessible
to
the
you
know
larger
part
because,
like
with
a
single
call,
it's
there's
certainly
going
to
be
time
zones
that
this
is
inconvenient
for
it's
already
inconvenient
for
most
folks
in
the
eu,
so
we
at
least
want
to
make.
So
we
got
a
lot
of
eu
both
employees
and
workers.
E
If there's usually not very much content, would it not make more sense, at least to start, to just rotate: one month have it at a North America-friendly time, and the next month have it at a Europe- and Asia-friendly time? That's at least what the Prometheus community call does, and has for quite a while.
B
Yeah,
I
I
don't
have
a
huge
opinion
there.
I
think
I'd
prefer
do
running
two
of
them
because
it
basically
just
guarantees
that
most
people
can
only
attend
once
every
two
months.
If
you
do
them
every
other,
since
it's
already
like
so
it
just
feels
like
that.
Cadence
would
be
like,
although
you
know,
I'm
anticipating
lots
of
folks
to
show
up
and
we
we
need
to
do
more
to
promote
the
call
which
we're
kind
of
like
yeah.
I
don't.
F
Sure. Hiroki wrote a Loki book, and it's very good. I haven't finished it yet, but I've read a lot of it already.
F
We
should
even
consider
pulling
this
into
a
as
an
official
project
if
rogue
is
interested,
because
I
do
think
it's
that
good
and
we're
in
pretty
sore
need
of
this
type
of
thing
turns
out
that
all
we
had
to
do
was
say
eventually
we'll
write
this,
for
you
know
a
year
and
a
half,
and
then
someone
wrote
it
for
us
way
to
go
us.
B
What
I
really
like
about
this
is
one
of
the
things.
That's
you
know
my
my
three
year
anniversary
with
grafana
and
loki
is
coming
up
soon,
which
is
pretty
incredible,
but
what
that
also
has
created
is
a
scenario
where
I
find
myself
like
not
very
good
at
representing
people
that
are
new
to
the
project
or
new.
To
the
like.
B
B
So
what
hiroki's
book
is
nice
is
it
does
tell
the
story
as
someone
who
started
using
loki,
maybe
about
six
months
ago,
and
if
you
go
to
the
part,
you
know
the
tips
at
the
end,
there's
the
failure
stories.
B
Oh,
that's
a
cool
like
like
the
the
book
outlines
itself
well
in
terms
of
like
talking
about
caching,
like
the
things
that
that
you
know
he
had
to
learn
along
the
way
that
we
don't
document
very
well
that
he
filled
in
the
gaps
for
which
is
awesome,
because
that
turns
out
it's
hard
for
us
to
do
sometimes
because
at
least
for
me
personally,
like
ten,
when
I
learn
a
thing,
I
forget
that
I
didn't
know
it
at
one
point,
and
so,
if
I
didn't
document
it
at
the
time,
you
know
now
it's
just
a
thing
that
I
knew
that,
like
I
didn't
remember,
that
was
hard
to
figure
out.
B
So
there's
a
lot.
We
can
pull
out
of
this
into
our
docs
or
just
link
this
from
our
talks
and
have
rookie
continue
to
do
an
awesome
job
of
so
an
interesting
sort
of
problem
of
like
you
know,
when
you're
invested
in
a
thing
for
as
long
as
we've
been
working
on
it,
it's
a
bit
hard.
B
It's
just
a
good
use
case.
If
you
can't
see
the
forest
for
the
trees,
I
don't
really
like
that
analogy,
so
it
doesn't.
But
I'll
use
it
so
yeah
thanks
to
hiroki
and
also
for
a
number
of
just
fantastic
contributions
around
the
project
lately
and
yeah
definitely
check
it
out
and
similarly
like,
if
you're
new,
to
loki
or
you're
using
loki
and
you're,
you
know
we
get.
I
mean
a
tremendous
amount
of
contributions
to
the
docs
already
that
people
fill
in
missing
gaps.
B
But
you
know
let
us
know
where
the
big
holes
are.
I
mean
we
probably
know
already
some
of
them,
but
sometimes
it
you
know
things
that
seem
obvious
to
us.
We
forget
are
non-obvious
to
everybody
else.
D
Yes,
we
are
pretty
close
to
release
a
new
loki
version,
mostly
because
a
few
issues
that
we
encountered
in
the
in
the
past
month.
So
it's
good
to
to
be
aware
of
that.
B
I
think
we'll
probably
do
this
very
early
next
week,
just
a
patch
fix
on
2.4.1,
but
a
couple
things
we
pointed
out
in
the
community
called
last
time
where
some
config
issues
that
we'd
like
to
enable
to
make
sure
that
the
parallelization
works
out
of
the
box.
B
The
way
we
wanted
it
to
there's
a
bug
with
the
way
common
config
is
applied
in
a
couple
scenarios
that
we
found
so
largely
to
clean
up
stuff
like
that,
so
it
will
be
worth
upgrading
to
if,
if
you're
in
the
2.,
I
wish
everybody
should
be
2.4
always
be
at
the
latest.
Lookie
only
ever
gets
better
with
time.
So
I
always
recommend.
B
Oh,
let's
close
it,
nice
short
and
sweet
well
welcome
to
the
new
year-
everybody
I
didn't
get
it
done
in
time.
I
think
I'd
probably
try
to
do
it
as
a
blog
post
of
the
issue
that
we
created
last
year
of
what
people
want
from
loki
in
2021,
which
turned
out
to
be
well
used,
and
we
actually
did
a
bunch
of
the
stuff
that
was
on
there
and
community
did
a
bunch
of
stuff.