From YouTube: Current State and the Future of Cortex - Alvin Lin & Alan Protasio, Amazon Web Services
Description
Don’t miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands from April 17-21, 2023. Learn more at https://kubecon.io The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
Current State and the Future of Cortex - Alvin Lin & Alan Protasio, Amazon Web Services
Speakers: Alvin Lin, Alan Protasio
Cortex provides horizontally scalable, highly available, multi-tenant, long term storage for Prometheus. We will walk through what happened to Cortex throughout 2022, and what's next.
A
Today we are going to talk about Cortex. Specifically, we're going to talk about what has happened since the beginning of this year, and what we have planned for Cortex. So here's the agenda. Oh, actually, let me introduce myself first: my name is Alvin, I'm one of the software development managers at AWS. I joined as a Cortex maintainer about a year ago. I started out looking into the query path a lot, so I have a little bit of technical knowledge there, but nowadays I mainly do releases and some chore work, like upgrading the Go runtime to 1.19.2 and the usual security patching, you know. And here with me is Alan, who is the actual brains of Cortex, and I'll let him introduce himself.
B
Hey, my name is Alan. I also work for AWS, on Alvin's team. I've been working with Cortex for the last two years, working on scalability and availability, and I became a maintainer in the last year. Yeah, trying to make Cortex a better place.
A
Cool, thanks Alan. So Alan does most of the work; I essentially do nothing and just tell Alan to do the work. And also with me is Friedrich. Of course, he's not here with me physically; it's very unfortunate he couldn't make it. Friedrich is actually a very long-time Cortex user. He deploys Cortex clusters for Adobe, and he runs, I think the number he told me, hundreds of clusters. And if you were ever in the CNCF Cortex Slack channel and you asked a question, then Friedrich is the person who will answer it. He is really active in the channel for any questions regarding configuration, optimization, and all that.
A
Last but not least, we also have a maintainer from Germany, and his name is Nicholas. I didn't want to go onto LinkedIn and search for his face like I was stalking him, so I just grabbed his GitHub profile picture; that's what his GitHub profile picture looks like. He's the maintainer for the Helm chart and he's very active. If you feel, hey, the Helm chart is missing some configuration I need, then Nicholas will always be there to merge the PR. Cool, so on to the agenda.
A
So I would imagine not a lot of people know what Cortex is and how it works, so we will do a quick introduction to Cortex and then a little bit of an architectural deep dive. Then I want to introduce three exciting features coming to Cortex in the next release, which will be 1.14, and I'll give one operational tip. If you're running Cortex right now, you should do this when you get back to work; it's very useful. We run a lot of Cortex clusters at AWS and that's one of the tips that saves us a lot of memory. Then we'll do a little look back at what happened in the Cortex 1.13 release.
A
All right, so what is Cortex? Cortex is a horizontally scalable, highly available, multi-tenant, long-term storage for Prometheus. So what does that mean? When Prometheus was initially designed, it was designed to be installed on a single machine, scrape a cluster for metrics, and store them onto a local drive.
A
The problem is that your local drive cannot be as big as you would like, so you usually end up with a fairly short retention period. Cortex tries to solve that problem, but in order to be able to store a lot of metrics, millions or billions of metrics, you need scale. So essentially what Cortex does is take bits and pieces of Prometheus and turn them into microservices.
A
So you have a microservice for receiving writes, a microservice for ingestion, a microservice for moving data to the long-term storage, and microservices for reading data from the long-term storage. So saying Cortex is the long-term storage is a little bit misleading, because it's not just storage; it is actually a system that allows querying. And Cortex is a CNCF project; it's incubating and Apache 2.0 licensed.
A
So you can do anything with it. There are a bunch of contributors, about 250 of them, lots of watchers, 5K stars, and so far we have about 5,000 commits, and Alan here is making the commit count go up every single day. So it is actually a very active project.
A
This is a very high-level view, a bird's-eye view of Cortex, and essentially this is a typical use case of Cortex. You have a bunch of Prometheus servers, right: one Prometheus server for cluster A, one Prometheus server for cluster B, and so on and so forth. What you can do is configure each Prometheus to remote write into Cortex; all of them can just remote write to Cortex.
A
If you want to differentiate between clusters, you can add a label during remote write to differentiate the different clusters writing into Cortex. Then you attach a dashboarding tool, like Grafana or whatever dashboard you like, and you get a global view of your metrics. You don't have to go to Prometheus A to look at cluster A's metrics and Prometheus B to look at cluster B's metrics. And because remote write is a fairly stable protocol in Prometheus, you don't have to use only Prometheus.
A
You can also use tools like OpenTelemetry to send metrics to Cortex. Or, if you're a little bit more adventurous and you like writing code, you can actually write your own code in Go, in Java, in C++, whatever you want; just make sure the message is in the remote write format. Then you should be able to send it to Cortex, and you can send a lot of metrics to Cortex; it will be able to handle it.
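As a rough illustration of that last point, here is a minimal sketch of pushing a sample to Cortex in the Prometheus remote write format from Go. The endpoint host, tenant ID, and the extra "cluster" label are placeholders you would adapt to your own setup, not anything specific from the talk.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"

	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

func main() {
	// One series with a single sample; the "cluster" label is how you could
	// differentiate clusters, as mentioned above.
	req := &prompb.WriteRequest{
		Timeseries: []prompb.TimeSeries{{
			Labels: []prompb.Label{
				{Name: "__name__", Value: "demo_users_total"},
				{Name: "cluster", Value: "cluster-a"}, // placeholder label
			},
			Samples: []prompb.Sample{
				{Value: 42, Timestamp: time.Now().UnixMilli()},
			},
		}},
	}

	raw, err := req.Marshal()
	if err != nil {
		panic(err)
	}
	// Remote write bodies are snappy-compressed protobuf.
	compressed := snappy.Encode(nil, raw)

	// Placeholder host; Cortex serves remote write on the distributor at /api/v1/push.
	httpReq, err := http.NewRequest(http.MethodPost,
		"http://cortex.example.com/api/v1/push", bytes.NewReader(compressed))
	if err != nil {
		panic(err)
	}
	httpReq.Header.Set("Content-Encoding", "snappy")
	httpReq.Header.Set("Content-Type", "application/x-protobuf")
	httpReq.Header.Set("X-Prometheus-Remote-Write-Version", "0.1.0")
	httpReq.Header.Set("X-Scope-OrgID", "tenant-1") // multi-tenancy: which tenant this write belongs to

	resp, err := http.DefaultClient.Do(httpReq)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```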
A
So next I want to dive a little bit deeper into the Cortex architecture.
B
Well, I will try my best here. Yeah, as we can see, the community says that Cortex can scale a lot, you can send it a bunch of metrics, it's highly available, but it's complex to set up. This is what Cortex used to look like. In the past year or so we have been deprecating some dead code and some deprecated storage, removing them from the code base, and don't worry, this is not what Cortex looks like anymore.
B
It looks more like this. This is a typical Cortex deployment that you can find. You can see, in yellow there, that's the write path, where remote writes come from Prometheus, and in green there, that's the read path. We still have a number of components; I will try to explain what each one of those components is, and hopefully it all makes more sense at the end of the talk. I will start with the write path.
B
So what happens when Prometheus sends a write request to Cortex? The first component that the request reaches is called the distributor. What is the distributor? The distributor is basically a gateway that forwards the request to the ingesters. Ingesters are the storage nodes; the distributor just forwards the request to them. But why do we need the distributor? The distributor does per-tenant sharding and replicates your data, and optionally you can set up things like shuffle sharding, which improves your tenant isolation, or zone-aware replication. So the distributor is the guy that makes sure one copy of your data is sent to each availability zone. It also does things like rate limiting and HA deduplication: if you have Prometheus servers deployed in HA mode, the distributor is the guy that receives the same sample twice, keeps one, and throws the second one on the floor.
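To make the sharding and replication idea concrete, here is a small conceptual sketch in Go of picking replicas on a hash ring. It is only an illustration of the idea; the real distributor uses Cortex's ring with per-tenant shuffle sharding and zone awareness, none of which are shown here.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// ingesterRing is a toy stand-in for a hash ring: each ingester owns a token,
// and a series is written to the N ingesters that follow its hash on the ring.
type ingesterRing struct {
	tokens    []uint32
	ingesters map[uint32]string
}

func newRing(names []string) *ingesterRing {
	r := &ingesterRing{ingesters: map[uint32]string{}}
	for _, n := range names {
		t := hash(n)
		r.tokens = append(r.tokens, t)
		r.ingesters[t] = n
	}
	sort.Slice(r.tokens, func(i, j int) bool { return r.tokens[i] < r.tokens[j] })
	return r
}

// replicasFor returns the replicationFactor ingesters responsible for a series key
// (conceptually the key is derived from the tenant plus the series labels).
func (r *ingesterRing) replicasFor(seriesKey string, replicationFactor int) []string {
	h := hash(seriesKey)
	// Find the first token >= h, then walk clockwise around the ring.
	start := sort.Search(len(r.tokens), func(i int) bool { return r.tokens[i] >= h })
	var out []string
	for i := 0; i < replicationFactor; i++ {
		t := r.tokens[(start+i)%len(r.tokens)]
		out = append(out, r.ingesters[t])
	}
	return out
}

func hash(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

func main() {
	ring := newRing([]string{"ingester-1", "ingester-2", "ingester-3", "ingester-4"})
	// With replication factor 3, each series ends up on 3 of the 4 ingesters.
	fmt.Println(ring.replicasFor(`tenant-1/http_requests_total{job="api"}`, 3))
}
```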
After the distributor, we have the ingesters. The ingesters are basically multi-tenant TSDBs. Remember Alvin said that what Cortex does is take Prometheus and split it into different microservices; Prometheus is basically a TSDB plus a query engine, and the ingester is the house of the TSDB. So the first time an ingester receives a sample for a tenant, it will create the TSDB for that tenant.
B
The TSDB instance keeps appending to that, and after a configurable period, typically after two hours, it ships those TSDB blocks to the block storage, and the block storage can be Google Cloud Storage, S3, any object storage that you want.
B
We have support for Azure as well. But now we can see that as I was sending data, the data that was on disk was replicated, one replica sent to each availability zone, and now I have all this data duplicated on S3. That is where the compactor comes in. The compactor takes all those blocks, compacts and compresses them, and makes sure this data is in the optimal shape to be queried. It also does things like housekeeping.
B
So if you configure your retention period to one year, the compactor is the guy that starts deleting blocks older than one year, and things like that. Again, all those components can be shuffle-sharded and deployed with zone awareness, so you get AZ tolerance and tenant isolation.
This is basically the write path, but then you have to query your data, right? So on the read path, the first component is called the query frontend, which does a similar thing to what the distributor does, but for queries: it shuffle-shards and makes sure that queries for a given tenant are spread across queriers. But it does more than that. It does QoS, for instance, making sure that one tenant is not starving the others, and it does results caching. Imagine you have a dashboard that refreshes every minute: instead of recomputing and re-executing the whole query, the query frontend just fetches the delta from the last refresh to the refresh right now. It also does vertical and horizontal query sharding, which I think Alvin will talk a little more about, but it's basically trying to split one query into multiple smaller queries so they can run in parallel on multiple queriers.
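As a sketch of the "split into smaller queries" idea (the horizontal, by-time part), here is a minimal Go helper that splits a range query into day-aligned sub-ranges that could be executed in parallel and merged. It is only illustrative; the real frontend also aligns splits with the results cache and applies per-tenant limits.

```go
package main

import (
	"fmt"
	"time"
)

// timeRange is one sub-query interval produced by splitting a long range query.
type timeRange struct {
	Start, End time.Time
}

// splitByInterval cuts [start, end) into pieces no longer than interval,
// aligned to interval boundaries so cached partial results can be reused.
func splitByInterval(start, end time.Time, interval time.Duration) []timeRange {
	var out []timeRange
	for cur := start; cur.Before(end); {
		// Align the cut point to the next interval boundary.
		next := cur.Truncate(interval).Add(interval)
		if next.After(end) {
			next = end
		}
		out = append(out, timeRange{Start: cur, End: next})
		cur = next
	}
	return out
}

func main() {
	start := time.Date(2022, 10, 24, 6, 0, 0, 0, time.UTC)
	end := start.Add(48 * time.Hour) // a two-day range query
	for _, r := range splitByInterval(start, end, 24*time.Hour) {
		fmt.Println(r.Start, "->", r.End) // each piece can run on a different querier
	}
}
```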
B
After the query frontend we have the querier. The querier is the house for the PromQL engine; we run the Prometheus PromQL engine in that component. It basically receives the query request, fetches data from the ingesters for recent data or from the store gateways for historical data, merges all of it, evaluates the query, and returns the result back to the query frontend and to the customer. It does things like rate limiting as well, and some limits to prevent out-of-memory errors and things like that.
B
Now we have the store gateway. What is the store gateway? It is the gateway for the store. Basically, what this guy does is keep an up-to-date view of the block storage. Every time a new block is received or a new block is compacted, the store gateway discovers it and advertises it to the querier, so that block becomes queryable. It also downloads parts of the index, the block index, to make sure we can do faster time series lookups when you are running queries.
B
So basically, this is a normal Cortex deployment. Optionally, you can also run rulers and alertmanagers, and those components are basically a multi-tenant version of the Prometheus ruler and Alertmanager. Again with zone awareness, again shuffle-sharded. Rulers basically evaluate recording and alerting rules and send alerts to the alertmanager; the alertmanager will dedupe, group, and send the alerts to the right destination, like Slack or PagerDuty, you name it.
B
Basically, this is what Cortex is right now; those are the components. Hopefully it makes more sense after that, and now it's back to Alvin.
A
Yeah, definitely, I think it makes more sense than the diagram we showed at the beginning. All right, cool. So this is a list of the companies that are currently using Cortex, running Cortex clusters. Cool. So now I want to introduce the three features I was talking about, and the first feature is the OpenTelemetry bridge for tracing.
A
If you are the operator of a Cortex cluster, you will like this feature. This feature essentially allows you to send traces to different destinations, and in the graph here we have the example of sending them to AWS X-Ray. The story behind this feature is that one day I was just, you know, writing the normal status report that managers write every day, and Alan came into my office and said, hey Alvin, I think there's a bottleneck between the querier and the query frontend, and I don't know how to debug it.
A
So I got back to my work, and a few hours later Alan came in with this screenshot, exactly this one: hey Alvin, look, I got it working, I got it working with X-Ray, and now I can see there's a bottleneck between the query frontend and the querier. It's because the queue is overloaded; there's a queue between them and it's overloaded. I was like, oh cool, this is awesome.
A
How did you do it? And he said, oh, I integrated it with OpenTelemetry. And I said, okay, cool, awesome, should we open source this? He said, of course, why not? So this is hours of work from Alan. It is awesome. If you ever run a cluster and you have a problem, use this feature; it will help you troubleshoot.
A
We have actually used the traces multiple times to find issues and do optimizations. Even for the query vertical sharding, Alan used it to analyze that, hey, it is actually boosting performance, which I'll talk about a little bit later. So with the OpenTelemetry support you can send traces to multiple destinations, and that's the major selling point: to Jaeger, to Zipkin, to Kafka, to AWS X-Ray, and there's a lot more.
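For readers who have not used OpenTelemetry tracing from Go before, here is a generic, minimal setup sketch using the OTel Go SDK with an OTLP exporter pointed at a collector, which can then forward to X-Ray, Jaeger, Zipkin, Kafka, and so on. This is not Cortex's actual bridge code, just an illustration of the pattern; the collector endpoint is a placeholder.

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// Export spans over OTLP/gRPC to a collector; the collector decides the
	// final destination (X-Ray, Jaeger, Zipkin, Kafka, ...).
	exporter, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("otel-collector.example.com:4317"), // placeholder
		otlptracegrpc.WithInsecure(),
	)
	if err != nil {
		log.Fatal(err)
	}

	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	defer tp.Shutdown(ctx)
	otel.SetTracerProvider(tp)

	// Any instrumented code now records spans through the global provider.
	tracer := otel.Tracer("example")
	_, span := tracer.Start(ctx, "frontend-to-querier")
	span.End()
}
```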
A
All right, cool. The next feature I want to talk about is the partitioning compactor. I haven't finalized the name for this feature; I promise I'll work with the creator of this feature to come up with a better name. But for now, what is the partitioning compactor? Prometheus has a limitation per block: Prometheus TSDB is essentially a bunch of blocks, and each block has a limit of 64 gigabytes of index size, because the index uses a reference that can only address up to 64 gigabytes (roughly, 32-bit references to 16-byte-aligned offsets). Sure, we can fix that problem, but it's a little bit hard to fix and might take a long time; imagine switching from 32-bit Windows to 64-bit, it will take a while. We don't want to wait. We can wait, but we don't want to wait. So the situation here is that currently in Cortex, if you try to merge two blocks whose index sizes are close to 64 gigabytes, you merge them together and the result is 100 gigabytes.
A
Then Cortex would choke. What you have to do today is upload a no-compact marker to the source blocks, the blue ones, and tell it, hey, just don't compact these. That's a workaround, not a fix. Because if you look at the sizes over there, 63 plus 63 is bigger than 100, right? But when you go through the compaction process, the compactor does symbol table and index deduplication, which reduces the index size by quite a bit.
A
So what we're doing is this: the new compactor will say, okay, I'll partition the metrics in such a way that I will still end up with two blocks, but each of them will have a smaller index, so that it doesn't hit that limit. We'll figure out how many partitions we need; we might end up partitioning, maybe, three or four blocks into three, or two into two, like in this example.
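A rough sketch of that idea, assuming a simple size-based heuristic (the real work-in-progress design in Cortex may differ): estimate the merged index size, derive how many output partitions keep each index under the limit, and route every series to a partition by hash so the split is deterministic.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

const maxIndexBytes = 64 << 30 // 64 GiB TSDB index limit

// partitionCount picks how many output blocks are needed so that each
// partition's index stays under the limit (a naive even-split estimate).
func partitionCount(estimatedMergedIndexBytes int64) int {
	return int(estimatedMergedIndexBytes/maxIndexBytes) + 1
}

// partitionFor deterministically assigns a series to one of n output blocks.
func partitionFor(seriesLabels string, n int) int {
	h := fnv.New64a()
	h.Write([]byte(seriesLabels))
	return int(h.Sum64() % uint64(n))
}

func main() {
	// Two ~63 GiB indexes would merge (before dedup) into something too big
	// for one block, so split the output into 2 partitions.
	n := partitionCount(100 << 30)
	fmt.Println("partitions:", n)
	fmt.Println(`series {region="eu"} ->`, partitionFor(`{region="eu"}`, n))
	fmt.Println(`series {region="na"} ->`, partitionFor(`{region="na"}`, n))
}
```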
A
With this, another possibility starts to show up: you can actually do the compaction in parallel, because your result is two blocks, and each compaction can run on an individual compactor, versus before, with the one big blue block, you needed to do it all in one box. And the result of the new partitioning compactor is that we are actually observing about a 50 percent compaction-time reduction in our lab results, for a single tenant with 200 million time series. We're still doing testing; this is still work in progress.
A
It's almost done. The implementation is there; we are just doing testing and finalization, and then we'll merge the PR. So with this feature you won't have the bigger-than-64-gigabyte issue, your blocks will be optimized, and another side effect of the design is that the algorithm is able to figure out which blocks are the source blocks of a destination block, so you don't have to download all the blocks from S3 into the compactor when you do compaction.
A
That's a potential speedup of the compaction process as well. Okay, the last feature I want to go into is query sharding. This is a very cool feature, in my opinion, and before I go into it I want to give a shout out to the Thanos community for making this possible, because we actually use the Thanos code: it does some query analysis and tells you whether a query is shardable or not shardable. And then we thought, hey, we have Cortex users, why not bring it to Cortex? Cross-project collaboration is always beautiful.
A
So what does vertical mean? When we talk about horizontal sharding, imagine you have a query, say I want to query from day one to day two, so you have a two-day query. Horizontal sharding means that you shard by a time interval: you can split the query into day one to the beginning of day two, and the beginning of day two to the end of day two, then run them concurrently and merge the results. That all works.
A
But what if your query is actually an instant query, where you want to know the data right now? There's no time interval, so how do you shard? You cannot do horizontal sharding because there's no time interval to shard on. That's where vertical query sharding comes in. So I'll do a little bit of a deep dive into vertical query sharding, just because I think it's a very cool concept, and it's the first step towards a more optimized PromQL engine.
A
This feature is already available right now; you can use it. And I forgot to mention that the OpenTelemetry support is actually in the mainline branch. So if you are the type of person who doesn't mind running the mainline branch, please do, you can start using these features. In Cortex we try to keep the mainline stable, because internally at AWS we actually run mainline as well, so we test the stuff before we push it to mainline.
A
So the vertical query sharding is available in the mainline branch, and the speed improvement can be up to 30-plus percent. There's a simple flag to enable it, and there's documentation. I just want to touch a little bit on the documentation: it's not under the v1 guarantee, because it is an experimental feature, so it's not in the main configuration list.
A
So just be aware of that, or you can look at the slide. Now let's go a little bit deeper into how vertical sharding works. Consider these metrics; you don't have to stare too hard, it's fine, it's fairly simple. You have a metric that counts how many users you have per region, and I'm using North America and Europe as the example. When you collect the metrics, you usually have multiple Prometheus instances for scalability; you wouldn't just have one, right? That's why you have part one, part two, part three; redundancy of three is always beautiful and cost effective. Now imagine I want to run this query. Don't worry about staring at this query too hard; I want to know the number of users per region, and I want to get the result: hey, I have 100 users in North America and I have 95 in Europe.
A
Cool. Without vertical sharding, what happens is that the whole query is sent to one querier, and the querier fetches all the data, merges it, and sends it back to the query frontend and back to the user. But if you think about it, hey, I can actually do the aggregation for North America and for Europe separately, and it's color coded, I guess it's blue and purple; yeah, blue and purple, I'll stick with that.
A
So this is how it works without query sharding. The query frontend sends the query to one querier, and the querier goes to the stores. The stores are color coded just to show you that they don't keep the same series in the same store; they are interleaved everywhere. The querier goes to the stores, does everything, aggregates the table, sends it back to the query frontend, and the customer is happy.
A
Well, when it's a little bit slower they will not be happy, but you know. This is what happens with query sharding. It is a little bit more complicated, but what happens now is that the query frontend actually does the splitting: hey, querier one, please do the Europe aggregation; querier two, do the North America aggregation. And querier one actually talks to the store and says, hey, please just give me the Europe data, don't give me the North America data.
A
What this allows us to do is reduce the network traffic, right? Now store one and store two don't have to return all the data to a single querier like before; overall roughly the same data is sent over the wire, but each querier has less data to receive. Querier one will do the aggregation for Europe and querier two will do the aggregation for North America, and now the query frontend has the job of merging those together. But the merge is simple: it simply merges two tables. You have two rows; imagine merging two rows, easy peasy.
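To illustrate that merge, here is a toy Go sketch of the vertical sharding idea: series are assigned to shards by hashing their labels (in the talk's simplified example the split happens to follow the region label, but hashing works for arbitrary label sets), each "querier" computes a partial sum by region over its own shard, and the "frontend" merges the partial tables. It glosses over the real PromQL engine and the shard-selection protocol between querier and store gateway.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

type series struct {
	labels map[string]string // e.g. region, instance
	value  float64
}

// shardOf deterministically assigns a series to one of n shards by hashing its label set.
func shardOf(s series, n int) int {
	keys := make([]string, 0, len(s.labels))
	for k := range s.labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := fnv.New32a()
	for _, k := range keys {
		h.Write([]byte(k + "=" + s.labels[k] + ";"))
	}
	return int(h.Sum32()) % n
}

// partialSumByRegion is what each querier computes over its own shard only.
func partialSumByRegion(shard []series) map[string]float64 {
	out := map[string]float64{}
	for _, s := range shard {
		out[s.labels["region"]] += s.value
	}
	return out
}

func main() {
	data := []series{
		{map[string]string{"region": "na", "instance": "p1"}, 40},
		{map[string]string{"region": "na", "instance": "p2"}, 60},
		{map[string]string{"region": "eu", "instance": "p1"}, 50},
		{map[string]string{"region": "eu", "instance": "p3"}, 45},
	}

	// Frontend splits the query into 2 shards; stores only return each shard's series.
	const shards = 2
	byShard := make([][]series, shards)
	for _, s := range data {
		i := shardOf(s, shards)
		byShard[i] = append(byShard[i], s)
	}

	// Each querier aggregates its shard, then the frontend merges the partial sums.
	merged := map[string]float64{}
	for _, shard := range byShard {
		for region, v := range partialSumByRegion(shard) {
			merged[region] += v
		}
	}
	fmt.Println(merged) // map[eu:95 na:100]
}
```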
A
The customer is happy again; I couldn't find a happier emoji, so that's the one I have. All right, cool. So that's that feature. Now the operational tip: we added the streaming support, I believe, in 1.13, so it's already available. If you want to take away anything from this talk, take this away: enable streaming between the querier and the ingester. It will save you a lot of memory issues. We are actually also adding querier-ingester metadata streaming; I think it's recent, or is it already released? It's not released. It's not released.
A
Okay, we'll make sure it gets released, because when you operate Cortex you run into a lot of limits, and as we find the different limits we need, we go and try to contribute each of those limits.
A
So, looking back, we released 1.13. One of the major features of 1.13 was parallel compaction, and that was the first step to speed up the compactor; it's actually complementary to the partitioning compactor. The partitioning compactor makes the parallelization a little bit better, but they will stay complementary in making compaction faster. We also forked the cortex-tools repository from Grafana Labs into the Cortex project, because it just makes sense: we want to continue to support cortex-tools as the Cortex maintainers. There's no reason to leave it there and say, hey, we don't support that anymore; it's related to Cortex.
A
We want to support it. And I want to shout out to Alan, Friedrich, and also Nicholas for stepping up to become maintainers of Cortex when it needed it the most. Cortex went through some rough times this year, but it's now back in good hands. Cool.
A
So this slide says "a long-time customer" instead of Friedrich's name, because I wrote it before I got his permission to use his name, but he gave me permission, so I can just explain it. Friedrich runs a lot of clusters at Adobe for his internal customers, and it's fast: the ingestion time is less than one second, and Friedrich is being very conservative there; at AWS our ingestion time is usually less than 10 milliseconds.
A
Also, 99.9 percent of the requests take less than 1.3 seconds. 1.3 seconds may sound slow, but if your query is a long-range query, that's fairly long; okay, that's actually not too bad of a response time. And I think Friedrich told me his queries usually span across one to two months, if I remember correctly; I'd have to double check. And in Cortex we take backward compatibility very seriously.
A
So when Friedrich was trying to upgrade from 0.6.1, which is a very ancient version, to 1.13, it was a breeze; it was easy. He got some configuration that had been removed, but you just remove those configurations from your YAML and you're good to go. He upgraded to 1.13.0 and was then able to support 150 million time series instead of 32 million, and the thing that enabled that was actually just the compaction.
A
The parallel compaction actually enabled a lot more time series. And 150 million is not the maximum Cortex can handle; it can handle a lot bigger, but there are a few factors we have to consider, which could be a whole other talk. All right. We also have the voice of the community, which are the features that users have told us they want. The first one is out-of-order samples, which is a lot about accepting older samples for backfilling data.
A
This is already available; it was merged into Prometheus, I think not long ago, so we'll enable that in Cortex soon. The second one is downsampling. There's some discussion about what exact problem we're trying to solve with downsampling: is it faster querying or saving on storage? We don't know yet, so that one is still being discussed. Then there's per-tenant encryption.
A
That is more for the people who care about, hey, sometimes I might accidentally have PII data, personally identifiable information, in my metrics, and I would like to encrypt it at rest. Even though I sent it to a central cluster in the cloud, I want it encrypted at rest. And of course you don't want to have just one key for all the tenants.
A
You know, your external customers or your internal customers, two different teams, would like to be encrypted separately with different keys. And then time series deletion: why that feature is important is, imagine you suspect that, hey, one of my keys is compromised, and you go ahead and revoke access to that key, which results in your metrics not being available anymore for that specific tenant. Then you identify that, oh, actually only two time series are affected.
A
Maybe they have some sensitive number in them for some reason, by some silly mistake. Then you can say, okay, I'm going to delete those two time series and then re-enable the key, and you're back in business. So that's one of the use cases for time series deletion and per-tenant encryption, but there are other use cases as well. There's a lot of reasoning driving these asks, and there's a lot more in our backlog.
A
So please do tell us what you want, through the Slack channel. Talk to us, share with Alan and me, and with Friedrich, who is out there; we are all friendly people. If you say hi to me, I'll say hi back, or I'll send a wave emoji back to you.
A
Oh, you should use a beer emoji, because I like beer. And please go to the backlog and upvote the issues that you want with a thumbs up, or you can use a smiley face, anything, to let us know that, hey, this is important to you. If we don't understand why it is important, then maybe, hopefully, we'll have a conversation on GitHub or in Slack.
A
All right, and a call to action: Cortex currently has three maintainers, and I would like more, because there's so much more I want to do, but we're spread a little thin here. So if more people can contribute, if more people can become maintainers, that would be nice. And help maintain the Helm chart, the Cortex website, and the documentation. I'm the person who's trying to maintain cortexmetrics.io now, but I don't have any artistic sense, right; I need someone who really knows websites.
A
Someone who can help me organize the information a little bit better, and all that. Same thing for the documentation: the Cortex documentation is not bad right now, but it has a lot of room for improvement. So if you find a typo, or you find any information that could be reorganized on the website, that would be awesome. For example, I wanted to update the architecture diagram on the Cortex website, the one we showed previously. And the last one is the automated benchmarking framework, which is available in the Prometheus repo.
A
But not in Cortex, and I really want it, because it would be so nice if, when a PR comes in, I could do a /benchmark and automatically see the performance difference and all that, so you don't have to do it yourself. Eventually I might get to it, but if you cannot wait, please tell me. Oh, I forgot to mention: please engage with what the Thanos community is trying to do. They are trying to create a PromQL engine that's scalable, so please, let's join forces.
A
Then let's create a more scalable PromQL engine that supports sharding. It's going into the arena of how you optimize functions, like SQL optimizers did in the old days. So, thank you, that's everything. Sorry, I'm a little bit over time, but I guess we have some time for questions.
C
I can maybe kick it off. I noticed that the architecture of the query and storage looks very similar to Druid. Was there any exchange of ideas there, or have you guys looked at Druid at all, Druid the database? It's a very interesting, similar structure.
E
F
C
E
B
B
So the distributor, the query frontend, and the queriers are always stateless. For some components we can choose whether or not to run them stateful: ingesters should be stateful, we should never lose your data, but for components like the store gateway you can choose, mainly because we are downloading parts of the blocks.
B
It makes sense for it to be stateful, because upon restart you don't have to download everything again. The compactor we usually run stateful, but just because we want more disk; it doesn't need it, but in our case I want to put a PVC there so I can have more disk if I have a huge tenant that I have to compact.
D
Hi, thanks for the talk. I have a question regarding the diagram: why does the ruler talk to the querier and not to the query frontend?
B
That's a good question. We are thinking of changing that, and the reason is, well, right now this diagram is not 100 percent correct. What happens is that the rulers run queriers embedded, so it's the same code but running in the same process; logically it's like that. But right now the ruler doesn't really talk to the querier; it talks directly to the ingesters and store gateways. But we want to change that.
D
Okay, thank you. And one more question: how does the distributor do the deduplication?
B
B
E
A
No, we actually started working on that already; there's just a PR to enable it, but it's just the first step. Feel free to chime in on that PR or the issue.
F
Hey, so a quick question: you mentioned there is a feature in progress to accept out-of-order sample ingestion. How are you looking to approach that? It's a great idea, very helpful.
A
Do you want to take that one, how it works? Yeah, so I think it's mostly on the Prometheus side, right?
B
Yeah, so Prometheus TSDB already has support for it. Basically, it creates a new head chunk to accept the out-of-order samples, and there's some overhead there. In our case it's basically about making it available for the customers to enable it.
A
Yeah, and just before we enable it, we want to be very careful about how users set it up, right? So we'll do our own benchmarking and give a set of recommendations: if you enable this, maybe you don't want to backfill samples from, like, one year ago; that might cause some issues like that. Yeah.