From YouTube: Open Match TSC Jan 2020
A
I think I was mostly on vacation, so not much of an update from me. But I know Yufan and Scott were working primarily on the scale stuff, and we can have them toss in the details about that. Once we get into that, I think it's a much more detailed discussion, and that's kind of next on the agenda anyway. So I'll just note that we were primarily working on scale: benchmark tests, and trying to get a couple of different scale approaches going.
B
Just to give a quick status: he has it working in an environment, and really there were no major user issues. I'm curious to see if he actually got all our serialization and extension capabilities hooked up, but as far as I know, things have gone very smoothly. It's taken a little while, and we're still not deploying it as part of our rollout yet, but that's scheduled for this sprint. So, okay.
B
Yeah, I know the last time we talked, the goal was to have that by this time. It's in an environment, but it's not in the environment that we use for scale and stress, so it's not quite stood up that way yet; okay, for next sprint. So before the next meeting the goal is to have that, and actually, something we should share before the meeting is some of our initial takes and notes and reports.
A
Perfect, so that's awesome. I think sometime later in the agenda I have how we actually line up the releases, your integration, and GDC. I think things are starting to converge if we keep GDC as a checkpoint for trying to arrive at 1.0, or at least, if not that, an integrated version; so we'll get to that. Do you think it's a good time to jump into scale? I can give a quick summary of where we are and then we can discuss. Okay, so I think,
A
one thing was, as you know, the last time the status quo was that we'd identified we were hitting scale issues, but we weren't exactly clear on why or where; we hadn't had a chance to dig in. So I think Yufan was driving that bit, trying to run some bottleneck tests on Redis, and he identified that the ZRANGE queries were the slowest. He did some experiments after that, and I think there are lots of findings that he'll go into, but at a high level, that was one track.
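Yufan's finding lines up with how sorted-set range queries behave: Redis's ZRANGEBYSCORE costs O(log N + M), where M is the number of elements in the returned range, so as the pool of indexed tickets grows, every filter query pays for the full size of the matching window. Open Match itself is written in Go and queries Redis directly; the following is just a minimal, hypothetical Python model of the same access pattern over a sorted index, for illustration:

```python
import bisect

class SortedIndex:
    """Toy model of a Redis sorted set: (score, member) pairs kept in score order."""
    def __init__(self):
        self._entries = []  # list of (score, member), sorted by score

    def zadd(self, member, score):
        bisect.insort(self._entries, (score, member))

    def zrangebyscore(self, lo, hi):
        # O(log N) to locate the window, then O(M) to copy it out;
        # the M term is what dominates once the ticket pool gets large.
        left = bisect.bisect_left(self._entries, (lo, ""))
        right = bisect.bisect_right(self._entries, (hi, "\uffff"))
        return [member for _, member in self._entries[left:right]]

# e.g. a hypothetical "mmr" index over tickets
idx = SortedIndex()
for i, mmr in enumerate([1200, 1250, 1300, 1900, 2400]):
    idx.zadd(f"ticket-{i}", mmr)

print(idx.zrangebyscore(1200, 1350))  # tickets with MMR in [1200, 1350]
```

The sketch shows why latency degrades with pool size even though each call looks constant from the caller's side: the cost is proportional to how many tickets fall inside the queried range.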
A
The other track we were following was just how to get a Redis configuration working. I think the most promising one right now is reading from multiple replicas. It's harder to set up, but basically you write to the master and read across the different replicas; that's kind of the setup here. Again, he tried a few others, cluster mode and whatnot, and he can elaborate. In parallel, we kind of wanted to just get the wheels rolling on even having the in-memory indexes and in-memory state.
A
Essentially, I think Scott was working on a prototype for that; I don't know if he'll have numbers to share. And with both of these, also in parallel, I think Scott and Yufan were working on scale and benchmark tests, where we'll have a concrete set of tests that can give us results across these approaches. So yeah, that's kind of the summary. I think both of them can go into the details of their respective findings. Next.
C
Currently, there are a lot of places within Open Match where the match function is streaming out the list of matches or proposals, the evaluator is getting this stream and then it has a stream going out, and the FetchMatches call is getting it back, and we were buffering them for no real reason. It was just simpler in the code to do that. So it's not fully streaming yet, but the code is written in that way.
C
I basically wrote new code for that, because the previous code was very batch-based; it didn't really support streaming, with more pieces all going on at once, goroutines and all that. So it's just a different code approach, to support a streaming model.
A
I think I'll add a quick note. Caleb, this kind of addresses the blocking issue that you had last time around, which was that we used to wait for the entire synchronization cycle before returning, even if all of the matches had already come back. Starting from there, we were trying to see how we could stream
A
the results out. But I think Scott figured that out; I mean, that's probably just one optimization. He figured that there was a lot of waiting. Essentially, originally there were no streams in the original API; I think we added the streams to work around gRPC not being able to send one huge payload. So when we added streams, originally we were just buffering them in all the places. I think Scott changed the approach for the synchronizer to just have things flow through as a single stream and be processed as they arrive. That's,
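The change being described, replacing buffer-everything-then-forward with a single flow-through stream, is easy to state in a few lines. Open Match's synchronizer does this in Go with gRPC streams and goroutines; this is only a hypothetical Python sketch of the difference, using generators in place of streams:

```python
def buffered_pipeline(proposals):
    """Old shape: collect the whole list at each hop before forwarding."""
    collected = list(proposals)          # wait for every proposal to arrive
    evaluated = [p for p in collected if p["score"] > 0]
    return evaluated                     # caller also waits for everything

def streaming_pipeline(proposals):
    """New shape: each proposal flows through as soon as it arrives."""
    for p in proposals:                  # no intermediate buffering
        if p["score"] > 0:
            yield p                      # downstream sees it immediately

proposals = [{"id": i, "score": s} for i, s in enumerate([3, 0, 7])]
print([p["id"] for p in streaming_pipeline(proposals)])  # -> [0, 2]
```

Both produce the same results; the difference is latency, since in the streaming shape the first proposal reaches the caller before the last one is even produced.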
C
Another good point is, yeah: now it doesn't have to wait for the synchronizer; that is, it doesn't wait for the proposal window to close if all the match functions are finished. Also, it now sends a message out to the match functions on close, like, "hey, you should cancel, you've overrun." Mm-hmm. Did you just put that as a stream message? Yeah. Is it like,
B
So there's just a stream between the backend and the synchronizer? Yeah, everything's flowing over it, and eventually the synchronizer is able to send a signal back that then sends signals to all the running match functions. Exactly. Well, that's fantastic. I mean, that's something we always talked about having some way of doing, because it's just one of those difficult problems to understand from the context of a match function. So it's really cool that we might get a cancellation notification that allows people to spin down their functions. Mm-hmm.
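A cancellation signal like the one described, the synchronizer telling still-running match functions they have overrun the window, is conventionally done in Go by cancelling a context that is fanned out to every goroutine. As a hypothetical Python sketch of the same idea, using a shared event that each worker checks:

```python
import threading

cancel = threading.Event()   # set by the "synchronizer" when the window closes
results = []

def match_function(name, work_items):
    """Stops producing as soon as cancellation is signalled."""
    for item in work_items:
        if cancel.is_set():  # overran the proposal window: spin down early
            return
        results.append((name, item))

cancel.set()                 # synchronizer closes the window before work starts
match_function("mmf-1", range(5))
print(results)               # -> [] : nothing produced after the cancel signal
```

The point of the sketch is only the shape of the contract: the signal is advisory, and each match function is responsible for checking it and stopping its own work.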
C
Okay, so the status is: our benchmark tests are falling over. However, last night I found a bug in them, in how we were trying to do it, and there's some just weird behavior. So we really tried hard to get numbers for today for both the standard implementation and the in-memory one, but we just didn't make it.
A
Okay, but by "standard", you mean the,
C
That is correct. So what I'm currently trying to run is something that's checked in. I did run the broken one on my in-memory prototype, and it was less broken; so, still very broken. But the good news is that we have some infrastructure set up for plugging things in and everything. Yufan did that, and then I've added refined metrics on top of it. Okay, so, like, we have pretty graphs that are showing bad numbers.
C
I'm thinking there's probably some low-hanging fruit here: either the scale tests start doing the right thing, or there's something in the synchronizer, or something along those lines. Okay, so hopefully soon we'll get them to behave and then actually start getting some really good numbers out of them. And the current scenario that I'm trying to run is kind of a standard team-based game.
C
The instructions are: run a couple of make commands. So we could get you those if you really wanted them, and we will publish them once we actually get it all working. And then we wanted to do a first batch-type scale test as well, of just: how much can you do with one match function being the whole world? And then probably a battle-royale one.
C
Just because it's easy: you know, there are N regions, players go into one region, and when it hits a hundred players it spits out a match, that kind of thing. So hopefully that will give us a little bit of flexibility for how you want to do these benchmarks, and we can really pump up the profile count and that kind of stuff, or the ticket creation count.
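The battle-royale scenario described is simple enough to state precisely: tickets are bucketed by region, and whenever a region's pending pool reaches 100 players, a match is emitted. The real scenario would run as a Go match function against Open Match; this is just a small illustrative Python sketch of the bucketing rule:

```python
from collections import defaultdict

MATCH_SIZE = 100
pending = defaultdict(list)   # region -> waiting ticket ids

def add_ticket(region, ticket_id):
    """Returns a match (list of MATCH_SIZE tickets) when the region fills, else None."""
    pending[region].append(ticket_id)
    if len(pending[region]) >= MATCH_SIZE:
        match = pending[region][:MATCH_SIZE]
        pending[region] = pending[region][MATCH_SIZE:]
        return match
    return None

matches = []
for i in range(250):                          # 250 players arrive in one region
    m = add_ticket("us-east", f"t{i}")
    if m:
        matches.append(m)
print(len(matches), len(pending["us-east"]))  # -> 2 50
```

Because the rule is purely local to a region, the benchmark can dial load up or down just by changing the ticket creation rate and the number of regions, which is the flexibility being described.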
A
I think in the current implementation it's around 4,000 profiles for 5k tickets. And it's actually not just the profile count that's interesting; it's basically the pending pool size, in combination with the profile count, that becomes interesting. I'm quoting Yufan here, and he'll probably get into the details of these in a bit anyway, so maybe hold off your question until he's done.
C
I'm going to try to get healthy numbers for the main, like the master branch, today, I think, and then see where it breaks. I just have my project in a different branch, because it's easier not to merge it; I'll just keep merging into that one to keep it up to date, then run the same scale tests, see where it breaks, and then we can make an educated decision about what we want to do. Yeah.
A
I don't know if you have a one-pager, or a basic high-level diagram. I asked you a few questions, and those might be helpful: like, are we swapping out Redis completely, or is Redis still the state store for the tickets and this is just the indexing in memory? I know the answers to those, but I think those might be good starting points for asking more questions.
A
So we will definitely need, I guess, more in-person time to even discuss the approach. But I think you're on the right track: if you have numbers to share beforehand, and those numbers look great, that would be a great starting point to even say, "hey, okay, this looks significantly better; what does it take?" So, Caleb, what are your thoughts? Yeah.
A
Okay. Then, Yufan, do you want to go?
G
There are two things that I've done in the past few weeks. The first is the scale benchmark framework. For that one, I did it together with Scott; Scott was busy working on adding metrics to it so we'd have a skeleton working. Previously, our scale test was working, but the configurable parameters were all coupled together, which made it pretty hard to control things like the ticket creation rate or the number of profiles.
G
You needed to have some deeper understanding of how the scale test works to keep things under control. But now, with the rework, almost no parameters are coupled together. There's still a little bit left, but it does this much better than what we had before, and my current experiments will be done against this benchmark.
G
The second is the benchmark findings. I figured out, while I was running Open Match under load, that Redis experiences several degrees of performance degradation when executing the ZRANGEBYSCORE queries. In the worst case, the 99th-percentile latency goes up to seconds once the sorted sets get beyond a certain size.
G
It just keeps getting worse and worse. Okay, so I did some research online, and there are two approaches that were suggested: if you want to scale reads, you do it by using read replicas, and if you want to scale writes, you do it by using Redis Cluster. In our program, most of the load is reads, so scaling reads is probably what matters; scaling reads is the bottleneck we keep hitting. And there are some pros and cons to setting up the read-replica mode.
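The two options Yufan lists are the standard Redis guidance: Redis Cluster shards the keyspace to scale writes, while read replicas scale a read-heavy load by fanning reads out across copies of the data. The client-side selection logic for the replica approach can be very small. This is a hypothetical Python sketch (Open Match's actual client is Go):

```python
import itertools

class ReplicaRouter:
    """Writes go to the master; reads round-robin across read replicas."""
    def __init__(self, master, replicas):
        self.master = master
        # fall back to the master if no replicas are configured
        self._reads = itertools.cycle(replicas or [master])

    def conn_for_write(self):
        return self.master

    def conn_for_read(self):
        return next(self._reads)

router = ReplicaRouter("redis-master:6379",
                       ["replica-0:6379", "replica-1:6379"])
print([router.conn_for_read() for _ in range(3)])
# -> ['replica-0:6379', 'replica-1:6379', 'replica-0:6379']
```

The trade-off the meeting keeps returning to lives entirely outside this sketch: something has to keep the replica list accurate as replicas come and go, which is the service-discovery problem discussed below.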
A
Our entire synchronization model and so on isn't even meant to guarantee that; there is no real timestamp guarantee for what tickets you will see at what point in time. So, within reason, I think it's okay for the data to be lagging on our reads compared to what's being written. I just,
G
Yeah, the lag is not a big deal for us. There was some additional setup required to make reading from replicas work in production, though: basically, the service-discovery story. Now we need a service-discovery story, either putting a proxy in front of Redis or going via Redis Sentinel to identify which replicas are currently available. So it's a bit different from previously, where in our master-based implementation we were connecting to the Redis master directly.
A
Actually, just to add a note there: I think Redis Sentinel is something that we're using basically just for HA. Like, if a master goes down, it's the one that does the new-master election and whatnot. So you don't need it for discovery per se; but given that it's the thing doing high availability, it's the authoritative one you'd ask.
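Sentinel's primary job is failover (it monitors the master and promotes a replica when the master dies), but since it already tracks the topology, clients commonly also ask it which replicas exist; Sentinel reports each replica's address along with flags such as `s_down`/`o_down` when a replica is considered down. A hedged Python sketch of the selection step only, assuming the replica states have already been fetched from Sentinel in a simplified dict form:

```python
def pick_read_endpoint(master, replicas):
    """Choose a read endpoint from Sentinel-reported replica states.

    `replicas` is a simplified stand-in for `SENTINEL replicas <name>` output:
    a list of {"addr": ..., "flags": ...} dicts. Replicas flagged as down
    are skipped; if none are healthy, reads fall back to the master.
    """
    healthy = [r["addr"] for r in replicas
               if "s_down" not in r["flags"] and "o_down" not in r["flags"]]
    return healthy[0] if healthy else master

print(pick_read_endpoint("master:6379", [
    {"addr": "replica-0:6379", "flags": "slave,s_down"},
    {"addr": "replica-1:6379", "flags": "slave"},
]))  # -> replica-1:6379
```

This is the "ask the authoritative one" shape being described: the client never hard-codes replica addresses, so the list stays correct across failovers.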
G
And I have a few benchmarks from when I had the read-from-replicas path turned on, so I have the setup and the findings here. Scott, since you have it posted, can you scroll down a bit? Okay, here we go. Now, the link that I put at the bottom of the page, go to it. Oh, yep.
G
Okay, so start from, for example, the configuration with one master. What I did was run each test for about 15 minutes or more, yeah. So this is the result that I got yesterday, and when I had read-from-replicas turned off, you can see that the matching basically stopped working, and about halfway through everything crashed. It just doesn't come back; it never comes back. And if you go to the other figure,
G
well, I was just trying to show that the logic itself doesn't have a scale problem. Okay, yeah. So scroll down to the Prometheus dashboard at the bottom. Okay, there we go. So here is the graph that I got using Prometheus. The blue line right here is the normal baseline, but we can see that,
G
so it works under load there, but the error rate is, like, forty percent or thirty-percent-ish; some of the calls get cancelled because they passed the synchronizer window. Yeah, that's with, like, 20 replicas.
A
Are the errors primarily because of logic falling over, do you know?
G
The errors have two parts: one is the MMF logic falling over; the other is the errors because they passed the synchronizer window. Oh, they,
A
That's useful. I think the one thing I would note, just looking at this status, Yufan and Caleb, if you can give some more input in terms of this: we currently have a specific set of inputs that we're trying these numbers with, and I would like some more interesting inputs. Like, what's the incoming rate that we can support for ticket creation? Let's say 8,000 tickets per second: is that something that our regular setup supports? If not, how many replicas does it take to support that, for some constant set of profiles, maybe 60, maybe tweak that number. But I think some more realistic, interesting scenarios might help us figure out whether this will actually work, and,
B
From our interfacing, you know, we were also trying to get some of that information as well. Right now we have a ton of insight into the way people are evaluating matchmaking technology: they're really interested in isolation, so they can separate dev, QA, and prod. And they're not doing anything in prod yet, as far as we can tell, so we can't really bring too many insights to the production flow.
A
It's okay for us, to my mind. If you ask me what question I'm trying to really answer, it's: as long as things can horizontally scale, I think we are okay. The thing that was previously worrying us was that we were hitting a bottleneck issue where we were trying to horizontally scale our Redis nodes and whatnot, and weren't able to figure out why we weren't scaling.
A
So it looks like, at least from the basic scenario that Yufan ran, some horizontal scaling is possible. The only thing that worries me here is exactly what Yufan pointed out, which is that for this scenario, 20 Redis replicas is a lot. Maybe we are missing something in how each of those is configured, or whatnot. So right now, if I have to summarize it, I think we have evidence that when we said "Redis doesn't scale", that blanket statement isn't right.
A
There are ways to horizontally scale the Redis reads and writes; it just takes a little more effort. Yufan, I think the one thing I would mention is: I know you listed a lot of cons, like it being a complicated setup, that kind of thing, right? And I don't see those particularly as cons, and we can discuss this, but I feel like a production setup looking more complicated than "there is a Redis in the same cluster as Open Match" is fine.
A
What we currently have out of the box just works for a basic production, but for Psyonix scale it clearly won't, and it's okay to have that. The tricky part that will come in is that some of this ties into how our state-store implementation is done, right? Like you said, before, we used to just directly connect to the master; now we assume that a Sentinel exists. So how can we abstract Open Match from how complicated the production setup will get for somebody, if they have to set up the Sentinel with Redis, with read replicas, and then have Open Match figure out how to distribute the reads across these replicas and whatnot? What I'm trying to say is: it's okay to expect that a production setup meant to support millions of players becomes more complicated than just "we have everything running in one cluster". It's just that how we abstract Open Match from it is the tricky thing, right? Like, we,
B
Out of curiosity, and Mark here is trying to figure out what we're talking about too: the question of scale for production workloads. Because there's this kind of batteries-included way to get started, is that the same system that, with zero or little configuration, will go all the way up to huge scale, or does it need a bunch of work? Yeah.
E
You'll probably always have a limit on what you can support inside a single cluster, but you can always add more clusters, and so that makes it somewhat easier for us. And then you start stepping into the realm of multi-cluster stuff, and the multi-cluster stuff is the more complicated part, but that's how we can deal with mid-scale a little easier. We don't necessarily have to funnel everything through a single install. Yeah.
E
There are certain limits that Kubernetes has around how many pods you can run inside a cluster, and at what speed it does it. While it says "hey, we can support up to 5,000 nodes", realistically that sort of starts shrinking as you add more and more pods. From what we've seen in applications, people who are running game servers on Kubernetes as a whole, around 500 nodes usually sits pretty comfortably, and it sort of varies a lot. So yeah, interesting.
B
Yeah, it definitely feels like: if the answer to "hey, does Open Match scale to production loads?" is yes, and the way it does that is "here's a complicated runbook for getting Redis set up", yeah, that does feel sufficient. It feels like something most people will be able to do. It would be nice, though; I think part of our responsibility is to formalize that. Yeah, maybe not in the form of a complete Helm chart, maybe not in the form of a, you know,
B
but with docs, or a chart, that back up those facts. And also, you know, if Redis isn't too complicated to set up, it might still be an interesting point to continue investigating things like the in-memory state store or Elasticsearch as possible options to answer that. There's always the case that you could slow stuff down; we haven't really pursued that idea. We don't really know why we run stuff at, like, 300 milliseconds; we just kind of,
A
So I think at some point it'll be a balance between how complicated that production setup is and whether it's reasonable to expect somebody to do it. So, for example, when Yufan was pursuing the cluster mode, it felt like you really, really needed three different sets of master-plus-replica Redises coordinated together in a Redis Cluster that could then shard the write space. That seemed like overkill. But then we said, okay, just reading from multiple replicas seems like a more doable setup. So I think it's going to be a balance, like,
A
if the production setup looks really crazy, then definitely we need to do more out of the box; but if it's not, then that should be okay. I think the tricky part is going to be, if we, let's say, go down the Sentinel approach and whatnot: what I was saying was, if you bleed a lot of that Redis setup into Open Match, that's not good either, because then you would almost force somebody to have that exact setup, and that's bad, I mean, I,
C
I think the big ones are: there are a few scale scenarios that we want to have working, like running in the cluster at some scale without erroring out; then figuring out what was going wrong there; and then just getting some numbers out of it, sure, for what we have, what Yufan has, what I have in progress, and all that kind of stuff. And then I think we shouldn't be perpetually like, "oh, it's one day away."
A
So I think we just basically first need to formalize the load that we'll run across these different options: the existing one, the read-from-replicas one, and the in-memory stores. As long as we standardize the load, we should just be able to compare apples-to-apples numbers and see which way to go. Yeah, okay. Oh, that's awesome; that does seem like reasonable next steps. I think next on the agenda anyway is the 0.9 release, timeframes and stuff.
A
Okay. So, I think, before we jump into the release stuff: one thing, Caleb, I wanted as information, or basically for us to establish, was just purely, what do we want from 0.9? And that answer ties back a little into what we're really hoping to get by GDC. So, in my mind, let me present it this way, right. I
A
think GDC is what, only March-ish? So we still have a February to go. But practically, once the 0.9 ship has sailed, trying to get stuff into 0.10 or 1.0 (we'll see what we call it later, but the next release, basically), and trying to rely on anything coming in that next release as a GDC dependency, may be very tricky. Yeah. So I would imagine, let's say, and tell me if this is even a reasonable goal,
A
we will get real about it. The reason I'm kind of trying to do this is basically to set up some rules for when we look at the issues and start triaging them. I feel like 0.10, or anything beyond 0.9, can be something where it's okay to change things under the wraps; it makes things better, but doesn't necessarily add any more integration overhead for you folks, right? What would suck is if 0.10 changes something that again now needs you all to go back and reintegrate anything at all.
B
Right. So, as the technical steering committee, there are two parts of this. One is: what do we want Open Match to have as a presence, so we can encourage and onboard people, get them excited, and explain how to use it and demo it? And there's the other part, which is: we as technical steering committee members, but also Unity, are kind of the first integrators of Open Match. Is there some type of beat there that's founded in some proof point? Yeah, right; I feel like that's,
A
So I'm not talking specifically at all about, like, the Unity product roadmap or anything like that. But what I'm saying is: if we have to have a GDC Open Match message, and I'm going to have the same conversation in the community meeting, right: are we going to be in a ready state? Because if we were in a ready state, we could at least make a statement like, "hey, here is Open Match, and Unity, or Psyonix, is using it." That gives a lot more credibility than a year ago.
B
I see why not, but I obviously have to take that back and discuss it a little bit more, okay? You know, just in terms of any game company that integrates with Open Match: their GDC beats are going to be pretty tailored to whatever message they're trying to land, okay? But as a partner, I see no reason why Unity being a consumer of, and also a contributor to, Open Match shouldn't just be laid out there as a big "hey, this is super cool, Unity's on Open Match."
A
In a way, people are already getting that message somewhere. I was looking at your presentation at Unite, and you were presenting a lot of Open Match internals and how it integrates with Unity as well, right? So I think the message is already kind of there. It's also a matter of: we are looking at 1.0, and do you think, I mean, we'd love to call it 1.0; let's say, from your perspective, everything,
A
again, it's a question of: if at GDC we were at a point where you have what you need, and we have what we need from Open Match, and let's say we have a couple of other folks who are ready to attest to it, then we could possibly be calling it 1.0 as well. So I'm trying to think of it even from that roadmap perspective: whether we should necessarily try to target 1.0 and GDC together.
B
If we just want it to be like that? No, of course not, right? Because they shouldn't ever collide serendipitously; it should always be a planned thing. It's just that 1.0 is meaningful. You know, if 1.0 means "hey, it meets our needs", arguably 0.8 and 0.9 look pretty good, and that's kind of what we discussed before: that's 1.0, sure.
E
I was like, God, you know, no one's using it in production yet; I don't want to call it 1.0. But it's chicken-and-egg, right? Like, who's going to use it before it's 1.0? It's, yeah, yeah. But once we got to a place where we were like, "you know what, this API is stable now; we can just keep iterating and making it better", I like that perspective, so it might work for you as well. Just throwing that in there as an idea. That's,
C
The core set of the API surface is close to being stable, but it's not quite there, and there's some other stuff that we know we need to add eventually, like some sort of stats service, but we don't know what that looks like yet. And I think it's reasonable, even if we do a 1.0, to say "this set of core API services is stable", and for that service: it's there, you can use it, but we'll have a big warning on it, like, "this is not stable; we haven't had it around long enough to know."
A
The stats service is a very good example: it's almost going to be a separate thing. I mean, at least looking at what it is, it's going to be a completely separate service surface area. So it's perfect if, I think, we say it's not the core and it's not stable, but the core is, and that should be okay. Yeah, this is a good example.
B
That's a lot, yeah. Yeah, I know; it doesn't sound right when I hear it that way. I think, as we normally don't have sixty thousand, I guess: in our scale testing we haven't ever had, like, 60,000 simultaneous. We usually have, like, you know, five to ten, and we do have sixty thousand coming through the system, but they're not, mm-hmm, they're not simultaneous.
B
Yeah, we didn't actually ever get nearly as high as 60k. Our arrival-rate test hit 20,000, and that's a minute, not a second; and with that we were having matches created, time to match,
B
8,000 assignments. So, I believe this is with four properties, so not a very high intersection value. At a 20,000 arrival rate we had match times of, like, two and a half seconds, but the 95th/99th percentile was eight. I know that's with just a single large Redis master, but that's a whole factor lower in scale, right? 20,000 arrivals a minute is still only, like, a thousand. That's,
B
He did 60,000 at 1,600, okay, so that was like, yeah. At first I was like, man, that's not great; and then I kind of thought about it some more. It's like, we don't, we've never scale-tested quite like that. We also weren't tuning it for performance or trying to get more out of it. But it's cool to see that we've managed something like that.
B
We could probably, yeah, at that point start thinking about that; you're definitely paying a processing price to deal with that query. I am curious, so, Scott, you're going to go look into the state store, the memory store, which we have something on somewhere; I know I have told you about it before, which we always,
B
we haven't scale-tested that yet. Yes, I'm curious to hear how that goes. I mentioned Elasticsearch before; Elasticsearch is definitely designed for stuff like this, especially around these queries and how complicated they get, yeah. They also have a crazy query language that you can use to construct some pretty impressive queries, and that would be pretty cool to let match functions have access to, because I know building a new query, or a new query type, in Redis is pretty complicated. It,
B
you can't add much. But Elasticsearch is, like, a backbone for a ton of massive search systems, and if we really are read-heavy, which it seems like we are, and we can queue creates, writing creates into some kind of buffer, I think we can really eke out more scale on the read side if we use a distributed indexing system like Elastic.
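The "queue creates into some kind of buffer" idea is a standard trick for read-heavy systems: individual ticket creates get coalesced so the indexing backend sees a few bulk writes instead of thousands of small ones, leaving more of its capacity for reads. A minimal, hypothetical Python sketch of that write buffer:

```python
class WriteBuffer:
    """Coalesces individual creates into bulk flushes of at most `batch` items."""
    def __init__(self, flush_fn, batch=100):
        self._flush_fn = flush_fn   # called with a list of buffered tickets
        self._batch = batch
        self._pending = []

    def create(self, ticket):
        self._pending.append(ticket)
        if len(self._pending) >= self._batch:
            self.flush()

    def flush(self):
        if self._pending:
            self._flush_fn(self._pending)
            self._pending = []

bulk_calls = []
buf = WriteBuffer(bulk_calls.append, batch=3)
for t in ["t1", "t2", "t3", "t4"]:
    buf.create(t)
buf.flush()                       # drain whatever is left over
print(bulk_calls)                 # -> [['t1', 't2', 't3'], ['t4']]
```

A real version would also flush on a timer, since the latency of ticket visibility is bounded by how long a partially filled batch may sit in the buffer.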
B
Just something to think about. That might be something that, after 1.0, we take on our side, because we'll probably have a lot of infrastructure already set up to go test that, and we could go and benchmark Elastic. We also have some reliability folks who have a ton of experience with Elastic, so that might be something we can chew off and contribute back once we see how it goes; especially since we have the benchmarking framework, as you can tell. That'll be really cool.
A
Awesome. Scott and I were discussing this: I would love to have our state store kind of completely abstracted out, like through a library, such that, you know, if, let's say, you went down that route, you should be able to just plug in your library that provides the interface implementations, and Open Match shouldn't care. So it's easier said than done.
C
Those are basically the only two bits. So, on abstracting away a storage layer like that: we have an internal interface, and I tried to use it for maybe 10 minutes before I gave up on it, because the interface is basically Open Match's own interface area. There's very little, other than FetchMatches, that isn't just the interface of the state store itself. So we could do it; it would just be, like, the frontend call for CreateTicket would choose an ID and then immediately pass it on to the state-store code.
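What Scott describes, the frontend minting an ID and handing everything else straight to the store, is essentially the boundary that would need abstracting. In Open Match this would be a Go interface; the following is only a hypothetical Python sketch of such a pluggable state-store boundary, with an in-memory implementation standing in for Redis or Elastic:

```python
import abc
import uuid

class StateStore(abc.ABC):
    """Boundary the frontend talks to; swap in Redis, in-memory, Elastic, etc."""
    @abc.abstractmethod
    def save_ticket(self, ticket_id, ticket): ...

    @abc.abstractmethod
    def get_ticket(self, ticket_id): ...

class InMemoryStore(StateStore):
    def __init__(self):
        self._tickets = {}

    def save_ticket(self, ticket_id, ticket):
        self._tickets[ticket_id] = ticket

    def get_ticket(self, ticket_id):
        return self._tickets[ticket_id]

def create_ticket(store, ticket):
    """Frontend's only job: mint an ID, then delegate to the store."""
    ticket_id = str(uuid.uuid4())
    store.save_ticket(ticket_id, ticket)
    return ticket_id

store = InMemoryStore()
tid = create_ticket(store, {"mmr": 1500})
print(store.get_ticket(tid))  # -> {'mmr': 1500}
```

Because almost all of the frontend's surface is already "just the store", as Scott notes, the interface ends up mirroring the store's operations, which is both why the abstraction is cheap and why it felt redundant when he tried it.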
A
Do you want to go to the issue triage? Yeah, I think before that, let's just wrap this topic up: what do we think? So we're kind of already past the six weeks now. To finish the scale investigations and to come up with a reasonable approach, Scott and Yufan, since you folks are mostly handling this effort, I would like to get a gut feel for how much more time you think it needs, and whether we should tie that to 0.9 or not.
C
You know, so for 0.9 I think we should get the scale tests functional — not falling over with numbers we know should work — and figure out what's going on behavior-wise, and we should probably finish up the streaming in the backend synchronizer. I think at that point it is probably time to cut the release.
A
We'd love that, absolutely — I think that's useful. Yufan, on your end: let's say Sentinel — sorry, not replicated, but offloading reads to Redis replicas — is that in a prototype state, your implementation, or do you think it's already ready? Because the way I'm looking at it, that seems to me an improvement over the current state anyways.
A
So even if we were to change it later, that seems like a natural step: what we have right now just reads from master, so checking in whatever is required to offload the reads to replicas seems like an improvement, and then we take the next step if we think that's not sufficient. So I don't know your thoughts — do you think you're ready to have that in, or do you think that —
G
So for enabling Sentinel — that is in; like, we already have a PR for it, but that part doesn't support direct access to the Redis master, so maybe we refine that PR a bit and add that functionality back, because people who don't use Sentinel may still need that feature. But overall, I think the new feature itself is ready for review, yeah.
A
Okay, okay. Then I would like to hear thoughts from everybody else as to whether we agree — just offloading reads to the replicas seems to me like a natural way forward; I mean, it doesn't hurt, unless you tell me that there are some gotchas to it. So maybe that's something we can do anyways: if we have it ready, we can possibly have that in 0.9. Hopefully it is better than 0.8, and then we evaluate from there.
B
One thing I heard that I'm still a little concerned about on the read-from-replicas: we mentioned the stale data. We kind of glossed over it as, like, we're actually kind of okay with it because we don't make this guarantee, but I am curious to hear — you know, what is our hit rate for reading stale data? Is it high?
C
From a correctness point: we do not check to make sure that a ticket is still around, which is probably something we should do anyways when we're handing tickets out to the back end — and I think for assignment as well; I'm not quite sure there. So if we're reading stale data, we might actually have a correctness problem.
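One common mitigation for the concern raised here — acting on a lagging replica read — is to treat the replica as a cheap candidate source and re-verify against the master before committing anything. A minimal sketch, with plain maps standing in for a Redis master and replica (the function names are made up for illustration):

```go
package main

import "fmt"

// Maps stand in for Redis here: master is authoritative,
// replica may lag behind it due to asynchronous replication.
func assignFromReplica(master, replica map[string]bool, assign func(id string)) []string {
	var assigned []string
	for id := range replica { // cheap, possibly stale candidate scan
		if !master[id] { // authoritative re-check before acting
			continue // ticket was deleted/claimed after replication lag
		}
		assign(id)
		assigned = append(assigned, id)
	}
	return assigned
}

func main() {
	master := map[string]bool{"t1": true}              // t2 already gone on master
	replica := map[string]bool{"t1": true, "t2": true} // replica hasn't caught up
	got := assignFromReplica(master, replica, func(id string) {})
	fmt.Println(got) // only t1 survives the re-check
}
```

The trade-off is one extra master round trip per candidate, but only writes and final checks hit the master — the bulk scan still offloads to the replica.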
A
We should — okay, let's do this. Let's take that offline, and I'll take a look into what the consequences of stale data are, and how stale we are talking about here. It's basically whatever staleness it takes for the Redis replicas to get the latest changes from the master. So let me read up a little more on that and see if that's actually a real concern — because if it is, then the entire approach is kind of hosed for us, right, if it impacts correctness. But —
B
— processed by a director: like, when you do a fetch matches, you're going to find — sorry, that cut out — that every once in a while this happens; it just needs to be a really small number. Then people can kind of live with losing that guarantee. I think it sucks, but we also weren't guaranteeing it necessarily anyways, because there are other timers in the system. Yeah, yeah, okay, okay, so —
A
I'll try to do some due diligence to figure out what that really looks like and whether it should be concerning. If not, then I think it's something reasonable to shoot for as something that can go in 0.9. And if we decide to go with the state store implementation, I don't think that's a 0.9 timeframe — Scott, correct me if I'm wrong — we should be able to make that call, but I don't think — I think —
A
Let's look at the remaining issues before we decide — let's see what payload should go in, right? I mean, again, my question is whether folks will be okay: if Caleb tells me that he'll be able to integrate with it without any of the things in the triage list, then we can cut it today, but let's see if we can make that call after we triage the issues.
A
So first, I've shortlisted the breaking-API-change labels, because these, I think, are the ones we should hopefully tackle. Is there anything here that matters and should be front-loaded into 0.9? Because this is the stability bit, and after that we can see if whatever else is required is even required. Most of them are on Scott's plate, and I can actually take a few of them. But let's see: API change — adding status to fetch matches. Scott, you want to talk about this? Okay.
C
So the general problem is that we support — and we want to support — a match function returning tickets... or I guess it doesn't today, but we want to support it: a match function returning tickets and then getting canceled, or returning matches — not tickets — and then processing those matches and returning them, and then saying there also was an error with the match function. So that if we are timing out on our match functions but they are producing results, you are able to bring the pool down with the results that were produced.
C
So the proposal here is that we return an additional status back for each match function that's run. Recently I've been thinking about it, and there might be another approach, which we've talked about: currently you pass multiple match profiles to a fetch matches call — reduce that down to one profile. And I'm not quite sure about the gRPC guarantees here.
C
I'm not sure about how gRPC works in that aspect. I looked into it a little bit and I found some stuff that said the errors are passed in the HTTP trailers, so they'll come after any messages — but I don't know if that guarantees that both the server and client will send all the messages on the stream before the error gets processed.
C
So I don't know if there's a buffer that kind of gets dumped because an error came along, or whether they process the buffer and then surface the error. That's the part I'm not sure about yet. So, I guess, thoughts and opinions — I think, if gRPC does guarantee that it works —
C
— that way, it would be much simpler to say you can only pass one match profile: you're going to get an error back if that match function failed, or we timed it out, or whatever. And this also has the added benefit that you want to do that anyway, to spread out your load on your backends — mm-hmm — or at least, yeah. So —
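The caller-side consequence of "one profile per call" is a fan-out loop: instead of one bulk request with N profiles, issue one call per profile with a concurrency cap, which also spreads load across backends as mentioned. A sketch, where `fetchMatches` is a stand-in stub for the real per-profile backend call:

```go
package main

import (
	"fmt"
	"sync"
)

// fetchMatches is a stand-in for a per-profile backend call.
func fetchMatches(profile string) []string {
	return []string{profile + "-match"}
}

// fanOut runs one call per profile with at most `limit` in flight.
func fanOut(profiles []string, limit int) map[string][]string {
	sem := make(chan struct{}, limit) // counting semaphore
	var mu sync.Mutex
	results := make(map[string][]string)
	var wg sync.WaitGroup
	for _, p := range profiles {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot
		go func(p string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			matches := fetchMatches(p)
			mu.Lock()
			results[p] = matches
			mu.Unlock()
		}(p)
	}
	wg.Wait()
	return results
}

func main() {
	res := fanOut([]string{"ranked", "casual", "coop"}, 2)
	fmt.Println(len(res)) // 3
}
```

This is the "two thousand requests from one process" shape debated just below — more connections to manage, but each call carries its own status independently.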
B
You know, I'm trying to remember the history behind why that is. I think there was this moment of clarity: almost anyone who's running match profiles is generating them, right? They're not just saying "run these ten" — they're going to be massaging it, and in a system that massages it, it's very convenient to pass one call that is a bulk request and then receive bulk results. But the alternative is, like: hey, I'm going to generate two thousand function runs, okay —
B
Now that I have two thousand profiles generated, I'm now going to spin up two thousand requests, mm-hm, from my one process. Yes, it's a bit more complicated; I don't imagine it's too bad, yeah. It doesn't reduce the fact that, you know, I have two thousand connections that need to close before I can effectively move on.
A
I think the reason I'm curious about that as a separate issue is just because I remember this came up when we were discussing it before as well. So right now there is this — almost, you know, one of the principles of: if you don't really need to provide options, please don't. Kind of — I mean, we're —
A
Today we have an option, and when somebody asks me, "hey, should I club all of these up into one profile, or should I just send off fetch matches per profile?", it's almost like I don't have a great answer. I can tell you something about synchronization cycles and stuff like that — putting all of them on one fetch matches still ensures that they all get considered in one synchronization cycle, whereas separate fetch matches may potentially go into separate cycles. That's a reason I can generate, but I don't know if it's, like —
B
Well, considering that the backend API is no longer the synchronizer and actually horizontally scales, it's probably more fine. It used to be that everything was handled in memory on the back end straight through commit, and then you'd have 2,000 requests come in and it kind of couldn't handle that many, yeah — because the idea used to be that you would send, like, 10,000 in, right, and so we were like, oh —
B
In the new world, where we already know that we literally can't run more than, I don't know, a couple thousand concurrent functions in a reasonable way through the synchronizer — with the streaming stuff maybe that could go up now — but, you know, there's not really a whole lot of point in having a bulk API, yeah.
B
Seems like an OK way to do it. I imagine — okay, sorry — I imagine this is also an interesting separation, because if we move towards, you know, a fetch matches replacement — it still replaces fetch matches, but now you just pass in "hey, run this with these parameters," basically, and it's going to produce results.
A
But again, the question on the meta issue here, right — and Caleb, I don't know if the core of the problem got through — is basically: if an MMF is generating matches and you've cut it off, we still process the partial results. We return the partial results to you, because that way we are chopping — we are reducing the pool — and it doesn't error in that case. Even — let's assume now one function per fetch matches.
B
On point — this is such a great question. We encountered this about two weeks ago, and of course we haven't met since then, right, but yeah: we have this issue where we have this one profile that would throw an exception on this one function, and then it would throw out all the results. Yeah.
A
I don't think it's ever documented — I'm not sure if there is a pattern to this, right: when you return partial results and then say "hey, this is just partial," is there any pattern out there which says whether there's an error or not? Anyway, we can find that offline. It looks like, at least, let's do the one profile per fetch matches, and then we can figure out the best way to deal with it. How about that, Scott? Okay.
A
I mean, here's the thing: I think let's read up again on the discussion we had last time. I think there were discussions around who calls it, and what the roles and ACLs for the caller are, and stuff like that, and how to keep that convenient, right. So I think we made a decision last time — let's probably stick to that, at least for this release, and if you have any other arguments we can bring it up again. To me, for 0.9, let's just do the rename.
A
I think this one — I apologize, I think we didn't take any notes, and maybe in Caleb's comments there is something about this. Oh no, Scott, we did mention it. Okay: renaming makes sense; however, "status" isn't intuitive — maybe we reword. So I think, yes, we are also sorting this one out. Okay.
C
So Caleb's point was: this isn't really an RPC — why does it say "status" on it? And I think, zoomed in on Open Match, that is true. But from a game client perspective — I send a request of "give me a match" and I get a response of "here's your assignment" — in that sense it kind of does make sense.
B
I don't know if we're going to get more clarity on this one. I mean, we don't use it; someone might use it, okay, yeah. Let's think about it from the Open Match front-door perspective. You know, someone is going to — let's see, currently it's create — sorry, I actually don't have the API in front of me, but it's CreateTicket, right, yeah — and then is it GetTicket, or is it just streamed at that point? I know I —
B
I think it's GetTicket — the streaming one we are trying to get rid of, so ignore that — and right now GetTicket gets you the whole serialized ticket. Yes — we don't have, like, a status object, or, you know, a bare assignment; no, we don't have anything like that. If there were sub-resources, you could have a concept of status for the purposes of managing the request, mm-hm — like, "this get for this assignment didn't work" — as opposed to using the assignment to signify something to the user.
B
We basically all agree it should be extendable, and so the assignment today has a connection — which I still think is pretty common sense; we don't need it to just be a byte array — but yeah, you know, unless Open Match has an opinion about what that status should be, it almost feels like an extension point to me.
B
Yeah — I mean, one thing we were hitting when we hit our bottleneck was just that we're kind of stuck doing polling until we have something more robust. I think most games will be in the situation where they're stuck with polling until they have something better, and from a polling perspective you definitely expose yourself to all the fine-grained throttling concerns and reconnect storms that sometimes come with exposing yourself to game client behavior, and so, right —
B
We're debating whether or not to have a sub-status on just assignments, or some kind of discernible ticket state that signifies whether or not you should fetch the whole ticket. It still feels a little advanced for this — we don't use status, or, I'm not sure you need status; it's doable, I guess, from the director: when you do the assignment you can put whatever you want in there. So "connection" was there because, generally speaking, everyone needs a connection — but then what else do they need? Yep.
B
They need a proper — they need a proper data bag; they need just some place they can serialize some data, yeah. Well, you're also typing — okay, I'll —
C
One of the nice things is that I have a stream coming out of, like, "hey, these are all the tickets that are changing," and I have an authoritative source on this, so I can create whatever stream I want with it. I didn't implement it, but it is pretty easy — you know, less than a day's work — to add something to the front end where it also attaches to that stream, and it can note, like — it just has the, like —
C
It doesn't hold the full tickets in memory, but I can just say: I think this one got assigned, okay, now it's deleted, okay, this one got assigned now. So with requests coming into the front end, the front end would know the whole state of the set of tickets that have assignments, and if you're saying "hey, I want to wait for an assignment on this ID," it doesn't need to go to the storage.
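A minimal sketch of that front-end-side view — a watcher that keeps only assignment/deletion flags (not full tickets), folds in events from the change stream, and answers "is this ID assigned?" without a storage round trip. The types here (`update`, `watcher`) are invented for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

// update is one event off a (mock) ticket-change stream.
type update struct {
	id       string
	assigned bool // false means the ticket was deleted
}

// watcher holds just the assignment state, not whole tickets.
type watcher struct {
	mu       sync.Mutex
	assigned map[string]bool
	deleted  map[string]bool
}

func newWatcher() *watcher {
	return &watcher{assigned: map[string]bool{}, deleted: map[string]bool{}}
}

// apply folds a stream event into the in-memory view.
func (w *watcher) apply(u update) {
	w.mu.Lock()
	defer w.mu.Unlock()
	if u.assigned {
		w.assigned[u.id] = true
	} else {
		w.deleted[u.id] = true
	}
}

// check answers a frontend wait/poll without touching storage.
// (As discussed below, a real version would also re-check storage
// once at call start, to avoid waiting on an already-deleted ticket.)
func (w *watcher) check(id string) (assigned, deleted bool) {
	w.mu.Lock()
	defer w.mu.Unlock()
	return w.assigned[id], w.deleted[id]
}

func main() {
	w := newWatcher()
	w.apply(update{id: "t1", assigned: true})
	w.apply(update{id: "t2", assigned: false})
	a, _ := w.check("t1")
	_, d := w.check("t2")
	fmt.Println(a, d) // true true
}
```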
B
If you're going to go that route, you need to come up with how you're going to do at-least-once guaranteed delivery, because if you lose that front end — and you've moved it to the front door — it could be on every front end. I think that's what you're saying: that you've got this way —
C
I've got the set of assignments — there's just a little bit of a problem where, if the ticket's been deleted, you can be waiting forever. But if you check once, at the start of the call — make sure I'm not deleted or assigned — okay, I know I'm in a pending state, or whatever, and I'm going to wait on the stream for it to tell me, "hey, you've —"
B
"— got assigned," right. I mean, you're trying to solve the connection — the streamed response, yeah — which, by the way, speeds up a ton of things and removes a lot of the concerns with polling. But what happens in your streaming model when I then add a front door? Does it have to reprocess the whole stream to catch up, to figure out what the current state of everyone's tickets is?
C
So the way my internal representation works is that there's a stream coming out, but it can make a new stream such that what the client sees is not just a raw stream — it's all of the updates to get you to the current state, and then updates on top of that. So it doesn't need to process the last 20 minutes of tickets or whatever, I guess.
B
— works with gRPC, and so, yes, I could have a WebSocket service that turns the request into a gRPC request; then I get my streamed responses back and I'm able to push them back down. Chances are this isn't the only thing in the system that needs that, and so most people build a connection service with a message broker that everything in the ecosystem can publish to —
B
— you know, formatted messages — and that system guarantees, "hey, if they're connected, I'll send it; if they're not, I'll send a toast; and depending on their presence and social settings, I'm going to send an email" — that type of thing. So having that in Open Match would definitely be cool — I'm not saying that it wouldn't be cool. Having some type of streaming ability would definitely make this a much more batteries-included project, yeah.
B
Speaking as Unity specifically — and this isn't any big secret or anything — you know, we're definitely interested in some type of WebSockets solution to this, because we know that it works everywhere, even if it's not the best thing on the market today. I mean, it works in browsers, right? That's a huge start.
B
For what it's worth, doing something like AMQP or gRPC over WebSocket is a thing; there's just a routing component you have to solve on the back side of that. So you open your connection, you send your gRPC request over it, yeah, and then on the other side there's basically a proxy sitting, you know, in the cloud —
B
My initial reaction is no, just because I think we have a pretty good sense of what's possible with Open Match through gRPC, and that it is a great piece of infrastructure that we can leverage today as a back-end service — but, because it's gRPC, it's always going to have client connectivity issues if that's how you want to host your front door.
B
A good suggestion, though — I keep forgetting that we have access to include some of these people, not just from the community but also from, like, Knative, if we want to talk to them again on the other side of 1.0 — which is something we talked about, something like a 1.1-type thing, yeah. And also gRPC — you know, we continue to have broad discussions with gRPC all the time, and it's just that the platform conversation has really killed a lot of that.
A
Okay, so for the last two — I've added a note for that one. I think this one's on Caleb: the evaluator returning only match IDs. We needed input from Caleb last time on how to proceed with this — whether it's sufficient for the evaluator to not return back any additional appended context. You said you would look into it — right, like, just IDs —
A
So now this one's targeted to 0.9 as well — again, it's something that changes the surface, so we should probably cover it; easy enough, let's just do it, yeah. For the last one, I think it's 0.10 right now, so let's keep it that way — adding a query to get IDs to the evaluator logic; let's discuss it when we get to that. It's a new API addition, so it doesn't impact any of the current usable surface — let's just punt even the discussion for that to 0.10.
A
So that's my note on it — correct, so let's keep it at that. So that's that. Now let's look at anything else for 0.9 that's not a breaking change. The scale-test to-do — I think this is fair; we've covered enough of it, and Scott has a really nicely formatted list of things to do in there, so that 0.9 target makes sense. Specify —
A
We looked at the breaking changes already. "Open Match should generate unique match identifiers" — I know this is more to avoid folks shooting themselves in the foot. We had a good discussion on this last time; I don't think it's compelling enough for 0.9 right now — in the next week — so if not, we can push the discussion out for the future.
A
We looked at the breaking change "Open Match does not refresh match function client" — it kind of lives as a client cache. Basically, I think in some of the testing, if I've hosted a service for the client, and I don't bring down Open Match, and I again host the same match function, I think there is some caching issue in here that needs to be explored by Open Match.
A
We will try to reach out to the same old endpoint and not refresh it — but that's a bug, so we'll take a look at it. I am okay punting this to 0.10 as well, just because it's a very specific case where you're trying to test things out again and again by redeploying the match function — it doesn't really impact production at all. So if there are no other thoughts, we can punt that to 0.10.
H
So I think, after getting that Sentinel work in, I decided to work on this one. So how is it scoped? Because it's production related — like, horizontal-scale related, yeah. Do you have an idea of how much work this is going to be? I think probably, like, two or three work days. Okay.
A
Let's keep it for 0.9 for now, and we will see — we can have an offline discussion to see if it's actually impacting scale; if not, we can move it to 0.10 based on the effort needed. How about that? Yeah, okay. "Inability to select indexes" — I was pushing for this, but for now I'm okay punting it to 0.10. Just for context, Caleb: this is the issue where today we actually index every property that comes in on the ticket.
A
I think the next one, for sure, I've proposed punting to 0.10, if you're okay with it. This is basically just stating that there are a lot of cases like "what happens when the state of a ticket gets lost" — and a lot of this seems like guidance for production, so we should revisit it before 1.0, but for 0.9 it isn't anything burning that we need to fix.
A
So with that done — the last one, we are already kind of benchmarking scale characteristics, so we'll keep it, right. So, that said, we have these many issues — what do we think is a reasonable date? I think one week is a stretch. Scott, I know you mentioned one week, but I feel like one week is a stretch for these.
A
In a week we should have a good idea, for sure. So right now I've bumped this date to 1/31 — let's see — and I don't want to keep it that way, because if we keep it that way it always just sticks, but yeah, I think that's kind of two weeks out. So should we just keep it at that? Yeah.
C
A question about this: we've been doing release candidates, and I don't know if we've ever gotten feedback on them, yeah. Do we still want to do that — like, "oh, here's the release candidate," and then, a bit later, "now it's released" — and then we get feedback a few weeks after that, right? Like, someone finally goes, "okay, now I'll use it" — or, more likely, the next person to come along and start using Open Match — yeah, and we're just releasing, yeah.
A
I actually love that idea. For this one, let's try just releasing it, because there's a bunch of overhead in doing the release process — sending out those emails and whatnot — and we are in kind of a crunch time. So we just basically — this time, that's really it — and, worst case, if it comes to it, we will do hot fixes, I mean, yeah.
A
That makes sense. So I think that's all good. There is one other issue that I listed out as something we should discuss, so I'll bring it up now that we are done discussing most of the other stuff. Let me see here — this one is currently marked as 1.0; the reason I'm discussing it is because we can even choose to punt it.
A
Let's leave it like this — let's sleep on it — because it almost seems necessary to protect Open Match, but at the same time it could have significant behavioral issues if misconfigured. So anyway, the last couple of things: one — and I'll go from Philip's earlier response; I think I have the response here — should we be shooting for 0.10, or do we think it's okay to shoot for 1.0? Like, how do we want to go about it?
A
Let's do that, so — yeah. But, in general, Mark, from an Agones perspective as well: my question is, is 1.0 something to shoot for as a 1.0, or is it like, if 0.10 feels stable enough, we make that the 1.0? What kind of process was followed in Agones? Any thoughts? I don't know — I mean, I think we would —
E
— the version number was — I think at the time we were just like, yeah, this feels pretty stable, we're really happy about it, and we just went: the next release will be 1.0. Okay, okay. I mean, it sounds like you're continually adding stuff anyway, so if you've released a 0.x and you're like, "oh, the next release will be 1.0," you've probably got two or three PRs in there anyway — so I don't think it makes any difference, really. Okay.
B
Can we plan on having another meeting, then, before that? I know we have a community meeting next week — probably not the right time or place to discuss it — but maybe the following week let's have another meeting and we can talk GDC. I don't know — I know you probably left already — but if there was going to be, like, a talk or a booth, or, you know, trying to get Open Match out there, it'd be nice to know what we're doing.
C
As far as I know from talking to our people, we currently don't have anything that is Open Match specific — Open Match is just part of our story — but it's not like we're going to have a dedicated "this is Open Match" thing at our GDC presence. I think we will be at GDC — we're planning on it; I don't know how much we're going to be there, or how many of us are going to be there, but I think we definitely want to do something like, "hey, the Open Match —"
C
Before we're over — just a quick thing. Let me make a quick note: there has been some discussion on Slack recently, and there are a couple of people who are talking about, like, front-end and director type stuff, and my feedback has been: this is what we're kind of recommending you do, as the best guess we have.
A
Maybe we can revisit that in the community meeting next week. I mean, I think there is the open ecosystem discussion that we've had, and I definitely would like to see more on that — just in general, it would be efficient if we can get a lot more of the folks that you were just mentioning.
A
Let's try to get them into the community meeting and have, like, a discussion there — how about that? Yeah, awesome. So I think the last thing is: between now and release, for the scale work we probably need to meet again — maybe we can just do that as part of the community meeting — so long as, Caleb, you will be there for the community meeting.