From YouTube: Real Time Working Group 2020-11-11
A
Cool, okay. Real-time working group, 11th of the 11th, 2020. The current state of play, I think, is that we're in the final stretch, trying to close off the final exit criteria for .com and for self-hosted large installs.
A
So yeah, the main one is: how can we get the feature flag defaulted to on? We're working through the last of the exit criteria to get to that point. So, first item: Gabe wanted to ask about reference architectures.
A
My understanding of the reference architectures is that we load test various endpoints with the representative throughput you would expect in a given deployment, in this case I think 10k, but that we don't actually follow or simulate user journeys. In other words, you will load the issue page, but we won't necessarily assign a user, unassign them, assign self and unassign self multiple times, or anything like that. Is that your understanding as well?
B
Yeah, I think one problem here is that it's applying representative throughput in terms of the number of requests, but it doesn't use a browser, like a JS-capable browser; it's more like a curl request or something like that. That means when it visits the issue page, it doesn't trigger the WebSocket connection and the AJAX requests. That's what I understand from the performance test: it's not a real browser, so no JS running, and therefore no WebSocket initiation, I think.
B
Yeah, unless the framework for running these tests changes. It's very difficult, because right now the reference architecture also does not measure front-end behaviour. For example, if loading an issue page takes a long time because the comment section is slow to load or whatever, then that doesn't really show up in the reference architecture either. Although we do have the AJAX endpoints for discussions or something in the reference architecture tests.
B
So we kind of have some data on these AJAX requests that the page makes, but we don't really have, when you load an issue, how long until it shows up. Basically, that's the LCP or whatever metric we have right now.
B
Yeah, but we also have to hold those connections open for a certain time. That's what the test script that Matthias and I ran did. It wasn't using the same tooling that we use for the reference architecture; it's just the easiest thing I found on the internet. I forgot what it was called, but yeah, something like that.
B
So what it basically did was: you put in the URL, the Action Cable /cable endpoint, and we also put in a message to send after connecting; we sent a dummy subscribe to some issue. It's kind of representative and similar to how we do a reference architecture, but I'm not sure that kind of tooling is already there for the, what do you call it, GPT (the GitLab Performance Tool) or whatever it is that we run.
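For reference, a minimal sketch of such a script in Ruby, assuming the websocket-client-simple gem; the URL and the channel identifier are illustrative placeholders, not the exact values from the test. The Action Cable protocol requires waiting for the server's welcome frame before sending the subscribe command:

    # Sketch only: connect to the Action Cable endpoint and send a dummy
    # subscribe, roughly what the ad-hoc load test script did.
    require 'websocket-client-simple'
    require 'json'

    url = 'wss://gitlab.example.com/-/cable'                        # illustrative host
    identifier = { channel: 'IssuesChannel', issue_id: 1 }.to_json  # illustrative channel

    ws = WebSocket::Client::Simple.connect(url)

    ws.on :message do |msg|
      data = JSON.parse(msg.data) rescue nil
      next unless data

      case data['type']
      when 'welcome'
        # Subscribe only after the welcome frame, per the Action Cable protocol.
        ws.send({ command: 'subscribe', identifier: identifier }.to_json)
      when 'confirm_subscription'
        puts "subscribed at #{Time.now}"
      end
    end

    # Hold the connection open, as discussed above, so the server keeps the
    # subscription alive for the duration of the test.
    sleep 60

Running many of these concurrently approximates holding thousands of idle subscriptions, which is the part the existing HTTP-only tooling does not exercise.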
A
Yeah, so maybe it's not blocking, but we should probably follow up to see whether we can include that script, or something like it, in a similar test. Ultimately, if we're rolling this out on .com and we're able to measure it in production, maybe this is kind of a nice-to-have. I do understand Gabe's point, though, in that...
A
I mean, with the reference architectures, what is it really testing? It's really just testing, like you said, the loading of the library, so it's maybe of limited utility to us. If we had made this an original blocking criterion, then we should probably have considered a proper load test as the blocking criterion. But on the other hand, we're putting it on .com, we're able to measure it and switch it off, and we're deploying it on new architecture.
B
Yeah, even load testing is tricky. Even with something similar to the script that I ran, I'm not sure what it actually checked. What we want to check is how messages are actually transferred over the wire, or how fast they arrive at all the subscribers, and that didn't really achieve that. It's kind of harder to test, and I'm not sure how. But you would try to publish one event to some large number of subscribers.
B
You want to measure how long it took to complete for everyone, right? I guess we'd have to hook into Action Cable internals, or, I don't know if there are existing hooks, to log when it starts publishing to all subscribers of a certain event and when it's done publishing to everyone.
A
Okay, I'll put down another takeaway for that: to see if there's instrumentation, like events or something, already in Action Cable.
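Action Cable does ship instrumentation: it publishes events through ActiveSupport::Notifications, so a first pass might not need to touch internals at all. A sketch of listening to two of the stock events; the event names are the ones Rails emits, while the logging and the fan-out interpretation are illustrative:

    # broadcast.action_cable fires once when an event is published;
    # transmit.action_cable fires once per subscriber the message is written
    # to, so the gap between a broadcast and its last transmit approximates
    # "done publishing to everyone".
    ActiveSupport::Notifications.subscribe('broadcast.action_cable') do |*args|
      event = ActiveSupport::Notifications::Event.new(*args)
      Rails.logger.info("cable broadcast #{event.payload[:broadcasting]} (#{event.duration.round(1)}ms)")
    end

    ActiveSupport::Notifications.subscribe('transmit.action_cable') do |*args|
      event = ActiveSupport::Notifications::Event.new(*args)
      Rails.logger.info("cable transmit from #{event.payload[:channel_class]}")
    end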
A
Yeah, sure. Okay, cool, thanks. It seems like that's the takeaway. I guess we're never going to include this work in the reference architecture, so we can consider that exit criterion done, but we should probably follow up with something like this. I don't think it should be blocking, since we're rolling this out on .com behind a feature flag anyway. So, cool, okay, the next item then: I just want to figure out exactly where we are with rolling this out on .com.
A
So I'll give you my understanding, and then both of you can let me know if you agree or disagree. My understanding is that the main blocker now is the creation of dedicated WebSocket pods, and therefore my assumption would be that we think the memory problems we saw in production with Workhorse are because of back pressure, from having too many pods or pods that are hitting resource limits.
A
Is that right? Or convince me I'm wrong.
C
I joined right on time; sorry for being late. Yeah, that is my understanding so far. I spent some time looking into this as well, and it looks like during the time of the incident there were a couple of things going on simultaneously, and maybe the fact that Action Cable was being added to that mix exacerbated the problems we already had.
C
But I looked at what's going on exactly when users connect through Action Cable, and Workhorse is not really part of that equation, except for the initial upgrade request. The user will visit the issue page, for instance, and the client will send a request that goes through Workhorse, which is routed to Rails. That upgrades the HTTP connection to a WebSocket connection, but from there on Rails takes over, and the Action Cable server running in Puma takes over.
C
Basically. And this is something I looked at separately, because it was quite tight: the memory limit was only 80 megabytes or so for Workhorse, together with the change that infrastructure had made to reduce the number of pods that we run. This is also in line with what I was seeing when looking at the production profiles that we pulled for Workhorse during the incident. It just looks like it was busier than usual; there was just more data going over the wire at the time. So I don't think it's directly related to Action Cable. It was also really hard to look into this, because that node pool was servicing totally different kinds of requests. So, I don't know if you talked about it already, but what we should really wait for is to properly separate the traffic.
C
Between serving Action Cable clients, and other WebSocket-based features that are not actually handled by Rails but that Workhorse handles directly, related to the terminal feature, for instance.
C
So if you just have more requests being handled by a single Go process, the allocations for those network buffers will just increase. That's what we were seeing during the time of the incident, so this seems to have at least contributed to the memory growth there.
C
Right, exactly, thanks. That's actually a really important point, because in the same profiles we pulled from the time of the incident, we also saw that the majority of memory was actually being allocated by gRPC, which we use to communicate with Git and download data through Git. So that is not related to Action Cable at all.
B
Yeah, so basically we were just really unlucky when we enabled this, twice already. Because of those incidents, everyone's hesitant to turn this on again until we have dedicated nodes, so we're kind of blocked on that. Jarv mentioned in the last meeting that they're working on it, and he said that in the next two weeks they could probably get the pools up.
A
Okay, cool. It seems like there's nothing more to do then, at least for the next two weeks, until we get that infrastructure. So what's the next step: when we get that, can we immediately start to roll out the feature flag again?
C
Yeah, exactly. And the good thing as well, something I guess we learned, or I learned at least, from the outage, is that we now know a bit better what to keep an eye on. Because the Action Cable workload is largely served by Puma and not Workhorse, I was personally very focused on that part of the stack before.
C
So yeah, it was a bit of a lesson learned as well: you really need to look at this holistically.
C
Including Workhorse, and what exactly within Workhorse to keep an eye on. So I feel like we're a bit better prepared this time as well.
A
Okay, awesome. Thanks, Heinrich. We can move on to your item then.
B
Yeah, I'm just saying: since we did the performance reference architecture thing and we didn't see anything wrong, like increasing memory, with mounting the engine or enabling it, it could perhaps be mounted by default on all Puma workers. That way we don't have to have that environment variable, and I'd just change the feature flag logic to depend on the feature flag alone, because right now it's an OR, right: env var enabled, or the feature flag.
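A rough Ruby sketch of that gating, with hypothetical names rather than the exact GitLab code:

    # Today: on when EITHER the environment variable OR the flag is set.
    def realtime_enabled_today?
      ENV['ACTION_CABLE_IN_APP'] == 'true' || Feature.enabled?(:real_time_widget)
    end

    # Proposed: engine mounted everywhere, the feature flag alone decides.
    def realtime_enabled_proposed?
      Feature.enabled?(:real_time_widget)
    end

The benefit is that a .com rollout can then be driven entirely from the flag, without redeploying to flip an environment variable.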
C
We also have this concept of operational feature flags, which are really just feature flags as well. I don't know if that would help in this case, but it's something we just used for the image scanning rollout. We used to have these layered feature toggles, based on who's making the request and who owns the content being served, so that we could do a staggered rollout across different dimensions.
C
Basically, that was just because we were trying to be super careful, but in the end we decided to fold that into one, and we turned it into an ops toggle. That makes it a perpetual feature flag, basically, so that if there is a need to reduce pressure on certain parts of the application, we can just turn it off again.
B
Yeah, I guess there's value in being able to turn it off, and I guess we'll just keep it for now to reduce complication. Then at the end of the working group we keep the environment variable, but maybe default it to true in Omnibus and in the charts, right?
A
Yeah, we don't actually need to make any changes to the charts, even if we default it to true. Only if somebody then wants to turn it off do they need to include it in their charts or in their configuration.
B
Yeah, this gem is already part of our Gemfile because it's part of Rails, so basically it's a matter of not requiring it and not mounting it in our Rack middleware or the Rails routing.
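The mounting itself is the standard one-line Rails routing idiom, so a sketch of skipping it per workload could look like this (GitLab serves the cable endpoint under /-/cable; the guard condition reuses the hypothetical switch from the sketch above):

    # config/routes.rb (sketch): mount the in-app Action Cable server only
    # when the switch is on, so nodes that never serve WebSockets never load it.
    Rails.application.routes.draw do
      mount ActionCable.server => '/-/cable' if ENV['ACTION_CABLE_IN_APP'] == 'true'
    end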
C
Yeah, I mean, I think the reason we had put it behind an environment variable was because we ran into problems with initialization order, if I remember correctly. Generally, we're trying to move in a direction where we get a little bit better about separating things, only loading what we actually need for the particular workload we need to serve.
C
Because the memory team constantly struggles; we're always behind, always trying to catch up with all the stuff that's being added to GitLab at an incredible pace.
C
Good for our users, but we have the mandate to push memory use down below two gigabytes, so that we can keep running GitLab on a two-gig node. Even if this one thing isn't consuming a lot of memory, it's death by a thousand cuts, you know what I mean? If we don't get into the habit of separating this a bit better, it will get even harder to disentangle going forward.
C
So I would be very interested in having a solution where we still have this switch somewhere, so that we could prevent loading things into memory that aren't actually necessary for a particular workload.
B
But for a single node this one will be required anyway. The only reason you would disable it is if you had another node that purely served the Action Cable requests, like we do. That is true, exactly.
C
Yeah, I mean, I don't know how it would work in this case, but we have similar problems with Sidekiq, right, or running the API fleet. Okay, separating out an API cluster is maybe not something most of our customers would do; that's probably specific to .com. But running Sidekiq separately, for instance: we also load a ton of stuff currently when booting up Sidekiq that has no place...
C
...on that node, because it would never even use it. I think things like Action Cable just make no sense if you fire up a Sidekiq node, right? So from that perspective, I'm hoping that... and we don't currently have a good answer for how you would do that, because we don't have APIs or tooling to assist developers in deciding what we should load at any given point in time.
B
Yeah, sure. So I think we decided to keep it, and just enable it by default later, when we're okay on .com.
B
But yeah, the good news is that after the jemalloc change we didn't see any Ruby or Puma memory problems during the second rollout; it was rolled out for a day and a half or so. So we could probably use GraphQL subscriptions, but I would just want to get this basic thing turned on on .com for probably a few days, then merge it and see the difference, just to prevent another incident.
A
I'll just put in here that real time in general is very interesting from the point of view of transient bugs, which are going to be a focus imminently and in the near future. These are typically, but not always, bugs that have to do with long polling, where you think you're done and the page is ready for your input, and then something comes out of the blue and surprises you.
A
The MR view is a great example of this: you think you've seen all the widgets and then a new one appears, or things tend to move around on the page, and new status updates pop in when you least expect them. So real time is really interesting from that point of view, and I think it's going to be really important. One of our exit criteria is that we produce really good developer documentation for anyone who wants to build on this.
A
Whatever way we go with it, we should probably go, like you said, with GraphQL subscriptions, and document it as if we're going to go with that. Natalia, you might have some input here as well. So yeah, that's just my point of view: since this is such an immediate, near-term requirement, it would be really good to get real time working on .com to allow people to start contributing as quickly as possible.
B
I'd like to add that with GraphQL subscriptions I'm expecting an increase in Puma memory, because of how it works: when a client subscribes to a subscription, it provides a query, like "I'm subscribing to the issue update event, and I want these fields; I want the assignee..."
B
"...I want whatever fields." And what Puma does, or the graphql gem on the Puma side, is save this query together with the connection, so that when an event comes in, like an issue update, it then searches for all...
B
...subscribers that subscribed to this issue update event, and then executes those queries for each one of them. So it's not terribly efficient, because it reruns the query depending on how many people are watching that issue. If 20 people are watching, it executes the GraphQL query 20 times, because the context is different for different users, and permissions are different, so it has to execute it 20 times, and the queries could also be different.
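A sketch of that fan-out with the graphql-ruby API; the field, argument, and type names here are illustrative. The gem stores each client's query document alongside its subscription, and a single trigger re-executes the stored query once per matching subscriber, each under that subscriber's own context and permissions:

    # Illustrative subscription root type: clients subscribe with whatever
    # selection set they like, and graphql-ruby persists it per subscriber.
    class Types::SubscriptionType < GraphQL::Schema::Object
      field :issue_updated, Types::IssueType, null: true do
        argument :issue_id, GraphQL::Types::ID, required: true
      end
    end

    # After an issue changes, one trigger fans out to every stored
    # subscription matching these arguments. With 20 watchers, the stored
    # query runs 20 times, once per subscriber's context.
    GitlabSchema.subscriptions.trigger(
      :issue_updated,
      { issue_id: issue.to_global_id },
      issue
    )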
B
Technically the queries could differ, because it's GraphQL. If they all just used the web UI, they'd all be the same, because the query is provided by us and it queries just the assignees. But this is an open API where you could query basically anything, so it has to execute the query for each subscriber. Then again, it's basically what we do right now, just on the API nodes, because we query the API right after the Action Cable ping.
C
That kind of problem, is that something we thought about?
B
Not really, because you could subscribe and it triggers a query when an event comes in, right, but you could just do a query right away; then it's the same thing.
C
I think it's more about how they are spread out, and we probably don't have to get too deep into this. My understanding was that it's a bit more about when that query executes: if you have a polling solution, different clients would poll at slightly different times, so you wouldn't send all these requests at the same time; whereas if the server is responsible for deciding when these queries execute, it sounds like they would all run at once.
B
Yeah, but I feel like it's also similar: you could parallelize those clients and send thousands of requests at the same time, and it would have the same effect. But yeah, it's definitely also something to look at, how we add limits or whatever to these things.
A
It seems a little bit like what Natalia and I worked on with startup JS, in that, and I don't know if this is a real concern, but I guess GraphQL queries can be quite large, and holding a lot of them in memory might be an issue. On the startup.js project we sort of simulated stored procedures, in that we allowed some queries to be added into a special folder on the back end and just reused, so you reduce the amount that's sent over the wire.
A
You reduce the amount that's held in memory. I'm not sure if the big problem here is really applying the same query over and over with very slightly different variables, but it's just interesting that we came up against this idea of stored procedures before; they're a component of GraphQL Pro, which we don't use.
D
Yes, we actually moved queries to shared queries, because if we leave them where they are, they're part of the bundle, and obviously our Haml-embedded scripts are not able to recognize and read them. So they moved outside of the JavaScript bundle to shared queries. There are some issues around this, like needing to add type names manually to the query, but more or less it's useful. And just to mention subscriptions:
D
I mean, currently, as Heinrich said, if we have a signal from the WebSocket, we are performing these queries anyway, just from the client, not on Puma, so I don't think there will be a huge difference. I understand the memory issues with Puma, but we'll still have performance issues if thousands of clients send the same query at the same moment anyway; it's a trade-off.
B
Yes, with the API thing that we're doing right now, or was it the API? Yeah, we're doing GraphQL when we receive the signal, so it basically still goes to Puma nodes, but on separate servers, the API fleet that we have. Maybe it's just better provisioned right now; I know our API fleet is built for many requests, because it's being hammered by some clients and all that. So I guess it's about provisioning: how much do we need to provision for our WebSocket fleet?
D
I mean, worst case, the frontend can operate with the current solution. We can just document it nicely: without GraphQL subscriptions, just having a WebSocket and refreshing queries. With proper documentation this wouldn't be an issue, but developer experience would be much better with subscriptions, of course.
B
Oh yeah, I liked how the subscriptions integrated with Vue and such; the Vue components already support it. I didn't know about it until I worked on it, but you just mention the subscription and it auto-updates the data for that component.
D
It shouldn't be a really big refactoring, because we will probably create an abstracted service to deal with the WebSocket, and if we refactor to subscriptions, as Heinrich said, it's just adding one small parameter to the query, and in this case it will just be updated every single time that subscription is sent.
B
Cool, that's great. It will be similar to the MR I made, because that MR changes the current implementation of assignees to subscriptions, and it wasn't that big or hard. But I guess the question is: do we want to let others try it on the current implementation now, or do we wait?
A
I know it's frustrating to have to wait, but we keep down the number of variables, of things that can go wrong, and we increase our ability to measure what we do ship. After that, hopefully people will just take it and run with it, and we'll have a complete real-time GitLab in no time.
A
That's the success measurement for the next real-time working group: to reach that point, I think. Cool, all right, we're over time and through all our items, so if nobody else has anything to raise, I'll call it there. Going once, twice... okay, cool. All right, thanks everyone for your time; great session. I'm going to follow up on some of these items, probably through MRs to the real-time working group page, and move some of the exit criteria around to accommodate the things that we're expecting to do next.