From YouTube: Real Time Working Group 2020-09-16
A
All right, real-time working group, 16th of September 2020. Nearly.
A
It's just that we don't have very much WebSocket traffic. The only feature that runs through that is the web terminal. So I think, yeah, as far as I can see, the only thing we would have to do now would be the Helm chart updates to allow Action Cable to be run in-app, or allow that environment variable to be set. And then we would update this file to actually set that as the default; unless it's already defaulted to on, we would set it to on.
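The toggle described above could be sketched as a simple environment-variable check; note that the variable name `ACTION_CABLE_IN_APP` and the defaulting behavior here are assumptions for illustration, not necessarily the real chart setting.

```ruby
# Hypothetical sketch of the toggle discussed above: run Action Cable
# embedded ("in-app") unless an environment variable opts out.
# The variable name ACTION_CABLE_IN_APP is an assumption.
def action_cable_in_app?(env = ENV)
  %w[1 true yes].include?(env.fetch('ACTION_CABLE_IN_APP', 'true').to_s.downcase)
end
```

Defaulting to on when the variable is absent matches the "set it to on by default" idea from the discussion.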
A
I think that might be all we have to do, but I'm not sure. Anyway, anyone who'd like to can check that out, check my math and see if it's correct, but I think that's what's going on.
A
We have no Gabe, okay, cool. So it's the first item as well. So yeah, I thought we should discuss the decision to go with only supporting embedded Action Cable. It has a lot of pros; do we want to discuss the cons?
A
I want to discuss the risks of going with that, and where we should document the rationale behind the decision.
C
I think it's also just like what we do with the Kubernetes WebSocket thing that we're doing right now with the web terminal, right? It runs the full, our whole Rails app; it just only serves these certain requests, because it's only routed these certain requests, right? So the work would basically be the same with this WebSocket thing that they're working on right now, I think.
D
Yeah, so the conversation between Matthias and Gabe in the Slack channel talked a bit about how, by default, you can't treat WebSocket traffic as independent of web traffic if it's only in embedded mode. But obviously we will be doing that, because we do that at the HAProxy level anyway. So I think that probably effectively mitigates the main downside from that.
E
Yeah, that was the main thing I noticed: it draws the Rails routes if you run the standalone server, which are not used if it's in standalone mode, because it will...
E
Hard numbers behind what it means in terms of extra cost. It's also... this is a general GitLab issue, right? It's not even specific to Action Cable, I mean.
E
The same case can be made for Sidekiq: we also load a bunch of stuff when booting a Sidekiq node that is never used in Sidekiq. It's just kind of, yeah, like one...
E
A monolithic app, and it kind of surfaces the problem in different ways, maybe depending on what the type of server is that you run. But generally speaking, yeah. It's...
E
This is not going to scale; it also depends a little bit on how widely we roll out functionality that actually relies on WebSockets, right? Because...
E
One feature, and I don't think we have any really good insight into what we can expect in terms of how many clients we will have connected at any given point in time. So we will have metrics around this; they can probably also still be improved, but I would start simple: roll it out slowly, see what happens, look at the metrics that we already collect, and then really just go from there.
E
If we then find that this demands a traffic split like we mentioned earlier, and we want to run this on dedicated nodes, we can still do that, right?
C
Yeah, for .com I think we're definitely going to go with dedicated nodes which are split based on routing, because that's why we're blocked on Kubernetes: the main point was to split this off to Kubernetes nodes, and then regular traffic goes to the regular VMs that we have.
A
Now I'm just thinking, actually: for a first iteration we're definitely going to run separate nodes, because they're the only web service nodes we're going to be running. The rest of the HTTP requests are going to be run through regular web service nodes, and only WebSockets are going to be routed through Kubernetes.
E
Okay, I see, yeah. I didn't know that. Yeah, so, are we...
A
I mean, I don't see the point in maintaining it, personally, if we're going to be doing this anyway, where there are minimal savings from actually running it as a separate standalone thing, and anyway we can just scale up a dedicated set of web service nodes to do the exact same thing. So...
E
Heinrich, I had a question for you. Because in both modes we still run the full Puma instance, right? It might be configured differently, but in both cases it would still run Puma worker pools as well, regardless of whether you run it standalone or embedded, right?
E
Is there any way to get around that? Because that also seems like a point of confusion, since we have these two different pool settings now that you can tweak independently. But also, from a...
E
Perspective, because if I run a dedicated node that only serves Action Cable traffic, the only case in which I really need an ordinary Puma worker to handle traffic is to upgrade a connection. Correct?
C
Yeah, so the connection still goes through a worker, right, for the upgrade, and then it passes the connection to the other thread pool, so yeah. I asked Camille about it, like: should we default this Puma pool number to one when it's a standalone server, since we don't need it? But he said it's cheap enough that people can just leave it as it is right now, but...
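The trade-off discussed here, where a standalone Action Cable node only needs ordinary Puma workers for the HTTP-to-WebSocket upgrade, could be expressed as a sizing rule. The numbers below are purely illustrative, not GitLab's actual defaults.

```ruby
# Illustrative sketch of the pool-sizing question above: in standalone
# Action Cable mode the ordinary workers only perform connection
# upgrades, so the pool could in principle default to 1. The embedded
# fallback of 4 is an arbitrary example value, not a real default.
def puma_pool_size(standalone:, configured: nil)
  return configured unless configured.nil?  # explicit setting wins
  standalone ? 1 : 4
end
```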
A
Gabe brought up a couple of things on the thread. Are you happy to go with only supporting embedded mode?
C
Currently, yeah. It's like we lose that feature at the Action Cable level, but then we can do the same thing at a different level, so, okay, yeah.
B
As long as we have that ability to respond to scaling needs for Action Cable differently than normal web traffic at some point in time, so that we don't run into some weird thing where Action Cable is cannibalizing all of our web traffic and screwing it over.
A
I don't know what I think it's, Google Cloud or whatever, but right, like regular nodes. So yeah, but we save a lot, right? We save not having to maintain a separate container, not having to maintain a separate Helm chart for that container, and then a separate deployment that uses that Helm chart.
A
It's also easier to collect all the usage data, because it's really hard to collect standalone usage data, whereas in embedded mode we can simply detect whether the setting is switched on or not and report on that. So it's just going to save us a lot, I think.
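The reporting advantage of embedded mode described above amounts to reading the setting out of the application's own configuration. A minimal sketch, with invented key names:

```ruby
# Sketch: in embedded mode, usage reporting reduces to reading a
# setting and including it in the payload. The hash keys here are
# invented for illustration; they are not the real usage ping keys.
def action_cable_usage(settings)
  { action_cable_in_app_enabled: settings.fetch(:action_cable_in_app, false) }
end
```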
C
I was even thinking: maybe just default the Action Cable embedded mode to be true, and there wouldn't be any Helm work needed, right? We just deploy web nodes and feature-flag the client part which connects to the server. Too risky? Maybe, I don't know.
C
Yeah, they would, but no WebSocket connections would connect, right? So it's very little overhead, I guess, I mean.
B
Yeah, all I'll say is: y'all are the engineers, you are the DRIs for implementation. This is an implementation detail, and I trust whatever you decide. And if the decision we make now isn't the ideal one, it's not like we can't go back and do the other, so whatever you all want to do.
A
Okay. As the next step, I'm actually going to let Marin know on the other issue. So we have two issues for Helm charts: one is for standalone and one's for embedded. I'm gonna let Marin know on the one for standalone that I'm gonna close that out and that we don't intend to use it. Then maybe we can ask him if he sees any problem with enabling Action Cable by default, so that we can skip the Helm work altogether.
A
It's a nice idea; I don't know if it introduces any risk, though. Cool, all right. That's the next item as well: I just wanted to check up on the progress of tracking this in usage ping. So Matthias, thanks for exporting the data through Prometheus. What are the next steps? Can we schedule this usage ping issue and get it worked on in 13.5?
E
Yeah, it should be unblocked. So yeah, I mean, in the beginning I was kind of hoping to just do it all, but our team is super busy with the image scaling stuff, and we're running into some interesting complexities there as well. Yeah, I don't wanna derail this, so...
E
So yeah, essentially, also: I had a one-on-one with Craig, and he mentioned again that we're trying to encourage teams that own a particular feature area of GitLab to actually work on these usage ping related things, so that we don't concentrate the knowledge about how to do that in too few teams or individuals.
E
But I'm happy to help out with this if there are questions. It should be unblocked if we go down the route that we initially had in mind, which was to just query Prometheus for the existence of these Action Cable metrics that we now should have, and then use that to transform it into a usage ping value.
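The route described here, querying Prometheus and folding the response into a usage ping value, might look roughly like the sketch below. The response shape follows the Prometheus HTTP API's instant-query format; the metric itself and the summing logic are assumptions.

```ruby
require 'json'

# Sketch: sum an instant-query result from the Prometheus HTTP API
# (/api/v1/query) into a single number suitable for a usage ping
# value. In the vector result format, each series carries its value
# as a [timestamp, "value-string"] pair.
def sum_query_result(response_body)
  series = JSON.parse(response_body).dig('data', 'result') || []
  series.sum { |s| s.dig('value', 1).to_f }
end
```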
B
Cool, thanks so much. My only comment is: thanks for the "go learn how to fish"...
B
You
know
mentality,
so
that's
great
and
then
we're
happy
to
do
it,
but
if
it's
any
constellation
we're
already
adding
like
what
is
this
15
to
20
different
usage
being
counters
for
other
things
in
our
stage,
so.
B
Yeah, I know, but it's just great, it's good that basically all the teams are responsible, or all the groups are responsible, for their own usage ping things. And at the end of the day, I don't mind our project management group, or the Plan stage as a whole, pushing Action Cable forward. But I don't want us to be in a situation where we are the sole arbiters, or the group...
B
That's responsible for it at scale. Because that's what's happened with GraphQL, where basically everyone's using GraphQL now and all of the maintenance work falls on our team, right? And everyone's using labels now, or a lot of people are, and all the maintenance work falls on our team, and we have four backend engineers, and it's not sustainable.
E
Things, right. And again, if you do pick it up before, you know, I can also try to get to it as soon as I can. If you happen to pick it up before, feel free to pull me in as well; just ping me on the issue if there are any questions about how to get it done. There is some existing...
E
It should look fairly similar to what we're trying to do here, so we don't have to reinvent anything from scratch, hopefully, but yeah.
B
Cool. So I'm looking at this: the usage ping will be whether it's enabled, and we can then, I guess, get rid of checking whether it's in standalone or embedded mode, yeah?
E
Yeah. When we wrote the issue, and when we worked on the issue to actually emit metrics into Prometheus, that was still a thing, right, being in standalone or embedded mode. So I just attach it as a label for now to that metric; it's kind of like a dimension in how these metrics are published. So whether we then report it back into usage ping or not is kind of up to us.
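Attaching the server mode as a label, as described above, makes the mode a dimension of one metric rather than a second metric. In Prometheus exposition format that looks like the sketch below; the metric and label names are assumptions.

```ruby
# Sketch: render a metric line in Prometheus exposition format with
# the server mode attached as a label. The metric and label names
# are invented for illustration.
def metric_line(name, value, labels = {})
  return "#{name} #{value}" if labels.empty?
  label_str = labels.map { |k, v| %(#{k}="#{v}") }.join(',')
  "#{name}{#{label_str}} #{value}"
end
```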
E
It doesn't really add any additional complexity, because it will come back as part of the same query if you go to Prometheus and ask for these metrics.
E
So the biggest question there was actually: where would that go inside the usage ping? Because the usage ping is already a little bit messy. I mean, you probably know yourself if you work on it: it is in parts structured by stage, or like a feature area, but not every metric that we submit really fits that model.
E
We felt that pain a little bit when we were working on the self-managed topology metrics that we now submit with usage ping, to get a better idea of how self-managed customers deploy GitLab, because it also didn't really fit any of the existing categories. And then a question came up: how is that actually stored and processed as part of this version app...
E
That consumes the usage ping, and we actually ended up just storing it as a JSON document and basically handing it off to that downstream pipeline that then ingests this JSON into Snowflake, I think. So yeah, one open question there was: how do we do that with Action Cable? Because it's kind of a technical concern; it's not so much a feature, you know, whether you turn it on or not, because it will power many features.
E
So that's kind of an open question: metric naming, and where we should put that. And I think that's also the owner that you put down in the metric descriptions, if that's still a thing. So these are smaller things to think about, but yeah.
C
Yeah, so I think, I know we do something like Puma thread pool size in usage ping; I think it just goes with that. And I think the enabled flag for Action Cable is also a temporary thing in usage ping, because in the future it will be enabled for everyone, and we won't need to track this in usage ping. So...
B
My question was: what do we do with all the usage ping data, right? You added something you want to track because you want to answer some question, and I would ask: what question do you want to answer by tracking the information, and who will make a decision based on what we learn from the information that we collect?
E
I mean, what we track right now are really very technical, operational metrics, you know: how is the thread pool scaled, do we see tasks backlogging in that thread pool, what was the largest queue size that we've seen, and things like that. So it's really for fine-tuning your system. So I would say maybe a GitLab administrator, or an infrastructure team; I think those would be the typical consumers of this data. So far, outside the...
E
Thing that we already said we're going to get rid of anyway, yeah. And there is actually an existing structure called gitlab components, I think, which has some of these more infrastructure-component-specific metrics.
E
But I did go back to the telemetry team, and they said it's probably okay to not add it there, because now we also store the raw payload that we sent to the version app, and we can probably just rely on that being propagated downstream, and run queries and create dashboards just based on queries that go straight to that raw data: a JSONPath query that goes straight to the raw data that we were sending, basically.
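Querying the stored raw payload directly, as suggested here, is straightforward because it is just JSON: a dashboard query amounts to a path lookup into the blob. A minimal sketch, with invented key names:

```ruby
require 'json'

# Sketch: read a nested value straight out of a raw usage ping
# payload, the way a downstream JSONPath-style query would. The
# key names in the test payload are invented for illustration.
def dig_usage_ping(raw_json, *path)
  JSON.parse(raw_json).dig(*path)
end
```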
E
So that's good enough, and we've done that with the topology stuff, by the way, as well, and it worked fairly well for us. So if that's good enough (and I'm not really sure, because I'm probably not gonna be the person looking at that very much), then there's probably no extra work on the version app to be done.
B
I'm not gonna look at it either. Sure, I might look at whether it's enabled or not, to see if we're getting adoption of Action Cable, or if somebody's turning it off in general, but the pool size and stuff like that is not my concern. So if we leave it in the JSON blob, basically, and somebody's not happy with that and they want to see it somewhere else, they can do the work. That's sort of how I feel about it.
B
I would... we have the basic, isn't it in our values or somewhere in our handbook, that we default to on for everything, in most cases?
D
Yeah, also, at some point this is going to be like, you know, what happens if you turn Sidekiq off? It just doesn't work, right? Not right now, but at some point in the future, that will be what turning this off means. You can turn Sidekiq off because you might want to run Sidekiq somewhere else, but it's not really very useful for us to know in usage ping whether Sidekiq is on or off.
E
That's a really interesting point, because didn't we also say that in the long run we want to get rid of the long-polling approach? Because we're still polling for the actual data, right: we just send a signal into the UI right now that hey, something has changed, but you still have to go back and make a request to GraphQL.
E
So I think that happens in a separate increment, basically, right? So once that work is finished, it actually becomes an essential feature. Otherwise you're not going to get any updates at all as you look at the page, without doing a full page reload, right?
B
Well, yeah, you'd have to do a full page reload, but I think we have to look at the installations that are memory-limited, like a Raspberry Pi or some other small device, where you might not want that, right? You might not want to make that trade-off, and so basically what we would do is just fall back to the...
D
But just for the sake... yeah, right, yeah. I know there are other places where we do polling: I think we do long polling for something to do with CI pipelines at some point, and we do sort of ETag caching for issue descriptions and comments and stuff like that. So eventually that would probably use WebSockets, you know, Action Cable, as well, right?
B
We always work and stuff. Wouldn't we always want to have some sort of fallback if it's disabled, though? Like, what happens if you lose your socket connection randomly? Is everything just gonna break and stop working? That's...
E
Just by virtue of adding Action Cable, that is not a lot of extra memory use. So I would... to be honest, in the testing I did...
E
I actually saw, at least for this one specific feature, a reduction in memory use. Because when Action Cable sends signals to the UI to poll for the update, this actually led to a more efficient memory profile, maybe because the application had to load fewer classes into memory or something like that. And maybe, honestly, if you leave it running for a while this effect will deteriorate as well, but I don't think it's less efficient to do it that way.
A
So actually, since even those small instances are currently being DDoSed by everyone who has an open browser and a few open tabs, you know, with these constant long-polling requests. So, I mean...
A
I haven't heard... I haven't seen any feedback from that at all as to what that kind of traffic is like, but I would imagine it's quite high, especially for .com. I'd imagine it's n times the number of online users, you know, times the average number of tabs. We're nearly at time, so we can skip to Gabe, if you want to verbalize your comments at the bottom.
B
Sure. I just know the front-end aspect of things has been slow in the working group, but now it's picking up, so to speak. So, Natalia, I just wanted to ask if you would be able to work on that. I think you said you wanted to collaborate, but when would you be able to prioritize that in your schedule in the coming weeks? And if you have any questions about it, or want to collaborate at all...
F
Yeah, I already dropped a comment during the week. I think it really depends on when my manager is back, because currently I'm managing Knowledge, so I'm kind of busy right now with both individual contributor and management stuff. I really hope that 13.5 will be more or less good; again, it depends on direction items. So hopefully I will be able to start some work in 13.5 and continue in 13.6, but it also depends on Scott and Simon as well, and their will to collaborate.
B
Okay, I'll follow up with Simon. And I know one of the things that we're working on in 13.5 and 13.6 in our group is refactoring pretty much all of the components on the issue view to use GitLab UI. So it's also a good opportunity to think about refactoring it to support real-time stuff as well, so I'll connect with Simon and see if he can add his two cents to the issue.
A
Okay, cool, we're at time. Thanks everyone for showing up. I'll do an update to the working group page with the outcomes from today and drop the MR into the channel on Slack, so you can all... cool, thanks.