From YouTube: Real-Time Working Group 2020-03-25
B: So, if you assign somebody via a quick action or something like that, currently the sidebar doesn't update. We picked a pretty simple feature to start with, but the purpose of the working group is to get us to the point where we can build features in real time using WebSockets. We've reached a couple of stumbling blocks: we've built the application side of things, and what we need to figure out is what we have to do to get it deployed.
B: The purpose of the working group is to get a feature to self-hosted customers, and I thought the first iteration we could do would be to actually ship a WebSocket connection on the issue show page, but not actually do anything with it. Provided we have observability built in, we should be able to see, from any open connections we have to handle, what the load is on the various parts of the system, so Redis nodes and pub/sub nodes.
B: We have a number of open questions, like: should we be running separate app nodes for the Action Cable Puma servers, to isolate this from the web nodes or API nodes? What kind of observability do we need to build in? And can we reuse the Redis nodes that we already have, or should we have separate nodes for WebSockets? There are no infrastructure issues at the minute, just application issues to build that first feature.
D: I just wanted to add: I think our first goal here is really deploying this to GitLab.com, and not really self-hosted yet, I guess. Because, you know, similar to Puma, GitLab.com was the first one, and then you learn how to configure it, and then you configure the defaults for Omnibus. But, same thing, we still need a lot of work on the other side, you know, configuration options and that stuff, just disabled by default, so that we can enable this on GitLab.com.
D: Yeah, so basically we chose Action Cable because it's not really tied to a specific implementation. It's more like an API for how Rails sends messages and, you know, handles these channels, which are basically like subscriptions. And there is a gem, not from the Rails project, it's a third-party one, but it's pretty popular, which is AnyCable: they built an Action Cable API-compatible WebSocket server that's written in Go. We did initial evaluations, but thought that we need to start with plain Action Cable, the Ruby servers, because of, you know, added complexity and stuff, and there are a lot of things that we don't really know right now. So that's why our goal is to get this onto GitLab.com, see how bad or good the performance is, and then evaluate whether we need to do that.
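The channel/subscription model described above can be illustrated with a tiny plain-Ruby sketch. This is not the actual Action Cable API; `FakePubSub`, its methods, and the channel name are all invented for illustration, but the shape is the same: clients subscribe to a named channel, and a broadcast fans the payload out to every current subscriber.

```ruby
# Minimal conceptual sketch of the channel/subscription model that
# Action Cable exposes (hypothetical names, not the real API).
class FakePubSub
  def initialize
    @subscriptions = Hash.new { |h, k| h[k] = [] }
  end

  # Register a callback for a channel name, like subscribing to an
  # Action Cable channel from the browser.
  def subscribe(channel, &handler)
    @subscriptions[channel] << handler
  end

  # Deliver a payload to every subscriber of the channel, like
  # ActionCable.server.broadcast("issue_1", data).
  def broadcast(channel, payload)
    @subscriptions[channel].each { |handler| handler.call(payload) }
  end
end

bus = FakePubSub.new
received = []
bus.subscribe("issue_1") { |msg| received << msg }
bus.broadcast("issue_1", assignee: "alice")
# received now holds [{ assignee: "alice" }]
```

In the real stack, the browser subscribes over the WebSocket connection and broadcasts are published through the Redis pub/sub adapter rather than an in-process hash, which is why the Redis and pub/sub nodes mentioned earlier matter for capacity.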
D: Also, regarding the architecture docs, you know, how the different components interact: it's actually spread across different threads in the different issues, so yeah, good that you pointed that out. I should probably write something up somewhere central, because it wasn't final yet and we were still discussing which way to go. But yeah, it's a good idea.
D: Yeah, but initially, yes, just to give a quick overview: there are many ways to run Action Cable. It could be run on the same web servers, like Puma, where it just spawns a different thread pool, but we ultimately decided we want a separate process, a separate Puma process and server for this, just to increase isolation. So currently the architecture would be, at the workhorse level...
D: We have a reverse proxy, so workhorse acts like a reverse proxy now, right, and we'd add a route there where /cable would proxy to the Action Cable Puma servers instead of the regular Rails server, and then those Action Cable Puma servers handle the WebSocket connections. That's how the structure looks.
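For the separate Action Cable Puma process, Rails supports running the cable server standalone from its own rackup file. A minimal sketch, assuming a `cable/config.ru` next to the app (the path and port are illustrative, not the actual GitLab configuration):

```ruby
# cable/config.ru -- boot the full Rails app, but serve only Action Cable,
# so it runs as its own Puma process, isolated from web and API nodes.
require_relative "../config/environment"
Rails.application.eager_load!

run ActionCable.server
```

This would be started with something like `bundle exec puma -p 28080 cable/config.ru`, and the reverse proxy (workhorse here) would forward the cable path to that port instead of to the regular Rails server.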
C: On the shared-state thing, there is an issue currently open to move some things away from the Redis shared-state instance. I don't think that's a blocker, but I might want to take that and go check: if we add a new Redis instance, that's fine; if we use the existing one that we're using for CI, which is the shared-state one, we might want to check...
C: ...whether it makes sense to do that. We don't want to have to do the work twice by, you know, doing it once now, not in Kubernetes, and then adding it to Kubernetes in the future. So, I mean, it probably doesn't make sense; I'm just concerned that we'd hit other issues. But on the other hand, if we put them in Kubernetes first, it's also a low-stakes thing for the Kubernetes nodes, because it would be flagged behind a feature flag, yep.
A: Yeah, no, I think it would be a good test to start moving web stuff, and I've been trying to get the start of the Rails stack moved. Since this doesn't sound like it has any of these legacy dependencies on things like local files, it would be a good start, to beta-test running the Puma stack in Kubernetes. So it's a safer test than, say, running a web or API node.
A
You
know
which
m4
team
would
open
that,
of
course,
it's
probably
delivery,
but
I
can
we
can.
We
can
try
and
get
some
other
people
to
work
on
it.
I've
been
trying
I've
been
trying
to
get
them
to
just
to
shard
out
the
load
of
moving
stuff
to
kubernetes
outside
of
just
Jarvan
skarbek,
because
right
now,
they're,
because
they're,
the
only
ones
doing
the
work,
they're,
the
bottleneck
and
we
need
to.
We
need
to
get
some
volunteers
to
do
other
stuff.
A: Yes, it would be running a Puma container, so it'd be running the entire application, you know, but only executing subcomponents of it. But that's fine; there are already Docker images and stuff for this. That shouldn't be part of any of the blockers. If this is mostly going to be talking to Redis and not the database, I don't see too many blockers here. I don't know how many database accesses this is going to cause.
D: Yeah, it is going to cause some DB connections when clients subscribe, because you need to check permissions and stuff like that, but not many more. And we'd use the same configuration as our current Rails app, I mean, you know, share the same database YAML, just to simplify things on both sides.
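The subscription-time permission check can be sketched in plain Ruby. Everything here is hypothetical (the struct, the `member` flag, the method name); in Action Cable the equivalent logic lives in a channel's `subscribed` callback, which calls `reject` for unauthorized clients, and it is that per-subscription lookup that generates the extra DB work mentioned above:

```ruby
# Stand-in for the record a client asks to stream updates from.
IssueRecord = Struct.new(:id, :confidential)

# Runs once per client subscription: check the user's permission on the
# record and either accept or reject the subscription. This lookup is
# where the per-subscribe DB connections come from.
def authorize_subscription(user, issue)
  return :rejected if issue.confidential && !user[:member]

  :subscribed
end

confidential_issue = IssueRecord.new(1, true)
authorize_subscription({ member: false }, confidential_issue)  # => :rejected
authorize_subscription({ member: true },  confidential_issue)  # => :subscribed
```

Because the check runs once per subscription rather than per message, the steady-state broadcast path stays cheap; only connection churn adds database load.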
C: And also we go through PgBouncer, so they're not all necessarily direct database connections. But it also sounds like it's easy to calculate, because if we just use the same configuration, it's just as if we were going to add some more web nodes. It's just that these web nodes aren't used for web traffic, they're used for WebSocket traffic, which is easy for us to calculate.
E: Well, one question I have, just having looked at the benchmarks: I think at some point in the near future we'll run into performance problems with Action Cable. How much of the work we're doing now are we going to have to throw away if we, like, switch to AnyCable or integrate that into workhorse? I just don't want to do a bunch of rework right out of the gate for what we kind of can know, based on data we can look at, will be performance bottlenecks, specifically memory.
C: We do that as well; there's definitely some of that in our application, because, you know, if someone has 20 tabs, we don't want all of those background tabs to be requesting every second. I think we have something that only does it when you switch back to the tab. So maybe that wouldn't work there.
E: I was just going to say: I want to keep it simple for the first one, but also be realistic about what we want to use the service for. At least in my stage, the groups want to create as many things with this as they can: you know, more or less make everything on issues real-time, make issue boards real-time, push as much stuff as we can in the application through this, so we don't have to do a bunch of page reloads to get accurate information.
E: So that's one thing, and the other was: I don't know how it works here, but typically when I've done stuff like this in the past, on the client at least, I maintain the state there, cached in local storage. That way, no matter where it updates or in which tab, that local storage is shared across all tabs, so it will automatically update across all tabs without each one having to maintain its own separate state. So that could also be another option to look at.
D: About how much we're going to throw away if we ever switch: basically, what would happen is that we keep the application code the same, and the thing that would change is that, instead of running a full Puma server, we would be running the AnyCable server, which is a single Go binary. I don't think we ever want this to be built into workhorse or something; I don't see what gains we'd get from that, and it's just more, you know, things to maintain on the side.
D: So the AnyCable Go server runs the WebSocket part, so it holds the WebSocket connections, it spawns goroutines for every WebSocket connection, things like that, and connects to Redis for pub/sub. But it needs Rails, right, so it also needs a gRPC server. We could run these gRPC servers for AnyCable on the same Action Cable nodes, or it could be a completely separate set of nodes. So yeah, mostly configuration changes would be needed if we wanted to do the switch.
B: If we did it through Kubernetes, though, couldn't we just scale down the number of pods that use Action Cable and scale up the ones that use AnyCable, because the application interface is the same? What I'm wondering, though, is: even if we do it through Kubernetes, how do we ensure that we're not competing? We'll have a fixed number of nodes anyway, so even if we need to scale up the number of pods, we'd still be competing with the Sidekiq implementation as it is.
B: Okay, that's a good answer, cool. And then I don't...
C: I think creating an issue for Delivery makes sense, but, like Ben says, potentially say to them, you know, if another infra team can do this, that would be great; that's also, you know, knowledge sharing. But I think starting there would then make sense, with them owning this initial Kubernetes rollout.