From YouTube: Real-Time Working Group 2020-06-03
A: Okay, Real-Time Working Group, 3rd of June already, yeah. I have the first item, so kicking off with announcements. We spoke before about reference architectures and the possibility of using those to get some idea of resource usage. I added that as a step to the exit criteria; we can discuss further down the agenda whether we even want to continue in this direction, but Grant actually made some progress already and ran a separate test.
He ran a reference architecture with Action Cable switched on and noticed roughly a 12 percent increase in memory, which is more or less expected, because we're running two extra Puma workers. But he did mention that testing the feature itself would be somewhat problematic: if we wanted to, say, put an unknown number of WebSocket connections on it, we would need some new testing apparatus. So there's an interesting discussion there; you can read it and decide whether we want to keep the reference architecture as part of the exit criteria.
B: We're going through the next planning phase right now, and the topic came up again for 13.2. I think we had a bit of a directional struggle internally about whether our focus should currently be on the performance of GitLab.com or more on the self-managed side as well, so I'm bringing up the topic again for 13.2. It's a bit of a non-update, unfortunately, but we're doing this right now, so I hope we'll get more clarity on this soon.
A: Cool, thanks. The next item is mine. We had a discussion with Christopher, who is the executive sponsor for the working group, I think that's the term. We are past our original due date, although that was set with practically no knowledge of how we'd proceed, so I think that's okay. We should look at setting a new one, so I asked Marin to estimate roughly when the delivery team might get around to prioritizing some of this work, and he mentioned a couple of blockers.
So that puts us in August. I wanted to see what people thought about Hendrik's proposal earlier in the week that we potentially go down the route of embedded Action Cable. I don't know if anyone has read the discussion on that issue; there's quite a bit of back and forth. Hendrik, do you have any context on that?
C: Yeah. The feature is actually working, but I think it was Camille who expressed some concerns regarding running embedded mode, because there are some unknowns: since everything runs in the same process, there might be resource contention. But it was also agreed that we can't really tell without data, so I guess it makes sense to continue working on allowing embedded mode so that we can actually get that data, right?
I mean, it's a good reason for us to say we need to run embedded mode, at least on staging maybe, or even on GitLab.com for a certain group of projects, a very minimal set of projects, just so we get some sort of data to decide whether embedded mode is worth it, or whether it's something we only expose to our self-managed customers, even if we don't use it ourselves in the future.
C: Yeah, I proposed that because, you know, it makes the setup more similar whether you're on a single node or on multiple nodes. But it's related to the next item on the agenda: Marin mentioned that it's a non-starter for staging to have a different configuration or a different setup. So the option of having separate nodes running in embedded mode, split by HAProxy so that they only handle WebSocket connections, is something we could test, but Marin mentioned that it's going to be a lot of duplicate effort and probably not worth doing. So my next idea was: why not just run embedded mode right on the web servers themselves, so that we don't have to do any infra changes?
B: I can't speak on behalf of Camille, but from the comments he left it sounded like he was not a big fan of that idea, just because of the knock-on effects it might have: it's impossible to scale it independently of ordinary web traffic. So he made a good list of concerns, good points on why it might not be a great idea to do that in production.
On the other hand, I kind of agree with you as well that we need to get something out there to get our feet wet, test something, and put some substance behind it in terms of numbers, you know, what we can handle, because otherwise it all remains speculation. It's really hard to test this in a local setup. And there was also the option of doing... I don't know if that's been dismissed now, but I always kept thinking: why can't we...
C: I think we're going in that direction because, like I mentioned just before, to let us actually try this embedded mode we'd have to add that switch anyway. So we're going to do that MR; it's not going to be too hard. But then I think we'll ask the question whether to deploy this publicly, or announce that the switch exists in our docs, you know, make it a real option. Initially, though, we're going to have to do it anyway, to test and get some data, right?
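As a rough sketch, the switch being discussed could amount to conditionally mounting the Action Cable rack app inside the main application. The setting name and wiring below are assumptions for illustration, not the actual MR:

```ruby
# config/routes.rb -- hypothetical sketch of an embedded-mode toggle.
# When the (illustrative) setting is on, the main web Puma serves
# WebSocket traffic itself; when off, a separate standalone Action
# Cable server is expected to handle the /-/cable path instead.
Rails.application.routes.draw do
  if Gitlab.config.action_cable.in_app_mode
    mount ActionCable.server => '/-/cable'
  end
end
```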
I think the main problem here is that, if we're proposing this option to self-managed customers on single nodes, we must have some sort of confidence or tests that it works. We don't run it on GitLab.com, we don't run it anywhere, so how do we... I guess we could run it on dev, right? That's single-node, I guess.
B: Yeah, in a way it's quite similar to what we do with self-monitoring and the embedded Prometheus server, right? There you also have the option of running it embedded for a single-node setup, or putting it on a dedicated node. So it's not like this would be something outrageously different compared to what we've been doing before.
D: As long as the normal executing code is the same, I don't see a big difference; it's just a question of whether you run it in a dedicated mode. Right now we already run dedicated nodes for some of our use cases, even for the Rails app: we have dedicated web, we have dedicated API, and we have dedicated Git, and it's the same codebase, just running in isolation. And we actually want to do more of that, where we're going to use traffic routing ahead of the Rails app to send, you know, merge request traffic...
C: Yeah, the other thing, related to the previous discussion about where we want to split this, is the question of whether we want to do it at the load balancer level, like we do with web and API right now. That way they're actually running the exact same thing, and the requests are just routed differently. But when we were talking about Action Cable embedded mode, the switch for embedded mode, it's actually slightly different.
It's not the same code path, because we're running a different rack application. Like I said, we could also run the exact same rack application that handles web requests and then just split it at the proxy level, so that it doesn't receive any other web requests, if that's what you were saying.
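The proxy-level split described here, where identical application nodes are deployed but only WebSocket traffic reaches a dedicated pool, might look roughly like this HAProxy fragment (backend names purely illustrative):

```
frontend http-in
  bind *:80
  # Route only the cable endpoint to the dedicated pool; everything
  # else goes to the ordinary web fleet running the same codebase.
  acl is_cable path_beg /-/cable
  use_backend websocket_nodes if is_cable
  default_backend web_nodes
```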
A: Just a question on that: what's the difference between running the entire application and only serving WebSocket traffic to those nodes? To clarify what I mean: all nodes run Action Cable, but we only send some traffic to some nodes, versus what we were doing up until now, which is to have a separate server for Action Cable.
C: There are slight differences. I think you saw it when you were creating the Docker images, right, that we had to specify a different rack-up file for the Puma server. The Puma server runs that rack-up file and goes directly to the Action Cable server's rack application, so it skips a lot of things, like the Rack middleware and the Rails router. I mean, it's very minimal, because if it's an Action Cable request it shouldn't have to go through all of those, which is good, you know, but it's still different.
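The dedicated rack-up file being described, one that boots the app environment and then hands requests straight to the Action Cable server, might look something like this minimal sketch (file path and contents assumed, not the actual file):

```ruby
# cable/config.ru (hypothetical) -- rack-up file for the standalone
# Action Cable Puma server. Booting config/environment loads the app,
# but `run` points Puma directly at the Action Cable rack application,
# bypassing the Rails router and the default middleware stack.
require ::File.expand_path('../config/environment', __dir__)

run ActionCable.server
```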
Also, I noticed this when I was adding the logging infrastructure for Action Cable: with the way we're doing it right now, with a separate Puma server, I noticed that we weren't generating request IDs, because the request ID middleware wasn't there; requests were going directly to the Action Cable rack app. I had to add that middleware in the rack-up file. So yeah, we wouldn't have to do those things if we just ran the whole web app on all these servers.
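The Rack interface is duck-typed, so the request-ID middleware being described can be sketched without Rails at all. Class, key, and header names below mirror the Rails convention but are illustrative; the point is that each request gets an ID for log correlation if the proxy didn't already supply one, which is what the standalone Action Cable rack app was missing until the middleware was added by hand:

```ruby
require 'securerandom'

# Minimal Rack-style middleware: wraps an inner app and ensures every
# request carries a request ID, reusing an upstream X-Request-Id header
# when present and minting a UUID otherwise.
class RequestId
  def initialize(app)
    @app = app
  end

  def call(env)
    env['action_dispatch.request_id'] =
      env['HTTP_X_REQUEST_ID'] || SecureRandom.uuid
    status, headers, body = @app.call(env)
    # Echo the ID back so clients and log pipelines can correlate.
    headers['X-Request-Id'] = env['action_dispatch.request_id']
    [status, headers, body]
  end
end

# Stand-in for the Action Cable rack app: echoes the ID it was given.
cable_app = ->(env) { [200, {}, [env['action_dispatch.request_id']]] }
app = RequestId.new(cable_app)

_, headers, = app.call({})
puts headers['X-Request-Id'] # a freshly generated UUID
```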
A: So with regards to the first of the exit criteria, which was to ship a feature to self-managed customers, there's a dependency on our deployment to GitLab.com, and we can't honestly say we've done that until we can lift the feature flag, right? So even though, technically, anybody who has a GitLab version from, I think it's 12.10, can actually run this feature completely on their own instances...
...we can't consider it shipped, and as far as I can see, that doesn't really change if we do this. So I guess I'm wondering: we mentioned shipping it to customers that run single-node setups, but we would still have the problem where we have to put in the docs that they have to lift the feature flags and so on.
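For reference, lifting such a flag on a self-managed instance would presumably be a one-line Rails console step; the flag name below is a placeholder, not the real one:

```ruby
# From a gitlab-rails console on the instance (illustrative flag name):
Feature.enable(:action_cable_feature)
```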
C: Yeah, but still, we don't have the confidence or the experience of having run it on single nodes that would allow us to ship it to them. Would it be fine if we experimented on, say, dev, or maybe the ops instance, or... is one of those single-node?
A: Okay, cool, yeah, that sounds good. So, from the point of view of the agenda item I added, I'm wondering when to set the next due date for the working group, right? When are we realistically going to have something to ship to customers? I don't want it to just be that we have this dependency on the delivery team; we can continue to pull tasks off the board and try to help unblock where we can.
Okay, cool, sounds good. Any kind of opposing opinions? No.
B: I just had a related question, I guess; I'm playing a bit of devil's advocate here. We do have a couple of features that we do not actually dogfood on GitLab.com, right? Things like, what was it I keep thinking of... distributed tracing, what's the name again... oh sorry, yes, Jaeger. That's not something we use on GitLab.com, and we have...
What was it... I was just looking at something again, and I was thinking there was a feature specifically for on-prem customers. So I'm just wondering: are there cases from the past that we could look at for how to test something that we kind of know will not roll out to GitLab.com in its current shape or form, but that will still give us some confidence? Because I have...