From YouTube: Real Time Working Group 2020-08-12
A
All right, so it's the Real Time Working Group, 12th of August 2020. A couple of announcements, just quickly: the progress of the group is going to be discussed on the group call today. That's why I have a few items on the agenda, just to kind of clear up where we're at on a couple of things. I've also issued an MR to the working group page to that end, so that we can tick off anything that we've done in the meantime. We're nearly there with regards to phase one; I think we just have to get Quality involved, and we're pretty much through all our exit criteria for the first phase.

Just a quick update from what I can see on the Kubernetes side: we seem to be about 50% of the way to having WebSockets deployed through Kubernetes. I would allow for additional items to come into that — there's been a couple already. And remember, I think, that this is WebSockets for the web terminal and not for Action Cable, although some of the items in the epic seem to be to do with Action Cable, so I'm not sure. I don't have great visibility over that at the minute; as far as I can see they're about halfway there, so it seems to be on track. Any other announcements, or anything? No? Cool.
A
So,
okay,
at
the
first
item
we
had
discussed
or
heiner,
could
suggest
at
the
end
of
the
last
meeting
that
we
look
at
possibly
doing
the
rest
of
the
sidebar
while
we're
not
waiting,
but
you
know
concurrent
to
deploying
to
dot
com,
so
I
identified
like
three
parts
of
the
sidebar
that
I
think
would
be
relatively
easy
to
do.
Subscribing
to
an
issue
lock
in
the
discussion
adding
a
weight.
I
think
these
are
all
already
done
in
graphql
mutations
on
the
back
ends
and
queries.
B
I think Gabe raised a concern, right — like the next point — and I just wanted to talk briefly about the comment I posted there. So basically, the main reason I didn't go with GraphQL subscriptions at first was that this feature didn't exist: this broadcast feature that I just read about, which Gabe linked, is actually new — released like two months ago. Without that feature, what GraphQL subscriptions actually did was similar to what we did, where for every subscriber the server would re-execute the GraphQL subscription — the query within the subscription. It's like doing the query 10 times anyway, and it's better to do this on the API nodes, which are kind of already running anyway, through a separate request, versus stressing out our new Action Cable nodes, right. So that was the initial decision.
B
Now that I've read about this broadcasting, it makes it a bit more feasible. There's actually a way to query it once, and then on the GraphQL side you just define which fields are broadcastable and which are not. The docs do say that it only does a broadcast if all the fields requested are broadcastable.
B
So we probably want the issue-updated event to only have broadcastable fields, or something like that, and then the other fields — the other parts of the sidebar that are user-specific, like the to-do or the subscription — could be a separate subscription, a separate event, right, so that we don't mix these fields and, you know, lose the optimization. So yeah, it is possible. One thing I'd like to raise about this is that it's actually related to the other point about adding extra fields, because adding extra fields, I think, actually makes this easier to shift to GraphQL subscriptions, just because of how we do it on the front end.
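As a hypothetical illustration of the split described above — all field and event names here are made up, not the actual GitLab schema — the shared, broadcastable fields would live in one subscription document and the per-viewer fields in another, so the shared one can be executed once and broadcast:

```javascript
// Hypothetical subscription documents; names are illustrative only.
// Shared event: every field here must be broadcastable, so the server
// can execute the query once and broadcast the same payload to everyone.
const ISSUE_UPDATED_SUBSCRIPTION = `
  subscription issuableUpdated($issueId: ID!) {
    issuableUpdated(issueId: $issueId) {
      title
      labels { title color }
      assignees { name avatarUrl }
    }
  }
`;

// User-specific event: fields like the viewer's to-do or subscription state
// differ per user, so they live in a separate subscription and get
// re-executed per subscriber.
const ISSUE_USER_STATE_SUBSCRIPTION = `
  subscription issuableUserStateUpdated($issueId: ID!) {
    issuableUserStateUpdated(issueId: $issueId) {
      subscribed
      todoPending
    }
  }
`;

// A tiny guard that could run in tests: the shared document must not
// mention any per-viewer field, or the broadcast optimization is lost.
const USER_SPECIFIC_FIELDS = ['subscribed', 'todoPending'];
function mixesUserFields(doc) {
  return USER_SPECIFIC_FIELDS.some((field) => doc.includes(field));
}
```

The guard encodes the rule from the docs quoted above: one non-broadcastable field in the requested set is enough to fall back to per-subscriber execution.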
B
I don't think the front end has changed, but the last time I looked at it, this is how it worked: currently it's actually the assignees component that does the Action Cable subscription. It subscribes to the issue-updated event and then, once it receives an event, makes a query to the API — the GraphQL query — and updates the data, right. So for it to work on the other fields:
B
We'd
have
to
move
this
subscription
part
somewhere
outside
right,
like
on
the
sidebar
or
like
on
the
main
issue,
page
or
somewhere,
so
that
we
subscribe
to
the
event
there
make
the
query
for
like
multiple
fields
and
then
propagate
these
the
result
to
the
individual
components
right
and
yeah.
B
So I think it's mostly a front-end thing, and doing the extra fields actually helps us. It's kind of a weird thing — we don't normally add extra fields just because — but yeah.
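A minimal sketch of that "subscribe once at the top, propagate down" shape, in plain JavaScript — the class and names are invented for illustration, not the actual GitLab front-end code:

```javascript
// Hypothetical hub: the sidebar owns the single subscription and fans the
// queried result out to child widgets, instead of each widget (assignees,
// labels, weight, ...) opening its own Action Cable subscription.
class SidebarUpdateHub {
  constructor(fetchIssueFields) {
    // e.g. one GraphQL query covering all subscribed fields (assumption)
    this.fetchIssueFields = fetchIssueFields;
    this.handlers = new Map(); // field name -> array of callbacks
  }

  // Child components register interest in the fields they render.
  register(field, callback) {
    if (!this.handlers.has(field)) this.handlers.set(field, []);
    this.handlers.get(field).push(callback);
  }

  // Called once per incoming "issue updated" event: query once, propagate
  // each field's value to whichever components registered for it.
  async onIssueUpdated(issueId) {
    const data = await this.fetchIssueFields(issueId);
    for (const [field, callbacks] of this.handlers) {
      if (field in data) callbacks.forEach((cb) => cb(data[field]));
    }
  }
}
```

The individual widgets then never touch the transport at all — they only receive data through `register`, which matches the point made later about components not caring where the data comes from.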
C
Yeah, that's why I brought up that comment — when I've done this previously on the front end. First off, I noticed in the front-end work that we did, we put in a lot of conditional logic, like: if WebSockets is enabled, use this component instead of this other component, and handle the network request this way instead of that way. All I was encouraging us to do is, before or as we scale this out, remove that logic from the individual components, because knowledge of which network connection we're using doesn't need to be there — you can put it in the Apollo client configuration. So basically you can say: if WebSockets is enabled, use this transport protocol, otherwise use this other one.
C
That's how I did it before, so there's no conditional logic in the components themselves. The Apollo client handles the fallback to whichever network transport protocol you want to use, which makes working on the individual components a lot easier, because they don't care where the data comes from — they just get it. I think the only thing we have to do on the back end is:
C
I
do
know
that
there's
a
query
in
mutation
based
types,
but
then
there's
also
the
subscription
based
type.
So
that's
the
only
downside
is
we'd
have
to
add
the
subscription
base
type,
I
believe,
to
the
to
the
backend
graphql
schema
for
the
subscriptions
to
work
well
with
apollo
or
like
out
of
the
box.
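The transport-selection setup described above can be sketched roughly like this. This is a hedged configuration sketch, not GitLab's actual client code: the feature-flag lookup (`actionCableSubscriptions`), the endpoint paths, and the exact wiring are assumptions; `split`/`getMainDefinition` are Apollo Client APIs and `ActionCableLink` ships with `graphql-ruby-client`.

```javascript
// Configuration sketch (assumed names): route subscription operations over
// Action Cable when WebSockets are enabled; everything else — and the
// fallback when they are not — goes over plain HTTP.
import { ApolloClient, InMemoryCache, HttpLink, split } from '@apollo/client/core';
import { getMainDefinition } from '@apollo/client/utilities';
import { createConsumer } from '@rails/actioncable';
import ActionCableLink from 'graphql-ruby-client/subscriptions/ActionCableLink';

const httpLink = new HttpLink({ uri: '/api/graphql' });

// Assumption: some flag exposed to the front end says whether the instance
// has Action Cable / WebSockets enabled.
const webSocketsEnabled = window.gon?.features?.actionCableSubscriptions;

const link = webSocketsEnabled
  ? split(
      // Predicate: only subscription operations take the cable transport.
      ({ query }) => {
        const definition = getMainDefinition(query);
        return (
          definition.kind === 'OperationDefinition' &&
          definition.operation === 'subscription'
        );
      },
      new ActionCableLink({ cable: createConsumer('/cable') }),
      httpLink,
    )
  : httpLink;

const client = new ApolloClient({ link, cache: new InMemoryCache() });
```

Components then just issue queries and subscriptions against `client`; the transport decision lives entirely in this initialization code.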
C
So
there's
there's
nothing
easy
to
happen,
and
I
was
I
guess
I
was
just
encouraging
that,
because
we're
kind
of
blazing
the
trail
and
setting
standards-
and
I
think
once
we
get
a
few
more
fields
in
there's
lots
of
other
folks
that
are
going
to
start
start
copying.
The
patterns
that
we
established-
and
I
didn't
want
to
get
to
the
point
where
we
start
scaling
this
up,
and
then
everyone
uses
the
pattern
that
we
established.
C
It
may
not
be
the
optimal
one,
and
then
you
know
it
takes
a
year
for
us
to
refract
everything
which
you
know
when
you
have
lots
of
different
teams
working
on
different
things.
Just
it
will
naturally
take
a
long
time.
So
I'm
not
like,
I
trust
the
engineers
to
pick
the
right
pattern.
I
just
as
we
scale
out
to
more
fields,
let's
refactor,
to
make
sure
that
it's
the
ideal
solution
and
the
ideal
pattern
for
other
teams
to
follow.
B
Yeah, actually, on the back end I already have an MR for GraphQL subscriptions, because back then I actually created two MRs. So we actually got this working with GraphQL subscriptions — adding a subscription type to our base schema, and then adding the Apollo piece so that it handles WebSocket connections to Action Cable. The only reason I didn't go with it was, like I mentioned above, that it did the same thing as our approach — but now, with broadcasting, it doesn't, right.
B
So,
on
the
back
end,
there's
actually
very
little
work
if
you
wanted
to
switch,
but
my
main
concern
is
really
the
design
on
the
front
end
where
the
query
is
right
now
on
the
assignees,
which
is
kind
of
weird.
I
raced
this
during
the
mr,
but
I
think
the
decision
was
to
like
move
it
out
when
we
need
to
or
something
I
didn't
really
follow
it,
but
yeah.
C
Okay,
yeah,
I
think,
let's,
if
we,
I
don't
know
who
the
ui
for
the
front
end's
gonna
be,
but
we
can,
if
it's
gonna,
be
scott
again
or
whoever
we'll
figure
that
out
and
we
can
work
with
him
to
set
up.
But
I
think
moving
the
the
logic
of
transporting
to
apollo
client
configuration
and
initialization
is
the
next
like
a
good
next
step
and
then
whatever
pattern
it
is
like.
C
Are
we
gonna
subscribe
to
like
an
issue
channel
or
that
would
like
that
would
make
the
most
logical
sense
in
the
long
run
just
because
it
then
would
allow
us
to,
in
the
future,
move
the
soft
real-time
poll
that
we
run
right
now
over
to
the
websockets
as
well.
C
I
think
that
we
use
for
notes
in
the
description
so
yeah,
let's
spin
up
issues
and
keep
going,
and
I
think
as
soon
as
we
get
something
on
dot
com,
we
should
pause
and
run
a
bunch
of
benchmarks
and
make
sure
performance
is
okay
for
like
going
further
further,
that's
my
opinion
but
happy
to
be
overridden
yeah.
I
think
that
makes
sense.
A
So,
looking
at
our
velocity
thus
far,
we
probably
can
do
one
or
the
other
like
we
can
probably
implement
some
more
features
or
we
can
focus
on
the
front
end
work
to
move
to
subscriptions
on
the
back
end,
like
you
mentioned
behind
me,
but
probably
not
both
so
considering
that
the
kubernetes
work
might
be
completed
in
13
4.
Are
they
on
the
13th
floor?
I
really
agree
with
them.
We
probably
need
some
issues
to
focus
on
the
front
end
work,
and
we
need
a
dri
for
that
as
well,
and
we'll
focus
on
that.
C
That
sounds
good
to
me.
I
can
work
on
the
issues
I'm
getting
dinner
right
now
and
foraging,
but
as
soon
as
I'm
back
in
my
office,
I'll
spin
up
some
issues
and
I
think
the
next
field
that
would
make
the
most
sense
that
I
know
customers
have
had
a
negative
experience
is
the
labels,
and
I
think
the
other
thing
worth
considering
is
the
because
things
are
a
little
bit
more
gonna
be
more
real
time.
C
The
ux
should
probably
change
a
little
bit
like
what
happens
if
somebody's
picking
a
label
and
we
get
an
update
that
that
same
label
has
been
applied.
This
is
like
the
one
of
the
problems
that
folks
run
into.
Is
they
try
to
apply
a
label
and
then
somebody
else
supplies
it
or
removes
it
at
the
same
time
they're
trying
to
edit
the
labels,
and
so
what
does
the
ux
around
that
look
like?
C
A
Yeah, it might actually solve some existing UX problems. For example, if I assign a label using a quick action, and then I assign another label using the label widget in the sidebar without refreshing the page, it will overwrite my original quick action — probably without me knowing, since the sidebar won't show that state anyway. So it might actually solve that UX problem.
B
It's the same thing with assignees, actually, because there are also multiple assignees and they're updated the same way: we send a list of IDs to the backend, so whatever list you currently have on your front end is what we send, and it overwrites whatever is on the backend if somebody changes it.
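A tiny sketch of that overwrite hazard, and of the diff-style alternative it suggests — plain JavaScript, with function names made up for illustration:

```javascript
// The pattern described above: the client sends its full (possibly stale)
// list of IDs, and the server replaces whatever it has. Concurrent edits by
// other users are silently lost — last write wins.
function applyFullList(_serverList, clientList) {
  return [...clientList]; // server state is ignored entirely
}

// A diff-based update only expresses this user's own intent (which IDs to
// add or remove), so a concurrent change by another user survives.
function applyDiff(serverList, { add = [], remove = [] }) {
  const result = serverList.filter((id) => !remove.includes(id));
  for (const id of add) {
    if (!result.includes(id)) result.push(id);
  }
  return result;
}
```

For example: starting from `[1, 2]`, user A adds ID 3 on the server while user B's client still shows `[1, 2]`. If B then adds ID 4 by sending the full list `[1, 2, 4]`, A's change is clobbered; sending the diff `{ add: [4] }` instead preserves it.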
C
Cool, yeah. I'm excited about this — you're all doing amazing work, and I'm very thankful. "Giddy" is the way I would put it.
A
Awesome. Does that address your next comment in the agenda as well, Gabe, or do we need to talk about scalability and implementation details?
A
Okay, cool — sounds like we have a takeaway there to create the issues for the next step, with subscriptions on the front end. So, cool, the next item is mine as well. I started the work on CNG — Cloud Native GitLab — to enable configuration of Action Cable automatically in embedded mode, but it's still very much a work in progress. I had some trouble with it locally — some versions out of sync or something, between the version of the protocol the front end is using and the version provided by the backend. I haven't quite got to the bottom of it, but it's something to do with my setup, not with the work we've done thus far. But Jason's asking if we can provide any CPU or memory measurements, and given that this is on dev.gitlab.org, I was wondering: are we tracking anything already?
E
For dev.gitlab.org — we had a Prometheus server running next to it; I turned that down last week. But we should have some metrics available. Actually, I think there are some.
B
The thing, though, is that the resource indications are actually very traffic-dependent, I guess, right — so it's kind of not very useful, I guess.
E
Yeah,
it's
not
enough
load
for
anybody
to
notice.
Ops
get
labnet
might
be
a
better
candidate
because
it
does
get
a
bit
more
traffic.
E
As long as we can turn it on and off quickly — we don't depend on ops.gitlab.net for anything critical-path, but it would make people cranky if deploys stopped working.
D
Yeah, it looked pretty small — that was on a dev machine, which shouldn't make a big difference. I mean, maybe this all kind of depends on what happens down the stack, right, but especially with Ruby you have this problem where, the longer you run a piece of code, as it allocates memory, it might have a longer-term effect on the memory profile.
D
It
looks
like
it's
pretty
lightweight
thing,
so
I
wouldn't
expect
that
to
be
and
probably
dev
get
up-
or
it's
probably
not
a
good
platform
to
measure
that
anyway.
So
that's
probably
something
that
we
will
only
really
see
in
production
when,
when
it
gets
like
real
user
traffic
yeah
outside
of
that,
I'm
I'm
kind
of
stumped,
I
don't
know
how
else
we
would
look
at
this
prometheus
would
definitely
be
a
reliable
source
for
us.
D
A
Yeah,
so
I'm
wondering
what
the
cng
project
is
actually
used,
for
I
mean
if
it's
used
for
sort
of
ephemeral
instances
that
are
kind
of
you
know
spun
up
and
spun
down.
Then
the
length
of
time
really
doesn't
matter.
You
know
because
and
the
actual
and
the
utilization
doesn't
really
matter
like
if
there's,
if
we're
expecting
a
lot
of
traffic
to
effect,
it's
really
just.
I
think
that
they
would
want
to
know
what
the
overhead
simply
of
running
the
additional
action,
cable,
server
or
embedded
action
cable-
would
be.
You
know.
D
I had a question, just out of curiosity, on the CNG project: we provide a Docker Compose setup there, which is really interesting. Maybe this isn't the right platform to discuss it, but if anyone has any ideas — how does this compare to projects like GCK? Because some developers already use Docker Compose to work on GitLab as developers: how does that compare to spinning up all these containers using the Compose file in CNG?
B
Yeah, I think Camille is aware of CNG and mentioned — I forget the specifics — some differences regarding development or something. But yeah, you could ask Camille; I think he knows more.
D
Yeah,
okay!
Well,
I
know
I'm
for
sure
I
mean
there's
a
lot
of
like
development,
specific
tooling
in
gck
as
well,
and
just
looking
at
the
main
difference
as
well
as
that
composed
sorry.
The
gck
uses
only
one
base
image
from
which,
like
all
the
containers,
actually
use
the
same
image
and
it
just
like
mounts
different
source
folders
to
then
run
different
components
in
different
containers.
So
so
that's
quite
different
from
what
cng
does
but
yeah
okay
I'll.
I
can
catch
up
with
come
on.
D
Because I think it would be amazing if we had a test environment that's a little closer to production. GCK also makes a number of assumptions right now — it pretends to run all these different services on different hosts locally, but it's not really representative of what it would actually look like deployed in something like Kubernetes.
A
It's
not
really
relevant,
but
I
actually
use
cng
locally
when
I'm
testing
bugs
and
triage
and
bugs
just
because
I
find
it
easier
to
you
know,
force
recreate
once
and
then
just
bring
it
down
and
spin
it
back
up
for
a
quick
test
and
whereas
gdk
can
sometimes
take
a
long
time
to
boot.
Up.
A
Okay, cool, we're coming up against time. So, last item — Matthias, you left a comment on "track Action Cable settings in usage ping". This might be one for Gabe as well. Now, we've gone to dev.gitlab.org without usage ping at our — well, that's not really —
A
Well,
we've
released
it
to
single,
listen,
small
customers
without
usage
ping
already,
so
I
guess
maybe
gabe
could
speak
to
like
how
higher
priority
this
is
and
then
we'll
have
to
think
of
like
a
creative
solution
for
how
we
can
track
this,
because
we
can't
do
it
using
the
usual
way.
We
would.
D
Yeah,
sorry,
by
the
way
for
bringing
this
up
so
late,
because
when
I
first
look
at
this
it
sounded
it
would.
It
would
be
very
straightforward
if
it
was
just
a
settings
key
like
anything
else,
but
then-
and
I
think
also
we
made
we
switched
to
an
environment
variable.
D
That
was
not
the
first
approach
I
think
we
took
so.
I
think
I
just
got
my
wires
crossed
and
I
thought
oh
that's
easy
to
do,
but
then
yeah.
C
If we're doing a release post item for 13.3 — I'm happy to spin one up today, if we want to do that — we could just basically say, you know, single-instance, self-managed users can use WebSockets now. Or we can figure out how to track it, and then we can do a bigger release —
C
Maybe
in
release
post
block
in
13.4,
I
wouldn't
say
it's
the
highest
priority
like
drop,
more
important
work
and
do
it,
but
it
is
worthwhile
to
test
and
under
or
like
to
measure,
because
I
think
that
will
sort
of
inform
a
how
well
we
do
at
like
evangelizing
the
feature
and
how
to
use
it
well
and
documenting
it
and
make
making
it
visible
to
the
water
community,
but
then
also
like
how
much
you
invested
in
it
over
the
long
run.
D
So
so
just
to
like,
because
we
can't
get
creative
right,
it's
just.
The
question
is
like
if
this
was
filed
under
13.3,
which
is
that
gives
us
only
a
few
days
right.
I
think,
because
monday,
I
think
we
should
look
probably
to
freeze.
D
So there are ways we could probably detect this. I wonder if we could do something like — because we should be publishing — actually, would we have any Prometheus metrics that would indicate that? Yeah, I don't think so, right, because if it runs embedded, we don't have anything in Kubernetes we could look at to find out if Action Cable is running.
C
Yeah,
if
we
can't
get
it
via
usage
ping,
I
know
that
we're
all
trying
to
standardize
on
getting
our
usage
data
from
the
usage
being
instead
of
a
bunch
of
stitching
a
bunch
of
other
sources
together.
So
it's
easier
for
less
technical
folks
to
build
self-service
data
analysis.
D
Yeah
prometheus
is
the
only
source
I
can
think
of,
but
I
don't
know
do
you
know
heinrich
if
we
would
have
anything
if
you
run
embedded
action
cable.
Would
there
be
any
metrics
and
prometheus
that
we
could
look
at
to
then
conclude?
Okay
action.
Cable
is
actually
enabled
on
that
node.
B
Like
you
know
things
number
of
connections
I
think,
but
yeah
we
haven't
worked
on
it
so.
D
Sorts
of
things
about
self-managed
nodes,
so
so.
D
— it exists as infrastructure that we could piggyback this on, but yeah, it's not ideal. I mean, the best solution really would be to have a settings key, but I think we ran into these problems with the initializer — cyclic dependencies and all that stuff. You may remember that, right. So that's —
C
On the usage thing: do we still want to have a release post entry for single instances being able to use WebSockets?
B
I
think
we
could
like
postpone
it
to
when
we
get
usage
ping
and
when
maybe
we
could
get
extra
fields
right.
We
discussed
this
last
week
and
last
meeting
and
I
felt
it
kind
of
you
know
bad
or
like
sounds
bad
when
we
like
release
a
feature.
Just
you
know
it's
very
small
feature
like
only
for
assignees
but
yeah.
C
Yeah
yeah,
I
I
don't
know
like
I.
I
felt
like
that
when
I
for
the
first
year
at
gitlab
and
now
I'm
like,
I
don't
care,
so
it's
a
win
so,
but
I
I
do
think
it's
fine
do
we
have-
and
I
can
check
on
this
later
work
time,
but
just
like
before
we
do
that
we
need
to
make
sure
we
have
really
good
documentation-
and
I
haven't
looked
at
our
docs
for
this
yet
so
that
folks
know
how
to
enable
it
disable
all
that
good
stuff
I'll.
C
Take
a
look
at
that
when
I
get
back
to
my
computer
later
today,
and
we
can
kind
of
decide
I'll,
post
async
in
the
working
group
slack
channel
and
get
y'all's
opinion
on
whether
or
not
you
want
to
include
it.
If
you
do
I'll
spin
up
a
release
post
entry
today
or
we
can
do
it
13
for
when
we
have
the
usage
being,
I'm
fine
either
way.
A
I'd
be
fine
with
13
4,
because
we
still
have
part
of
our
criterias
to
have
quality.
Look
at
this
and
see
if
there
are
any
things
they'd
like
to
add.
You
know
before
before
we
kind
of
make
a
big
announcement
or
anything
about
it.
So
I'm
okay.
A
Putting
it
off
to
starting
for
all
right.
A
We're
at
time
so
I'll
stop
the
recording
thanks
everyone
for
coming
thanks
for
your
time,
especially
those
of
you
who
got
up
super
early
or
studied
really
late.
So
thanks
very
much.