From YouTube: Real-Time Working Group 2020-04-08
Description
No description was provided for this meeting.
A: If we use our existing Helm charts, we get a lot of our service discovery stuff with that. I think we mentioned in the last meeting that we would reuse our existing Redis nodes, as opposed to provisioning new ones, and then we'll provision new ones if we need to. In that case we can probably reuse a lot of our Helm charts as they are; they currently support one Redis deployment. So if we want to add more, we'll have to make more substantial updates to the Helm charts.
B: I think there could be a problem, though, because, you know, I just read the issue, and I guess our Helm charts are the ones used to deploy our current Kubernetes pods, right, Sidekiq for example. So right now it's configured to connect to our Sidekiq Redis instance, and we were planning to reuse our shared-state instance instead, not the Sidekiq instance, so we'd have to have a way to configure multiple instances if we wanted this to work, unless…
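A minimal sketch of what "configuring multiple instances" could mean in practice (the class, keys, and URLs below are illustrative assumptions, not GitLab's actual configuration code): each consumer looks up a named Redis instance instead of assuming a single global one.

```ruby
# Hedged sketch: map logical Redis instance names to connection URLs so that
# Sidekiq and the WebSocket/shared-state consumers can point at different
# servers. Instance names and URLs are illustrative assumptions.
class RedisInstances
  def initialize(config)
    # e.g. { "sidekiq" => "redis://...", "shared_state" => "redis://..." }
    @config = config
  end

  # Returns the URL for the named instance, falling back to the default
  # instance when no dedicated one is configured.
  def url_for(name)
    @config.fetch(name) { @config.fetch("default") }
  end
end

instances = RedisInstances.new(
  "default"      => "redis://redis-default.internal:6379",
  "sidekiq"      => "redis://redis-sidekiq.internal:6379",
  "shared_state" => "redis://redis-shared-state.internal:6379"
)
# The WebSocket server would ask for the shared-state instance:
instances.url_for("shared_state")
```

The point of the indirection is exactly what is raised above: the charts currently assume one Redis, so every consumer implicitly shares it; a named lookup makes the split explicit.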
E: My impression was that it was really more about the mix of workloads, because it makes for a cleaner separation of concerns. I actually have no prior experience with running WebSockets in production, but I would imagine the workload is very different; the unit of work being executed is very different, a batch job versus something that requires, you know, maintaining state for real-time user interaction on the site. So I think there's something to be said for that.
A: Yeah, the problem that we've been having so far with this is that there are a lot of dependencies on other teams, or really just on other disciplines, right. So we kind of need to trim as much as possible to get to something that we can see, even if we can't necessarily release it to customers yet. And there'd be some value in being able to, like you said, measure the impact of having WebSocket connections, of maintaining those connections, on a known quantity of issues or a known quantity of users. So yeah, I fully agree that, potentially, down the line, an elegant separation of responsibilities is ideal. It's a case of what we can get to now, managing the dependencies on other teams, and other teams have a lot of work, particularly the delivery team, who are already focused on delivering Sidekiq.
B: So I've been working on getting our development setup to work, the GDK MRs, while also working on the Omnibus MRs to update configuration, so that our web and API nodes would be able to connect to the right Redis node, whatever Redis node is chosen in the future. And then the Workhorse MR is also merged, because everything needs to go through Workhorse, and it didn't have existing support for WebSockets. Well, it has existing support, but only on certain routes.
B: So once we have the GDK MR merged, it's easier to have this thing where you can just boot up the GDK, enable the feature flag, and actually see the feature working, without having to, you know, follow a long list of instructions about which branch to check out and what to register. So yeah, once that's done, it will be the ops things next, like the containerization and all that I mentioned below, and Omnibus for deploy.
A: So bear in mind, the original goal of the group was to ship a real-time feature to self-hosted customers, and with the Omnibus work and the GDK work as well... I know it's behind a feature flag, but technically somebody could pull down the work, right, and lift the feature flag and see the feature working, right?
B: …of Action Cable on WebSockets. Like, when an issue gets updated, it knows when to publish messages. The SUBSCRIBE part needs a Puma server. That's what we decided: we can separate out the Puma server that runs the WebSocket server, and I'd have to have another Omnibus MR, too, to, like, start that for self-managed customers.
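For illustration, the split described here, a dedicated Puma process serving Action Cable backed by the shared-state Redis, might be wired up with a `cable.yml` along these lines (hostnames and the channel prefix are placeholders, not the actual GitLab configuration):

```yaml
# Hypothetical cable.yml sketch: point Action Cable's pub/sub at the
# shared-state Redis rather than the Sidekiq instance.
production:
  adapter: redis
  url: redis://redis-shared-state.internal:6379
  channel_prefix: gitlab_production
```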
A: Okay, cool. From the ops and delivery side, there were a few comments that I thought were interesting on our issues in the delivery team's projects. So yeah, John Skarbek came in and mentioned that we pretty much get a lot of metrics if we use the Prometheus exporter, which is built into all the clusters currently; we just have to integrate with that, and we can start collecting metrics about connections and things like that. And then also, we've added some other members to the working group. I don't recall everyone right now.
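The exporter integration itself isn't detailed in this meeting; as a hedged sketch of what "metrics about connections" could look like on the wire, a gauge for open WebSocket connections would be rendered in the Prometheus text exposition format, roughly as below (the metric name and label are illustrative assumptions, not GitLab's actual metric names):

```ruby
# Minimal sketch of the Prometheus text exposition format for a hypothetical
# gauge tracking open WebSocket connections.
def websocket_connections_metric(count, labels = {})
  label_str = labels.map { |k, v| "#{k}=\"#{v}\"" }.join(",")
  suffix = label_str.empty? ? "" : "{#{label_str}}"
  [
    "# HELP websocket_connections Number of open WebSocket connections.",
    "# TYPE websocket_connections gauge",
    "websocket_connections#{suffix} #{count}"
  ].join("\n")
end

puts websocket_connections_metric(42, server: "puma-ws-0")
```

In practice a client library (or the exporter mentioned above) would produce this output; the sketch only shows the format being scraped.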
E: Yeah, sure. I'm not entirely caught up yet on the discussions leading up to now, but I mean, our team has an interesting challenge in that, with every feature that we add, of course, we typically increase the application footprint and the resources consumed, and we're, of course, trying to move into a position where, at some point, we... yeah.
E: …of what that means in terms of cost as well, for our customers to run. Actually, today we had our first meeting to talk about what the North Star is going to be for our team. We're a newly formed team right now; we're still trying to survey what is happening in GitLab and see where things move. I guess the main question in my mind right now is: if I'm a self-managed customer, and maybe I run a very simple deployment...
E: You know, maybe it's a single-node deployment. Do I have to pay the extra cost here, right, for at least a separate process that's being spun up, paying in memory and in compute for the cost of an entire Rails process all over again? So I guess I'm just wondering: is that something we're okay with? Because that's still a substantial addition to the existing memory footprint. Why is this something...
E: Those are the kinds of things on my mind. What I'm thinking is, maybe there are users who are not power users, who are maybe not benefiting that much from real-time editing, or maybe are not interested in paying the extra resources consumed to use that feature. I honestly don't know at this point if there was any research done into that, you know, what kind of user cohorts would be heavily using this kind of feature: GitLab.com or, like, the larger customers, and so on.
E: Those are the things on my mind right now. Rather than specifically optimizing for "oh hey, we will run an extra node, or at least an extra process" and then looking at how we can micro-optimize that, it's more general: have we thought about maybe making this opt-in, or at least letting it degrade gracefully to the existing functionality, for customers who might not benefit from this as much?
A: What's your view, Heinrich? I think, like, can a customer kind of switch it off with the current proposal, or just opt out if they don't want to spin up a separate Puma server and don't want to pay the memory cost, if they're happy enough with the first feature? So I think it's...
A: The first feature is going to be to update assignees in real time. If it degraded, like, if we had a fallback, it would literally be just what happens now, which is nothing. So that would be the experience for the user. So I think it degrades gracefully, right, to the current behavior. But I'm just wondering, Heinrich: can you switch it off, or opt out?
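The graceful-degradation behavior being discussed can be sketched as follows. All names here (`RealTimeFeature`, `flag_enabled`, `assignee_update_mode`) are illustrative assumptions, not GitLab's actual API: if the flag is off, or the WebSocket server is unavailable, the user simply keeps today's behavior.

```ruby
# Hedged sketch: real-time assignee updates fall back to the current
# (static) behavior whenever the feature flag is off or the WebSocket
# server is down.
class RealTimeFeature
  def initialize(flag_enabled:, websocket_up:)
    @flag_enabled = flag_enabled
    @websocket_up = websocket_up
  end

  # :live  -> push assignee changes over the WebSocket
  # :static -> today's behavior: nothing updates until a page reload
  def assignee_update_mode
    @flag_enabled && @websocket_up ? :live : :static
  end
end
```

The design choice discussed above is that `:static` is exactly what users get now, so switching the feature off (or it breaking) costs nothing.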
B: I think it's fine that you can turn it off. I mean, it kind of works even if it's broken: if the WebSocket server is broken, it just, you know, works like it does now. But the plan is to build more features on top of this. Like, I think the Create team is planning a real-time editor, and those are more like real features, versus the enhancements that we are trying to do right now. So I don't know, I mean, this...
D: This isn't going to be the most helpful way of putting it, but, to take a devil's advocate position: imagine that we had GitLab without any background job processing, and we were adding background job processing for one thing that was optional. We would say that, like, maybe you can turn that off, but then background job processing becomes sort of fundamental to GitLab; we just have to include it. I think that's sort of where this is going, like, you know.
D: If we have real-time features, then some of those will just simply not work, in the same way that GitLab won't work if you turn off JavaScript, or if you don't do background processing, or if you, you know, turn off GitLab Shell or something like that. I think, you know, these are just sort of fundamental components: right now you can turn them off, but I think the end state is something where that's just integral to the product, right.
E: These are super fair points, and I totally understand. It's a bit of a team's eternal struggle as well: what kind of cost, you know, keeping it low, but at the same time we'll often move in two directions as a company, because we want to broaden and deepen our features at the same time. So it's difficult; it's something we have to figure out as a team as well. Oh yeah, so this was the first thing.
E: I was thinking, going into this... but it sounds like we're very much aligned: this looks like a really promising technology to deploy and then build interesting features on top of, and if that works well, we might even expand on this going forward. I mean, that's great. You know, if we think this is the direction we want to take, then of course we should focus on how we can go about this in the most efficient way.
C: Can I add something from the Create Static Site Editor team? I've been joining their brainstorming sessions when they were designing the architecture of the application, and partly they decided to adopt GraphQL because they really need real-time updates, and they hope for subscriptions to be there. I mean, it's not so much hope as, like, the closest in time, but they're building it on top of GraphQL because they need real-time updates. So yeah, this is much needed.
A: When it comes to features like real-time collaborative editing and things like that, those are things that could possibly... I don't know what the plan is for those, but they could end up in higher tiers, let's say. And when it comes to real-time assignees on the sidebar, it's something that's most likely going to be of interest to, you know, most users; it's almost a quality-of-life improvement for the product, I suppose. That's where it becomes, you know, maybe a question for people, like: do I...
A: Anyway, yeah, so I'd like to get an idea of any blockers to the work that we have in progress, and ultimately, like we said last week, we want to take on more of this work in the team, even in the Plan team, for infrastructure work that we can do. But yeah, it would help to get an idea of what's stopping progress at the minute. Maybe, Heinrich, maybe it'd be best for you to suggest.
B: Yeah, I think with the backend that we have right now, we could even start with the dockerization and Helm chart thing. I was planning to start looking into it; I think I was just starting to look at tutorials and stuff, because I don't know anything about them. So yeah, I don't know if somebody wants to own those, or...
A: Yeah, the containerization, at least, pretty much has to be done before we start working on the Kubernetes deployment, right. And so yeah, that's my main ask for this meeting, to be honest: can we get a DRI for each of these things, and anything else that you think would be blocking? It doesn't mean you have to do the work; it just means that you would assist whoever is doing it, bring it back to the working group, and let us know of progress. So I've put the things in there.
D: It is blocked by the... yeah, you can't use the Helm chart without the application being containerized, like, you know, as you said. But you can very easily use a container from, like, a branch, so, like, a work-in-progress MR on the build-images project, to then test out your Helm chart on, from that MR's own branch. So working across projects there works quite nicely. So yeah, I'm happy to help with the Helm chart side of things, but it is blocked on the containerization and authorization.
B: Since Shawn is now here, could we, like, go back to the first point, which he missed? We were talking about our charts, where they had only supported one Redis configuration, and I guess it's now configured to connect to our Sidekiq Redis because we're using it for Sidekiq. Is this going to be a problem? Do we have to work on this? Is this...
D: I assume that this one would only need to talk to the shared-state one, but that issue was a little bit concerning to me, because I also think Sidekiq might need to talk to the shared-state one. I think it might be a bigger problem for Sidekiq than it is for this. It's not a guarantee, but off the top of my head I can't see why the...
D: You can easily override things at different levels. So I haven't looked at the specific details of this, but I'm pretty confident that, assuming it's possible to configure this at a pod level, or, you know, at whichever component level, we can do that. But I'm going to add a to-do to look at that issue, because I think it might be a problem for Sidekiq anyway.
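As a sketch of the component-level override being described (the key names below are invented for illustration and do not match the actual GitLab chart schema), a Helm values file could let one deployment point at a different Redis than the global default:

```yaml
# Hypothetical values.yml fragment: the WebSockets deployment overrides the
# globally configured (Sidekiq) Redis with the shared-state one, while all
# other components inherit the global setting.
global:
  redis:
    host: redis-sidekiq.internal
gitlab:
  websockets:
    redis:
      host: redis-shared-state.internal
```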
D: Yeah, so bear in mind, we already have images for, you know, Unicorn, Puma, Sidekiq, and so on. So we have images that will load the entire Rails application in some form and do something with it. In this case, we need an image that does something like that, but not exactly the same as those other images. So I think the optimistic view of that is that, like, you know, it can reuse a lot of that existing work. I might be wrong, but that's what I'm going with for now.
B: I did see something while looking into the Docker container images that we have. There was a note, something like "Workhorse is still embedded in the Rails image", or something, and I'm not sure if that would be an issue for us, like, do we need to separate Workhorse out? I don't know about that, because...
A: Gabe, last week at the last meeting, was very keen to get more of the work into the Plan team. In other words... sorry, so Plan is already working on it, and Heinrich's already working on the application work, but we could take on more of the work to containerize and deploy. But we can't just have one person, we can't just have Heinrich on his own, because he's also doing the application work. So what I was thinking was: if somebody with experience in working on containerization could be the DRI for it.
A: Okay, cool. Well, maybe not; maybe somebody from the team could be DRI, then, but we could refer to somebody in Delivery or one of the other teams for help. Shawn, I don't know, like, I mean, I know originally the plan was that the EM would be able to take tasks back to the team, so I don't really know where to find somebody who has experience with Docker. Yes.
D: I think this mostly belongs to the Distribution team, and maybe... I don't know, I'm wary of suggesting we just keep expanding the working group, but all of this stuff is related to the Distribution team and what they do now. That's not to say that they're the only people who can do this stuff; like, you know, it's entirely possible for someone else to make a change, with them just, like, reviewing and approving it, and providing the standards and guidance around that, but...
D: That's the team that's going to have the most expertise on this. So the other option is to just have someone own it, and they go talk to, you know, potentially the Distribution team, or go try and figure the stuff out for themselves and see what works and what doesn't. I think, you know, this is the only working group I've been in, so I don't know which way we'd prefer to go on that stuff.
A: Yeah, I don't really mind, so long as, you know, we can keep it unblocked. So I think what we'll do is, based on what I said before: Gabe's, I think, prepared to prioritize, say, a containerization task in our backlog, if we can get it well defined, and get maybe more than one person in the team responsible for it. So maybe that's the next step, then: I'll create that issue and talk to Gabe about getting it prioritized, and then, Heinrich...
A: Yeah, all right, we're kind of running up against time. I've created an MR for the exit criteria of the working group. The two criteria, as I see them, are to get the feature work done and then to also get the Kubernetes deployment configured. If you have any suggestions on that, could you please, like, check the issue, or the MR, and give it an approval if you're broadly in agreement with the steps that need to be done, or note if you think anything needs to be added? It doesn't correlate exactly with our task lists.