From YouTube: Real-Time Working Group 2020-05-20
A
B
C
C
A
I think the Kubernetes work hasn't started. The Helm charts work is done, like the multiple-Redis work, and the containerization work is in its second review. But I think the delivery team were still working on Kubernetes for Sidekiq, and it's unlikely that this will be prioritized anytime soon. I don't know, but I would imagine that it won't be prioritized soon. So if we want to start collecting data on resource usage or anything like that, I think staging is pretty much our only choice, right?
A
C
We can set that up, but we also need to actually have the VMs available to deploy to, which I think is done with Terraform; I've not actually done that myself yet. So what I'm saying is, we'd need somewhere to run this on staging. I don't think we have another way, unless we really want the model where the WebSocket connections run on the same nodes as the web nodes, right? So, either way.
C
C
I don't know. Maybe let's take that to the delivery team in the issue for that and see what they say, because, you know, I'm just speculating here. But some of the Sidekiq stuff is wrapping up, it's something I'm watching right now, so there might be a bit of a gap in the schedule, at least it seems that way to me. If that helps, we can see. But yeah, maybe let's take that to them in the issue we've been discussing with them and see what they say.
B
C
I mean, on staging that might be okay, but we definitely don't want to do that in production. So it depends what you're trying to learn. I think it might be useful just as, like, a sanity check, if that's what you want to do. I'm not sure it's useful for, like, getting this much closer to production, but I think it's maybe useful as, like, that little staging post on the way. So, sorry, I didn't realize that was an option, to, like, allow both from the same node.
B
Right, it doesn't really help us much in, like, collecting metrics and all that, because, you know, we can't even really collect reasonable metrics on staging. But yeah, if it's easy enough, maybe it's worth doing just for, you know, like you said, a sanity check on the whole setup, a production-like setup. I think multiple Redis is also in staging, right?
B
C
C
So it would be nice to have a place for people to play around with this and see how it goes. And, you know, even if it's going to be on staging for a while before it's on production, people can still give it a go and report issues and stuff like that. Right, like, I'm not 100% sure that it will be allowed anyway, because maybe they'd want to keep the configuration of staging and production a bit closer together.
A
And since we update the whole issue anyway, we just need to publish more updates through this sidebar channel, like broadcast issue updates, or whatever the channel is called, or the feature flag is called, that enables that channel. So I think, from that point of view, even just to show progress, it would be good, but it depends on how much work it is.
C
Like, the feature work is fundamentally pretty much done as far as we know, but we need to get feedback, and to get feedback in production there's a whole bunch of other work that's still in progress. But in staging we can maybe short-circuit some of that, if the infrastructure teams were okay with it, by running it on the same nodes as we serve HTTP traffic from. So I think, yeah, we all agree. Basically, okay.
A
C
And there might be other options as well. Like, maybe we could enable it on dev.gitlab.org, because, as far as I'm aware, that's a single — what do we call it — it's just a single node. Or ops.gitlab.net; I'm not quite sure how that's deployed, but that might be another option as well. Like, we do have other instances where we could try things like this out.
B
A
Are there any metrics we could get from staging that actually would be useful? I mean, Matthias had some concerns about memory usage, but I take the point that gathering the number of connections is probably not realistic from staging, like, it's not used in that way. But for things like CPU and memory, we could probably get an idea.
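Since connection counts aren't realistic on staging, CPU and memory are the signals left to sample. A minimal sketch of what that measurement could look like on a node, assuming the web server processes are named "puma" (the name and the /proc approach are my assumptions, not from the meeting; Linux only):

```python
# Sketch: sum resident memory (VmRSS, in kB) of processes whose name
# matches "puma" by scanning /proc. "puma" is an assumed process name.
import pathlib

total_kb = 0
for status in pathlib.Path("/proc").glob("[0-9]*/status"):
    try:
        lines = status.read_text().splitlines()
    except OSError:
        continue  # process exited while we were scanning
    if lines and "puma" in lines[0].lower():  # first line is "Name:\t..."
        for line in lines:
            if line.startswith("VmRSS:"):
                total_kb += int(line.split()[1])
print(total_kb)
```

In practice this kind of number would come from the existing Prometheus node metrics rather than an ad-hoc script; the sketch just shows what "an idea of CPU and memory" boils down to.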
C
The quality engineering team — I think Grant Young in particular — have some, like, reference architectures where they run the performance tests, right? It might be interesting to see, as another strand of the work for this, if they can incorporate this into those somehow. I don't know if some of those can run against — because I think they're run against an instance that they spin up — so they could just try enabling the feature flag and running the existing tests.
C
A
B
Yeah, two workers is the default setup; I just copied the default Puma configuration, and it's, like, two workers. And I think it was Jason — I think Jason mentioned in the Omnibus MR, when he was merging it, he said this adds one gig of memory usage. I think it's kind of expected; we already kind of knew that, you know, when we boot Rails it's going to be around that number, 600 MB or something, right? Yeah.
C
I think Matthias just found that, like, basically all of that is the application. Like, you know, there's a possibility that we could try and slice out the part of the application that's used for WebSockets, but then you get into issues where, like, you might add a feature, and then you need to tell whatever does the slicing that that's also used by WebSockets now. So, yeah, I think it's just a thing we have to live with.
A
One of the reviewers — I think it was DJ — pointed out that we probably don't need the docutils Python package in that container. And actually, if you look through the initializers in the GitLab application, there's essentially, like, a ton of stuff we could strip out if we're only running WebSockets. Like, there's GraphicsMagick: do we need that to run? I don't know if the whole application would even run without things like this, but...
A
So there's potentially, like — if we wanted to go down that route. But yeah, it's like, I guess if we ever wanted to send more than that over the wire, then we'd need to re-add the docutils package, and that's the position we're in. And then there's also the thing that we potentially want to move to AnyCable at some point, and how far do we want to go down the road of slicing off pieces of the application, only to switch to AnyCable in the future?
C
Yeah, I think a lot of the stuff we're blocked on right now is feedback, and enabling this on, like, the existing nodes that serve the web traffic doesn't get us a lot of the feedback, but it does get us some of the feedback, so we may as well shoot for that. And I think that's probably the main focus from here on out: finding ways to get feedback on this as incrementally as possible, and starting to address any issues we find along the way. Yes.
A
C
I was still writing this up, but it was just to — at least, I don't know if we've already done this and I just forgot about it, or I missed that meeting — but, like, sync up with Grant and the people working on the reference architectures, and ask them to enable WebSockets and run the existing performance tests. Because these reference architectures — I haven't written this out yet — like, use the performance test suite to check that they are valid.
C
So what we want to do is enable WebSockets — and enable the feature flag, possibly — but not do anything else, and see if that reference architecture still holds, or if the additional memory usage from having WebSockets even just available causes some issues that we need to address. Does that make sense?
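The comparison being proposed — run the reference architecture's tests with nothing changed except the flag, and see whether it "still holds" — can be sketched as a simple regression check. All numbers and the tolerance below are illustrative, not from the meeting:

```python
# Sketch: does a flag-enabled run regress a baseline metric beyond a
# tolerance? Numbers and the 10% tolerance are made up for illustration.
def regressed(baseline_mb: float, with_flag_mb: float, tolerance: float = 0.10) -> bool:
    """Return True if the flag-on sample exceeds baseline by > tolerance."""
    return (with_flag_mb - baseline_mb) / baseline_mb > tolerance

print(regressed(2400, 3400))  # ~42% more memory -> True
print(regressed(2400, 2500))  # ~4% more -> False
```

The real check would use whatever thresholds the reference architectures already define; the point is only that "holds" means a bounded delta on the metrics, not identical numbers.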
B
A
A
C
B
A
Okay, thanks, Eric. The second item we've already talked about, and that's the containerization. I don't really have too much to update on that. I think it might be ready, but I don't know; it's in a second review. This is kind of at the outer limits of my knowledge, to be honest, and there are a few things, like a bit of feedback from Jason on the MR. If you want to jump in and give any suggestions as well, I'd really welcome them. So.
C
C
But all I remember about that is, like, as it's getting closer — like, it's actually usable — it might be possible to start testing this out with the Helm charts right now. So, I mean, honestly, I try to spread the load around the distribution team for some of the work I've been doing, but Jason's just taking a bunch of it anyway.
C
Maybe that's one for them, too, but yeah, this might be a good time to at least get in touch with them — or ask them on that MR — and say, like, you know: I want to start putting this into the Helm charts; can I just, like, basically clone over the existing Puma stuff, or should I start from scratch? Like, you know, what's the best approach here? Because they're all, like...
C
Sub-charts within the charts repo. So it's like — well, it's actually called webservice now, I think — so, you know, do we build off that? Because, like, they'll build off each other, right? Like, we have a webservice chart and a Sidekiq chart and the migrations chart, which all run the Rails application in some form, so, like...
C
What's the best sort of starting point? That's where I'm going to start with them: say, like, you know, should I just build it off this, or should I start building it off that? Because once you do that, you can at least start trying it out and get some feedback. Sorry.
A
The Helm charts — like, I'm completely ignorant of this — so the Helm charts basically do what docker-compose does? Like, docker-compose gives you a standard: you can run docker-compose up, and it will run one of each container that you've specified in the docker-compose file. So is Helm like that, for production, basically? Yeah.
C
Kind of. Like, it's more complicated because, you know, it's using Kubernetes, but yeah. Basically, you tell Helm: I want to spin up this application, and I want to disable this service, or enable this service, or whatever; the application is what you configure. And so here we'd be saying, like, you know, the WebSocket thing is off by default, but you can turn it on, maybe. And each sub-chart sort of — I'm probably going to expose my ignorance here, if anybody who actually knows about this is watching — but charts...
C
...don't map one-to-one to Docker images, but they're pretty close. So, like, you know, you could have two charts that use the same Docker image in different ways, but the idea is to have the Docker images have different entry points depending on what you're doing. So you can think of it as, like, inheriting a Docker image, like building a child image from another image, if that makes sense. Yeah.
C
And some of these charts do things which we might need here, like creating a config file on the filesystem. Like, we have gitlab.rb that generates that in Omnibus, but in the Helm charts they use a slightly different approach, where you just sort of generate it when you run Helm, basically. That's the little bits of my knowledge on that.
A
That's good, yeah — thanks, that's really informative! We're kind of pushed for time a little bit, so I want to move on to the next thing, just because we're in the last week, basically, of the working group, essentially against the date that we set at the start. I think we can probably push that out a little bit, but I'd like to provide Chris, who was the executive sponsor, sort of an update on how we're tracking progress against what we set as exit criteria.
A
If you'll have a look — I've linked an MR that I made to the page, just to try and tidy up some of the points and mark progress against the exit criteria. But now that we've just had a conversation, the Helm charts are probably good to look at again, and to add as another step if it's not in there already. But yeah, if you wouldn't mind taking a look at that MR, anyone, and adding, removing, or suggesting things — and approving it if you think it's okay. Yeah.
C
I think one thing there was, like, we could probably nudge the "available on GitLab.com" exit criterion forward as soon as it's enabled on an environment we can control, even if it's still quite a long way from — you know, it wouldn't necessarily move forward far, but it would be a step towards that, right? I'll put that in the comments on the MR anyway.
A
Thanks, yeah. And bear in mind that the two exit criteria were: delivering the first feature to self-hosted customers, and the second was to deliver to GitLab.com. The first one — I think, you know, in theory at least, any self-hosted customer can run this now, right? I mean, if they jump through all the hoops to enable it, right? It's not on by default, so we wouldn't consider it released or anything, but I think we could say there's, like, significant progress against that.
B
A
Yeah, for sure. The one that's probably the limiting factor is the deployment to GitLab.com. So, I mean, I think it's fine to push out the end date of the working group, but we should probably try and get an estimate — and I'm not asking for one in the call, but based on how long it took to do the Sidekiq migration. If somebody who has knowledge of this could help us get an estimate of when we should set the next date for winding up the working group, yeah.
C
C
John and John on the delivery team will have a really good idea of that, and they'll also know, like, what's going to be easier and what's going to be harder. So, like, easier is: it's not an existing workload, we can just turn it on or off; you know, we don't have to worry about breaking existing stuff. Harder is: it's not an existing workload, so we need to figure out, like, all the metrics and stuff that we talked about. So yeah, I think —
C
A
Okay, cool, thanks. I'll reach out and ask them. I don't want to make it seem like we're trying to push this up their agenda or anything, just get an estimate for when we should set the next date that we potentially could exit the working group with everything completed. And also, like, if we're taking on the work for the Helm charts, and we've taken on the work for Docker, it's possible we could also take on some of the work for Kubernetes as well.