From YouTube: Scalability Team Demo - 2020-03-27
B: I finally created the Kubernetes cluster today. Well, I had one running locally, but it was killing my laptop, using minikube, and it was so slow that I literally couldn't do anything. So then I tried to create one in GCP, but we were out of network addresses in the network, so I created a new network, but then that one failed because we were out of IPs in that region.
A: Yeah, I've been showing the profiler updates during these calls; it's just a quick link. Actually, I think everyone on this call may be able to see the profilers, hopefully, but we already have an issue to improve visibility on that, which Greg is also working on: creating a group for all developers to have access to these profilers.
G: It would be interesting, just as a... it's not exactly what we're doing here, but it would be interesting to take, like I mentioned last week (maybe I should try and do it), this new back-end round trip and see what's going on there, and if we can, because I'm sure we can optimize that. And that's going to be quite nice, because then we'd have our first benefit from doing this. Yes.
A: Yeah, good point: maybe we should link directly. They change that on the service, so maybe, for instance, we just go to their channel, update the documentation, and make it clear that before, we had this data, so fall back to that if anything goes wrong. Good place to have a look.
G: So I guess, like, all we spoke about last week was: we kind of all feel like we need to reduce the number of queues, so we kind of went down this route. Well, there were alternatives where we just use all the queues that we've got, but the alternative is we just accept that we need to reduce the number of queues, and then, if we do that, we'll be able to get more out of Redis.
G: You know, have Sidekiq reduce the CPU. And so it was really just a question of how we do that, and so I came up with this proposal (admittedly a very loose and fast proposal) around taking the queue selectors and moving them from the sidekiq-cluster into the application, so that at the application level we map it there. And then Sean had some concerns, particularly around migration. I think... is that fair to say, Sean? Yeah.
G: So, I guess, a lot of us were on that call last week, and we had a lot of back and forth. Where we've got to: we had a great call this morning, and where we got to with it is that we keep the queue selectors. So developers are not hard-coding queue names, and eventually the queue names will be completely removed from Sidekiq.
G: Not yet, but in some future world there will be no queue names in Sidekiq; workers will have the attributes, and then there will be kind of a default mapping between those attributes and a fixed set of about 20 queues. They'll be called alpha, beta, charlie, whatever; we don't actually know what they'll be called, but there will be 20 arbitrary queues. Could we call them q0, q1...?
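A minimal sketch of the kind of default attribute-to-queue mapping being described here; the attribute names, the queue count, and the hashing scheme are all assumptions for illustration, not the actual GitLab implementation:

```ruby
require "digest"

# Hypothetical fixed set of ~20 arbitrarily named queues
# (the meeting jokes about alpha/beta/charlie vs. q0/q1).
QUEUES = (0...20).map { |i| "q#{i}" }.freeze

# Map a worker's declared attributes to one of the fixed queues.
# Deterministic, so every process agrees on the assignment without
# any worker hard-coding a queue name.
def queue_for(urgency:, resource_boundary:, feature_category:)
  key = [urgency, resource_boundary, feature_category].join(":")
  index = Digest::SHA256.hexdigest(key).to_i(16) % QUEUES.size
  QUEUES[index]
end
```

Because the set of queue names is fixed and the mapping is deterministic, operators can re-shuffle which workloads land in which slot without ever introducing a queue name that the fleet is not already listening to.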
G
Is
queue
and
that'll
be
separate,
so
they'll
basically
be
three
cues
and
the
rest
of
them
won't
get
any
traffic,
but
on
the
sidekick
cluster
side,
we'll
be
listening
to
all
20
queues
and
that
will
be
like
a
fixed
set
of
queues
at
will
we'll
be
listening
to,
and
what
that
means
is
that
I'll
get
lab
comm,
we
can
kind
of
configure
it.
We've
got
20
slots
a
few
once
and
we
can
sort
of
move
things
around
between
those
slots
hoo-hoo-hoo
once
we're
not
kind
of
constrained
with
having
it.
G: Say one is called, you know, low-urgency, and then in six months' time we decide that we actually just want to use a different queue per feature category. I don't think we will do that; I don't think that's a very good idea, but I'm just pulling something out here. Then we would end up using the queue that was called low-urgency for feature category X, and so we'd have these queue names that are meaningless anyway.
G
They
can
kind
of
move
things
around
and
they'll
never
lose.
You
know
all
20
of
them
will
always
be
listen
to
kind
of
via
the
sidekick
negate,
because
you
know
once
you
got,
the
negate
whatever's
left
will
be
one
of
those
cues.
We
know
what
the
the
full
set
is,
and
so
it's
quite
safe
to
let
people
use
quite
easy
to
configure
I
think
it's
I
mean
Shawn
has
expressed
concern
about
the
migration
I.
Think
the
migration
is
definitely
easier
than
that
open-ended
approach.
B: Thanks, I think we got to a good spot. Yeah, no, I think the fixed-versus-configurable thing was probably the only thing we actually disagreed on in the end; everything else was solutioning, and I don't think we actually disagreed on that so much. I think it was just coming from different places that made it take a while to get there, but yeah.
B: When do we know that all the jobs have been processed? I think Gabriel, or someone, actually wrote a migration helper for this, so there is a thing in the application that will take jobs from one queue and put them in another queue. So we can use that. But obviously we don't want to use it until we think the queue is pretty much empty, because it's just going to have to walk through the entire queue.
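The migration helper mentioned above could be sketched like this. Queues are modeled as plain in-memory arrays so the example is self-contained and runnable; the real helper would iterate a Sidekiq::Queue and re-push each job via Sidekiq::Client instead:

```ruby
# Minimal sketch of a queue-migration helper: drain the entire source
# queue and re-enqueue every job on the destination queue, rewriting
# the job's "queue" field as it goes. Returns the number of jobs moved.
def migrate_jobs(queues, from:, to:)
  moved = 0
  # O(n) in the source queue's length -- which is why, as discussed,
  # you only want to run this once the queue is nearly empty.
  while (job = queues[from].shift)
    queues[to].push(job.merge("queue" => to))
    moved += 1
  end
  moved
end
```

For example, `migrate_jobs(queues, from: "old", to: "q0")` empties `queues["old"]` and returns how many jobs were moved into `queues["q0"]`.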
G: The other part of it is that, as part of migration (I think you were kind of alluding to this maybe as well, Bob), we stop generating the all_queues.yml, and we add, like you say, the 20 queues to the end of that, and then over time we basically remove the old queues, so that it's just the alpha, beta, you know, whatever. Probably, yeah, we could.
G: We could also be cute, like Elasticsearch, and give them Marvel names or something stupid like that, but anyway. And then the other part was that we take what you did with that little spike, where you did the queue allocation on the client side; we put that in a mixin, and so on, to experiment with this.
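The client-side queue allocation mixin could look roughly like this; the module name, the routing table, and the `urgency` attribute DSL are invented for illustration, not the actual spike:

```ruby
# Hypothetical mixin that resolves a worker's queue from its declared
# attributes at enqueue time (client side), instead of the worker
# carrying a hard-coded queue name.
module ClientQueueRouting
  # Illustrative routing table from an attribute to a fixed queue slot.
  ROUTES = {
    "high"      => "q0",  # urgent work
    "low"       => "q1",  # background work
    "throttled" => "q2"   # rate-limited work
  }.freeze

  # Tiny class-level DSL: call with a value to set, without to read.
  def urgency(value = nil)
    @urgency = value if value
    @urgency || "low"
  end

  # Hook point the enqueue path would consult for the queue name.
  def queue
    ROUTES.fetch(urgency)
  end
end

class ExampleWorker
  extend ClientQueueRouting
  urgency "high"
end
```

With this shape, `ExampleWorker.queue` resolves to a fixed slot from the worker's attributes, so changing the routing table re-routes jobs without touching any worker class.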
B: So one thing that's not super clear right now: I think we should not really start doing much on this until we've finished what we're currently doing, right? Like, what we're currently doing is using the queue selector on sidekiq-cluster everywhere; even though we're eventually going to remove it, we want to keep doing that now and get the fleet switched over to the new style, and then from that point we can start doing this. But we don't want to mix the two projects up, right? Yeah.
G: So it was a great way of being able to divert traffic away, but then, particularly last Thursday when Prague did that, and we realized how consuming that was, that was kind of the moment of: okay, this thing that is kind of technical debt and doesn't feel good, we actually have to get rid of it. That's my sort of feel on it.
G: No, nothing. My point is that the biggest cost is actually in the documentation, you know, the experimental queue selector documentation; but really, for the functionality and our understanding, we're just building on that. We're not changing direction; we're just going further down that road.
C: Thanks for explaining. I was not even trying to say that we did something wrong, just to be absolutely hundred percent clear; I'm just trying to understand. If you've worked with me before, you know that I like taking shortcuts and I like parallelizing work as much as possible. So what I'm trying to figure out is whether we had a branching point there.
B: We wouldn't be reducing our flexibility, but it would have taken us longer to build the flexibility that we have with the queue selector the way it is now, and we wouldn't have known about the issue, ever; we'd have had to do more work to get to a point where we never found this issue.
G
As
the
as
the
attribute
that
we
kind
of
key
these
things
by,
even
though
I
was
vaguely
aware
that
you
know
there
was,
there
was
a
limited
life
span
on
on
every
worker
having
a
queue
and
so
maybe
like.
If
there
was
one
learning,
it
would
be
that
we
should
have
put
the
worker
class
onto
the
attributes
right
from
the
beginning
and
onto
all
the
all.
The
logging
I
think
we
got
it
on
all
the
logging.
So
that's
another
problem,
but
certainly
on
the
on
the
observe
on
the
monitoring.
C: My next question would be: what can we do? Are there any suggestions on what we can do to share all this knowledge that the four of you (one, two, three, four) have accumulated over the period of this project? Because it would be a terrible situation for us to move on to the next one before we are able to share our learnings with the whole development team and explain why this is important to them. So.
G: My first thing on that is: I think the reliability teams are almost more important in that respect. Like, being able to, you know, spin up a new Sidekiq fleet, for example; if someone from a development team does pretty much what Dylan Griffith has done recently, you know, "hey, we've got this new queue we need", it shouldn't be on Scalability.
C: That's not good enough, mostly because that would mean we have to expect the reliability teams to fully understand how the application functions, which I don't think is currently the case, which means we would have to go one level deeper to teach, which we don't have the time to do. But there is something in what you're saying that I think is really important, and that is: we probably don't want to focus exclusively on the impact from the development side.
C
We
want
to
focus
on
the
overall
picture,
both
from
infrastructure
and
development
side,
because,
if
I
understand
correctly,
where
the
some
of
the
disagreements
or
debate
rather
was
between
you
and
Sean,
was
your
two
different
points
of
views
towards
the
same
thing:
right
yeah.
So
how
can
we?
How
can
we
get
those
two
things
together
so
that
we
can
show
the
same
picture
to
everyone
for.
B: For developers, I think it actually gets easier for both sides, but maybe that's just me trying to put too much of a gloss on it. For developers, I think we have already been communicating that you need to think about the attributes that your worker has, and you still need to think about that, just from a development side.
B: I think it's almost just as easy as it is now, in the sense that you can define what the workload for your worker is, and you can define what the resource boundary is, and we already have documentation on that, so we can improve that. But I think the key thing is that we are just moving away from a one-queue-per-worker model, which I don't think most people have thought much about anyway; so hopefully for them there's no real big change. The point is still the same.
G: Yeah, exactly. I think from the development side it makes things a lot easier, because you've just got to get your attributes right and then, you know, we'll take care of it. The two points I wanted to make: one, there's a third stakeholder that I think we should engage with fairly early on, and that's the QA team responsible for the reference architectures, because I think it would be really cool if they could...
G
You
know
for
the
25k
reference,
whichever
whichever
one
it
starts,
making
sense
actually
splitting
things
out
so
that
we
have
you
know
urgent,
non,
urgent
or
CPU
bond
or
whatever,
because
I
certainly
like
at
50k
I
can
imagine
that
would
make
a
big
difference
to
to
them
to
their
world.
I
mean
I,
don't
know
quite
how
busy
50k
reference
architecture
is,
but
you
know
at
least
give
them
the
possibility,
and
then
the
other
thing
is
kind
of
maybe
related
to.
G
That
is
really
beefed
up
the
documentation
around
this,
because
at
the
moment,
is
it's
a
bit
of
a
hodgepodge
and
maybe
engage
with,
like
the
technical
writing
teams
to
to
make
this
into
like
a
first
class
like
scaling
feature,
you
know,
then
encourage
people
to
use
it
and
say
like
this
is
something
you
know
if
you're
seeing
these
problems
with
the
accuse
or
whatever.
This
is
a
possibility.
C
So
then,
from
my
side
that
is
somewhere
further
down
the
line,
what
I'm
trying
to
suggest
here
is
you've
all
done
a
lot
of
work
to
understand
how
all
of
this
actually
works,
both
from
in
front
and
development
side
and
I
would
like
to
be
able
to
offer
up
that
knowledge
in
you
know,
format
that
is
not
here
is
five
page
document
that
you
need
to
understand.
I,
don't
know
even
how
more
like
I'm
looking
for
like
a
how-to
or
I'm
looking
on
not
even
how-to,
it
is.
C: ...for a wider audience, so that we can explain to everyone what kind of challenges the application actually has when it comes to scaling, so that we can engage with Development and Infra and Quality (sure, no problem) to explain why this is really important and why we've been doing it. Because this is really hard to explain, right? It's nearly impossible to go to higher levels and explain why we've been doing this work with this approach, beyond saying "we're going to utilize things better." Okay.
C: Absolutely. The executives I'm the least concerned about, because we can always give an executive summary, right? "This is going to make things better because of this; this is the outcome that we are looking for." That's not the problem. The problem is, I want to make sure that the knowledge that we've accumulated has been explained to every stakeholder in a format of "why Scalability is doing this" rather than "this is what Scalability has done."
F: The two formats that come to mind are either a write-up (not exactly a blog post) of what we did and why we did it and what we found, or, alternatively, a video version of that write-up, where it's a structured discussion that's recorded, but we don't have to go through the process of writing it down and making it read correctly. Those are the two that come to mind.
C: So I just want to make sure that we don't end up in a situation where we are this fringe team that is resolving a problem that someone else made somewhere, knowingly or unknowingly, and that we are going to do the cleanups left and right. That's not what we are about here. We are here to set the direction for everyone and make sure that we explain: "well, this is the benefit."
A: The main goal there is having it controlled by the sidekiq-cluster process, being like: the customer doesn't need to know it's Sidekiq behind the scenes. So that was the initial idea, right? And since we are actually configuring that in the Sidekiq configuration, it makes even more sense. Ah, something is going on with a gitlab-ctl command, so I need to drop off. All right.