From YouTube: 2022-04-14: Scalability Team Demo
A
Great, so Liam and I have the first item on the agenda today. We were just putting together an issue about the day-to-day changes that are going to happen in Scalability.
A
Mostly, what we were trying to demonstrate is that this is very much still a work in progress. We're still figuring out what these changes mean, but we're starting to put together some ideas for the teams in terms of roadmaps and next steps. Mostly we're hoping that people can continue to concentrate on closing out the current projects, which we've listed there; there are five in progress at the moment. As we start to wrap those up, we can start the new projects in line with the new teams that have been created.
A
So we just wanted to put this issue out there and show people what we're thinking about, and as things change it'll be listed on this issue. I don't know if there's anything you want to add to that, Liam.
B
Yeah, I think that's a good summary, Igor. Obviously, you've probably seen our one-on-one agenda. I spoke with Huang Min yesterday, and he gave me some of the feedback that I think you had on your shared call with him around the transition and what the next steps are. I think the transition has been fairly informal so far.
B
I think that's been purposely so, because we don't foresee there being big changes, or the need for a hard split between the two teams. But I appreciate that that also means that maybe people are feeling a bit lost in terms of what's next or exactly what the teams are going to be responsible for. So hopefully we can answer some of those questions and set out a bit more clarity in this issue. But yeah, if anyone has new questions, that's probably a good place to put them.
C
Yeah, I don't have any immediate questions on this. I just wanted to say thanks for putting this together. I think this is going to be helpful to give us all a better idea of what's going to happen next. Thanks. And with that, I guess I've got the next item: a fun debugging story. It's not directly related to scalability work, and I was only very peripherally involved with this, but it was funny enough that I figured I'd share it.
C
So Steve Azdelpadi was working on this, and he really did the bulk of the work on it. We were seeing 502 errors during deployments, and Steve was digging into why that is. He spent a couple of days on it, and then at some point discovered that our graceful shutdown of those pods is not actually working as intended.
C
So the way that it is supposed to work is: we send a TERM signal to terminate the process, it finishes its currently ongoing work, and then it shuts itself down gracefully once it's finished those requests. If it doesn't finish those requests, or if it doesn't shut down on its own in time, then it gets a hard kill after a certain grace period.
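(For illustration: a minimal Go sketch of the graceful-shutdown pattern described here, not Workhorse's actual code; the port and timeout are assumptions.)

```go
package main

import (
	"context"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"} // illustrative port

	// Serve in the background; Shutdown below unblocks this.
	go srv.ListenAndServe()

	// Wait for SIGTERM, the signal Kubernetes sends when a pod
	// is being terminated during a deployment.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM)
	<-sigs

	// Stop accepting new connections and wait for in-flight
	// requests to finish, but give up before the pod's grace
	// period expires and the hard kill (SIGKILL) arrives.
	ctx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
	defer cancel()
	srv.Shutdown(ctx)
}
```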
C
The message for receiving that graceful shutdown signal wasn't being logged, so basically it wasn't receiving that signal, as far as we could tell. So let me just pull up the issue here. The way that we run this in Kubernetes is we have a Dockerfile. Let me find the Dockerfile as well.
C
We've got this Dockerfile that defines how we run the container in the pod, and this is calling a wrapper script for the Workhorse container, or the Workhorse process. Then inside of this wrapper script, we're calling GitLab Workhorse. The issue is, and we can actually see this if we look at the process hierarchy: we have bash, which is the wrapper script, and then Workhorse as a sub-process of that wrapper script.
C
And whenever you see this type of thing, that usually indicates that bash is going to swallow up all of the signals that it receives and is not going to forward those signals to this process. That is a thing that shells do, and so what's happening is exactly this: bash is receiving the signals, and Workhorse is not getting them.
C
The way that we can fix that is by adding a single keyword before this, which is exec. What that will do is, instead of Workhorse being a sub-process or a child process of bash, it will actually make bash become the Workhorse process. So we won't have that bash in between, and we will start receiving signals directly.
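(A minimal sketch of the wrapper-script fix being described; the workhorse path is illustrative, not the real one.)

```sh
#!/bin/bash
# Without exec, workhorse would run as a child of this bash
# process, and bash, as the parent, would receive SIGTERM but
# not forward it:
#
#   /usr/local/bin/gitlab-workhorse "$@"
#
# With exec, bash replaces its own process image with workhorse:
# the same PID is now workhorse and receives signals directly.
exec /usr/local/bin/gitlab-workhorse "$@"
```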
C
So that seemed like a pretty nice and simple fix, and so we deployed it. However, the process was still not receiving signals, so something was still kind of weird. Looking at the diff between before and after: before, we have /bin/sh at the top level, then /bin/bash, and then Workhorse; the new thing is /bin/sh at the top level and then Workhorse directly. So we lost /bin/bash, but we still have this /bin/sh thing, which is PID 1.
C
So that's kind of the init process of this container, and one kind of unintuitive aspect of PID 1 is that on a Unix system, PID 1 is the root process, and everything else is directly or indirectly a sub-process of PID 1.
C
And so, if we look at this CMD, I'm not seeing a /bin/sh in here. So this, to me, looks like it's supposed to be executing the wrapper script directly, but there's clearly some /bin/sh somewhere. So is this Docker? Is this Kubernetes? Is someone just adding that without our knowledge? And so I got kind of lucky and stumbled over the Docker CMD documentation.
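(For context, this is documented Docker behavior: CMD in shell form is run via "/bin/sh -c", which is where the extra /bin/sh comes from. A sketch of the two forms with an illustrative script path; only the last CMD in a Dockerfile takes effect.)

```dockerfile
# Shell form: Docker wraps the command in "/bin/sh -c ...", so
# /bin/sh becomes PID 1 and the wrapper script is its child;
# SIGTERM sent to PID 1 never reaches the script.
CMD /scripts/start-workhorse

# Exec form: no implicit shell; the wrapper script itself runs
# as PID 1 and receives SIGTERM directly.
CMD ["/scripts/start-workhorse"]
```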
C
Yeah, I need to give Steve some credit for this, because he did all of the heavy work. I just kind of swooped in at the end and was like: wait, I've seen something like this before. It's nice when something comes up in a chat and you're suddenly connecting the dots, and it all kind of comes together.
A
So I'm glad that it's been an interesting process for you.
C
Yeah, so this is: we're rolling out more real-time updates, which rely on the websockets service, and it's on a very busy endpoint, or busy page, which means the overall...
C
I kind of predicted that the autoscaling, basically the limits on how many pods or how many backing nodes we add by autoscaling, would probably become contended, and also the memory on the websockets pods. And so they proceeded with the rollout. I'm not sure if we posted any graphs of the exact thing, but basically both of those predictions were spot on, and we got saturation on both of them. Let's see, what was the other issue?
C
I think we can pretty much scale horizontally. It's mostly a matter of adding more capacity and ensuring that the autoscaling is scaling on the right resource. We usually autoscale on CPU, but in this case it's memory that's most contended, so we may want to make a case for autoscaling on memory instead here.
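(A sketch of what autoscaling on memory could look like with a Kubernetes HorizontalPodAutoscaler; the names and numbers are illustrative, not the real configuration.)

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: websockets            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: websockets
  minReplicas: 4
  maxReplicas: 32
  metrics:
    - type: Resource
      resource:
        name: memory          # scale on memory, not the usual CPU
        target:
          type: Utilization
          averageUtilization: 70
```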
A
I think what I'll do then is I'll write a comment on that new issue that you're creating (I don't know if you created it or if it has already been created) to just ask the question. Because ordinarily, or at least in my experience in the past, we scale up for a whole bunch of reasons; there's a whole lot of things contributing together to make a scale-up. But this is a very particular feature that's being turned on that is requiring more. So I'll...
A
Yeah, and I think that there was some work done in that direction last year, but I didn't follow the end of the work, so I'm not sure where it ended up. But enough of it happened to make me think that I need to ask the question, so I'll go and check. But thank you for watching it so closely with them and for helping them through the process, because if they'd just gone and turned this on at 100%, we would have had problems.
A
Cool. I do have a question about this; it's more around the process of this, though, not the finance stuff. We had a request come in that was: someone from Scalability, please help us with this thing we're going to roll out. And I chose Igor because, for reasons that I can't remember, at the time you seemed like a good person to ask to take a look at this.
A
Well, I think this is an extension of the review request process that we had last year, where, if anyone was concerned about how their feature was going to scale, they would reach out to us, and someone from the team would take a look. Ordinarily, I allocated based on whoever wasn't on the most critical project at the time, or whoever had space, or whoever had the knowledge. It was generally a matter of interpreting each of the requests and allocating a person accordingly.
A
Often, knowledge was the first requirement more than anything else. But I think that this request is an extension of that: they recognized that there was going to be some concern around what would happen if there was a larger uptick in usage than they were expecting, and how it would be handled, and so they just reached out to us directly. And now that I say that, I think there was also something written in the handbook.
A
I will have to find that reference, but I expect that there will be more requests like this that come in when people want to release stuff that has to scale.
C
Yeah, I guess one similar example that comes to mind is when a team is looking to add a lot of new stuff to a cache. We've also had conversations about that, and in some cases we actually had to push back and say: no, we don't have the caching capacity for that right now.
A
Yeah, and I think the review requests that come in are quite broad and quite general. When there are questions that come in specifically about, say, error budgets or the usage thereof, then it makes sense to come to Projections. But for something like this, there's just no specific owner.
B
Yeah, and to that point as well, the thing that I was just thinking about is: each time we get one of these, is there anything we can do to make it easier for people to answer this question themselves? But, as you say, perhaps these problems are so nuanced in general that it's maybe hard to do that.
A
Yeah, I think it's similar to someone reaching out for specialist advice, and the specialist advice happens to be in the area of scalability. The same thing could be applied to people who reach out to the security team, or, you know, insert any other specialty team name there.
A
So I think there's a certain amount of stuff we can do to make people self-reliant, but there are always going to be requests that are just so specific that they're reaching out for a reason, because it's just too much to expect them to have that specialized knowledge. Yeah.
C
Sort of tying this back to resource attribution, this is the drum that I'm beating: if that could be a way to surface...
A
Well, that's exactly what I'm keen to do. I'm keen to get to a place where people can come to us and say: we want to build this feature, and we expect that there are going to be this many customers with this usage pattern joining the feature over a three-month period, in order for us to support that.
A
I think we've only just gotten to the point of knowing how this stuff runs, so how the feature categories actually operate on GitLab.com, and now we can take that data and continue to build on it and build in the projection side of things. But I absolutely agree that that is where I'm keen to get to.
B
I really like the sound of that, Rachel, by the way. Do these requests come in frequently enough that it would be inefficient for someone from both teams to look at them, so that we can learn about that part of the process? Right, so that we can consider, from a Frameworks perspective, what we need to do to improve, and also, from a Projections perspective, what kind of data we can give back to make it easier in the future.
A
I'm worried that if we have to assign two different people to it, it might not be efficient. I mean, I can see what you're saying: having people with two different lenses on the same problem is helpful in that regard. But to answer the question, I don't think they are frequent enough that it would be a massive inefficiency.
A
I was thinking more in terms of just having people available to do it, because at the moment it means finding one person and pulling them off of a project, or just temporarily assigning them, to say: please look at this thing. We would now be doing that to two people, which means either the same project is getting even slower or two projects have a slight delay.
A
So it might just be that when these requests come in in future, between you and me, Liam, we pick someone to do it. Or, alternatively, we look at the other things that we're doing as part of triage and maybe say: your team looks at this aspect of triage, we look at this other aspect of triage, and the responsibility for these incoming review requests flips between the teams. I'm not sure.
A
To be determined.
A
Cool. Anything else we should chat about while we've got some time?
A
Cool, well, thanks so much. For those taking the long weekend, I hope that you have a good time with family; for those at work, I hope it's quiet. We'll see you next week.