From YouTube: Distribution Team Demo 2021-05-20
A: Hello, everybody. This week in the distribution demo we're going to do something interesting. We had a customer recently ask about running cgroups within a Docker container that has the Omnibus in it, within Docker itself. They're not using Kubernetes, they're not using Docker Compose, they're not using Rancher, they're not using anything; they're just literally running docker run gitlab/gitlab-ee.
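The setup being described might be sketched roughly as below. The image tag, container name, and port mappings are assumptions for illustration, and the command is echoed rather than executed so the sketch stays side-effect-free:

```shell
# Roughly what the customer is doing: a bare `docker run` of the Omnibus image
# with no orchestrator. Tag, name, and port mappings are assumptions.
GITLAB_IMAGE="gitlab/gitlab-ee:latest"
RUN_CMD="docker run --detach --name gitlab \
  --publish 80:80 --publish 443:443 --publish 2222:22 \
  ${GITLAB_IMAGE}"
# Echo instead of executing, so the sketch can be reviewed first.
echo "${RUN_CMD}"
```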
A: In that, we talk about turning down the number of Puma workers, turning down the Sidekiq concurrency, and also, in particular, setting Gitaly's cgroups to constrain how much memory it will actually try to use. Now, the problem is that cgroups, the control structures for this, are not exposed within the container runtime (really, any container runtime) on purpose.
A: Basically, if you give access to this, you get access to a lot of things, and setting the capabilities to do it the right way is not easy or straightforward, such that we would actually strongly recommend not attempting this in any production environment, just by its sheer nature. Effectively, the easiest way is to make it a privileged container, or to give it CAP_SYS_ADMIN. The problem is that while those do give you the ability to use cgroups, they also give you far more abilities besides; cgroups are only available under certain capabilities.
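For illustration only, the two "easy" routes just mentioned might look like the following. These are exactly the options the speaker recommends against for production, and the cgroup bind-mount flag is an assumption; the commands are echoed, not run:

```shell
# The two "easy" but strongly discouraged ways to get cgroup access inside a
# container, shown only to make the risk concrete. Do not use in production.
UNSAFE_PRIVILEGED="docker run --privileged gitlab/gitlab-ee:latest"
UNSAFE_CAP_ADD="docker run --cap-add=SYS_ADMIN \
  --volume /sys/fs/cgroup:/sys/fs/cgroup:rw gitlab/gitlab-ee:latest"
# Echoed, not executed: both grant far more than just cgroup control.
echo "${UNSAFE_PRIVILEGED}"
echo "${UNSAFE_CAP_ADD}"
```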
A: So the idea is that we're going to try to replicate this, but also try to prove out the concept of whether or not we can make use of Docker's own runtime controls. In Kubernetes we're familiar with setting requests and limits; in this case we're going to try to set the environment up to operate in as small a memory footprint as we can, and then pass effectively those same kinds of settings to the docker run command, so that they will actually constrain the container.
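As a sketch, the Docker-native analogue of Kubernetes requests and limits is just flags on docker run; the 4 GB / 4 CPU values here are assumptions standing in for the demo's actual targets:

```shell
# Docker's own runtime controls: a hard memory ceiling (with swap capped to
# the same value, i.e. no extra swap) and a CPU quota. Values are assumptions.
LIMIT_FLAGS="--memory=4g --memory-swap=4g --cpus=4"
echo "docker run ${LIMIT_FLAGS} --detach gitlab/gitlab-ee:latest"
```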
A: What I have here, inside the working directory, is one folder that will hold /etc/gitlab, another that will hold the logs, and another that will hold all the opt content (specifically /var/opt, so just the data the instance generates). That way I can start the container over and over again and bypass the whole data-creation problem. I shouldn't say problem; really, the steps, so that it doesn't take as long.
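A minimal sketch of that directory layout and the matching bind mounts; the host path is an assumption (a throwaway location is used here so the sketch can run anywhere):

```shell
# Three host folders so config, logs, and data survive container restarts,
# mapped to the paths the Omnibus image expects.
GITLAB_HOME="${TMPDIR:-/tmp}/gitlab-demo"
mkdir -p "${GITLAB_HOME}/config" "${GITLAB_HOME}/logs" "${GITLAB_HOME}/data"
MOUNT_FLAGS="--volume ${GITLAB_HOME}/config:/etc/gitlab \
  --volume ${GITLAB_HOME}/logs:/var/log/gitlab \
  --volume ${GITLAB_HOME}/data:/var/opt/gitlab"
echo "docker run ${MOUNT_FLAGS} gitlab/gitlab-ee:latest"
```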
A: We turned off monitoring; we've specifically configured jemalloc, via an environment variable, to be a little more aggressive in reclaiming memory; we've tuned down the Gitaly concurrency, which basically says how many requests it's going to process, or even admit (literally how many it will do, or allow, at a time); and then we configured Gitaly's environment in the same way, shoving in jemalloc there too, and also telling Gitaly to only ever spawn at most two git commands.
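Those settings roughly correspond to a GITLAB_OMNIBUS_CONFIG block like the one below, following GitLab's published memory-constrained guidance; treat the exact keys and values as assumptions to verify against the current docs:

```shell
# Low-memory tuning passed to the container at startup: monitoring off, a
# single Puma process, lower Sidekiq concurrency, aggressive jemalloc decay
# for both Rails and Gitaly, and at most two concurrently spawned git commands.
GITLAB_OMNIBUS_CONFIG="
  prometheus_monitoring['enable'] = false;
  puma['worker_processes'] = 0;
  sidekiq['max_concurrency'] = 10;
  gitlab_rails['env'] = { 'MALLOC_CONF' => 'dirty_decay_ms:1000,muzzy_decay_ms:1000' };
  gitaly['env'] = {
    'MALLOC_CONF' => 'dirty_decay_ms:1000,muzzy_decay_ms:1000',
    'GITALY_COMMAND_SPAWN_TOKENS' => '2'
  };
"
# Would be passed as: docker run --env GITLAB_OMNIBUS_CONFIG=... (sketch only)
echo "${GITLAB_OMNIBUS_CONFIG}"
```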
A: So, the probable things we're going to want to do here: for the moment I'm going to ignore CPU and just worry about memory. I am looking at eliminating continuous access to memory; very specifically, the direct behavior is just to set a memory limit. Now, the particular customer's target is within four, or even three, gigs. I don't know that we're going to be able to safely get it down to two, just by the nature of this beast that is the Omnibus, and all the components that make up the application that is GitLab.
A: Same now; the docker JSON has 10 and 250. Does that satisfy testing?
D: Yeah, yeah, this is the usual number of subgroups that we're using, and the localhost one is just for local tests, to create a smaller number of projects and subgroups. If you switch, yeah, you can see here: it will create 25 projects, five subgroups with five projects in each. And in this one we will create 250 subgroups with five projects in each, or ten, so it will be 2,500 or so.
D: If you will be using the Docker image, you can scroll down a bit; there will be a comment. But you will also need to set up the folder structure, a little bit further down. Yeah, this is done. This is done. Okay, down, down, down. Yeah, this one, the Docker-recommended one, and it will be below the help output.
A: Okay, so at that point I just need to turn it on and let it run?
D: Yep, you just copy the docker run command below and you'll be all right. Right, right.
A: I'm just double-checking whether there's anything I need to do to tell it the SSH port, or will it figure that out?
D: Yeah, it could be because you are running Docker and it doesn't see your Docker container. Maybe you can specify the network for the Docker container.
B: Or you can just run with the host network; run both with the host network instead of bridged, so they'll run alongside all your stuff.
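The suggestion here, sketched: run both the GitLab container and the load generator with host networking instead of the default bridge, so they see each other (and the host) directly:

```shell
# Host networking: the container shares the host's network namespace, so
# localhost and host ports behave as they do on the host itself.
NET_FLAG="--network host"
echo "docker run ${NET_FLAG} gitlab/gitlab-ee:latest"
```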
D: Sorry for interrupting; the generator will import the GitLab HQ project, and it may take about 20 to 30 minutes. Oh yeah, yeah, it can be slow, because it's a big project, so that could be an issue.
D: Yep, yep. But maybe we will have another issue here, sorry, because the environment file that's being used for the GPT waits for the GitLab HQ data, and here we will have the small project that doesn't have this data. So we'll need to tweak the config file for the project; we may need to create a custom project file, and yeah, it may take a while.
D: I will compile the list. One second. Thanks.
A: We have at least found one instance where we can't actually just choke the poor thing, so that's kind of good: if your upload is large enough, it will just fall over. The GitLab HQ project is not small; as Malaya pointed out, it's very large in its own right.
A: That's kind of the nature of memory constraints, right? If you're extracting something, doing a large process in memory, and then loading a bunch of data to turn around and put into a process, you're going to have some overlap between the two. One thing I will point out here is that while we have a bunch of things in our documentation, one of the things we're not doing here is actually tuning Redis or PostgreSQL for a lower-memory environment.
A: Try that. The idea is that we're going to try to be able to handle 25-ish users, so if this is sufficient for 100, it should be enough.
D: It may be due to the fact that we don't have many projects and issues; the data is not as big as we usually use for the GPT. So it may be helpful to rerun the test and import the GitLab HQ project, right.
A
A
A
They're
running
it
in
the
container,
as
far
as
I
know,
gotcha,
I
was
just
curious,
yeah
they're
what
their
intent
is
to
be
able
to
spin
up
basically
think
of
git
lab
on
demand
for
small
project
groups.
They
are
considering
some
that
are
able
to
have
one
joint
larger
instance,
but
there
are
certain
segmented
work
components
that
have
to
be
in
separate
instances
for
contract
requirements.
A
So
yeah,
I
did
recommend
you
know
if
possible,
you
know
understand
that
the
the
the
node
size
for
25
is
the
same
as
100
and
there's
absolutely
no
way
to
constrain
that
difference.
Just
because
the
way
the
application
works,
you
can
basically
tune
the
daylights
out
of
it,
but
you're
going
to
spend
a
lot
more
in
terms
of
tuning
it
down.
Then
you
will
to
have
say
one
large
instance
that
could
handle
3000
users
and
then
having
these
smaller
satellite
instances
and
then
say
you
would
have
maybe
20
instances
in
total
one.
E: Just for grins, can you pop into one of those containers that's running the app and see whether it's using swap, or...
A: Also note: it is reporting that I have 32 gigs of machine from inside the container. It doesn't recognize that it's only four gigs, despite the fact that we passed the arguments, and we know that the cgroup is in that state.
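This can be checked from inside a limited container: /proc/meminfo is not namespaced and reports the host's memory, while the real ceiling lives in the cgroup files (paths differ between cgroup v1 and v2, so both are tried here). A sketch:

```shell
# /proc/meminfo shows host memory even inside a memory-limited container.
MEMTOTAL="$(grep MemTotal /proc/meminfo 2>/dev/null || echo 'MemTotal: unavailable')"
echo "${MEMTOTAL}"
# The actual limit is in the cgroup; v1 and v2 paths are both tried.
cat /sys/fs/cgroup/memory/memory.limit_in_bytes 2>/dev/null \
  || cat /sys/fs/cgroup/memory.max 2>/dev/null \
  || echo "no memory cgroup limit visible"
```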
A: That being said, as was also pointed out, we didn't exactly load it down with data. We gave it a small project, and we only created a modest number of things in terms of total data on the system. It's not like we created 250 or 2,500 groups; we really only created a total of about 25.
C: Yeah, I'm curious whether your earlier import of GitLab HQ just hit the out-of-memory killer, given that your system doesn't have swap. It depends on how slow your swap is, but often swap is more of a deterrent to the OOM killer, in that it slows everything down.
A: So api/v4/groups actually failed on the testing. The target was greater than 16; we managed that 500.
A: Right, that's the nature of the fact that we're literally running at, what is it, 60? 20? So if we dropped the RPS to, say, 10, it'd be different. However, I don't think there's a lower one. Is there?
A: It's relatively snappy, speaking as someone who's accustomed to navigating on that on an empty...
A: Now, mind you, Ruby's threading is a little different from the way that, say, Go does it. It's not a goroutine, it's not a lambda, and you can't necessarily have two threads actually operating at the same time. So there's only so high you can push that thread count: having more than two would probably be better, but at the same time, ten of them won't do you much good in a single process.
A: Though, in that we don't specify it, the Omnibus is doing its default magic, right? It goes: how many cores do I have? Okay, then take 90 percent of all cores, rounded, to decide how many processes to make. What we have done is specifically set the Puma worker processes to zero, which means it spawns one single process and doesn't spawn children.
A: There may be further tuning; I don't remember all of it off the top of my head, but there is some in what you can actually do there. What we could do is actually say two, but the problem is that when we say two, we have now doubled the memory consumption of Puma, just by the nature of spawning yet another Ruby process. So there's, you know, a gig and a half of RAM gone.
B: Yeah, because from what you were showing in htop and top, you were reaching that threshold on the load where you were past the four cores. The system sees four cores and you're above a load of four, so it was stressing out slightly at that point, and your test has shown that we fail as soon as we cross that. Okay.
A: Again, let's see where we cross that. Now, I'm going to let this run in the background; I've got top from inside the container and htop from my local system, noting that we're sitting at 50-ish percent just because I have Zoom running and I'm sharing my screen. But it's a good example of noisy neighbors, right? If you run 10 copies of GitLab side by side on the 32-gig machine, how much are they going to fight?
A: Yeah, if anybody's interested in the specifics of how and why cgroups do or don't work, and the underlying reasons they're not normally exposed: if you go to the linked issues and follow through to the documentation they come from, you'll find out why these are not things that are exposed inside of containers, and why, even though cgroups are a great way to contain a process, doing so inside of a container requires a lot of privileges that you normally don't get, for good reason.
A
Just
give
it
root:
how
about
that
yeah.
A
So
I
will,
since
this
is
the
end
of
the
demo
I'll
go
ahead
and
give
dimitro
that
the
heads
up-
no,
not
all
demos,
are
supposed
to
be
pretty
shiny
and
perfect.
This
demo
is
intentionally
to
walk
through
something
demo
what
the
product
is
and
how
it
behaves
and
actually
understand
the
problem
that
we're
trying
to
look
at
not
necessarily
that
everything
is
all
shiny
and
perfect
and
look
at
my
product,
those
are
product,
demos,
that's
not
what
we
do.
E: That's not to say, if you have something polished that you finished, we don't do that from time to time, particularly if it's something that wouldn't normally get the exposure but we want people to know about it. So there's room for interpretation on the demo, sort of based on what the team's working on. This was an interesting one, because it came in hot from a customer.
A: I mean, that's exactly the thing: the notes that will be present in the demo documentation will be things that we can then feed back into the actual proper documentation. Like: by the way, this works when you install it as the package, but when you're running it as the container, you need to take these things into consideration, right? Because of our auto-magic threading, and the ability of Postgres to automatically configure how much of its buffer it should be using in memory; Postgres should have recognized that.
A
Oh,
I
can't
allocate
a
gig
and
a
half
for
my
shared
memory
buffer
and
done
something
about
it,
but
do
we
know
have
we
tested
it?
Should
we
write
it
down
right?
There's
a
number
of
things
like
this:
we
we
do
a
lot
of
auto
magic
behaviors
when
you
install
the
omnibus
on
a
actual
node.
However,
as
we've
seen
and
is
called
out
here
in
docker,
you
see
the
cpus
of
the
host
and
you
see
the
memory
of
the
host
right.
A
If
I
went
to
the
system
monitoring
tools,
it
showed
you
the
system
load
as
it
understood
it.
It
didn't
show
you
the
extra
load
from
the
processes
outside
of
the
container,
because
it
can't
see
them,
but
it
did
show
you
the
overall
memory
availability
right.
It
showed
you
all
32
gigs
and
that's
not
true
for
that
container.
It's
only
four
at
most
three
of
it's
actually
usable
right,
because
because
I
don't
have
spot.
A: Okay, if there are no other questions or interesting points, then shall we call it a day?