From YouTube: 2023-03-30 Scalability Team Demo
A
So a point from last week that I had to read through again, and that we were chatting about in the doc, was basically a thank-you to the folks in observability that have been working on Thanos; I think Igor helped out a bit there as well. It's now much easier, and actually possible, to load larger time frames of data. There are some links in the doc from last week; please check those out.
B
Thanks. Yeah, Sean asked that I talk about this, after all, because I've been a bit on the fence. I have some tools that I made for myself to work with virtual machines on my Mac, and I keep telling people, oh, you can just do this on the Mac and it's not so hard, but then I can't really explain it well, or I feel like I owe people an explanation. So I thought I would just briefly demo these tools. And so this is not...
B
This is not directly about something that the team has been working on, but it's something I use for my job, and it may help the productivity of other people. The other thing is that I'm not sure other people will be happy using these tools, but they work for me, and I'm just going to show what's possible. For that I need to share my screen, and I'm going to share the whole desktop. And so this thing is online at this...
B
This repo. Because I made these things for myself, I namespaced the commands with my initials. So it's a very small repo, and if you run make install you get a thing that can put a virtual machine on a Mac, and I think that's useful. I should first demo how it works.
B
So it's a command, jvxh, that is sort of the interface. I can make a new virtual machine, and then you give it a disk size also. And then... how do you get a tree? Or maybe I don't have that, but...

B
Yeah, find.
B
That says this virtual machine has two CPUs and two gigabytes of RAM, and currently no kernel command-line options. I always forget how to do these things, so my command has built-in notes about how to do common things, and these are pretty copy-pasteable. I would now copy-paste this and start setting up an Ubuntu machine, except the downloading takes too long, so I'm going to cheat and cd into a different directory where I've already done this.
B
So if we look here, then in the install directory there are a couple of files, and there's a disk image.
B
That is why the download takes so long, because that is actually... well, it's not; it got extended to 20 gigabytes. So anyway, for the next step I should open a new terminal and follow my own notes.
B
So I've done this bit, and then I'm going to do this, and now it boots a virtual machine. The virtual machine has a terminal attached to it, a serial console, and that is in this terminal: my terminal is now directly connected to what the virtual machine thinks is a serial port. And it fails to boot.
B
But that's okay, because now we're in single-user mode and we can do stuff, and I just copy-paste some more things: I mount what is going to be the boot disk, I set up a root password, and I create SSH keys.
B
And then I need to write some configuration for the network card. The network card does DHCP and gets an IP address from macOS, so that's very convenient. Then I need to get out of the chroot, and then I need to power down. So I can do this, and my virtual machine has stopped. If I would try to boot it now, it wouldn't know where the root disk is, so I need to edit the config file...
B
...and say the root device is /dev/vda, because it's the first disk that it sees, and now I can boot it like this.
B
Then I need to run a few final commands, because right now there's almost no space on the disk, only about 64 megabytes, even though the device is 20 gigabytes large, so I need to resize the file system.
B
And create a user, so I don't have to do everything as root. Now I have a user, and if I go into this user, then I have a home directory, so I can make an SSH directory and create an authorized_keys file.
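The SSH setup steps described above can be sketched in shell (the home path and public key below are illustrative, not the demo's actual values):

```shell
#!/bin/sh
# Sketch of the steps above: create ~/.ssh with the permissions sshd
# requires, and drop a public key into authorized_keys.
# The home directory and key are made-up placeholders.
setup_ssh_dir() {
  home=$1
  pubkey=$2
  # sshd ignores authorized_keys if the directory or file is too open
  install -d -m 700 "$home/.ssh"
  printf '%s\n' "$pubkey" > "$home/.ssh/authorized_keys"
  chmod 600 "$home/.ssh/authorized_keys"
}
```

On the real VM this would run as root, followed by a chown of the .ssh directory to the new user.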
B
And if I now look at what the IP address of this box is, then it's this. So here I can now say this... yes. And now I have a virtual machine, and it works exactly the same as a cloud virtual machine. From a user experience point of view, it's like if you would go on Amazon or Google Cloud or anything and say: click, virtual machine, I want this image, and I want this...
B
...and this and this, and in the end you get an IP address that you can SSH into. And I have all that, but it's all running on my Mac, and it looks like a process. I don't know if it's this one, but it just looks like a process that sits there. When it isn't in use, this thing is not using a lot of memory, and that also means that it's not using a lot of memory on my Mac, so I can have a bunch of these running...
B
...at the same time. I actually have one running that is running GitLab, an Omnibus image, right next to it. So if I want to test, say, an Omnibus image, I just install it on one of these virtual machines and then I can try stuff.
B
D
B
Virtualization, yeah. Toolsets that do this have existed for a long time, but over time Apple has been building it into the operating system, so they're effectively commoditizing what it takes to run a virtual machine. It's not as fully featured as what you can do with VirtualBox or Parallels or anything like that, but it gets better and better. And it's Apple: the operating system knows enough about Linux to run Linux. This sort of gets into how it's built.
B
D
Yeah, I think in the dev docs they have sample projects to, like, run GUI Linux in a virtual machine on a Mac, or run a Mac in a Mac, yeah.
B
D
They don't provide these pre-built; you have to copy and paste them and then build them, and obviously there's a lot of stuff hard-coded. But it feels like they've got to the point where they've given you all the bits to make this, but they haven't actually built the virtual machine manager.
E
B
Only the bits and pieces, and you have to use them yourself. So I built my own user experience around it, and that works. I can show a little bit of what it looks like and how I built this.
B
There are two parts to it: there's an Objective-C executable that talks to the library, and then there's a Ruby script that does everything I don't want to do in Objective-C, because I don't know Objective-C very well. So the Objective-C thing, this is mostly copy-pasted from the Apple documentation. It goes through the motions of creating a configuration object, attaching a serial port to it, adding a virtual disk (which is that file, the disk image), adding a network configuration to it, and...
B
It needs to know how much memory it uses and how many CPU cores and things like that, and then it puts it all together and says start, and then Apple creates that magic process that you see in the process manager. So it creates one of those, and when you shut down the machine, then this program also exits.
B
Well, actually, the documentation encourages you to use Swift, but I find it easier and shorter to write it in Objective-C. There are example projects out there that use Swift, but then you need to have a complicated directory structure, because Swift looks a bit like a Rails app, where things need to be organized in a certain way, and this is really just one file.
D
It's funny what you mentioned about Ruby and Objective-C, because it has a similar management tool that's written in bash that has the commands, and there's an open PR on that project to rewrite that in Swift instead of bash. But I'm sure they wrote it in bash initially because they were like: I don't want to be dealing with, like, you know... yeah.
F
D
...setting up, you know, spawning a process and stuff like this in Swift, when that's the thing that bash can do very effectively, so yeah.
B
So these people did it in Swift, so this thing is written in Swift, and, I mean, I'm complaining about Swift, but it's really not that bad: it's 300 lines of code. They have a few more features, so that's probably why it's longer; it's probably pretty terse as it is. But then, to make it useful...
B
...they have another command around it, which is a bash script, to do more glue stuff that they didn't want to do in Swift. You also see this in other open source projects, where there's a core command that does some of the hard work but is very hard to use, and then you need to wrap it, or somebody else writes a wrapper around it that makes it user-friendly.
B
It's a common pattern. I'm not saying it's great, but that's what I did here. And I actually started out with this being a wrapper around another command, and then that other command stopped working after a macOS upgrade, so I made a replacement for that thing and kept using this command. So I was using xhyve (I won't go into what xhyve is), but xhyve stopped working after one of the macOS upgrades. Apple broke xhyve, but they created the Virtualization framework.
B
So I thought: okay, I'll rebuild the bits I need. So this is the Ruby tool that does the management, and it does little things like: don't start the virtual machine more than once (I did that once; that's really bad, because then you're corrupting your disk), and decompress the kernel if it's needed. And it builds these commands: so this is the executable, and then it feeds in all the options, which would be very cumbersome by hand; this thing really just builds a command based on...
D
B
That's why mine has Ruby, but I completely understand why people do it in bash too, because it's the same kind of job; I just find it more pleasant in Ruby. And it does a bunch of other things that are useful. Like, what I just did was based on the cloud image, the disk image which Ubuntu makes for booting a cloud server, which is very convenient; but the standard way to package a Linux distribution is a DVD, or it used to be a CD-ROM.
B
And the idea is that you boot once with the DVD in your drive, and then you copy things from the DVD onto your hard drive, and when you're done you throw the DVD away. And when you do that, this thing doesn't know how to get the kernel from the DVD, so then you need to mount the DVD ISO file and copy the kernel off, and that is annoying on macOS. But I looked on Stack Overflow for how to do it.
B
So this is automating the Stack Overflow magic to mount a DVD image so you can copy the kernel off. So I put those things in there. Another little thing here is snapshotting. The disk is just a file, and if you want a snapshot of the machine, you make a copy of the file. It's a file with lots of holes in it, so it's very useful to make a sparse copy, and the GNU version of cp can do sparse copies.
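The "file with lots of holes" point can be seen directly: a freshly created disk image has a large apparent size but allocates almost no blocks, and GNU cp can preserve the holes when making the snapshot copy. This is a sketch on Linux with made-up file names, not the demo's actual commands:

```shell
#!/bin/sh
# Demonstrate why snapshots should be sparse copies.
# truncate gives the image a 1 GB apparent size without allocating blocks.
img=$(mktemp -d)/disk.img
truncate -s 1G "$img"
apparent_mb=$(du -m --apparent-size "$img" | cut -f1)   # size the VM sees
actual_mb=$(du -m "$img" | cut -f1)                     # blocks on disk
echo "apparent=${apparent_mb}MB actual=${actual_mb}MB"
# GNU cp can keep the holes, so the snapshot stays small on disk:
cp --sparse=always "$img" "$img.snapshot"
```

A plain byte-for-byte copy would instead materialize every hole, which is why the snapshots "get very big" without the sparse option.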
B
So I specifically use GNU cp and not the regular cp, because otherwise the snapshots get very big. And I can quickly demo that, because I showed you that I had this GitLab running here on this IP address. So if I go to that machine... where is that... I can see the top of my... is this one?
B
So this one has only that on it, but I can power it off, and then this thing stops working; it takes a little while for GitLab to stop. I'm going to show you a snapshot restore, just to show it, because that's actually what I do a lot with these VMs that I've dedicated to Omnibus: because I install a different package, and I don't want to worry about whether I cleanly uninstalled everything, I just want to reset the VM to a known state.
D
A GitLab install has some issues like that as well, where you can leave things around that then make a new install not work as expected. Yeah.
B
So I can do a restore, and then it wants a file, and I have one that I named apt-get-upgrade, and it was from this timestamp. So...
C
B
Got it, yeah, and that puts the timestamp in there. So jvxh snapshot creates disk.timestamp.name, with whatever name you gave to the snapshot. So that copy ran in 10 seconds or something, and if I boot this now, then I have a working machine again. It gets the time from NTP, I guess, because that's one of the problems when you do snapshots: your clock is off. I just mistyped my password.
B
C
So if you wanted to create a second VM based on the first one, can you just shut down this VM, do a copy of the directory, and go from there, or is anything else needed?
B
You could, but they would have the same MAC address. You would have to delete this uuid file, because that gets hashed into a MAC address, so that every time the machine boots it has the same MAC; because then the operating system (there's a DHCP server somewhere) gives you the same IP every time.
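The uuid-file-to-MAC scheme B describes can be sketched like this. The exact hash the tool uses wasn't shown, so md5 and the formatting below are assumptions; the point is only that the mapping is deterministic:

```shell
#!/bin/sh
# Derive a stable, locally-administered MAC address from a per-VM uuid file.
# Same uuid -> same MAC -> same DHCP lease from the host, which is why a
# copied VM collides with the original until you delete its uuid file.
uuid_to_mac() {
  h=$(printf '%s' "$1" | md5sum | cut -c1-10)   # 10 hex chars = 5 bytes
  # Leading 02 sets the locally-administered bit, avoiding real vendor OUIs.
  printf '02:%s:%s:%s:%s:%s\n' \
    "$(echo "$h" | cut -c1-2)" "$(echo "$h" | cut -c3-4)" \
    "$(echo "$h" | cut -c5-6)" "$(echo "$h" | cut -c7-8)" \
    "$(echo "$h" | cut -c9-10)"
}
```

Deleting the uuid file and regenerating it gives a new MAC, and therefore a new DHCP lease and IP.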
B
So yes, you can copy this, and then, if you delete this, then the other one gets a new IP, and then you have another snapshot. These are the kinds of features that better virtual machine managers have. I remember that VMware Fusion can do this, where you can clone the machine and create sort of descendant copies from a single image, and that's very nice, but I...
E
B
I wanted something more lightweight, so this idea that it's just a directory, and that's it, makes sense to me. But yeah.
D
Yeah, I think what's nice about this project, and similar ones, is that you probably wouldn't tell someone to... well, you could just find it and run it, but you can also just fork it and make the changes you need directly, because it's small enough. And that's not actually going to be an issue, like, keeping up with upstream (Jacob, or upstream some other random person on GitHub) is not going to be that arduous, like, these aren't updated frequently, yeah.
B
D
B
I don't really know if it's good in the long run. I think it's better if there are well-maintained tools, and not everybody has to make their own tools; I just really like making my own tools, so for myself it makes me happy to do this. But I hope that Apple will eventually make an executable for this, or that this will grow and solidify a bit more and become a commodity. Running a Linux VM on a Mac should be a commodity, and all the pieces are there.
B
D
Yeah, I did find another project yesterday; it is designed for CI, it's much bigger, and it also got one of those weird...
D
...licenses or something like that, where, like, you know, it has specific CPU restrictions unless you're an individual, and whatnot, so yeah. But it's designed for building things for CI, rather than building things for use on your machine, which is why it's a lot bigger as well. So yeah, yeah.
B
But anyway, you can do all of this stuff with VirtualBox too, which is what most people would do, I think, if you... well, for whatever reason you might not want to use VirtualBox. And one of the nice things about this is that, because it's so integrated into the operating system now, I never run sudo: none of this needs root, because it's properly fenced off by the operating system, which is kind of nice.
B
Anyway, thanks for listening, and we should go to the next person, and that's Sean.
D
Yeah, thanks. So I was just looking at a thing. I'm not sure if we're actually going to do it right now, but I just wanted to talk about the plan.
D
So one thing we could do to help out the Dedicated teams is provide capacity planning for them. These are essentially different environments: they're not GitLab.com production, but they are production environments for Dedicated tenants. And, you know, we have a capacity planning tool; how could we use that for them?
D
So I think the way we could do that is to split things up a bit. First of all, I've just sort of hand-waved over where we get the metrics from. Andrew has an issue open to add a remote write endpoint, so that we can write from AWS, where Dedicated is, into our main metrics platform, which I think we would want for things like Switchboard, because Switchboard we should have in our metrics catalog: Switchboard is the thing that customers use directly...
D
...I think, to manage the configuration of their GitLab instance, for things that can't be configured in the GitLab UI. And we want metrics on some of those things; there's even a separate project for that. But those would go to the Switchboard team, rather than the EOC from reliability or scalability or whatever.
D
But, you know, we'd want those metrics in there, and we would probably also want the Dedicated tenant metrics in there, if that's feasible. So, if we ignore that, I think the main things from the Tamland side are these: the repo, as it stands, is two things, which made sense, like, there was no point in splitting them up before now. It's a library slash program that generates a capacity planning report based on some input, but it also contains that input data directly.
D
So we have... I want to share my screen, actually. So, right: most of the CI file is about, you know, managing the project itself, but we also have the data files. We have book.yaml, we have saturation.json, and then we have the book markdown files that are generated from book.yaml. saturation.json is the export of the saturation points from the metrics catalog in the runbooks repo, as JSON; book.yaml describes which service goes on which pages.
D
So if we're doing this for a Dedicated instance, we obviously wouldn't have a customers-dot page, because each Dedicated tenant is not going to have its own customers.gitlab.com.
D
You know, they won't have a redis rate-limiting, stuff like that, and those are fairly well decoupled anyway. There are a couple of other places where we assume things about production environments, so that shouldn't be too bad. We also have this data directory, which just holds a cache, but that's already essentially handled by fetching it from the package...
D
...the generic package registry, so that's again fairly well abstracted. Yeah, so we assume that the default environment is this, which we should probably pull out somewhere. And I think from here we also directly refer to gprd, which we need to not do, but basically that's ready. So we could consider splitting this into repos, where the main Tamland repo is just the code, and then it provides a CI file that you can include in other repos.
D
That CI file does things like: check that the templates have been updated to match book.yaml, populate the cache for whatever environment this is, generate the pages, and manage the issues on whatever project you're supposed to manage them on.
D
So that's one part. I think the splitting into separate repos is actually quite easy with GitLab CI, because we were just cloning the parent repo, but we'd also do the remote include thing from GitLab CI, so we get the tasks there. The local development experience I haven't totally thought about yet...
D
...whether we'd use a submodule or whether we'd do something else, because you'd need that code and that environment available to build the markdown files that I mentioned from the book.yaml. So we need to think about that a bit more. The other thing we need to do is deal with that saturation.json. So in the runbooks repo we generate saturation.json based on these... what do we call them?
D
We call them saturation... yeah, saturation monitoring. So we have all these files that define a saturation point. For instance, this is kube node IPs, and it's jsonnet code, so it creates a resource saturation point that ends up in that JSON file; things get templated in and stuff.
D
D
Where is the GET hybrid thing... yeah. So in the reference architectures we have this, which is essentially used for Dedicated, as far as I can tell. You know, we use the GitLab Environment Toolkit to build Dedicated tenants, and these already have some of the things that we provide on GitLab.com, dashboards and rules; for instance, it does have a metrics config, which I thought: oh, great, we can just use these saturation monitoring objects, like, we can...
D
We can already just generate this, but the problem is these reference the existing ones. So if I take... maybe this, for an example...
D
Yeah, so this is evaluated from here, so from the GET hybrid environment, but it's then just importing the metrics catalog that is global. Which means that finding VM-provisioned services will find services that we do not care about for Dedicated, or that won't exist, or won't make sense, and it might miss services that exist in a different format. To go back to my earlier example, there wouldn't be a redis rate-limiting on Dedicated; I think they just have a...
D
...one that's configured to keep everything, and that does everything else, and yeah. So we probably need to make this function take the metrics catalog as an argument, so that we could then create a separate metrics catalog. I don't know how detailed that metrics catalog will need to be for this environment, but it would list the services that are actually available in this environment. But, I mean, a lot of the work is already there, so it's getting pretty close.
C
A question on this one, Sean: would it make sense to tag services, and sort of say, this one exists in Dedicated and this one doesn't?
D
That's a good idea, actually, yeah, so we could definitely do that with the way we have things at the moment. If we look at web, we have tags of golang and rails, which affect a couple of things: you can search for saturation points based on them, but they possibly affect dashboard tags too, maybe. But we could do that, as long as...
D
D
...and if it doesn't, you know, we could possibly do some labeling stuff to... yeah, web service, there you go. We can possibly do some labeling stuff to get around that, rather than reinvent the wheel, because what I don't want to do is have to essentially copy all this over. Like, you know, we don't need this equally; these service dependencies wouldn't be right in Dedicated, but we wouldn't actually need them for what we're doing here anyway.
D
A
D
At the moment the saturation points import the metrics config, and they could probably return a function that takes the metrics config as an argument and then evaluates them based on that, so that they're agnostic; but yeah, I think that could work, yeah. We already have the service catalog, basically, so I think that's good. And yeah, what was the other thing I was going to say... oh, for the environment: the other thing we could do, instead of changing the code, is that Marco already added the services environment mapping thing.
D
So if this is exhaustive, we could just... or we could just add a default environment here, or whatever, right, and then, you know, we could just use this file a bit more. But I think that's what we'd need. So essentially we'd end up with a Tamland gprd and a Tamland dedicated-tenant-one project, and stuff like that, I think, is how it would look eventually.
B
D
B
I'm wondering about that, because Tamland is now about one big installation, but Dedicated will have many installations. So, to pick a random number, if we have a thousand Dedicated installations, does that mean we generate a thousand Tamlands? Yeah.
D
The requirement I've got at the moment is that it not be the same Pages site. So, obviously, a Pages site is coupled to a project, but I don't know if all Dedicated tenants could be on the same Pages site. I think that would probably be more convenient going forward, because all Dedicated tenants will share this architecture.
D
Like, from a Tamland perspective, everything is the same except probably the environment name, I guess, and the environment selector. And so we'd probably want to just say Tamland, Tamland GitLab.com, and Tamland Dedicated, rather than Tamland dedicated-tenant-one. So yeah, I don't know, but I would think we could, yeah, generate a Pages site with different directories per Dedicated tenant. And then, who's going to look at them? I assume we create the same issues as we do now, and...
D
A
D
I don't know how important that is for them. I was just asked to look at things we could do for the Dedicated team, so that was one of them, and I need to get things... well, it's already written down in the epic, but I also need to get things...
D
...written down, sorry, out of my head before I finish up. Igor asks: would it make sense to put the reusable bits into common-ci-tasks? I think it depends. If they are agnostic to the project entirely, like the Python formatting tasks, for instance, those could probably go in common-ci-tasks, because any project that uses Python could say, I'm going to run this Python formatting task. I think for things like making sure the book directory is up to date with the book.yaml input...
D
D
Yes, Rachel, I will link to this conversation once the recording's uploaded. Thanks.
F
C
C
All right, so: story time. This came up on call this week, and I just happened to stumble over this, and it rang a bell for me. So WAL-G is what we use for Postgres continuous archiving: that's uploading WAL segments as they're produced, as well as base backups, which is kind of the nightly snapshot that we do. And we had an issue where a WAL-G base backup failed.
C
So, since I'd kind of dug into WAL-G issues before, I just wanted to take a very short peek at this one, and so I saw this report of the logs of wal-g backup, and it says: failed to run, retrieval func, googleapi 404, retrying attempt. And this looked very familiar, and very similar, to an issue that we had one year ago, which was not for the base backups but for the continuous archiving; but it's the same tool.
C
So chances are it's a similar code path and possibly a similar underlying issue. And the underlying issue previously was a race condition: if you have concurrent WAL-G processes, there's a race condition.
C
So the basic upload loop is going to select a WAL file, slice that file up into chunks, and then, for each chunk, upload the chunk to a bucket. Basically, this happens concurrently, I believe. And then there's a compose-object call that merges those chunks into a single object, and then, once that's done, it deletes the temporary chunks.
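A toy simulation of this upload-then-compose flow makes the failure mode concrete. Local files stand in for bucket objects, and all names are made up; this only models the behavior described above, not WAL-G's actual code:

```shell
#!/bin/sh
# Toy model of the chunked upload: "compose" concatenates the chunk objects
# into the final object, then deletes the temporary chunks. Because the
# chunk names are fixed (no random element), a second process composing the
# same names after the first one finds them gone: the equivalent of a 404.
compose_chunks() {
  dest=$1; shift
  for c in "$@"; do
    [ -f "$c" ] || { echo "404: $c not found"; return 1; }
  done
  cat "$@" > "$dest"
  rm -f "$@"            # first successful compose deletes the chunks
  echo "composed $dest"
}
```

Running compose a second time over the same chunk names reports 404, and since the chunks will never reappear, retrying that call (as in the logs) can never succeed.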
C
So the issue is: if two or more processes are doing this thing concurrently, then they'll upload to the same location, because there's no random element in this file name. One of the compose calls is going to succeed, and all subsequent compose-object calls are going to fail, because the first one deleted these chunks, so they no longer exist. And instead of saying, oh, something went wrong...
C
...I'm going to bail out, it goes into this retry loop, which is never going to succeed, because it just keeps getting a 404; these chunks are never going to reappear.
C
So that shouldn't happen, and the fix was: let's not run more than one of them at the same time. Yes. And the same question exists for base backups.
C
We should also not run more than one of them at the same time. And there are basically two approaches: either ensure that they're not concurrent, or have a separate prefix in the bucket, so that they're effectively not working on the same files, and this is probably the one that we want. And so I went looking on the host and, to my surprise, there actually was a Consul lock process.
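The "ensure they're not concurrent" option is essentially what the Consul lock provides across hosts. On a single host, the same idea can be sketched with flock (an illustration of the mutual-exclusion pattern, not the production mechanism, and the file names are made up):

```shell
#!/bin/sh
# Single-host sketch of mutual exclusion for backups: a second invocation
# sees the lock held and skips, instead of racing on the same bucket objects.
LOCKFILE="${TMPDIR:-/tmp}/basebackup.lock"
run_backup() {
  (
    flock -n 9 || { echo "another backup is running, skipping"; exit 1; }
    echo "backup running"
    sleep "${1:-0}"     # stand-in for the actual backup work
  ) 9>"$LOCKFILE"
}
```

A distributed lock (Consul, in this case) generalizes this across hosts, so any box can attempt the backup but only one wins.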
C
B
C
And basically one of them wins, yeah, yeah. And so we don't need to designate a single box, because if that box goes away... yeah.
C
...the others will do it, yeah. So it's for redundancy reasons, yeah. So I kind of looked at this, and it seemed like that should work. The other thing that I noticed was that we have some new Postgres hosts that are doing this stuff: we've got patroni-101 and 121, and I...
E
You know, I think it's... it doesn't mean we have...
C
But it looked to me like these are designated, maybe, to be for base backups specifically, and we have two of them for redundancy; I'm not entirely sure on that, so I don't quite know. But in any case, that's something that jumped out at me. So yeah, I'm actually on the 101 box here, and I looked a bit closer at this Consul lock line, and one of the things that jumped out is that the name of the lock actually includes the 101. So it includes the hostname.
D
C
Yeah, yeah. And so that's weird: why is that happening? So I looked at the backup.sh file, and here's how we compute the lock name: we take the hostname, and we match on a dash, two digits, dash, db.
C
Well, we've got three digits here, so this no longer matches. So we kind of broke the assumption that we have less than a hundred... well, we do have less than 100 hosts, but not less than 100 in the host numbering, let's put it that way. And so, by breaking that assumption...
C
...this regex no longer removes this part, and that explains what we're seeing. So yeah, I thought that was kind of a fun find. The short-term fix is basically fixing this regex, where the suffix is, which is pretty much the obvious thing, right?
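A reconstruction of the bug can be sketched like this. The exact backup.sh pattern wasn't shown here, so the regexes below are a guess at its shape; the point is that a two-digit assumption leaves three-digit hostnames unstripped, so each new host computes a unique lock name and only locks against itself:

```shell
#!/bin/sh
# Hypothetical reconstruction of the lock-name computation in backup.sh.
lock_name_old() {
  # assumes a two-digit host index: strips "-NN-db..." only
  printf '%s\n' "$1" | sed -E 's/-[0-9]{2}-db.*//'
}
lock_name_fixed() {
  # short-term fix: accept any number of digits in the host index
  printf '%s\n' "$1" | sed -E 's/-[0-9]+-db.*//'
}
```

With the old pattern, patroni-101 keeps its full hostname as the lock name, defeating the shared lock; the longer-term fix is to configure an explicit lock name instead of deriving it from the hostname.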
C
But then the longer-term fix is to actually configure this lock name as something a bit more explicit.
E
C
C
The identifier that we should have been using all along, yeah. Yep, exactly, yeah. I just thought it was kind of a neat find and a fun story to retell.
C
A
Not much to say about that, but I wanted to bring it up because Igor is on the call and I want to verify my plan. So I noticed yesterday, while looking into the Redis memory things that are going on, that we don't have a Redis memory saturation point. So we measure... wait, I'm not sharing my screen.
A
We measure memory utilization in several different ways, and one of them is just node memory: how much memory is used on the node versus how much it has configured, and that's what we have been talking about. And then I want to compare it to (this was in this issue) Redis memory.
A
Yeah, here. And then I noticed that for Redis hosts we have three different measures. We have one that's just node memory versus used memory. Then we have Redis used memory versus maxmemory. And then other hosts, like this persistent one, don't have maxmemory set, so we also have the memory reported as used by Redis over the memory on the node. What is missing is the...
A
C
A
Because the Kubernetes metrics (and we do run Redis in Kubernetes) don't have an fqdn. And why is the port included in instance? So I was first working around this with some label_replace cleverness in the saturation point itself, but then I thought: do we actually need this port in anything? Because, yeah, for instance...
A
I thought it was Prometheus as well; that's what I was going to ask you. I found something in our metrics config that comes from vendored... let me see if I can find what I was looking at.
A
B
I think if you have two Go processes on the same box, then the Go Prometheus client, I think, always exports some metrics about the Go runtime, and those would then clash.
B
I don't know where we do it, but you're asking, like, is it okay to do this? And yeah, they would clash, yeah, unless...
C
That's news to me, but it sounds legit.
C
Okay, I have the last item. This is the last team demo with Sean, so I just wanted to take the opportunity to say it's been great working with you, and to wish you all the best in your future endeavors.
F
Yeah, so you have to say goodbye; you've got to put your farewell Slack messages out there before they go, because, yeah, basically you're off Wednesday afternoon. Yes. How much time are you taking off in between the roles?
D
Two weeks, so yeah, up to the Easter holidays eventually, which I think you're taking off anyway, so yeah. No, it's going to be sad. I really have enjoyed working at GitLab a huge amount, and I really appreciate working remotely, but leaving a remote job is very different to leaving an in-person job, I'm just discovering. So yeah, I hope to stay in touch.
D
I know it's difficult, but I will at least be back in the alumni Slack channel whenever I get set up; I don't really know how any of this works. So yeah, it's been great working with you all, and I've learned a huge amount from working at GitLab, but especially from being on this team. It's been great.
F
All the best with that. I think we've come to the end of the demo for today. Thank you so much, and thank you to whoever's recording; please do upload it when you get there. Yeah, hope you all have a good rest of your day, and we'll chat soon.