From YouTube: Infrastructure Group Conversation (Public Stream)
A
All right, good day, everyone. Welcome to the March 2021 Infrastructure Group Conversation; it's great to have you join us today. We've got a number of items in the slide deck there, and it looks like, Sid, you've got the first question.
B
Yeah, thanks. What's the status of defining an SLO (service level objective) that would require a higher uptime than our service level agreement, which is 99.5%?
A
Yeah, so we do have SLOs. Well, we have the SLIs, and the combination of the SLO set, based on Apdex, basically rolls up to that average availability that we use. There are five defined services that are weighted into that, and we can also look individually at those services for the SLO on each.
A
Are you wondering about more of a specific commitment around those, moving to something like an SLA, or just more transparency and visibility into the individual SLOs?
B
More as a commitment of the team, like an ambition of the team. If it's an SLA, it means it's guaranteed to our customers, and at some point there's going to be a conversation about giving credits and things like that. That's a very high bar, and I'm not looking for that high of a bar. But yeah, I think if we have 99.6, we made the SLA, but I think we all still feel like...
B
Oh
there's
there's
room
for
improvement,
so
I
could
see
us
setting
an
internal
service
level
objective
of
199.39,
which
wouldn't
be
outrageous
for
a
sas
service,
and
if
we
don't
meet
that
in
a
month,
it's
not
that
we
go
have
to
send
an
apology
email
to
all
our
customers,
but
it
would
be
like
hey.
We
we
gotta,
we
gotta
like
look
at
what
went
wrong
and
and
try
to
do
better
next
month.
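For context on the targets being discussed, it can help to translate availability percentages into allowed downtime. A minimal sketch, assuming a 30-day month for simplicity:

```python
# Translate an availability target into allowed downtime per month.
# Assumes a 30-day month; the targets are the ones discussed here:
# 99.5% (the SLA), 99.8%, and a possible 99.9% internal SLO.

def allowed_downtime_minutes(availability_pct: float, days: int = 30) -> float:
    """Minutes of downtime a target permits over `days` days."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

for target in (99.5, 99.8, 99.9):
    print(f"{target}% -> {allowed_downtime_minutes(target):.1f} minutes/month")
```

At 99.5% the monthly budget is about 216 minutes, while 99.9% leaves roughly 43, which is why the proposed internal objective is a meaningfully higher bar than the SLA.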
C
Sid, we do do that, and we have SLOs for every service at GitLab, and a lot of them are like 99.95. It depends on the performance of the service. Sadly, there is a bit of a trend at the moment that we are lowering those SLOs, not increasing them.
B
Cool, that makes sense, and 99.8 also makes sense to me. What I'm kind of hinting at is: if we have slide 3 showing "hey, this is our service level agreement, we made it, it's green," then I kind of expect the next slide to be "this is our service level objective, it's 99.8, and we made it (or didn't make it) for the month."
C
So we have that at the moment for each service individually, but we don't have an aggregated one for everything, and it's kind of tricky sometimes. Because, do you include something like NFS in that? We are measuring those statistics for something like NFS, but we probably don't need it in the metric. So if we're going to aggregate into a single value, we need to come up with a way of doing that, or we can just keep it at a per-service level.
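The aggregation question raised here, rolling several weighted services into one number while leaving internal pieces like NFS out, can be sketched roughly like this. The service names and weights below are purely illustrative, not GitLab's actual definitions:

```python
# Hypothetical weighted roll-up of per-service availability into a
# single aggregate number. Whatever is left out of the dict (e.g. NFS)
# simply does not count toward the aggregate.

def weighted_availability(services):
    """services maps name -> (availability_pct, weight)."""
    total_weight = sum(w for _, w in services.values())
    return sum(a * w for a, w in services.values()) / total_weight

services = {
    "web": (99.95, 5),       # illustrative weights, not real ones
    "api": (99.90, 5),
    "git": (99.80, 4),
    "pages": (99.99, 1),
    "registry": (99.97, 1),
}
print(f"{weighted_availability(services):.3f}%")  # one aggregate number
```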
B
Hey, can we have a slide with the service level objectives, the achieved ones, as part of, like, the fourth slide of this presentation? Yeah, and then we can have...
B
You know, make it... and don't show a metric without an objective. I know the objective is 99.5, so we'll always make it unless we have a really bad month, so it's hard to tell. Normally we set internal targets which are a bit more ambitious, and we meet them like 70% of the time; I think those are the SLO targets, and we have them per service, and that's great. Please add them to the presentation; that's my ask. Okay, please add the line we have to stay above.
A
Are there any...? You can just go ahead and verbalize questions, if anyone has one.
A
I think one thing that could be a discussion point here, if we do have a few minutes: as Andrew mentioned, we've relaxed some of our SLOs this month. There is attention in the deck to things around both the DB queries rapid action as well as spam, on the topic of general abuse activity. There are some linkages between these things that are causing pressure on the system overall, resulting in relaxing the SLOs.
A
I'm going to see if... I can say that I have a conversation coming up, in fact just tomorrow, about this item, and so there is work underway to raise the question and basically get it addressed. I think there are actions being taken in individual stage groups, but the overall topic of abuse and trust and safety... I mean, it seems like something... well, it doesn't seem like something...
E
Yeah, we hid two easter eggs in this presentation, and if you find them, we can discuss them. If you don't, we're going to roll with it and make changes to the organization afterwards.
E
All right, I'm not going to spend too much of everyone's time on it. Slide... where is it now, 10?
F
Sure. So we are talking about rollbacks and the benefits we get from rollbacks, but I also actually want to highlight, with our easter egg, that rollbacks are not a silver bullet. They don't fix everything, so there will still be a decent impact from changes that don't work as expected and lead us to need to roll things back. So the key takeaway from this one is to be aware of that in the future.
F
We'd love to see more of the expand-and-contract pattern being used, and feature flags and things like that, to help minimize the risk of changes that would be likely to lead to rollbacks. But yeah, we're probably not going to expect everybody to be release managers for a month, although that would be amazing; if anyone wants to do that, we'd love that. Rollbacks will help us, but they won't solve everything.
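The feature-flag side of that suggestion can be sketched as follows; the flag name and the in-memory flag store here are stand-ins for a real flag service, purely for illustration:

```python
# Gate a risky new code path behind a feature flag, so that "rolling
# back" a misbehaving change is a flag flip rather than a redeploy.
# FLAGS is an in-memory stand-in for a real flag backend.

FLAGS = {"new_ref_walk": False}  # hypothetical flag name

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def old_walk(repo: str) -> str:
    return f"old:{repo}"   # known-good path

def new_walk(repo: str) -> str:
    return f"new:{repo}"   # new implementation under test

def walk_refs(repo: str) -> str:
    if is_enabled("new_ref_walk"):
        return new_walk(repo)
    return old_walk(repo)

print(walk_refs("gitlab-org/gitlab"))  # old path while the flag is off
```

The expand-and-contract half is the same idea applied to schema or data migrations: add the new shape alongside the old one, move readers and writers over gradually, and only then remove the old shape, so each step is individually reversible.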
E
Thanks, Amy. Another easter egg we have is on slide 12, where we are saying that we are discussing a proposal to enable true infinite horizontal scaling of Sidekiq, which is obviously not possible.
E
It would be awesome if we could infinitely scale, but this is to highlight that even the things that we resolved last year only got us to the point where now we need to talk again about scaling out one more time and changing how we generally do the work, because our growth is continuing and the general product scale is also growing. So we need to continue scaling and changing things that we already changed once before.
B
So we moved inactive repositories to hard disk drives instead of SSDs, saving us a ton of money, so that's amazing.
B
Are we considering something like that, and what impact would that make? I really don't have a good idea of the amount of dollars that would save. Maybe this already is the majority, and it doesn't matter much.
G
Yeah, I can speak to that a little bit. I'd say we haven't realized the savings yet, because we just moved these to new servers, so right now we just have extra space on the SSD nodes. That's the first thing. In terms of moving things to object storage, that would help, and I think that's something the Gitaly team is currently evaluating.
G
It would definitely save us money, because we wouldn't have to have the compute on those servers and we wouldn't have to have those disks. I don't have the exact number, but we would probably save tens of thousands of dollars from doing that. Yeah, it's definitely something I think we should also do; I think it just hasn't happened yet. The Gitaly team is still looking into it.
A
I do know that, looking at that issue, that is part of the conversation there, and part of this is that we've just created enough space for a number of new repos. But there still is an intention to do some compaction.
A
I just think, at the point that we're at with growth, it's not going to be as much as we envisioned six months ago, just because the growth keeps running. So for now we're spending more money, not less, but we do need to get higher efficiency out of this, rather than just a net reduction in spend.
H
Saving in the sense that we are not going to need to create new servers as fast as we would have if we hadn't migrated. So it's still saving; it's just not as obvious as we might have thought.
A
That's a good point, thank you, Alejandro. And then, Andrew, thanks for pointing to the object storage issue; I know that's been under discussion there. Sid, you had a question about the service catalog?
B
Yeah, it seems that it's no longer used, but I can't really tell. It's archived; that's where I'm deducing it from. Are we still using it? If we're not using it, what are we using instead? And if we're no longer using it, can we maybe update the README to just explicitly say so?
C
Sid, that was always kind of a read-only view onto the real service catalog, which is a YAML file: a really boring solution. It's a YAML file in a directory in a repository, and that UI was kind of a project to see what a UI would look like on top of that. But the core of our service catalog is similar to the stages.yaml file that we have for defining our stages: we have a YAML file which defines our services, and that is very important and something that we keep updated.
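As a rough illustration of what such a service-defining YAML entry might look like (the field names and values here are invented for the sketch, not the actual schema):

```yaml
# Hypothetical service catalog entry; fields are illustrative only.
sidekiq:
  friendly_name: Sidekiq
  teams:
    - scalability
  slo:
    availability: 99.95
  runbook: docs/sidekiq.md   # placeholder path
```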
C
Yeah, it definitely is in the handbook, so I'll find it and cross-link it; I'll do some updating on that.
E
Runbooks is already linked, so maybe you want to just deep-link to the service catalog as well.
A
There's probably also... we could reference, I mean, between the tech stack, the service YAML file, the runbooks, and then also the service maturity model work from the Scalability team, there are, if not direct, at least some indirect relationships between all of those things.
B
Oh yeah, I linked the tech stack, so that's the page to cross-link the service catalog handbook page from.
B
Yeah, I tried to look at all the savings and I wasn't able to. We have this cool thing (this is more... it's not infrastructure related) where you can kind of group labels, but I don't think you can then look for the thing they have in common, so I just want a list of all savings.
G
I just learned something here, Sid. I'm not sure if that's what you're looking for, but that's the word that just has the initials by the savings amount. If you want to see just all the open issues, I think you can just search for the fin-tagged labels and then look at the savings on each one of them. I'm not sure exactly what you're trying to do.
B
Thanks. I don't see any other questions, so keep going. And, I might be conflicted on this because I think I'm thinking about starting a startup around it, but what do people make of ClickHouse? I've been investigating it for a couple of months now, and it seems that it's a great way to affordably aggregate logging data. If I look at our most expensive things, Stackdriver was listed as something we spend a ton of money on and could maybe save money on.
B
Yeah, the Yandex analytics database.

A
ClickHouse, all right. Looks like some interesting things for us to read up on and see what else we can find out about.
A
Okay, appreciate your time today. Expect maybe a couple more easter eggs next month; we'll see what we can do. And...