From YouTube: Infrastructure Group Conversation (Public Livestream)
A: The first thing that I put on the list, which is an actual OKR, is Elasticsearch, to improve searchability. Our first iteration is on a single project, and mccollins is working on this. The second thing we're working on is eliminating NFS dependencies, because they've caused us grief in the past and we really shouldn't be using NFS anymore. So we will go and work on that as well.
A: Interesting. On my screen I am on the right slide, but I'll stop sharing, and Noel, you can go. You've looked at the slides enough; you can see me anyway. We're also working on deploying Snowplow to track user growth, and also working on repository storage on CFS. We've talked about the importance of data and how we aim to protect it at multiple layers, and this is the first step at addressing that. There's obviously other work that's happening along those lines, but we're very focused on the durability of the data.
A: Awesome, thank you. Anthony will also be working on secrets management, which is super important for us. There was already some Vault usage, but we want to make sure that it is fully productionized and that we fully take advantage of it, especially as we're embarking on Kubernetes. On Postgres 11: we're running Postgres 9.4, and there are some features we'd like to take advantage of that make recovery and managing logs much easier, much more doable, and of course they're in 11. So that's a significant undertaking, given the importance of the database.
A: So there is a lot of testing going on to make sure that goes smoothly. Obviously, cost management: we made a lot of progress last quarter, but that was sort of a race to make sure that we weren't overspending. Now we need to standardize and essentially make that an ongoing process, to continue watching cost. And on slide nine, actually, let's see, I think it's slide ten, there are the OKR highlights for the delivery team, and these are also super important; I'm very excited.
A: We continue to work on the codebase merge. We started that last quarter; it's a really complicated problem, but we're making a lot of progress. We want to have a single codebase to streamline a lot of the deployment process. We're also dipping our toes into Kubernetes, so we're moving the gitlab.com registry to Kubernetes, and there is a ton of work happening there. We decided for this to be the first iteration because it was a core service, and so it allows us to really understand how to operate Kubernetes.
A: This really came about... it's been something that we've been discussing for quite a long time, but it really came about last quarter in terms of managing cost, since we didn't have any limits and for all intents and purposes we were sort of an infinite store. So Eliza and I are working on this as well, to make sure that these are manageable limits.
A: All right, Sid is asking whether we speed up after we get to weekly releases. Oh yes, we will. That's one aspect of the most important thing we're trying to get done with continuous delivery: we spend a significant amount of time just going back and forth solving conflicts, and there's a lot of lag between the merge and when things land in production, so there's a lot of context-switching cost involved. This should actually help significantly.
A: It is, in the sense that today we produce the release candidates, and those are the ones that end up on gitlab.com, but that's no longer happening because the changes are landing on gitlab.com continuously. So what I want to make sure, and what we want to make sure as a team, is that there is awareness.
B: Thanks. Can you talk to where we're roughly spending money? If you don't have data we can skip this, but I'd love to see: is it the server compute? Is it the CI compute? Is it the SSDs for Git? Is it the S3 artifacts? Is it transfer? Is it non-GCP cost? What are we spending the money on?
A: I don't have specific data right now, but it's spread all over the place, and we found some places where we are spending more than we should. What we're trying to figure out right now is: where do we get more bang for the buck? For instance, there have been discussions of moving from SSDs to spindles, because spindles are quite a bit cheaper, but we don't have a good idea of what hit on performance we would take.
A: We know, for instance, the database will always stay on SSDs, but the Git repos will likely not. And once we make the decision that we're going to start optimizing these sorts of layers, we need to sit down with product and figure out the best way to do this in a user-facing fashion without hurting the users. There are other optimizations that we could look at in storage. Hopefully we will have the operations analyst soon.
A
We
have
two
two
candidates
fall
through,
but
we
have
a
good
pipeline
and
we
will
get
much
a
much
better
understanding
of
where
these
costs
are.
We
also
understand
that
CI
CI
Runner
costs
are
are
higher
than
they
should
be,
especially
for
us,
so
we're
looking
into
that
as
well,
but
I
don't
have
a
breakdown
right
now.
For
these
thanks.
B: Thanks for that. My two cents is that we probably need functionality in GitLab itself to move repositories around: if they haven't been used for a while, they move to spindles; if they haven't been used for half a year, they move to S3. And then, if you want to access the repo, you first get a rehydration screen for ten seconds while it's retrieved from S3. So I don't think the first step is "should we move everything from SSDs to spindles"; I think it's tiering.
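The age-based tiering policy sketched above could look something like the following. This is a minimal illustration, not GitLab functionality: the tier names are invented, the half-year S3 cutoff comes from the discussion, and the 30-day spindle cutoff is a placeholder.

```python
from datetime import datetime, timedelta

# Illustrative thresholds: the half-year S3 cutoff is from the discussion
# above; the 30-day spindle cutoff is an invented placeholder.
SPINDLE_AFTER = timedelta(days=30)
S3_AFTER = timedelta(days=182)

def target_tier(last_accessed: datetime, now: datetime) -> str:
    """Pick a storage tier for a repository from its last access time."""
    idle = now - last_accessed
    if idle >= S3_AFTER:
        return "s3"       # cold: archived, rehydration screen on access
    if idle >= SPINDLE_AFTER:
        return "spindle"  # warm: cheaper disks, slower I/O
    return "ssd"          # hot: keep on fast storage

now = datetime(2019, 6, 1)
print(target_tier(datetime(2019, 5, 25), now))  # recently used -> ssd
```

A background job could periodically compare each repo's current tier with `target_tier` and schedule moves for the mismatches.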
A: And actually, Jonas is making a comment about tiered storage, and that is very true. As I mentioned, there's a ton of work happening on CFS for the repos, and some of the decisions we're trying to make will enable us to do some of these things, with support from the application. In fact, today I just published the blueprint that I believe he had requested, on project recovery and how we can speed it up, and the interesting thing about all these problems is that they're intertwined. So there are some solutions that we can use to do multiple of these things, and we're now trying to put the blueprints together to tie them. Tiered storage, reducing the MTTR to recover deleted projects: all these things we think we can solve with some CFS work plus some support from the application. Like I said, I just posted a blueprint, and I made sure to CC you on the issues, so you can see it on the engineering board. Yeah.
B: That starts, I think, with a business case: like, hey, we can save X amount of money. So if we can just do that calculation: how much are we spending on SSDs, and then assume we can reduce that spend. How much can we save by moving to spindles, assuming that works for 90% of the repos? I think that will be a very powerful figure to then prioritize this in product. Yeah.
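The back-of-the-envelope calculation proposed here can be sketched as follows. Every figure is a made-up placeholder, since no real prices or volumes were given in the call; only the 90% movable fraction comes from the discussion.

```python
# Hypothetical inputs -- none of these figures come from the meeting.
repo_storage_gb = 500_000        # total Git repo storage, placeholder
ssd_price_per_gb = 0.17          # $/GB/month, placeholder
spindle_price_per_gb = 0.04      # $/GB/month, placeholder
movable_fraction = 0.90          # "assume that goes for 90% of the repos"

current_cost = repo_storage_gb * ssd_price_per_gb
moved_gb = repo_storage_gb * movable_fraction
new_cost = ((repo_storage_gb - moved_gb) * ssd_price_per_gb
            + moved_gb * spindle_price_per_gb)
savings = current_cost - new_cost  # monthly savings from the move

print(f"monthly savings: ${savings:,.0f}")
```

Plugging in real billing data for the two placeholder prices would turn this into the "powerful figure" the speaker asks for.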
A: There is also another issue we opened last week to collect some metrics on the state of repos: how many repos do we have per node, when were they last accessed, etc., to get some of that data as well. We can obviously, as you mentioned, start with an assumption of saying, what happens if we offload 90% of this to spindles, and that'll give us some rough calculations, but we are also pursuing collecting more accurate metrics.
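The per-node metrics mentioned above (repo counts, last-access times) could be aggregated along these lines. The node names, repo paths, and dates are invented sample data, not real gitlab.com numbers.

```python
from collections import Counter
from datetime import date

# Invented sample rows: (node, repo, last_accessed).
repos = [
    ("file-01", "group-a/app", date(2019, 5, 30)),
    ("file-01", "group-b/lib", date(2018, 11, 2)),
    ("file-02", "group-c/docs", date(2019, 1, 15)),
]

# How many repos do we have per node?
per_node = Counter(node for node, _, _ in repos)

# When was each node's most recently accessed repo touched?
last_access = {}
for node, _, accessed in repos:
    if node not in last_access or accessed > last_access[node]:
        last_access[node] = accessed

print(per_node["file-01"])     # 2
print(last_access["file-01"])  # 2019-05-30
```

With real data, the same aggregation directly answers "what happens if we offload the repos untouched for N months."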
A: On the estimates that McHale did on hosting: that is the other question, whether we go with something that large. We started doing that in-house, and the first iteration is limited to a single project, just for us to be able to gather some data, extrapolate it, and create some sort of prediction as to how big this is going to get and what it is going to take. The biggest concern is running this huge cluster ourselves at a reasonable cost.
A: We still have some open reqs in infrastructure, and there's been some discussion as to whether we should... you know, we have a good team in terms of SRE and DBRE skills; should we go find some people that are more specialized in things like Elasticsearch? That is definitely a consideration, and in some respects that is where our planning is going. But my biggest concern is: are we capable of running this huge cluster? Because, based on the calculations that Andrew and McHale put together, this is gonna...
A: Right now we're just busy laying foundational stuff, for instance for Kubernetes. I know there is work on the Gitaly changes, which obviously affect our availability. So, aside from the stuff that jar put together, I think this quarter we're just super busy, but we should definitely discuss this as we move into Q3. Yeah.
A: From the product engineering side of the house I'm not sure yet, but from the infrastructure side of the house: there was an SREcon presentation titled "Shipping Software with an SRE Mindset", and I think that's gonna be the biggest thing. There's a lot of things that we do that are not in the app today; they glue everything together, and those things need to move from the realm of just...
A
This
is
what
infrastructure
users
to
run
the
obligation
to
less
move
most
of
the
stuff
into
that
into
the
product.
It
doesn't
have
to
be
it's
not
that
we're
gonna
start
developing
some
of
these
things
in
product,
but
we
should
find
a
middle
ground
between
some
of
the
tools
we
have
today
and
what
they
look
like
long
term
in
the
product
and
start
just
pushing
them
and
pushing
that
SRA
mindset
into
the
product.
A
I
think
that's
the
biggest
focus
from
our
from
sort
of
a
strategic
perspective
and
I
think
it
makes
sense,
because
we're
also
starting
to
be
very
focused
on
self-managed
customers.
Right,
so
infrastructure
is
getting
more
and
more
and
more
food
service
customers.
We've
been
having
conversations
with
this
rapport,
ghen
ization
through
Tom
and
Lyle
and
and
other
folks
to
see
how
we
can
better
help.
A
So,
there's
like
this
wide
array
of
focus,
points
and
I
think
we
just
need
to
pick
two
or
three
that
we
really
get
vested
in,
but
my
my
top
picks
would
be
this
SRE
mind
set
into
the
obligation
and
then
making
sure
that
we
are
getting
better
and
better
at
running
a
gift.
Op
I
think
we
should
be
the
best
ones
running,
get
them
on
the
planet
and
I've
seen
customers
doing
amazing
things.
B: My two cents is: one index for the entirety of gitlab.com seems very hard and expensive. So maybe we have to do tiered storage or sharded searching, so that groups can be on different servers, and we can also have Elasticsearch servers where some have more redundancy than others. If it's a super popular project, you probably want three servers so that you have redundancy.
A: I think in some respects the decision of how these clusters, or sets of clusters, get dealt with is really product-driven, right? It depends on where we place the search boundaries for these things, and whether, if you have clusters that are focused on certain projects, you can still just show up to gitlab.com, type a search, and hope to find something. We'll build whatever we need to build to make it cost-effective and make it work, and we're really working with product to understand that.
B: And I don't think it's a binary thing, like if we do this we can never do an instance-wide search. We can do an instance-wide search; it will just cause a read on every single one of the servers, and you have to combine the results. But it might be that we have a lot more group search volume, and that reduces the load.
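The instance-wide search described here (one read per server, then combine the results) is a classic scatter-gather, which can be sketched as follows. The shard contents and scores are invented for illustration; a real deployment would query Elasticsearch nodes instead of in-memory lists.

```python
import heapq

# Invented shard contents: each shard indexes a few (doc, score) pairs.
shards = [
    [("group-a/readme", 0.9), ("group-a/ci", 0.4)],
    [("group-b/runner", 0.7)],
    [("group-c/docs", 0.8), ("group-c/issue", 0.2)],
]

def instance_wide_search(shards, top_k=3):
    """Read every shard, then merge the partial results by score."""
    hits = []
    for shard in shards:    # scatter: one read per server
        hits.extend(shard)  # gather: combine the partial results
    return heapq.nlargest(top_k, hits, key=lambda hit: hit[1])

print(instance_wide_search(shards))  # top hits across all shards
```

A group-scoped search, by contrast, would touch only the one shard holding that group, which is why more group search volume reduces the fan-out load.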
A: Actually, it's worth noting that, based on what I've heard, a lot of this work happened during the fast boot about a month ago, which was amazing. I don't know how they did it; I know by the end of the week they were exhausted, and I'm not surprised, because they came back and said, okay, we're done, and it's like, wait, what? So yes, congratulations and thank you. Amazing. I just...