From YouTube: Infrastructure Group Conversation (Public Livestream)
B
All right, since there are no questions, yes, I'll say the obvious, which is that June and August have not been particularly good months for GitLab.com. As you've seen in the slides, I think we ran into some application bottlenecks, and we were also a little bit blind to them. So there's been a number of steps that we're taking to address that. Andrew has been doing an incredible amount of work to uncover those bottlenecks and work on them in parallel, and so has the team in addressing them.
B
So, for the more recent infrastructure work, a big shout out to Development: they've been super helpful, and we're very, very grateful for all the work they've done, for the new collaboration that we've kicked off with the rapid action process, and then for the weekly performance and availability review that we're doing. So, super grateful. Still no questions. I know I do great presentations, but I've just got to get the questions out there.
B
All right, so as I wait for your questions: you may have heard, or seen appearing in the handbook, that there is a new team in Infrastructure that we're putting together. It is called the Scalability team, and it is intended to essentially work on scalability, as the name implies: to make GitLab.com able to function at the scale that we have, and that we expect as well.
B
We
wanted
to
have
a
team
that
is
very
close
to
the
SRE
team,
with
with
enough
development
chops
to
be
able
to
not
just
deal
with
things
that
come
up,
but
also
start
thinking
and
building
the
product
to
avoid
the
incidents.
The
type
of
incidents
that
we've
been
experiencing
recently,
this
team
is
not
sort
of
tier
2
support
for
sre.
This
team
is
actually
a
group
of
developers
that
are
working
on
making
it
less
scalable
and
they
will
participate
and.
B
I
think
you're,
not
certain
listening,
I
wasn't
sure
in
my
screen,
since
you
can
all
see
the
slides
and
we're
not
specifically
going
through
the
slides,
but
there
are
specific
questions
about
the
slides
by
all
by
all
means
customs.
So
I
see
a
question,
so
I'll
wrap
up
real
quick
on
on
this
team,
so
this
team
is
not
here
to
support
for
infrastructure,
but
it
is
a
group
of
developers
that
are
going
to
make
it
a
lot
more
scalable.
B
A lot of that has to do with, yeah, a lot of that has to do with our observability capabilities, and also with trying to understand what the capacity of the infrastructure is. Andrew has been devoting a ton of work to that. There is actually now a capacity and saturation dashboard that he created, which tries to make a good estimate of how much runway we have in each component, and we will continue to work towards refining what that capacity means.
The dashboard, as it stands today, is a very rough first iteration. And then, in terms of what this scalability team is going to be able to do in making the estimation of, you know, how far we can take things when we do a certain thing in the application: I don't have specific details or steps right now, other than this combination of observability and estimation, and then what application and infrastructure changes we can make, to have a good idea of when the next wall is.
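The kind of runway estimate the capacity and saturation dashboard makes can be sketched roughly like this (a minimal illustration, not the actual dashboard logic; the function name and the linear-trend assumption are hypothetical): fit a trend to recent utilization samples and project when a component hits saturation.

```python
# Hypothetical sketch of a "runway" estimate: fit a linear trend to
# recent utilization samples and project when a component saturates.
# Assumes daily samples of utilization as a fraction of capacity.

def estimate_runway_days(samples, saturation=1.0):
    """Return estimated days until `saturation`, or None if not growing.

    samples: list of (day, utilization) pairs, utilization in [0, 1].
    Uses an ordinary least-squares line fit (no numpy needed).
    """
    n = len(samples)
    xs = [d for d, _ in samples]
    ys = [u for _, u in samples]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    var_x = sum((x - mean_x) ** 2 for x in xs)
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = cov_xy / var_x          # utilization growth per day
    if slope <= 0:
        return None                 # flat or shrinking: no wall in sight
    intercept = mean_y - slope * mean_x
    hit_day = (saturation - intercept) / slope
    return hit_day - xs[-1]         # days left after the last sample

# Example: 60% -> 70% utilization over ten days leaves ~30 days of runway.
samples = [(d, 0.60 + 0.01 * d) for d in range(11)]
print(estimate_runway_days(samples))  # ~30.0
```

Real saturation rarely grows linearly, which is why the speaker describes this as a rough first iteration to be refined per component.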
B
That's a fair question. Think of it as what we were trying to explain in the graph; this is actually Eric's graph, and it's trying to explain, at a very high conceptual level, what we think happened. So, for instance, in the graph you can see the GCP migration: just by virtue of running on GCP, it bought us a significant amount of runway.
B
Now
these
things
don't
work
on
a
on
a
straight,
stepwise
pattern
like
you
see
there,
but
roughly
they
kind
of
do
in
the
sense
that
you
know
you
develop
assistant,
you
design
a
system
deeper
in
production
and
then
you're
going
to
start
seeing
where
the
limits
of
that
system
are,
and
at
some
point
you
use
the
side
that
you
need
to
be
revolutionary
are
supposed
to
evolutionary
in
terms
of
adding
capacity
so
the
graph
itself.
This
is
very
conceptual
and
trying
to
to
explain
how
we
how
we
run
into
the
world
last
year.
D
Stephen
Gary
I'm,
sorry
interrupt.
Real,
quick
I
should
also
point
out
that
it's
really
important
for
everybody
to
understand
that
that
graph
is
not
only
representative,
but
it's
not
even
to
scale
right
like
like
the
graph
implies
that
were
close
to
hitting
our
threshold
where
users
are
gonna
start
slowing
down,
and
if
anything
we
expect
it
to
grow
much
more
dramatically.
So
we
need
to
keep
that
in
mind
that,
like
actually
the
back
end
of
that
graph
growth
is,
is
still
going
strong
and
going
to
be
that
way
for
a
long
time.
B
When you start hitting these scale limits, they start shifting around. The bottlenecks exist, but some of them are not visible, because you have something else sort of holding the rest of the environment back; but once you start addressing those, then the next walls show up, the ones that were hidden from us. For instance, we actually had some idea that the background jobs were gonna start creating problems, but we didn't really have the actual numbers, and once we did the work on Redis, then that became very apparent.
B
Others,
for
instance,
that
we
saw
with
PD
bouncer.
We
only
saw
PT
bouncer
here
wall
once
we
had
a
specific
fell
over
on
Patroni
that
shifted
the
world
in
a
way
that
we
didn't
expect.
We
expected
the
world
to
rebalance
across
o
PT
bouncers,
but
that
did
not
happen,
and
so
it
turns
out.
We
were
very
close
to
saturation
limits
when
PE
bouncer.
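The kind of PgBouncer saturation described here shows up in the pool statistics that the `SHOW POOLS` command reports on PgBouncer's admin console, including `cl_waiting` (clients queued for a server connection), `sv_active` (server connections in use), and `maxwait` (how long the oldest waiting client has waited). A minimal sketch of flagging a pool that is running out of headroom (the classification thresholds and the function name are made up for illustration):

```python
# Illustrative check for PgBouncer pool saturation, based on fields that
# `SHOW POOLS` reports on the admin console. cl_waiting > 0 means clients
# are already queueing for a server connection. Thresholds are examples.

def pool_pressure(pool, pool_size):
    """Classify one pool row. `pool` mimics a SHOW POOLS row as a dict."""
    busy = pool["sv_active"] / pool_size          # fraction of pool in use
    if pool["cl_waiting"] > 0 or pool["maxwait"] > 0:
        return "saturated"      # clients are already queueing
    if busy >= 0.8:
        return "warning"        # little headroom left
    return "ok"

row = {"sv_active": 18, "cl_waiting": 0, "maxwait": 0}
print(pool_pressure(row, pool_size=20))  # warning
```

A check like this only sees one PgBouncer process at a time, which is exactly why an uneven failover, as described above, can saturate one instance while the others look healthy.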
B
So
some
of
these
some
of
these
bottlenecks
are
just
not
super,
not
well
visible,
and
we
need
to
work
at
getting
better
at
getting
visibility
into
them
and
getting
an
understanding
of
where
they
are
and
what
we
can
be
we're.
Next
and
again,
andrew
has
been
doing
an
incredible
amount
of
work
in
that
regard,
and
almost
that
we
think
he's
predicted
has
actually
turned
out
to
be
true.
E
Sorry
sure
I'm
just
a
little
busy
today,
basically
yeah.
We
we
hit
these
kind
of
bottlenecks
all
the
time
we,
you
know
it's
just
a
very
common
thing.
When
you're,
when
you're
scaling
on
a
platform
is
you
you
hit
one
bottleneck,
you
fix
it,
it
triggers
a
bottleneck
in
another
system,
yada
yada,
it's
you
know.
You
said
exactly
the
same
thing.
B
You sort of start approaching it, and it sways back and forth, and you may reach a limit where you see some slowness on the platform that never quite goes away, and it takes a while to understand that. And so we've seen that some systems are very resilient, in terms of taking a beating, and then they're brittle, in that when they break, they break; Redis is a good example of this. Postgres has a saturation curve that is much more smooth in that regard.
B
You'll
start
seeing
slow
sounds
slowdowns
before
it
actually
is
completely
itself,
but
ready
to
wear.
This,
for
instance,
then
tends
to
be
pretty
drastic.
So
all
other
work
we
need
to
do
is-
and
you
know
identifying
these
patterns
before
they
actually
become
a
problem,
and
that's
that's
where
a
lot
of
the
work
that
that
angular
is
doing
is
center
around
and
trying
not
to
you
know
seeing
this
patterns
before
they
actually
become
a
problem.
B
So
Eric
decided
that
we
need
a
senior
the
regular
infrastructure
so
until
we
find
that
individual
and
we're
actively
looking
for
the
person
I
will
continue
to
to
serve
as
director
and
then
I'm
transitioning
and
to
an
individual
contributor
role
as
an
engineering.
Fellow,
both
of
those
developments
are
very
exciting
for
me,
as
specifically,
especially
because
I'm
I
don't
want
to
undo
the
work
that
the
team
has
done
over
the
last
18
months
by
waiting
into
the
limits
of
my
own
abilities
as
a
manager.
B
The registry is now completely being served from infrastructure that is Kubernetes-based, and we now have a lot of the basics built to be able to support this. We will start finding components and services in the infrastructure that can be migrated, until we essentially get the entire thing onto Kubernetes. So that is also a very exciting development; it's a non-trivial project to tackle, but I know it's time.
B
The
team
has
actually
been
kicking
and
screaming
for
the
last
year
that
we
need
it
to
do
this
and
I
think
we're
we're
finally
ready
to
to
take
that
step.
So
that
should
be
that
should
actually
unlock
a
number
of
things
for
us.
I,
specifically
we're
starting
to
run
into
some
limits
with
the
things
that
we
can
do
with
chef
asides
from
chef
licensing
changes.
So
moving
to
kubernetes
will
remove,
will
actually
deal
and
address
some
of
the
technical
depth
that
we
require
over
time.
So
that
is
also
very
exciting.
Development.
B
Yes,
Daniel,
it
does
surprise
you
in
all
fairness
is
taking
us
where
we
are
today.
So
I
am
always
of
the
belief
that
you
know
your
dog
tools
and
your
doctor
good
about
the
depth
and
all
that,
and
they
take
you
so
far
and
then
you
just
sort
of
once
you've
reached
the
limits.
You've
moved
in
the
next
one
and
you're
very
grateful
for
the
service
they
provided
so
yeah.
But,
yes,
chef
does
have
limitations,
I'm
sure
we'll
find
some
kubernetes
mutations
as
well
so
anyway,
if
there
are
no
more
no,
no
additional
questions.