From YouTube: 2021-08-25 GitLab.com k8s migration EMEA
B: Exciting, exciting demo today. So, Scarborough, over to you.
A: No, not exciting. I'm going to do a tour of metrics. There's an incident that was just logged because Elasticsearch apparently stopped handling or ingesting logs, and I think that was probably due to me, because I'm sitting here trying to create reports for a lot of stuff, which means searching through a lot of data. I'm not going to chime in on the incident, because I'm just drawing pictures, but I'll share my screen here.
A: So this is looking back at Thursday, which is not the best day to look at, because I can't compare it to a Thursday where Kubernetes has been running traffic — it's not yet Thursday — and it's also not great because there was some blip on our radar. So something happened during this day, but we can see that, for the most part, our p99 is somewhere between one second and half a second, whereas our p95 is much better, at well below half a second, probably hovering around 300 milliseconds on Kubernetes.
A: On the other hand, this is also difficult to look at, for a couple of reasons: one, that's Tuesday; two, we were still performing some minor tweaks, and you can see that there's a definite change here where we were running fewer pods and then modified our HPA to ramp up more pods. So our request — or response — times got slightly better, but in here we're hovering at 800 milliseconds for our p99. And which color is it — green?
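(For reference, the kind of HPA change described above — raising the replica floor so the fleet ramps up more pods — could look roughly like the sketch below. Every name and number here is an illustrative assumption, not the actual GitLab.com configuration.)

```yaml
# Hypothetical HorizontalPodAutoscaler sketch; the resource names,
# replica counts and CPU target are illustrative assumptions only.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: gitlab-webservice        # assumed deployment name
  namespace: gitlab
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: gitlab-webservice
  minReplicas: 40                # raising the floor "ramps up more pods"
  maxReplicas: 120
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60  # scale out before workers saturate
```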
A
Our
green
line
is
around
400
milliseconds,
which
is
almost
double
what
our
vms
were
showing.
So
that's
rails
for
workhorse
response,
which
tab?
Is
it
this
one?
A
little
more
spiky
to
look
at.
This
is
also
measured
in
milliseconds,
but,
like
our
p99
is
sitting
or
p95
is
sitting
down
here
right
around
three
to
four
hundred
milliseconds
kubernetes
harder
to
look
at
because
we
still
have
vms
intermingled,
and
then
we
had
that
postgres
flip,
which
kind
of
screwed
up
our
chart
here.
A: But if I could pinpoint the green dot, which is somewhere inside of here — I can't, I can't get to it. Oh, I had it. Anyway, we're roughly double, is what I'm trying to get at, and the same for request queuing. So our virtual machines are sitting at, you know, two hundredths of a second, and—
A: I don't think that if we made any changes to our horizontal pod autoscaler we're going to see any change, because right now, if we look at any of our saturation metrics, we're in an okay spot, I think. Graham also did a little bit of investigation; maybe I should showcase what he did. It's kind of difficult for me to fully understand what he's attempting to show here, but the lines clearly have a cliff, so this is showing before and after the migration.
A: This is almost an entire week, but there's essentially no change in response time. You can see this gray line right here is one second, and we're right below it; and then up here you can see where it jumps up just a wee bit when we did the migration, and then it kind of hovers around the same state. So we could either determine that nothing changed, or that we're in an okay spot and that what I'm trying to chase down is not really anything that's super exciting to look at.
C: Yeah, I have a question on that, because we are always comparing the p99 and p95 — so we are comparing the worst cases, right? Yes — and it would be interesting to also look at the average, maybe, because maybe there's no difference at all. It could be that we have some effect like — you often have one single pod behaving badly, and that would lead to a huge difference for the p99.
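(The comparison being suggested here — tail percentiles versus the median, plus a per-pod breakdown to spot one bad pod — could be sketched as Prometheus recording rules like the following. The histogram metric and its labels are assumed names for illustration, not necessarily what GitLab.com actually exports.)

```yaml
# Hypothetical Prometheus recording rules; the metric
# http_request_duration_seconds_bucket and its labels are assumptions.
groups:
  - name: web-latency-percentiles
    rules:
      # Fleet-wide p99: a single misbehaving pod can drag this up.
      - record: web:request_duration_seconds:p99
        expr: |
          histogram_quantile(0.99,
            sum by (le) (rate(http_request_duration_seconds_bucket[5m])))
      # Fleet-wide median (p50): largely insensitive to one bad pod.
      - record: web:request_duration_seconds:p50
        expr: |
          histogram_quantile(0.50,
            sum by (le) (rate(http_request_duration_seconds_bucket[5m])))
      # Per-pod p99, to find the single pod behaving badly.
      - record: web:request_duration_seconds:p99:by_pod
        expr: |
          histogram_quantile(0.99,
            sum by (pod, le) (rate(http_request_duration_seconds_bucket[5m])))
```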
A: Let's just look at p50, then. Let me update this one.
A: This is just shy of — I asked Henry what the EMEA load looks like in the morning, so this is an hour before Henry noted, and I didn't really know where to stop, because APAC traffic clearly just drops off and makes it look like nothing is happening; but I chose eight o'clock because it looked sane. So this is our p50 for virtual machines.
A: We are in the thousandths-of-a-second range — I'm hoping you guys can read my screen, because the font is kind of small — but this is 0.006, which looks to be a good median throughout the entire day. Has Kubernetes finished drawing? It did. Again, we had a spike at the very end here, so that kind of throws off the chart, but for Kubernetes it looks like it's roughly the same — if anything, slightly better, because this is 0.005, so five thousandths of a second.
A: So 50% of our requests are probably showing similar behavior and performance to the virtual machines. If we're trying—
B: I mean, chatting to Graham this morning, he was talking a little bit about this. He didn't show me the graphs, but from a proxy point of view — so from the user's experience — it's basically like-for-like. So it's most likely that the stuff we're now seeing on Kubernetes is what NGINX previously did, but invisibly.
B: I don't know if you showed that on any of those graphs just then, but there are certainly times where we'll see more queuing going on than we did before. So let's try to graph that — it might be a useful thing for us to actually look at: what is the actual, real impact of queuing? As a user, what would it mean to me if my request queued more than normal? Does it have an impact on resources, or whatever it is? And we can see from that.
A: So let's look at queue duration. I don't know if we have queue count in our metrics at all — we're not going to see that inside of Kibana — but we at least have the queue duration: how long something has sat inside the queue waiting to be picked up.
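(In metric terms, "how long something sat in the queue" is typically a histogram, so a queue-duration percentile could be sketched as below. The metric name is a placeholder, not necessarily what the fleet exports; a queue *count* would be a separate gauge, which, per the discussion, may not exist in these metrics.)

```yaml
# Hypothetical rule; request_queue_duration_seconds_bucket is a
# placeholder for whatever queueing histogram the fleet exports.
groups:
  - name: request-queueing
    rules:
      # p95 of time a request waited before a worker picked it up.
      - record: web:request_queue_duration_seconds:p95
        expr: |
          histogram_quantile(0.95,
            sum by (le) (rate(request_queue_duration_seconds_bucket[5m])))
```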
A: I'm not actually sure what this metric is going to look like, because we did have a massive change in how many workers we run. Per pod we have a different set of workers, and the configurations can differ, with each worker having a different number of threads, so our total number of threads will be significantly different. But for virtual machines we're sitting down here at twelve hundredths of a second, and Kubernetes is roughly the same — if not slightly better — as the day tails off. Whoops. Crap.
C: This is really cool, because it shows what you discovered today with the thread contention, right? A single pod was responsible for showing us this really bad saturation while the whole fleet was okay — we had one single pod, and our metrics are built in such a way that we obviously always show the worst.
C: Now we need to look into why pods sometimes act up. And also — I forgot what I wanted to say — yeah, looking at the deployment issues that you have, right? Because during deployments we often see worker apdex drop and go back up, in combination with the node scaling events that we see, so there's a clear correlation.
C: I also saw this this morning during deployments, but it's the same on API — we see the same drop on API, also going down. So it's not something which is new for web, but it's a special thing that we always have with Kubernetes, since we have API and web running there and see problems with node scaling, right? We looked into this a lot, and I think I still don't fully understand why we drop performance there, but there is something going on with scaling, at least.
A: I have a theory, but I haven't made any effort to prove it just yet. Our Apdex looks at all of our requests, and there are a lot of errors that we get during a deploy. We intentionally modify our readiness check: we set the readiness check to send 503s so that the pod gets pulled out of rotation, but Kubernetes is constantly hitting that health check endpoint.
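(The mechanism described — the app flipping its readiness endpoint to 503 so Kubernetes pulls the pod out of Service rotation while the kubelet keeps polling — looks roughly like this in a pod spec. The path, port and timings here are illustrative assumptions.)

```yaml
# Hypothetical pod spec fragment; path, port and timings are assumed.
containers:
  - name: webservice
    readinessProbe:
      httpGet:
        path: /-/readiness    # app makes this return 503 during drain
        port: 8080
      periodSeconds: 5        # kubelet keeps hitting this regardless,
      failureThreshold: 2     # so each 503 can show up in error counts
    # After failureThreshold consecutive failures the pod goes NotReady
    # and is removed from Service endpoints; no new traffic reaches it.
```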
A: We should — well, one, we need to prove my theory, because if errors aren't going up, maybe there's a different metric that we're using for errors. Maybe we were previously avoiding gateway timeouts or, you know, server-unavailable situations, and only counting 500s, for example; and if that's the case, we're not going to see that increase on our charts.
A: I don't have an issue to track that investigation, so I should create one. But yeah, that leads into the next conversation I wanted to have, about Ruby thread contention. Before I do: do we have anything further that we want to discuss about what I've shown so far?
A: Maybe I'll try to look into that, and I'll post on the issue we're using to track the migration, because that would be an interesting thing to look at.
A: Earlier, prior to me hopping online this morning, we had an incident that was created because our Ruby thread contention was saturated. And sure enough, if you look at the chart we're looking at — I'll make this bigger — we're hovering around 65-ish percent, and then we hopped up to 100 just sporadically, at some point between 10:00 and 10:10 a.m. UTC.
A: So I don't like our use of max, and I did some further investigation. Rahab found that it was specifically the blue line, cluster C, and then I delved a little bit deeper and found out that it was just one particular pod — it was actually Puma process one that was the target, on pod 5xgnc. And sure enough, that pod was seeing a ramp-up of Ruby thread contention overall, but that's going to happen, because this process is kind of shared across the entire node.
A: So of course only one Puma thread was showing a lot of saturation. I dislike the use of max as a resource for determining saturation, because we run over 100 — at this point we're over 100 pods running in this environment. So just looking at one pod and saying we're fully saturating our Ruby threads doesn't make any sense.
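(The fix being argued for — judging saturation across the fleet rather than alerting on the single worst pod — might be sketched like this; the per-process utilization metric is an assumed name for illustration.)

```yaml
# Hypothetical rules; puma_active_threads / puma_max_threads stand in
# for whatever per-process thread-utilization ratio is exported.
groups:
  - name: ruby-thread-saturation
    rules:
      # Current style: max over the fleet. One saturated Puma process
      # out of 100+ pods pins this at 100% on its own.
      - record: web:ruby_thread_saturation:max
        expr: |
          max(puma_active_threads / puma_max_threads)
      # Outlier-tolerant alternative: p95 across processes, so a
      # single misbehaving pod no longer trips the saturation alert.
      - record: web:ruby_thread_saturation:p95
        expr: |
          quantile(0.95, puma_active_threads / puma_max_threads)
```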
A: So I filed an issue, and Andrew already has an idea for this. It doesn't quite speak English to me, so I'm not entirely sure if I could tackle this particular issue, but I think it would be wise if at some point we went through all of our saturation metrics to make sure we're not erroneously sending off saturation alerts for something that doesn't take the entire fleet into account, and that we're removing outliers — where, like Henry mentioned, there was just one pod that was misbehaving for whatever reason. We shouldn't care about that.
A: If one process of one pod is failing, we shouldn't care; we should care if the entire infrastructure is saturated. And that incident's already closed, because that pod was recycled at some point. So I just wanted to share that.
B: Yeah, okay, interesting — good to know, right? This is super valuable stuff. But yeah, not ideal that one pod can cause incidents, right?
C: So, cool, yeah. I think it's important to keep in mind that a saturation metric might be just a single node acting up, right? It's sometimes misleading. So it's good if we can change this in some way, because even if there's no alert, it's still misleading — you think there's some incident or something not right, but you need to know that it could be just one single thing, and not the average of the fleet, that you're seeing there. Yeah, good finding.
A: I completely agree. So that completes everything I wanted to showcase — I don't really have anything else to demo. Does anyone have any questions about that stuff?
B: Adam, great news — huge milestone. We haven't got Graham on, but we're done over on that. That's really awesome, and our smoothest one by far, which, for the biggest one, is awesome — and also getting rid of NGINX at the same time, brilliant. So yeah, nice work.
A: All right, can you still hear me? Okay. So for this epic we still have a few issues. One is tuning the actual service, because right now we're running roughly three times as many nodes as we really should be; so, similar to what we did with the API, we just need to sit down and tune things.
A: I think we could use the API as our baseline and tune from there. So at some point we just need to do the actual work to make the changes in production. It's kind of late in the week — I would like to keep things stable for the rest of this week, considering we have Family and Friends Day on Friday — so I think next week would be a good target time to touch that, personally.
A: But if Graham wants to tackle that during his time, I have no fret about it — so let's rock it out. I know we still need to go through our runbooks and get rid of a few things; we also need to peel through various dashboards to make some changes so that we're not capturing metrics from our virtual machines, and then go through the Chef repo and clean a lot of things up. So we still have plenty of work left to do on this epic, but I'm hoping that within the next two to three weeks we could easily close it out. At the same time, we also need to remove the web deployment from deployer and patcher, which is going to be wonderful. I can't wait for that to happen, because deployments should then theoretically be roughly two minutes faster, based on the last time I looked at it — but hey, that's two minutes. So let's rock it out.
B: Nice, that's awesome. From the last rollback we saw that should probably shave a decent chunk of time off: the last one we ran would have been about 10 minutes quicker if we'd had web in Kubernetes, assuming the Kubernetes deployment doesn't extend by a huge amount. We spent about 10 minutes at the end just waiting for web, so there should be big gains there as well.
A: Yep. The one thing I do want to make sure we don't lose track of: we had at one point — or we still have — the problem where the web fleet filled up its temporary disk space with scratch files that just never got cleaned up, and we have a remediation for that in place. At some point I want to make sure that we can remove that remediation.
B: So let me put that through the multi-large working group, because that's exactly — we have a weekly agenda there. So I will just keep adding that into the known blockers. I know about it; I just don't know which team it belongs to.
B: Yeah, yeah. I know it started out as blocked, so I know it has been scheduled, and I know the team has a couple of dependencies they're working through, so I know it is on their sort of roadmap. I'll keep it on there, because, exactly, as soon as they're done with that stuff... And the reason I say keep it on there is just that we have a rolling agenda, so it's low overhead for me to track it through there until we're good.
B: Awesome. One extra thing which I think we should probably also plan to do: there's going to be a whole heap of issues that are no longer needed, now that not only is web migrated but we've also got rid of NGINX on web. So it might be nice for us to schedule in a bit of time to go through some of the infra backlog and see if we can find any obvious ones that we could just close out — particularly if there are any corrective actions.
A: So next up — I'm kind of leaning on Amy to drive this conversation, but I've already started working on the Pages migration work. I've populated the epic. Henry, I think I copied you on some message at some point in time to take a look at the epic and make sure it looks okay — and it looks like it's in an okay spot.
B: On those two blocking issues: do you have a rough sense of at what point they will become blocking? Do they need to go into the next milestone, or would that be too far away? Do we need them in the next two to three weeks?
A: My goal with the work — the way that I'm approaching this — is that I need to learn how Pages works, so I'm going to get this rolling in pre-prod first, and none of the blocking issues I've seen so far prevent me from doing that in pre-prod.
A: The blockers will come for staging, because we do have a custom header in place, and right now that configuration just doesn't get taken into account at all. Logging is also currently not working, from my personal testing — it could just be a situation where I don't have something set up properly, so that'll be important. So when we get to staging, these will be considered blocking for me; but again, they're not going to prevent me from at least testing the process and procedure of migrating.
A: My goal is to test this and make it work in pre-prod, and then use staging as a test bed for how to do the actual migration of the service, at the same time making sure it works, obviously. So it'll become a blocker at that point — I'm going to guess sometime mid-September is probably when we'll get there. I don't know how long it's going to take.
A: So I spun up at least one new issue this morning that I'm going to work through, just to make sure I'm testing everything, and then there's another clarification I need from him about one particular feature that I'm not too familiar with. So: soon, but not immediate, is what I'm trying to say. Yep.
B: Super, okay, great. I've just added those two blocking issues to the multi-large agenda for Monday, to ask if we can get them scheduled in for the next two to three weeks. So I—
B: Sounds good. Yeah, so in terms of what we've got going on at a team level: registry — well, we still have to wrap up web stuff, right?
B: We know that registry is moving forward nicely, so we'll have kind of ongoing tasks there. There could also be a little bit of work around staging — probably not a huge amount — but it's probably more that Alessio will help out on that stuff, as there are other SREs assigned to that project.
B: One thing we do have coming up — and I think it will land; it's going to get started in September, but it's just a kind of investigative thing — is that we're working with Scalability. We have a shared OKR through Q3 with Scalability to come up with an approach to migrate Redis. So what we need to do is understand the differences between the different types of Redis — we have four different types of Redis cluster at the moment — so we need to really understand, like, what is the—
B: What are the specific differences between those four? Do we need to have support for all of them, or could we somehow consolidate? And then, if Kubernetes is the solution to migrate to — which I would think it would be — we also need to work out how we would move data and how we would actually operate these. So the work will begin in September.
B: The goal for Q3 is basically to come out with a plan and work out what we would use to operate this, which cluster we would migrate first, and what might actually be involved — so probably setting up some PoCs and actually investigating that stuff. So we have an epic.
B: This will most likely kick off in early September — we're still working on details there — but I chatted with Graham about it this morning, as he's probably the most likely person to do at least the first leg of this, since registry is almost certainly still going to be going and we've got Pages in progress as well. As we go through this project, I think we can rotate through and work out who's most interested in various pieces — but just a heads-up that Redis will also begin moving soon.
C: This is going to be interesting because of storage being involved, so I'm looking forward to that one.
B: Is that a "you're looking forward to watching it happen", or...?
C: It's really interesting to me, I mean, yeah, because we don't have much with storage yet, and I think there are several solutions — looking into operators, maybe, that are out there for it. It would be really interesting to see what we learn there.
B: Yeah, exactly. So I think September and October will be very much about trying to learn as much as we can. The big outcome that we'd like to have from this quarter is: okay, this is the Redis cluster we'd want to begin with; this is the rough operator we would go with, if we want to use one; and also to identify what would be the big charts changes that we would need for this. And then Q4—
B: At the moment, this is literally as much as exists, and then early September we'll actually kick things off and work out what issues need to happen and what this actually starts looking like.
B: No, actually not at the moment — we went a totally alternate way with that, which was really good. Maybe there were three; I'm sure there were four.
B: Multiple ones there, let's say, as a general rule. I actually don't know any details at all, so yeah — I'm also looking forward to learning a lot about Redis in the next few months.
B: Okay, it may still be needed, because I would assume we're going to need to migrate one by one, so there are likely at least a couple of Redis clusters that are going to stay in their current state until next year.
B: So I don't think it's super immediate stuff, but it's definitely a bit different from any of the others we've done, so we will need to work this out. The other one—
B: Theoretically, if we got to a stage where Pages is completed and Redis didn't require all of us to be working on it — the other one that is ready to go is Praefect. And I think it's going to be a little bit similar, in that at the moment there is nothing known to block the migration, so it'd be a case of us going in and trying to do it. But I feel like there are probably enough unknowns about Praefect as a service on its own that we'll likely hit some interesting stuff. So that one could be similar, in that there might be quite a bit of up-front work for us, but there could also be quite a big gap in the middle if we have to wait for development support or whatever we need to actually get through to completion.
B: Yeah, I think that's true, yeah. So that's going to be the other one: once we complete Pages, we can work out whether Redis will be in a place where we want to just jump in and start work on it, or, if it's not, we can take a look at Praefect.
B: Oh yes, that's a good one. So Graham has been thinking about this stuff. We've been talking quite a bit about getting into the habit of having tech debt running alongside all of our work. We're coming out of the stage where it's one great huge migration, and we're now, I think, getting into a different mode of working where we're going to have a lot of this stuff — there's still lots of stuff we can migrate; Gitaly's still at HAProxy.
B
There's
lots
of
other
pieces
coming
up.
So
what
I
would
like
us
to
get
into
the
rhythm
of
is
having
always
having
on
the
board
two-
maybe
probably
two,
but
at
least
one
take
that
item.
That's
like
a
smallish,
distinct
piece
of
work
that,
when
someone
like
either
has
time
or
if
you
want
to
pick
up
something
slightly
different,
you
can
just
go
ahead
and
and
pick
up
so
graham
has
been
cutting
down
the
issues
for.
B: Having some of these things in place will unblock us on lots of other pieces. Graham actually had quite a long list in the APAC demo last week — that's just below on the agenda. So there are a couple of pieces here that Graham's going to put into issues and get on the board. There's certainly one I think he'll just dive in and pick up, but for any of you — they'll be on the board.
B
If
you
want
to
take
them
and
and
start
moving
on
this
stuff,
they
are
going
to
be
written
up
as
issues
and
as
anything
else
having
this
stuff
in
place
will
mean
we
can
upgrade
home
and
actually
like
start
moving
on
some
of
like
some
of
the
other
pieces
sort
of
related,
but
slightly
separate
as
well.
Graham's,
also
looking
at
what
might
be
a
good
path
to
like
auto
deploy
chart
bumps.
A: I guess I have two questions. Right now we're still using the home-built tooling that Jarv and I first worked on when we created this repository — a suite of shell scripts. Our Tanka repository, our gitlab-helmfiles repository, and this one, for that matter, all operate slightly differently across the board.
B: Yeah, so let me just — the two that I know of are migrating the fluentd deployments to Tanka, and the planned move; as I said, it's a Tanka one.
B: Okay, yeah — there's some capitalization missing on that word. So those are the two that we have on there already. And I think those are the only two tech debt ones we really have, or, like, tooling improvements.
B
So
what
I
would
suggest
for
all
of
you
is
there
are
two
routes
really
for
us.
This
tech
that
stuff
to
be
like
super
well
serviced,
one.
I
know
graeme
and
alessia
were
talking
about
already
a
little
bit
this
morning,
which
is
ideas
for
how
we
can
actually
bring
in
the
kate's
workload
workflow.
So
it
sits
more
comfortably
alongside
auto
deployments
and
they
maybe
don't
block
each
other.
Quite
as
much.
I
haven't
seen
the
output
of
that,
but
I
know
they
were
chatting
about
it.
B: So those things are definitely linked, and I know that Alessio is trying to think through that stuff and has been talking with people. The other one is more the "as someone who works on this stuff, I can see some improvements" route: how do we surface this tech debt into our workflow?
B
It's
not
reserved
for
graham
so
at
the
moment,
he's
kind
of
pushing
ahead
on
some
of
these
things,
but
I
would
suggest
for
any
of
you
like
get
these
things
cut
down
into
like
if
they're,
like
smallish
tasks,
that
we
can
complete
a
piece
off
in
a
few
days,
then
it's
just
simply
a
matter
of
us
deciding
like
which
ones
do
we
want
to
do
the
moment.
B
I
don't
think
we
have
any
super
clear-cut
tech
de
issues,
but
correct
me
if
I'm
wrong
on
that,
but
they're
all
kind
of
semi-larger
projects
like
we
would
want
to
change
the
entire
repo
to
do
this
stuff.
Those
are
a
little
harder
to
schedule
because
we
need
to
like
pause
on
a
migration
to
to
get
to
those
things,
but
if
we
can
get
all
of
this
stuff
down
into
like
okay,
we
just
need
to
change
that
piece.
We
do
this
piece
and
we're
working
towards
something
better.
That's
really
easy.
B
B: Yeah, so at the moment I'm going on the assumption that they can be cut down into issues. If we have pieces where that's not possible, then let's definitely work out how that fits together.
A
And
the
second
question
I
had
was
you
know
soon.
I
think
pages
is
the
last
front
end
piece
which
means
therefore,
when
for
deployer
you
know
we
do
giddily
prefect
and
then
we
do
our
fleets.
Kubernetes
is
just
that
one
stage
in
the
future,
because
pages
will
eventually
go
away.
Those
virtual
machines
will
go
away.
A
Is
there
any
desire
to
try
to
figure
out
a
different
way
to
do
deployments
where,
instead
of
I
mean
I
guess
we
have
to
keep
ansible
in
place
because
we
have
giddily
and
prefect,
but
for
other
things,
like
our
pre-deployed
migrations
use
the
deploy
node,
we
could
instead
use
the
task
runner,
which
is
soon
to
be
renamed
inside
of
our
helm,
chart
to
perform
that
style
of.
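(One hedged sketch of that idea — running pre-deploy migrations as a one-off Kubernetes Job in the chart's task-runner image rather than from the deploy node. Every name here, including the image reference, is a placeholder assumption, not the actual setup.)

```yaml
# Hypothetical one-off Job; image, command and names are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-deploy-migrations
  namespace: gitlab
spec:
  backoffLimit: 0              # fail fast; a human inspects and reruns
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrations
          image: registry.example.com/gitlab-task-runner:latest  # placeholder
          command: ["gitlab-rake", "db:migrate"]  # assumed entry point
```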
C: Because I think this is the other piece that maybe also needs to still be around, or be moved to some equivalent place — I'm not sure about it — but if you need to do this hot-patching thing, I think it was running on the deploy node; I'd need to check that. But anyway, there should be solutions for that too, maybe.
B
Yeah,
yes,
definitely
we
should.
We
should
question
all
of
these
things
for
sure
marian
mentioned
in
slack
the
other
month
that
maybe
it's
actually
kate's
workload.
That
is
the
thing
instead
of
deployer
and
it
just
calls
out
to
deploy
if
we
happen
to
have
a
italy
update,
for
example,
so
we've
got
lots
of
once
we
get
through
pages
yeah.
We
got
lots
and
lots
of
other
improvements.
We
could
make.
B: I think there are maybe random things, but there's nothing cohesive — there's no real vision of the stuff, no. So I think it's kind of in people's minds, but it's not been properly thought through yet.
A
Okay,
well,
personally,
I
think
after
pages
is
done
since
kubernetes
is
that
one
stage,
then
maybe
I'll
revisit
at
that
moment
in
time,
creating
the
necessary
epic
to
figure
out
what
we
could
do
then,
unless.
B: —something happens beforehand. Awesome, yeah, that sounds like a good plan. I'm hoping by then as well that a lot of the things we need to do for supporting rollbacks, and also a lot of the stuff coming from the staging work, might be in place, because I think having the answers to those things will feed into this. I think we're at the point, after Pages, where we could make a whole new version of the pipeline which is Kubernetes-first, right? So if we can see good gains from that, then that might be a good thing for, like, a Q4 kind of project.