From YouTube: 2021-07-07 GitLab.com k8s migration EMEA
Description
No description was provided for this meeting.
B
C
Hello. So I don't really have anything to demo. I've been trying to catch up this week, and then yesterday I spent a lot of time working on random stuff, just struggling to catch up. So I wish I had something to show, because I've been trying to work on some dashboards for Consul and nginx, but that work kind of got held up due to vacation and such, so I don't really have anything fun to show right now.
C
D
C
Two years and such, yeah. And we're going to take anything out of standard out from our pods and send those to Stackdriver. Currently Stackdriver is still not the greatest solution, but it's better than, you know…
C
D
Yeah, it's the… standard error is where the nginx error_log is going at the moment.
C
It would be very difficult to put those in Kibana, so I thought Stackdriver would be a good start, if we could figure out the appropriate messages that are useful. Maybe we could parse just those particular items out, but I figured that's going to be a solution for the future. I'm just trying to get…
C
Somewhere, yeah, those…
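To make the Stackdriver side concrete, here is a minimal sketch of pulling the nginx ingress controller's stderr entries with the google-cloud-logging Python client. The project name, container name, and filter values are assumptions for illustration, not the real GitLab.com settings.

```python
# A minimal sketch, assuming the google-cloud-logging client library.
# The project and the label values below are illustrative, not real.
from google.cloud import logging

client = logging.Client(project="example-gke-project")  # hypothetical project

# Cloud Logging (Stackdriver) filter: stderr output from the
# ingress-nginx controller containers only (labels are assumptions).
log_filter = """
resource.type="k8s_container"
resource.labels.container_name="controller"
logName:"stderr"
severity>=WARNING
"""

# Print the most recent matching entries.
for entry in client.list_entries(filter_=log_filter, order_by=logging.DESCENDING):
    print(entry.timestamp, entry.payload)
```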
C
D
Yeah, would you add a tooling link, one of those tooling links, to the nginx dashboard once you've done that? So you…
C
That was waiting for me to finish that project, because right now no logs are showing up, which is a little bizarre, because I know we are getting errors; like, pre sees a lot of errors come through, a decent amount considering it's an idle system, so I thought I'd at least see a few in staging, but I don't see anything at all. So I think my query is incorrect.
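One way to debug a query that returns nothing is to rebuild it clause by clause outside the dashboard, so it is obvious which clause drops the count to zero. A rough sketch with the elasticsearch Python client (7.x-style call); the endpoint, index pattern, and field names are guesses for illustration, not the actual staging setup.

```python
# A rough sketch, assuming the elasticsearch Python client (7.x).
# Endpoint, index pattern, and field names below are illustrative guesses.
from elasticsearch import Elasticsearch

es = Elasticsearch(["https://logs.example.internal:9200"])  # hypothetical endpoint

# Start loose and tighten one filter at a time.
query = {
    "bool": {
        "filter": [
            {"term": {"kubernetes.container_name": "controller"}},  # assumed field
            {"match_phrase": {"message": "error"}},
        ]
    }
}

resp = es.search(
    index="pubsub-nginx-inf-gstg-*",  # assumed staging index pattern
    body={"query": query, "size": 5, "sort": [{"@timestamp": "desc"}]},
)
print(resp["hits"]["total"])  # if this is zero, remove a clause and retry
```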
D
C
We also see a bunch of stuff related to SSL certs, because we're not configuring an SSL certificate for our TLS backend, since everything is already encrypted coming from our Google load balancer. So it's just kind of a missing configuration item that the nginx controller periodically complains about, something that we can safely ignore because we're not having to process that request.
C
A
C
C
D
Where is the config for this? Is this in Terraform, and you're just saying, take… how do you do it? What does that side of it look like?
C
C
B
C
C
…nginx-ingress, which should capture all objects that are part of the nginx ingress, including the controller and maybe the default backend. I'm not entirely sure; the default backend shouldn't be receiving any traffic anyway, so we should never see any data from those at all, maybe the pod startup and shutdown logs as they periodically…
B
D
Remove, remove. So, just like, if you go to the query and you click on it, just start removing bits, right? Or you can hash them out. You can break them out onto different lines and…
E
C
F
C
D
C
D
C
C
B
Yeah, that's something I'd love to chat about; we can catch Henry up. Do we know what, like, what's the status of that one?
C
Last I saw, he's got a merge request that will bump the number of nodes that we allow in our node pools. So let me see if I can find the issue that I was commenting on.
C
Okay, so the very last comment thread that we started, I have yet to read Henry's response. But prior to us migrating to Kubernetes, we only had 19 nodes for these three shards that he's currently investigating, 19 total, and currently we've maxed out at 30 nodes, and this is 10 across each zone for the urgent CPU-bound and low-urgency CPU-bound, and his merge request bumps it up to a maximum of 120 and 90 nodes respectively.
C
D
E
C
C
Because currently we scale when the CPU exceeds 450 millicores, which is not even a full core, and we just did this quickly when we first migrated the service. We had an issue in the backlog, it might actually be this one, yeah, it is this one, to revisit our HPA targets. So I think we need to start looking into how we scale, what metrics are important to us to determine whether or not we are suffering, you know.
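For reference, the standard Kubernetes HPA scaling rule is desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), which is why a 450 millicore target, less than half a core, scales out so aggressively. A small sketch with purely illustrative pod counts and utilization:

```python
# Sketch of the standard Kubernetes HPA replica calculation, using the
# 450m target mentioned above; the pod counts and usage are illustrative.
import math

def desired_replicas(current_replicas: int, current_millicores: float,
                     target_millicores: float) -> int:
    # desired = ceil(current * currentMetric / targetMetric)
    return math.ceil(current_replicas * current_millicores / target_millicores)

# 20 pods averaging 800m of CPU against the current 450m target:
print(desired_replicas(20, 800, 450))   # -> 36 pods
# The same load against a hypothetical 900m target:
print(desired_replicas(20, 800, 900))   # -> 18 pods
```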
C
I just don't know what those metrics should be. I could sit here and utilize apdex all day long, but if something else, such as, I don't know, Ruby thread contention for example, starts to appear again as a saturation metric… I don't know if that's one that we account for on Sidekiq, but that's another one we need to look into. Perhaps we should build a list of stuff we want to monitor.
D
The thing that you really want to do is pick one and focus on one, because otherwise there are just too many variables and it becomes really difficult to kind of… And so when we were doing the Sidekiq work, remember when Sidekiq was in a really weird state, and like different workers, and the one number that we used.
D
It was just the amount of time that the jobs were taking to run, right? And so that's kind of like the ultimate measure, and we could see that, and we kept tuning up and down and up and down until we found the right number for each of the different fleets. We came up with a different number for each, but the thing that we were optimizing on was just that single number, and I don't think you want to have too many different variables, because it just becomes mind-blowing to try to go through them.
D
C
C
I'd have to figure out what we could do with our Helm chart. I suspect we should be able to, because I think we do an override for one of them.
D
It's fine to just have the one, and then, you know, you don't need to get fancy and do them alongside one another. You can just do them… at least, you've got to give each one like a day at least, right? Because you can't say this one's running during the day and this one's running in the middle of the night. So they've got to have the same kind of…
C
Yeah, these… so there's another one of those things, and they just play all day long. They sleep for five minutes and then go immediately back to play mode; it's ridiculous. Awesome. I could have sworn that we had an override somewhere for our scaling.
B
C
D
D
…millicores number until you start seeing a degradation on the, you know, and see how high you can take that, and just push it up and up and up, and then when it starts degrading, that at least gives us kind of the top point, and then…
D
So what you'd want to do is… I think, I mean, this is kind of one of those things where it probably depends, once you start looking at the data. But if you're using long enough time periods, like 24 hours, you're going to have millions and millions of jobs in there, and so you probably get enough of an average that you can just use all of them.
D
If you wanted to do shorter periods, you've got to be really careful that you use kind of the same workloads, you know, the same… But I would say if you use long enough periods, like 24 hours, you could probably just say, you know, on average these jobs are taking less time. Particularly because it's urgent CPU-bound, they should all be relatively short jobs.
D
That is what apdex is, but I would specifically use the latency and not the apdex, because apdex is good for monitoring; it's not good for what you're trying to do here, right? Because remember, I think the bucket for the apdex for urgent is one minute, right? So the only resolution that we have is: it's below a minute or above a minute. And here it might go from like 50 seconds to… well.
D
It won't be that; it'll be like four seconds to three seconds, or maybe four seconds to five seconds. The apdex is still going to be a hundred percent, but, you know, things are going… It could go from four seconds to eight seconds, it's running at half the speed, and the apdex won't be impacted, because apdex is a very broad measure, it's not fine-grained, right? So that's why we want to look at the actual latencies, and for that you have to use ELK and not Prometheus.
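A small sketch of that resolution problem: with a one-minute satisfied threshold, the apdex stays at 100% even when job latency doubles, so only the raw latencies show the regression. The one-minute threshold comes from the discussion above; the formula and sample values are illustrative.

```python
# Sketch of the apdex blind spot described above: a standard apdex with a
# 60-second satisfied threshold (the value mentioned for urgent jobs).
def apdex(latencies, satisfied=60.0, tolerated=240.0):
    s = sum(1 for l in latencies if l <= satisfied)
    t = sum(1 for l in latencies if satisfied < l <= tolerated)
    return (s + 0.5 * t) / len(latencies)

before = [4.0] * 1000   # jobs running at ~4s
after = [8.0] * 1000    # same jobs running at ~8s, twice as slow

print(apdex(before), apdex(after))                          # 1.0 1.0
print(sum(before) / len(before), sum(after) / len(after))   # 4.0 vs 8.0
```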
C
So we have two apdex items, at least for a specific queue: the queueing apdex and the execution apdex, yeah.
D
So it's also, even in the short detail, that's still based on the Prometheus histograms. So there's a bucket that goes from like 10 seconds to 30 seconds, right? And so if everything's in there, like it's running at 11 seconds, and then we change something and things get really slow and now they're running at 20 seconds…
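The same effect, sketched with buckets: an 11-second run and a 20-second run land in the same (10s, 30s] histogram bucket, so the recorded distribution cannot tell them apart. The bucket boundaries below are illustrative.

```python
# Sketch of histogram-bucket resolution loss: 11s and 20s observations both
# fall into the same (10, 30] bucket, so the stored counts are identical.
from bisect import bisect_left
from collections import Counter

buckets = [1, 5, 10, 30, 60, float("inf")]  # bucket upper bounds in seconds (illustrative)

def bucketize(latencies):
    return Counter(buckets[bisect_left(buckets, l)] for l in latencies)

print(bucketize([11.0] * 100))  # Counter({30: 100})
print(bucketize([20.0] * 100))  # Counter({30: 100})  -- indistinguishable
```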
D
D
D
C
C
D
And then if you go look at one of the shard catchall SLI, index, blah blah blah, on the right-hand side where it says details, under observability tools, scroll down a little bit further, scroll down, because that should be scrolly, yeah, and then just choose percentile latency aggregated, Kibana catch-all percentile latency aggregated, click on that. What's the difference between split and non-split? Split will split it according to some dimension, so it'll give you five charts.
D
The non-split will give you one series. The split one: for each thing that we have in Kibana, we have a prominent series, or a dimension, that it splits over by default. So what have we got here? Is this running?
D
D
Now, the really dumb thing is, you can't do that; it gets upset if you put them in the wrong order. Let's see, maybe it doesn't, maybe they fixed that; it used to, which is really ridiculous, but maybe they fixed that bug. There's no point in having that down to… oh, this is over five minutes. Okay, so ideally you want to… wait, this is… no, that's too high. You want that to be like a minute or something like that.
C
D
Yeah, yeah, and you can see, those movements are very fine, but with apdex, or with latency histograms in Prometheus, those all fall into one bucket. So you don't have any resolution; it just looks like a flat line, right? So that's why you should always use ELK for this kind of optimization and not apdex or Prometheus.
C
D
C
B
C
D
B
C
D
D
D
But we can start right now; you can push that change in and push it up to, you know, plus 50 millicores or 100 or whatever, see what happens, and then you can look at the last 24 hours of data and the next 24-hour day, and whatever happens with that other discussion, at least you're starting to collect data.
C
Yeah, do we have, whether it be in our service catalog or… because the only place I know to find where we would violate our metrics for this is our developer documentation, where jobs have to complete or be picked up within a certain period of time. Do we have that defined elsewhere?
D
It is in the sidekiq file in the metrics catalog in the runbooks repo. It might be a little bit deep in the code, it might not be super obvious, but let me just take a look, see if I've got it here.
D
So if you just look at the sidekiq helpers libsonnet, this is where you would add those extra shards that I was talking about. You can just copy them and then it'll generate everything. So we've got the urgent queueing duration, 10 seconds; execution duration, 10 seconds; low urgency, 60 seconds and 300; and throttled doesn't have a queueing duration, because it can queue forever, and it's got an execution duration of a maximum of five minutes.
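Summarising the thresholds as read out above, assuming the 60-second and 300-second values map to queueing and execution respectively, in the same order as the urgent ones; the structure is only an illustration, not the real metrics-catalog format.

```python
# Compact restatement of the SLO thresholds mentioned in the meeting.
# The dict layout is illustrative; the 60/300 queueing/execution split for
# low urgency is an assumption based on the order the values were stated.
SIDEKIQ_SLO_SECONDS = {
    "urgent":      {"queueing": 10,   "execution": 10},
    "low_urgency": {"queueing": 60,   "execution": 300},
    "throttled":   {"queueing": None, "execution": 300},  # no queueing target
}

for shard, slo in SIDEKIQ_SLO_SECONDS.items():
    print(f"{shard}: queue <= {slo['queueing']}, run <= {slo['execution']} seconds")
```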
D
D
Yeah, but there will be knock-on effects, right? So, you know, this is what we are giving, but I think a lot of people expect those things to be a lot quicker than those thresholds. So if we push it up so that everything's taking that long, we'll have to have a lot more nodes, because our throughput will be so much lower. So, you know, don't deliberately bang up against the top of these, and also remember you'll be looking at p90.
D
I would say that you want to be focused on when the performance degrades from the current performance by a certain margin, rather than those thresholds. I see where you're coming from with that, but it sort of makes me feel a little bit queasy, you know, if we take things that were running in under a second and now they're running in 10 seconds and say it's fine. Realistically, I don't know if people will be so happy with that. But definitely, you know.
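A sketch of that relative approach: record a baseline p90 and flag when the new p90 degrades by more than a chosen margin, rather than comparing against the absolute SLO thresholds. The margin and the sample numbers are purely illustrative.

```python
# Sketch of a relative-degradation check: compare p90 against a recorded
# baseline with a tolerance margin, instead of an absolute SLO threshold.
import statistics

def p90(samples):
    # statistics.quantiles with n=10 returns nine cut points; the last is p90.
    return statistics.quantiles(samples, n=10)[-1]

baseline = [0.8, 0.9, 1.1, 1.0, 1.2, 0.7, 0.95, 1.05, 1.3, 0.85]   # seconds, illustrative
candidate = [1.0, 1.1, 1.3, 1.2, 1.5, 0.9, 1.15, 1.25, 1.6, 1.0]

margin = 1.20  # allow up to a 20% regression from the current p90
degraded = p90(candidate) > p90(baseline) * margin
print(f"baseline p90={p90(baseline):.2f}s candidate p90={p90(candidate):.2f}s degraded={degraded}")
```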
B
D
Yeah, I would use the relative, you know. So collect the stats now, or, you know, you can obviously go back in future and use that. I've got to run.
D
C
B
Cool. So, in terms of a rough plan then, let's go back. Do you feel comfortable with the proposed changes that Henry's got now to resolve the saturation alerts? Yeah.
C
B
E
B
C
The web stuff, and then we do all the tasks. We can do this in parallel, you know. If Henry keeps on with the Sidekiq work that he's doing, let's rock it out, and Graham can continue with the web stuff, and I'll sit here and do the nginx and Consul work, and incidents, because that's all I do.
C
B
Okay, would it be useful to have Graham jump in? Like, so, Graham is… so, a couple of things actually on the web side: Graham's going to try and record something this week and do a kind of async demo, so we can see some of the web fleet stuff without us having to find another… well, you two don't have overlap, but also to avoid another meeting.
B
So we'll see how that works. He is in the process of starting to move towards staging. He'll have some questions for you about the strategy that we used for the API stuff, so expect some questions on how we actually split traffic and kind of progress there. But we do have that; that's kind of also a point where we'll know, like, we can do the evaluation and we can see what work we have now.
B
At that point, we could certainly make a point of deciding, like, we round up on observability, we'll have readiness reviews, we could get back to being comfortable. It does feel like we should just bring in this Sidekiq retuning and get this back, because the numbers are so different, right?
C
I see. There should be nothing wrong with us doing this work in parallel, because there's so much work with both of these tasks anyway. You know, I say keep Graham working on the web stuff, and if he's got free time he could probably help with the Sidekiq tuning as necessary as well. It'll be a little difficult for him to contribute towards the tuning work, because it's always… oh.
C
B
C
B
These cats, kids, it's just too amazing. Do you want to open up a new epic to follow on from this one, or do you want to continue with this one for the tuning?
C
C
I don't think that is necessary. We've…
B
…requests, yeah, we can just…
C
B
I would quite like us to… I think we should try and break that issue down. If we can, let's get it so that it tracks the steps we're taking. So maybe that issue becomes the kind of "resolve the saturation alerts" issue, and that can be closed out, and we can have, however many…
C
Okay, I'll try to create the necessary issues or break out stuff related to that, if possible.
B
C
B
C
C
B
C
Okay, I would imagine that we just have so many pods running that those dashboards are churning through a couple hundred pods per second, and then because we're churning…
B
B
C
B
Actually, it was trying to climb onto the back of your chair and he just swiveled. But yeah, you're totally right, and actually that helps a lot with that. We should definitely do the retuning in the future and get these numbers back, and, I suppose, give observability a little bit of a heads-up that, you're right, in the future we're going to scale and we're going to need to be able to have dashboards…
B
…for these large numbers. That seems to be a new problem, so, awesome, cool. Is there anything else that we should go through?
C
C
Due to the incident we had today, I'm kind of thinking that we should probably take a good look into our Kubernetes configurations. Jarv has already created a corrective action for us to tackle, which I think would be wise to pull in soon, if not sooner. But I know at one point our network policies related to our GitLab Rails applications, we have those kind of duplicated between our Sidekiq and Rails configurations, and I know we have an issue for this somewhere, and then we've also got…
C
There is a secret that we had to create, and kind of not in the fun way, when we were migrating the API, that Distribution fixed; just another tech debt item related to our Kubernetes configuration. I'm wondering if we should start looking at what tech debt configuration items we have that we should probably try to pull in soon. That way we prevent ourselves from running into incidents like the one… the incident that we ran into today should have been avoidable, but it was kind of our fault for having something pinned unnecessarily.
C
B
C
B
Yeah, yeah, I think that makes total sense. So this actually ties in fairly neatly with OKRs as well. So I think for Q3 we can start thinking about, like, what other things like this do we want to tie into kind of hardening off the cluster, yeah, going back and revisiting things.
C
B
B
B
B
Cool, okay, yeah, we should get that… we should get ourselves out as well.
C
Oh, that's a lot. Okay, so I just have a single question then, an open question, just because I was thinking about this the other day and I'm kind of curious; maybe Jarv would have a better perspective on this, but what kind of things do we need to start thinking about when it comes to migrating…?
E
C
B
…bit might actually be the interesting bit, yeah. He's saying that he thinks that bit might be the bit that's the trickier piece, the way the routing works, because it's going to all kinds of different, like, sites, he was saying. Yeah, yeah. He was saying that that might be the bit that's actually quite unique about Pages.
C
The only concern that I have currently is that right now we are missing a little bit of visibility with the service. Say, for example, last week when we started getting a lot of errors because a certificate expired: we don't get those requests coming into, or logged into, our metrics at all, and we don't see those requests coming into our logs either, which is kind of unfortunate.
C
…operates, which, there is an open, active issue that has been open for that, but I don't think it should be a blocker. I just think it's something we should be aware of. But outside of that, I'm not really familiar with the Pages service at all, so I think it'll be a fun one to move.
B
Absolutely agree, yeah. Well, let's get web through, definitely through staging, and maybe if we have a bit of a lull whilst it's sitting on canary, then definitely in my mind Pages just comes right afterwards and hopefully isn't a major project. But actually, one thing that might be interesting is: do we know already what observability we're missing? I'm wondering if we may be able to try and get some of those issues scheduled with observability beforehand.
C
C
I'm sure there might be other potential scenarios where we might be missing metrics or logging. Okay, so this issue hasn't been assigned yet; people are just kind of…
C
I didn't want to add "blocker" to this, because this is already a situation that we're blind to currently, so it's not going to change anything. And I would hesitate… if they fixed this right at the same moment that we migrated it, and then we all of a sudden see a bunch more errors, I'd be freaking out. So…
B
Yeah, that's true. Okay, cool, awesome, all right, that's good to know, nice. So yeah, we should probably see if they do start working on it soon, and maybe schedule around that a little bit. Awesome, sounds good. And then, in terms of the Q3 OKRs, it'd certainly be interesting to have a think about. So there is the kind of issue, it's a little slow-moving at the moment, so I don't… it's not going to be Q3, which is Gitaly on Kubernetes; there's a round of testing that would need to happen.
B
It's kind of expected there'll be some changes that need to happen before Gitaly could go to Kubernetes, but we don't know what those are at the moment. But in Q3 there are a couple of other things that we should consider whether we want to do: we could look at HAProxy, we could look at secrets, we could look at Redis, or any other items along those lines, you know, alongside kind of tuning and cost saving or whatever the other hardening pieces…
B
…we want to do. So see what you think would be valuable for Q3.
B
In terms of assessing what's going to be the most valuable stuff, yeah.
C
Yes, because HAProxy would be an interesting one and should be relatively simple to an extent, because it's a stateless service, but we have a lot of tooling wrapped around it, yeah. So a lot of tooling would need to be touched in a very significant manner, deployments included, addressing abuse as well, like a lot of…
B
B
C
…been tossed around a lot amongst various infrastructure team members. And Redis might be an interesting one; I haven't read a lot about running Redis inside of Kubernetes, but it might be an interesting use case. But stuff like Gitaly, I worry about.
B
B
Yeah, yeah, yeah. I'm going to add some comments onto the "Gitaly should run well in Kubernetes" issue, and I think that's hopefully going to be the place where we can start to work out what we actually need to test for.
B
B
B
Cool, but yeah, have a think about it. And I mean, there were other things we talked about when we went into tech debt paydown the other month, you know, like whether we Helm, or, you know, there are other bigger pieces there that we need to deal with. So definitely open for, kind of hoping for, opinions on what we should pick up.
C
B
Sweet, sounds good, awesome, thanks for this. This was, like, a super, super interesting discussion. I will get this video uploaded so that Henry can catch up with it particularly, and say there's loads of great stuff dropped in the beginning about how we approach…
B
Yeah, shadow, and then yeah, we'll see. Good luck with the logs.