From YouTube: 2020-10-22 GitLab.com k8s migration EMEA
Description
No description was provided for this meeting.
B
Really, I mean we have a curfew, 9 p.m. to 6 a.m.; no one can be on the street. Yes, and everything is shut down now, and it really sucks with my wife's birthday on the 15th of November, and I haven't bought anything yet. I'm going to have to get artsy and crafty.
A
Etsy.com, they ship anywhere, right?
A
...are set up appropriately. I'm currently working on developing the change request for alt-ssh in production today, and I doubt I'll finish it today, but my thought is that it will be a multi-day change request. We've already got the servers added in HAProxy; we're just waiting to ship traffic over to them.
B
But I think for git-ssh we need to finish git-https first. Actually, honestly, all the issues we're having right now are around Rails, and since Rails isn't part of GitLab Shell, git-ssh is probably safer, but I don't think I can do both at the same time. That's what I'm saying, so I think we're going to have to. I can pick it up as soon as git-https is done, or we'll decide to shelve git-https.
B
Yeah, yeah, I know, that's fine, I guess. Can we go over the blockers and talk about what blockers are left, also for git-ssh specifically?
A
I haven't added anything to our document about blockers.
D
On the build logs one: it was in yesterday's release, so it is running; I'm just adding another comment. They have a few issues that they're working through.
C
Don't get too focused on the actual milestone; that is only a target for them as they go along. So as soon as we hit 100% traffic there, we can call this unblocked, which will probably happen next week, and then they have a lot of cleanup tasks.
B
So we closed this issue in favor of the new issue, right, for the service. So let me replace the link.
B
I think we'll need to talk to Jason to see what he's thinking for this right now and how long it's going to take.
B
I'm going to say we take the Prometheus metrics for GitLab Shell off the list, Skarbek. Do you agree?
A
My thought was that maybe we could create a one-off container that we add that runs alongside gitlab-shell. I don't know if this is supported in our Helm chart yet, but Ben just recently fixed the mtail metrics for gitlab-shell, so we now have a fancy dashboard.
A
A dashboard that he stood up, so we're at least getting some visibility into things here. I'll post the dashboard here real quick, and this actually has me mildly excited that maybe we could just use that same configuration, open up mtail, create a sidecar container, and at least get metric data off of it. I'm not set on this; this is mainly a question.
A
Maybe if we have time we can do this alongside, if we're waiting on something, but otherwise the only view we have into gitlab-shell is parsing our logs, which is not the easiest thing to do, but possible.
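As a rough illustration of the sidecar idea being floated here, this is a minimal sketch of what an mtail sidecar next to gitlab-shell might look like, assuming the logs are shared over a volume. The container names, image references, and paths are placeholders, not the actual chart change, and whether the chart supports extra containers is exactly the open question above.

```yaml
# Hypothetical mtail sidecar alongside gitlab-shell, sharing the log directory via
# a volume; image references and paths are placeholders, and it assumes the same
# mtail programs used on the VMs can be mounted at /etc/mtail.
containers:
  - name: gitlab-shell
    image: gitlab-shell                      # placeholder image reference
    volumeMounts:
      - name: shell-logs
        mountPath: /var/log/gitlab-shell     # assumed log location
  - name: mtail
    image: mtail                             # placeholder image reference
    args:
      - --progs=/etc/mtail                   # mtail programs (the VM configuration reused)
      - --logs=/var/log/gitlab-shell/*.log
      - --port=3903                          # mtail's default metrics port for Prometheus to scrape
    ports:
      - name: metrics
        containerPort: 3903
    volumeMounts:
      - name: shell-logs
        mountPath: /var/log/gitlab-shell
        readOnly: true
volumes:
  - name: shell-logs
    emptyDir: {}
```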
B
Yeah, that's why I kind of want to take it off the highlight list. I mean, maybe we can even leave the blocker label on, but I don't see this as a blocker for us going to canary or starting to take a percentage of traffic.
B
I think we need to make a decision on whether it will be before we... So maybe we just put it on the epic and make a note that it's something we should decide on later, before we go fully to production.
C
How about this: because this depends on others, we figure out the severity and priority as if it were a blocker, put that on the issue, and get them into the prioritization process there. We can continue our work separately from that, because we aren't sure whether it will block us immediately, but we do want to have it regardless. So we can move past it, but get the rest of the team to actually work on it. Does that work?
A
I guess if we discover that they can't get this done in a reasonable time, would creating our own sidecar container be a reasonable sidestep that gets us this information?
D
Agreed. The final one I added on: I feel like it's linked to something, but I'm afraid I haven't yet managed to link it. But this was the epic that Jason pinged us on in Slack the other day.
B
Yeah, I mean this is something that, again, really needs to get done, I would say. It should be a blocker for the front end, but maybe not a blocker for the current work we're doing. We're handling it okay for now, not having the ability to filter out different log files in our log outputs, but I think we would like to have that.
A
I agree. I think this is very painful for investigating incidents, and it's just adding a lot of garbage inside of Elasticsearch, so it's irritating the observability team, but it should not be a blocker for today's work.
B
Yeah, I just don't know how many other things are on Robert's plate, I guess, so whether this is his highest-priority thing or whether it's more of a background task.
C
He came in with an epic in the channel, and that's okay, but at the same time that question is not actually asked in the epic. I cannot find that comment anywhere now, because we passed it, and it's going to be impossible to actually track down what's happening. So it's okay to ping people in Slack to highlight something, as long as things are also written out in a persistent location, for discoverability. Same thing for the GitLab Shell item: we want to write down why this is a severity 2.
B
I have a new one, which I just opened up right before this last incident. This is related to all my investigation on why we're a little bit slow on pod start, and one thing I found is that it looks like it takes two minutes for database load balancing to come online when a pod starts, which means that we use just the primary database for two minutes, and only after that do we start using the replicas.
B
This definitely would slow us down. I don't know yet whether it's the only reason why we're slow on pod start, but it's something we need to figure out. From what I see in the logs, it says there are no secondaries available; it looks like it did the query and the Consul lookup returned a host list of length zero.
B
That's a little strange to me, because Consul is running in a separate namespace, in a separate set of pods, so it's not going away during an upgrade; it's still there. Could it be just a race condition? I'm not sure; we need to dig into this a bit more.
B
Maybe we're not polling frequently enough, so we need to poll more frequently on start so that we can get the replicas initialized right away. I also think there's an argument here that we really shouldn't be ready until we have the replicas online. I know that, for safety, maybe you want to revert to the primary if the replicas aren't there, but for GitLab.com that makes absolutely no sense, because if our replicas go away, we're completely down; the primary database can't handle all of the load anyway.
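For context on the two-minute window and the polling idea, this is roughly the shape of GitLab's database load balancing service-discovery settings, shown here in gitlab.yml form; the nameserver and record values are placeholders, and the interval is the periodic re-resolution being discussed.

```yaml
# Sketch of the service-discovery block for database load balancing; values are
# placeholders. Replicas are found by resolving a Consul DNS record, and `interval`
# controls how often Rails re-resolves it, which is the "poll more frequently on
# start" knob mentioned above.
production:
  database_load_balancing:
    discover:
      nameserver: 127.0.0.1                 # local Consul DNS agent (address assumed)
      port: 8600
      record: db-replica.service.consul     # hypothetical Consul record for the replicas
      record_type: A
      interval: 60                          # seconds between lookups
      disconnect_timeout: 120
```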
B
So if you have thoughts on this, just add them to the issue, I guess, and we'll try to flesh this out a bit. As far as priority and severity, it seems like it's pretty high, but maybe it's a little bit early to say.
C
I would argue that we put this at the highest severity, but only once we actually get to the bottom of it, or not quite the bottom, but get a better idea of what's happening, because right now, reading it, I'm really not sure what's happening, or whether it's even GitLab the application to begin with.
B
Let me see if I can simply check whether I'm able to do DNS queries before Rails can, so I'll play around with this and we'll see if it belongs to the application. At least for the application, though, I think there's an argument that we should have a configuration flag that says: hey, don't be ready until database replicas come online, right?
C
So, sure, continue investigating this, but Amy, maybe reach out to Rachel as soon as possible to see whether Craig can start looking into this Monday his time, because by the time he's offline Jarv comes online, so we at least have a better idea of what is happening and can escalate this further up.
B
Okay, let's just go quickly through the other issues. The jemalloc one: this was just deployed to production right before the last incident, which had Skarbek and me freaking out because we thought it was related, but it wasn't; everything seems fine there. We're going to demo, actually we're going to do this live; we'll just see what the memory impact was on Sidekiq after this change. The memory impact on staging was quite good.
B
I saw a very noticeable decrease in memory utilization, so that hopefully is promising. Cloud-native build logs we discussed; the webservice generator pattern we discussed; Pages we didn't discuss, but I think we know what's going on there.
B
This readiness check one is in my court and is what I'm currently investigating, so just leave this assigned to me right now. I added this delay, so right now we're delaying 60 seconds before a pod can be marked as ready.
B
It's not yet clear if that actually helped at all, so I think we have multiple problems, this load balancing thing being another one. I might close this for now until we have more information. This one we discussed: unstructured logs. Is there anything more to do with this one?
D
We don't have it on our list either, so I think it's probably worth us looking at. I don't know what the impact of this is or when it is going to hit.
C
Then remove the Kubernetes blocker, because we then have an option. This needs to happen for self-managed, so I can't force people to implement this; we've already shown that it's important, right?
B
And this proxy request buffering issue: we haven't decided yet what we are going to do here. Right now it's turned off globally, but the discussion here was about whether we want to turn it on for web and API; it's turned off for git-https.
B
Some people are saying, okay, we have Cloudflare in front anyway, so it's not going to be necessary. I don't know; I would say it's at the right priority. Let's leave the blocker label on.
D
It would be good if we could agree on whether we need to do something or not. I think that's probably the bit that I'd like to see next, because if we don't, then great, we can close it, but if we do, then it would be helpful for at least someone to know they have to do something.
B
Yeah, I think it would be. I guess for the service split that Jason is working on, I assume we are only using a single nginx controller. Maybe it would be nice if we could toggle proxy request buffering for each different path, so I can link it to that issue. I'm not convinced we can leave it turned off for the front end yet, but it's just not something I'm thinking about right now.
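A sketch of what per-path toggling could look like, assuming the ingress-nginx controller, which supports a per-Ingress annotation for request buffering on top of the global ConfigMap setting; the Ingress name, host, service, and port below are placeholders, not the actual chart objects.

```yaml
# Hypothetical Ingress covering just the API path, with request buffering turned
# back on for it while the controller-wide default stays off.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitlab-api                   # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/proxy-request-buffering: "on"
spec:
  rules:
    - host: gitlab.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: gitlab-webservice    # placeholder service name
                port:
                  number: 8181
```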
B
Yeah, this is all pods, not just one? I see, okay, this is looking across all of these.
B
No, this is just looking at the webservice; Sidekiq, yeah, I don't know what's going on there. We can take a closer look.
B
Okay, what I wanted to show for the next part of the demo is deleting a webservice pod in production.
B
And it's part of the demo, yeah. Let me just prepare this.
B
Okay, so we're looking at us-east1-b. Looking at the pods in us-east1-b, you can see that we have gitlab-shell. These guys aren't taking any traffic yet, correct, Skarbek? Right, not even for alt-ssh.
B
This takes a long time, because we have this very long wait, this very long blackout window, like two minutes, so the kubectl delete pod doesn't come back right away. It takes about two minutes, so while that's happening, in another window we can take a look.
B
Okay, so you can see that this guy is coming up right now.
B
So this is an artificial delay that we injected into the Kubernetes configuration to say: don't pass the readiness check until 60 seconds have passed, so wait 60 seconds before passing the readiness check. The thought here was that we were passing the readiness check too soon, and because of that we were sending traffic to the pods too quickly, which then was causing our apdex to drop.
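The delay described here corresponds to an initialDelaySeconds on the readiness probe; a minimal sketch follows, with the port and other timings assumed rather than taken from the production chart.

```yaml
# Minimal sketch of the injected 60-second delay on the webservice readiness probe.
readinessProbe:
  httpGet:
    path: /-/readiness      # GitLab's readiness endpoint
    port: 8080              # assumed Puma port
  initialDelaySeconds: 60   # the artificial delay before the pod can be marked ready
  periodSeconds: 10
  failureThreshold: 3
```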
B
Yeah, so this issue is the canonical issue that I'm using for troubleshooting this. It does have a recent update here; I'm just keeping the description up to date, and these are the two things that we tried so far. I'll update this with the load balancing thing as well.
B
So we terminated a pod. I wanted to show first the... now I lost it.
B
So you can see here the 50th and 95th percentile of latency for Rails, and you can see it just drops off. So this is nice: there's no increase at the end, right, as it gets terminated. One thing I was concerned about initially was that terminating pods maybe isn't safe.
B
So you can see, oh, this looks kind of bad, but actually, if you look at the... let me go to Discover; hopefully this works.
B
Because when you terminate a pod, we first send this SIGTERM signal, which then tells Puma to enter this...
B
I can't remember if it's the grace period or the blackout window, but whatever we call it, it's like the blackout window, which says that it will return 503s for the readiness check for however long, while still processing connections. So this is basically that window being shown here.
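In Kubernetes terms, the sequence described here looks roughly like the following; the grace period, container name, and port are illustrative assumptions, not the production settings.

```yaml
# Rough sketch of the termination path: SIGTERM starts the blackout window, the
# readiness endpoint returns 503 so the pod drops out of the Service endpoints,
# and the pod is only killed once the grace period expires.
spec:
  terminationGracePeriodSeconds: 120   # must exceed the blackout window plus drain time
  containers:
    - name: webservice                 # placeholder container name
      readinessProbe:
        httpGet:
          path: /-/readiness
          port: 8080                   # assumed Puma port
        periodSeconds: 10              # keeps probing during shutdown, sees 503s, and so
                                       # removes the pod from the endpoints while it drains
```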
B
The same for Workhorse. Now, what's more interesting is looking at the pod that's coming up, so let's take a look at that guy.
B
Workhorse, so yeah, you see here that when the pod starts we have these latency spikes in the beginning, and these will eventually settle.
B
Okay, so let's take a look at this: this is Workhorse, and we'll just take a look at all of Kubernetes.
A
So yesterday, when that incident flared up with a deploy that went out, I did some...
B
Yeah, and that's kind of what we see there, the issue that we're seeing. So there are kind of two things here. One is that when we do a deploy, our latencies are really bad, and they're much worse than if you just terminate a single pod, and I think the reason for that, for whatever reason, we don't understand it, is that it looks like we're terminating old pods before the new pods are ready.
B
If I just terminate a single pod, I see a little bit of a latency spike when it first starts, but it gets much more pronounced when we do these rolling updates, where we are surging a new set of pods, waiting for those to come online, and as soon as they come online we terminate the old pods.
B
It looks to me, because the latency always happens on the new pods, that maybe too much traffic is shifting before the new pods are ready, and that's what we're looking at right now.
B
Yeah, so we have two knobs here. One is the surge, where we use the Kubernetes default of 25%. So what happens is it will create 25% more pods, wait for those to be ready, and I'm pretty sure that means passing both health checks, for Workhorse and Rails, and as soon as they're ready it starts terminating the old pods. And we set it so that the maximum unavailable number is zero, so it should never...
B
It should never terminate more pods than the minimum number of replicas that we have set, which is 50. What's interesting is that it seems like it's terminating pods before the new pods are ready to process traffic. It could very well be related to this load balancing thing; having a sudden surge of requests going to all of these pods that aren't using the load-balanced replicas does sound pretty nasty.
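The two knobs mentioned here map onto the Deployment's rolling update strategy; this sketch just uses the numbers from the discussion (25% surge, zero maxUnavailable, 50 replicas), with the Deployment name as a placeholder.

```yaml
# Rolling update knobs from the discussion: the Kubernetes default surge and
# maxUnavailable pinned to zero so the replica count never dips below 50.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-webservice     # placeholder name
spec:
  replicas: 50
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%           # create up to ~13 extra pods and wait for them to become ready
      maxUnavailable: 0       # never go below the configured replica count during the rollout
```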
B
I think for the primary we might see it, I don't know. I think the bigger issue, though, is that they're slow. I don't think that they're timing out, but I think we should see a small spike in database calls on the primary.
B
Yeah, so it's like we spike up new pods with this surge, like 25% more pods, and then Kubernetes removes the old pods, and it seems like we are at reduced capacity at that point, even though it's keeping the replica count constant, keeping it at 50. It seems like the new pods that are coming up aren't quite ready to take on traffic before it starts terminating the old ones, even though the readiness check is passing, so this is what we're kind of unsure of, right.
A
I hate Stackdriver, but we should be able to see where a pod transitions into a non-ready state when it's being terminated. We should also see when a pod goes from ready, or excuse me, running but not ready, and we'll see a transition from not ready to ready.
B
So we can, I can kind of show you. So here's the pod that we just launched; first it runs its configure container.
B
This is why I opened that issue. It's like, oh, look at that: no secondaries are available for load balancing, so it's using the primary here. It's still coming up.
A
All of this looks normal, but Stackdriver will tell us what the Kubernetes API does. It'll tell us that a pod is in a ready or not-ready state, and it will tell us when it transitions between those states, for all pods. It'll take a long time, because we are running 50 to 100 pods, but we should be able to prove whether or not things are being removed from or added to this endpoints controller too early. Okay.
B
It's not healthy until here, but the load balancer, like the replicas, aren't put in.
B
So I don't know, I think we need to dig into this a bit more, but for now this is really difficult, because when I have 75 percent of the traffic going to git-https, occasionally we're seeing these apdex alerts on deployments. So I've scaled that back for now.
D
I mean, I think we should try and find the cause of this, but if it holds us up a lot, what would happen if we rolled out through the fleet at a smaller percentage? The impact would be smaller, I presume.
B
Yeah, so right now I've scaled back our traffic to around 30%, so there's much less of an impact during deployments, but I think we should still keep some of the traffic going to the cluster, because this is our best way to troubleshoot these kinds of problems. So I think we'll just stay where we are now.
D
Cool, okay, makes sense. And then we've got two things to investigate, right? One is: do we trust the readiness check? I think that's probably number one, like does that...
D
Does a passing readiness check really mean ready? And then the other one, the...
B
Yeah, without this artificial delay that delays the readiness for a minute, it will pass as soon as we bind to the port, as soon as Puma binds to the port, right away.
B
Yeah, yeah, we can do this easily on staging; we can just do a HUP. So you're thinking that when we do a HUP, which eventually gets translated into a SIGINT for Puma, we check whether we see the same database load balancing behavior. Yeah, I can try that on staging.
B
Yeah, great. Graeme's theory was that there has to be some difference between spinning up a brand new pod and having a VM where you're just HUPing Puma, and maybe there's just some slow warm-up. Is it a cache? Is it DNS? Is it something that is causing us to be much slower on start than we're used to?
B
Cool, that's pretty much all I have. Skarbek, do you have anything else?
A
In the meantime, I'll keep preparing the necessary change requests, get stuff ready, and make sure the issues are in tip-top shape, but yeah, I think we're in a good spot.
D
Cool, sounds good. Can we just have a very quick look and check that we have the right things on the board, the Delivery board?
A
Amy, unrelated to this work, but I did unassign myself from that container registry deployment.
D
I think so. We've got a few things still in progress, which I think looks correct. Do we need to do anything with this Action Cable issue?
B
I think that still needs to be blocked, because we're waiting on the charts issue for splitting out services.
D
No, we don't, actually. No.
D
Okay, cool. All right, that one's making progress, good. And then, are these the right things to have up next? I guess that's...
A
Definitely. It sounds like we could delete the 1274 one, the mtail sidecar container, based on what we just discussed.
D
Cool. What would probably be useful, actually, going back: it might be good to get some of your input, if you've got the odd half day, just on things like documenting how to debug multi-cluster, if we just put in some thoughts about what we're hoping to achieve with that, so we've got some rough exit criteria.
A
Jarv has already created some excellent documentation. I don't know what else to add at this point.
D
Sorry, I meant on the actual issue: is this list of things everything we need? Because I guess it could just be something we could run forever, right, keep documenting forever, but to give this some sort of framing.
B
Yeah, I think we can fold that into just a general "how do you troubleshoot using logs", whether it's the Kubernetes cluster or VMs, because it's the same index; most of the application logs look the same, except for Kubernetes you have a few extra fields like the pod name, the image version, things like that.
B
Yeah, I mean, I don't know, Skarbek, what do you think? The next thing I have to do is the architecture update to include the zonal clusters. When I'm done with that, is there anything more that you think we should do?
A
I don't know; that's why I didn't have anything else to add to that issue. Yeah.
D
Okay, makes sense. I think the big thing I've got in my mind is around making sure the cluster and region names are really clear, so that when we see an issue on one, for example, people can really quickly tell what that is and how broad the impact is going to be from it, like, is it a region?
A
I think all of our infrastructure engineers are pretty clear on what a region or zone is, so I think they would know precisely what they need to troubleshoot if they come across that style of situation. If anything, Jarv, maybe send the final set of docs over for review by the current on-call, just to have another set of eyeballs look at it and to make sure we address any questions they may have. Since we're familiar with this stuff, I don't know what else to add, but they might have additional questions.
D
Happy Halloween, enjoy your vacation, Skarbek, and thanks for the demo, Jarv; that was really, really good to see and really interesting. All right, catch you later.