From YouTube: 2020-11-20 GitLab.com k8s migration EMEA
A: Yeah, that's what I thought. And yeah, I had a couple of questions about the epic, the observability and troubleshooting epic. I was hoping to just get a brief view on it, and also how we want to go about getting the API epics set up and ready. Or what do we want to pick up next, I guess, is probably the better question.
B: We saw some strobing light there for a second, Skarbek.
B: Let's see. So, good morning. We were just saying how we don't really have much, or anything really, on the agenda today. So we're just going to go through blockers, and Amy wanted to talk about the monitoring epic, and also discuss what's going to happen next for API.
B: So right now we're all waiting to see the result of the generator pattern for traffic splitting. I pinged Jason yesterday on this, I swear I did. Maybe it was on the issue and not the MR.
B: That's fair, that's fair! So maybe if I get a heart or a thumbs up or something later, that'll make me feel good, and it'll also help, probably, so we can test this out.
B: Cool. Unified structured logging.
B: This becomes a big issue for the frontend. We decided it wasn't a blocker for GitLab Shell. It looks like Robert is still owning it, and let's just check out one of these issues here.
A: 13.7, yeah. Jason said these two are both in 13.7 at the moment. Is that okay? How much does that hold us up?
B: I think it's fine, because I think we can probably go to canary before we get into trouble, and I don't anticipate us going to canary before 13.7. So I think we're okay.
A: What do you think about the third one? So B5, is that... yeah, B5, so the proof of...
B: Yeah, I don't think it's necessary if we use the community-contributed wrapper; I think that'll be sufficient for us.
A: Does that stay within our unmodified Helm charts?
B: So yeah, this is a single logger container watching a shared volume, yeah.
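For illustration, a minimal sketch of that single-logger sidecar pattern using the Python kubernetes client: the app writes logs to a shared emptyDir volume and one logger container tails them. The image names and log path are hypothetical, not the actual chart values.

```python
# Minimal sketch of the single-logger sidecar pattern; all names are
# illustrative assumptions, not values from the GitLab Helm charts.
from kubernetes import client

shared_logs = client.V1Volume(
    name="shared-logs",
    empty_dir=client.V1EmptyDirVolumeSource(),
)
mount = client.V1VolumeMount(name="shared-logs", mount_path="/var/log/app")

app = client.V1Container(
    name="gitlab-shell",  # hypothetical app container
    image="registry.example.com/gitlab-shell:latest",
    volume_mounts=[mount],
)

logger = client.V1Container(
    name="logger",  # the single logger container watching the shared volume
    image="registry.example.com/logger:latest",
    command=["tail", "-F", "/var/log/app/gitlab-shell.log"],
    volume_mounts=[mount],
)

pod_spec = client.V1PodSpec(containers=[app, logger], volumes=[shared_logs])
```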
B: I think, yeah. So as far as the unmodified Helm chart goes, I think we're going to be fine. Now, as for the environment, type, stage, and shard labels: Andrew is working on deciding those. I think he's kind of waiting on us for this right now, and he just joined, so you can talk to him.
B: Yeah, so I did submit an MR for that. Did you see it, Skarbek, or not? Maybe I lost this in my...
B: ...there, just to make sure. Yeah, I think, okay, I just wanted to give you a quick look. Let me add this.
E: I mean, presumably I'll take a look at it. But obviously there are two things: one is allowing deployment tags, and the second is coming up with some sort of schema where we can say, yeah, this deployment tag is the Sidekiq catch-all shard, or whatever. But I'll take a look at that.
E: There's a lot of copy and paste, and a lot of things that can go wrong, and when they go wrong we won't really know; they'll just sort of disappear. If this stuff were more generated then you could do something, but as it is, I don't know. A lot of this I would consider to be debt.
E: I guess... and I don't know how much of it you guys have already got, and whether you feel it's manageable, or whether you feel it's kind of tipping over the edge.
B: Yeah, I don't know if we're tipping over, but we're close to the edge; at least I'm close to the edge. I don't know, man. For this we're setting a deployment pod label, which is actually not the name of the deployment, right? It's the name...
E: And it needs to match between canary and the main stage so that we can compare those two pods, yeah.
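To illustrate why the label has to match: a hedged sketch of the canary-versus-main comparison this enables, querying Prometheus over its HTTP API. The endpoint, the metric name, and the stage label values are assumptions for the example; the comparison only lines up if the deployment label value is identical in both stages.

```python
# Compare a deployment's request latency between the canary and main
# stages; matching on the shared `deployment` label is the whole point.
import requests

PROMETHEUS = "http://prometheus.example.com"  # hypothetical endpoint

QUERY = """
  avg by (deployment) (rate(http_request_duration_seconds_sum{stage="cny"}[5m]))
/ avg by (deployment) (rate(http_request_duration_seconds_sum{stage="main"}[5m]))
"""

resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": QUERY})
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    # A ratio well above 1.0 means canary pods are slower than main-stage pods.
    print(series["metric"].get("deployment"), series["value"][1])
```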
B: I'm sure we could... I don't think so. Well, maybe we could, but it gets complicated with Sidekiq, right? Because we have all of these shards which aren't defined...
B: ...the release, yeah. Okay, we can take a look at that.
E: ...back to the original plan of using the names, and not using, like, the prefix on the names, and bringing those in line. Yeah, major.
E: Kind of close. What I want is to put it in the metrics catalog. I don't know if you can roll back to that little demo I did, probably two weeks ago, last week, where we just kind of define: these are the deployments we've got, or whatever we want to call them. We might even call them deployments, but maybe let's just stick with the pros for now, you know.
B: And we don't have that now without creating another label. And I wonder whether calling it the deployment is confusing if we're not going to actually change the name of the real deployment in Kubernetes, right? Like, maybe we should, but I don't know what else we would call it.
E: So I like that: the Sidekiq catch-all deployment would be called, like, sidekiq-catchall and not gitlab-... you know.
B: Okay, let me just write some notes down here: so, the same between canary and the main stage.
C: Let's say we discover that it's not possible to remove the release name from the deployment names. If we redid the canary environment such that the only thing that's different is the namespace, we'd still have the stage as canary, but at least the names of the deployments would be the same. Would that be enough?
C: But we use static IP addresses for a lot of things, so at least we wouldn't have to rebuild a lot of things. It would just be the naming convention used for that release, and I don't know what negative impact that might have, so we'd have to make sure that's compatible with the way we deploy things.
E: Yeah, I think we should do that anyway. We've got that on the GitLab VMs at the moment. Although there are a few things that don't, generally every single thing has got a type, a tier (the tier we're throwing away), a stage, and a shard. A lot of them are pretty boring, they're just the default shard and the main stage, but knowing that everything has got...
E: ...those dimensions makes everything much simpler, because you don't have to say: well, give me the thing, you know, group everything by shard, and then also deal with the things that don't have a shard. It just makes a lot of the monitoring and observability stuff, well, actually kind of anything that uses labels, simpler, because you get two groups instead of three groups.
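A small sketch of that "two groups instead of three" point. If every series is guaranteed to carry the shard label, one grouped query covers the whole fleet; without the guarantee you need a second arm for the shardless series. The metric name here is made up for illustration.

```python
# Hypothetical metric; the point is the shape of the PromQL, not the name.

# With guaranteed labels: one grouped query covers everything.
with_guarantee = 'sum by (stage, shard) (rate(some_requests_total[5m]))'

# Without the guarantee: an extra arm for series that have no shard label,
# which is exactly the third group being complained about above.
without_guarantee = (
    'sum by (stage, shard) (rate(some_requests_total{shard!=""}[5m]))'
    ' or sum by (stage) (rate(some_requests_total{shard=""}[5m]))'
)
```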
E: Shard is what we know. So the question there is: will we ever have two deployments in the same service that are not just dealing with different shards? Obviously that's the pattern used in Sidekiq.
B: The thing is, Sidekiq already has, maybe correct me if I'm wrong, Skarbek, but don't we also have a deployment for every Sidekiq shard? That's correct. So we would have multiple. So for this work that we're doing to segment the web service, you would have a different deployment for every shard. So in essence, to me it sounds like shard and deployment are pretty much the same for...
E: Yeah, it's the same, yeah. Oh right, you're actually saying shards! No, because we've got a service name, but we just need to be able to, yeah, we need a way of... The problem is actually stage, sorry, because we're struggling with this stuff. I suppose it's just that naming is hard. But what about if we just focused on fixing the names to be the same? I kind of feel like that would be the best next, the first best next step.
E: The best first next step is to just have those the same, and then I suspect that will be enough to go forward with.
B: Okay, moving on. This issue is closed, so we can remove it for now.
B: Upgrading the NGINX controller doesn't address this; it still needs to be addressed. I'll respond to Josh.
A: Okay, great. Do we need to add in any other blockers in here? Have we got any other... how are we getting on, I suppose, with the git SSH stuff? Do we need anything at the moment to investigate that further?
B: Well, I don't think so. I mean, things could come up today, but I think we're still on track to probably have this rolled out to production this week. What do you think, Skarbek?
C: I am caught up on the situation, and the merge request that you put in, I merged that, and it's completed going through. So it's just a matter of starting again and re-evaluating as necessary, just to make sure that we're not causing issues again. So I think we're not blocked; it's just a matter of take three, I guess.
B: Yeah. Yes, Andrew, were you following the git SSH degradation issues?
E: Only in the lightest sense.
B: Though the one thing we discovered is that we have this really useful proxy metric, which is the request duration for the SSH backend in HAProxy. This is what we alerted on, but we don't have it dashboarded anywhere. So I think one of us should... do we...?
E: Where is it, load balancer? No... oh, wait, maybe I'm talking nonsense. Okay, maybe I'll take that back. So we have a load balancer SSH SLI, okay...
B: Yeah, because I think this would have been a fantastic metric to track for the rollout. That's just something that we missed. Honestly, it's probably our best view right now into git SSH, because the latencies that we're tracking right now are only to Rails, not end-to-end git SSH, but the load balancer gives us that.
E: Is it still worth using, like, a Pushgateway? Because it's actually slightly horrific how little information we have about that, right? It makes me cringe, yeah. And one option would potentially be to put a Pushgateway in and have those... I understand why we don't scrape them, but maybe push to, like, a Pushgateway or something.
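A sketch of the Pushgateway idea for endpoints we deliberately don't scrape: the load balancer, or a cron job on it, pushes a few metrics instead. This uses the real prometheus_client API, but the gateway address, job name, and metric are made up for illustration.

```python
# Push a metric to a Prometheus Pushgateway instead of being scraped.
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
up = Gauge(
    "haproxy_ssh_backend_up",  # illustrative metric name
    "Whether this load balancer considers the SSH backend healthy",
    registry=registry,
)
up.set(1)

# Each push replaces the previous sample for this job on the gateway,
# and Prometheus then scrapes the gateway as usual.
push_to_gateway("pushgateway.example.com:9091", job="haproxy-ssh", registry=registry)
```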
B: Yeah, I think, yeah.
E: So, in the short term, I'll add that HAProxy bucket, the histogram, into the SLI. That's a very easy short-term win that we can add right now, and I'll do that straight away, pretty much right after this. Cool.
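For reference, the usual shape of that short-term win: deriving a latency quantile from HAProxy histogram buckets for the SSH backend. The metric name below is an assumption; the real one is whatever the HAProxy exporter emits, and the real definition would live in the metrics catalog.

```python
# A p99 latency SLI from histogram buckets; the metric name is hypothetical.
P99_SSH_LATENCY = """
histogram_quantile(
  0.99,
  sum by (le) (
    rate(haproxy_ssh_request_duration_seconds_bucket[5m])
  )
)
"""
```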
A: Nice, yeah. I think a lot of stuff's already in progress. I'll find out what we can expect to see during December, because I think there will be some more stuff.
B: Yeah, okay. So I think that's pretty much it for reviewing blockers.
A: Awesome. Yes, so we have an in-progress epic on the observability stuff. I think this was a bit of a generic bucket, so it would be interesting to see if we feel like we've got the right things in there, and whether we want to keep this one moving in progress.
B: Sure, let's take a look.
B: I'm not sure what I want to do yet for this "create tooling to drain clusters" one. We already have command-line tooling for this; it's just that we don't have ChatOps tooling. It needs...
B: Yeah, I guess we could add it. The thing is that I'm very nervous about advertising draining an entire cluster as an option, because it's going to be catastrophic, because we don't scale very fast. So this would only be very much an emergency-type scenario. I could put it in the SRE onboarding checklist of things that you need to know, but I'll need to put a big warning there that we should really never do this unless we intend to cause degradation.
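For context, a very rough sketch of what that drain tooling boils down to, written with the Python kubernetes client: cordon the nodes, then evict pods through the Eviction API so PodDisruptionBudgets are respected. This is illustrative only, not the actual runbook script; a real drain also skips DaemonSet and mirror pods, and as the discussion says, running it cluster-wide is an emergency-only move.

```python
# Illustrative cluster drain: cordon every node, then evict every pod.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Cordon: mark every node unschedulable so nothing new lands on it.
for node in core.list_node().items:
    core.patch_node(node.metadata.name, {"spec": {"unschedulable": True}})

# Evict: unlike a plain delete, an eviction is refused if it would
# violate a PodDisruptionBudget.
for pod in core.list_pod_for_all_namespaces().items:
    eviction = client.V1Eviction(  # V1beta1Eviction on older client versions
        metadata=client.V1ObjectMeta(
            name=pod.metadata.name, namespace=pod.metadata.namespace
        )
    )
    core.create_namespaced_pod_eviction(
        pod.metadata.name, pod.metadata.namespace, eviction
    )
```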
B: Probably not. I mean, I think what people do, what SREs do, if they're in an incident and they need to know something, is they just grep through the runbooks directory, and hopefully they find what they need. I mean, this is the idea here, and I shared this with Brent recently in our one-on-one; I told him...
B: I mean, this also kind of ties into: we need to get the Kubernetes logs into Elasticsearch. Mikhail was working on that, and I forget where we left it. Did we leave it at wanting to use a sink, or are we going to modify...?
C: Okay, I'll create an issue, and if you know of the issue that he was working on, we'll link them together so he can connect everything.
B: I know that there is an issue specifically for this. I'll look, I'll try to find it. I don't know why; it should be linked to this.
B: No, that's... oh no, that's the single-folder one, yeah; that's a different one. This is the one.
B: Okay, I think that's pretty much it for the observability stuff. Is there anything else you wanted to look over, Amy?
A: I don't think so. Is that all stuff that we want to kind of focus on now? Let's just say, this kind of epic we can make as big as we want to.
B: Yeah, I think, for the stuff I'd like to do before we move to the next thing, which is API, I think probably...
B: ...the "revamp Kubernetes metrics" one, and let's figure out the tooling one, and then what Skarbek just brought up, which is the Kubernetes logs. But I think maybe we can keep this open while we work on the API. Or would you prefer... is that okay? Yeah.
A: Nice, great. And then...
C: I have a quick question. I have an issue on my to-do list to figure out how to remove a lot of junk logging out of the GitLab Shell pods before it makes it into Elasticsearch, and I'm really struggling to get a solid amount of time to work on this.
C: As far as I know, we're not blowing up the index sizes, but the number of events that's coming in is pretty high. I don't know of us causing a problem, but I wanted to prevent it if we could. So that's...
E: Something where you could ask Mikhail for some assistance.
B: No, fluentd won't change; for Elasticsearch it won't change. We'll still need to monitor standard out from each container, and we'll still need to do things in fluentd like redacting tokens that the application is unable to redact, like requests to NGINX that contain the private token, things like that.
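To make the redaction point concrete, a sketch of the kind of scrubbing fluentd has to keep doing before logs reach Elasticsearch, expressed here in Python rather than a fluentd filter config. private_token is the real GitLab query parameter; the log line and placeholder are illustrative.

```python
# Scrub token values out of log lines before they are indexed.
import re

PRIVATE_TOKEN = re.compile(r"(private_token=)[^&\s]+")

def redact(line: str) -> str:
    """Replace any private_token query-string value with a placeholder."""
    return PRIVATE_TOKEN.sub(r"\1[FILTERED]", line)

print(redact("GET /api/v4/projects?private_token=abc123 HTTP/1.1"))
# -> GET /api/v4/projects?private_token=[FILTERED] HTTP/1.1
```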
B: Definitely, we'll definitely need to keep fluentd. What it will allow us to do is send sshd logs to a dedicated index. But if we're saying the log volume isn't that bad anyway, then I would say it's lower priority, and we can wait and not worry about it right now.
A: Awesome, cool. So then the other thing is: how do you want to progress with the API?
B: I think, I guess, Skarbek and I will just have to flip a coin on who's going to do the first cut of fleshing out the epic, opening, like, creating all the issues. Once we have that, then we'll start like we did before, which is going to be with the production readiness review, and, you know, start spinning up the infrastructure for staging and all.
B: Not a problem. You know, I like to think we're in really good shape, because we've learned so many lessons from git HTTPS, which also has Rails. But who knows what surprises we're going to run into. I think a lot of it is really just going to be starting, as a baseline, from the git HTTPS Rails config, and we'll go from there.
B: I would say so, yeah. I think, yeah, for building it out, yeah, for sure you can start fleshing that out now.
A: Awesome. And is that the plan, to do that one? Is that the next piece of work, or would we want to look at the WebSocket stuff?
B: Yeah, that's a good point. I completely forgot about WebSockets, so yeah, I think next we should do WebSockets, because it's smaller. And so as soon as the chart stuff lands, then maybe that'll be our first test of it: we'll split off WebSockets, we'll move WebSockets as it is right now to the new pods that'll be servicing those requests, and then we can very carefully enable Action Cable, and hopefully we won't have any S2 incidents.
B: Okay. Is that something that they should write up themselves, or...?
D: No, no, not us. We should put that request to them and ask them to work on it, and we can help out, obviously. But we should... two incidents already is more than enough, yeah.
B: Yeah, actually, another... maybe the better way to go would be to do WebSockets with just Action Cable first, to completely isolate it. Like, you know, God forbid we prevent someone from using the interactive terminal, but we could keep that on git HTTPS for now.
B: Okay, yeah. So, Amy, I think that's a good point. I think we should probably do that before API. Maybe we can do the readiness review in parallel, but we'll probably, you know, use that as the test of the new charts.
D: Who can engage with the team to get them to start working on it?