From YouTube: 2021-01-14 GitLab.com k8s migration EMEA
B
Cool, so let's see how we're doing. Logging: Skarbek, thanks for adding the comments and opening up the new issue we didn't have.
B
So we have this new issue, which is the gitlab logger issue 3, so I'll follow up on that and see if we can get it prioritized.
B
So we have quite a few issues open related to the labels. They all seem to be waiting on this chart update, which still has a 13 milestone. Another one.
B
Cool, so they all seem to be related to, or all waiting on, this being added in.
D
Yeah, I'm working on this one. We had one merge request that was merged in, but we need to touch a few more individual charts. I've got two other merge requests that are in review, and then there are, like, 10,000 other charts I still need to touch. So I'm going in order of what we use inside of gitlab.com; hopefully I'll be able to get what blocks us done quickly, and then I can just pull in the other charts that we need to change to complete this particular issue over the course of time.
B
Great, and then that hopefully also unblocks you, Andrew, on your things.
D
Well, we'll still need to update our own Helm chart that we pull into the gitlab-com repo, and then we need to add the actual labels. So there are at least two more MRs that need to be finished up with the Distribution team before we can do that. But yes, that's precisely what we need to get done.
B
Okay, is there anything we can do to help with that work?
D
Not at the moment, just time and effort really.
B
In amongst all those, we've got Craig's stuff here; he's working on the so-called useful set of metrics. Is there anything we need to...
B
I guess, Jarv and Skarbek, are you aware of anything additional that we need to request for this?
E
This is assigned to Skarbek. You're doing this, right? This is assigned to you.
B
Other things as well, I was thinking. Sorry, this one... the one above this one, this one here.
B
And then we've got this one from the blockers list, which is "allow setting load balancer type for webservice". I was just checking in. Yeah, still a blocker, still a blocker for bypassing nginx.
E
Yeah, so once we have this deployed to production... the plan is to get this working in the non-prod environments and then move it to production, hopefully next week. After that, we're basically not relying on nginx anymore; we'll just disable nginx, and then we'll be able to switch to using the version of the charts on master, which does the nginx ingress controller upgrade. But at that point, since we won't be using nginx, it won't matter at all. That's the plan.
B
Okay, so if we don't get this type setting, how far can we get through that plan?
E
The charts change has already merged. I ran into one issue, so I created a new MR today just to fix that, and hopefully that gets reviewed today by the Distribution team. If not, I expect it to get merged early next week, since people are out tomorrow for Friends and Family day. And then after that, I've been kind of, like, using...
E
Let me link to the delivery issue for this, because I'm working off of this "test bypass nginx for websockets" issue.
E
So I would say the delivery work is tracked in here, and I added a comment at the end of that issue today to give an update on where things stand. I expect that... well, I'll show this at the demo, but we have the ability to bypass nginx. So now it's just a matter of hooking in HAProxy so that we can use it: instead of the nginx ingress controller, use the new service endpoint.
B
Okay, cool. So, sorry, on this load balancer type: we can do all of the work ourselves, is that right?
B
Cool, nice. And then we've got this other one about allowing the option to override the unicorn monitoring service when...
E
So let me share my screen here.
E
I'm not sure I follow. How will we know?
E
The thing is, we're already setting this to five seconds on gitlab.com, and we have been for a year. So this is like a no-op change; it's just a matter of making it so we don't have to change the default in the charts. The problem here is that the application defaults to 10 seconds for virtual machines. We override it and set it to 5 seconds, because the Omnibus package allows you to do that. The charts don't allow you to override the default.
E
What I'm hoping to do here is just change the default in the application to five, and then we don't have to think about this anymore.
A
Slightly silly question: why are we worried about something called unicorn-underscore-anything at all?

E
That's actually a really good point, and I didn't even think of that. It might just be reused now or something, but yeah, if that was the case...
E
I kind of assume it's being reused, but it's a pretty good point, I'll look into that. Yeah, good catch. I didn't even think that this might not be used anymore.
B
Cool, okay. And then Pages is progressing.
B
Oh nice, thanks for adding that, Marin. The change issue for the migration in staging is open.
C
But I don't know how much you've followed: the team is looking to run a rake task, because they're unsure of the impact if they put it in as a background job. So that's...
C
Need
someone
to
help
them
out
with
just
like
randomly
running
array
tasks
in
staging,
so
they
can
see
some
impact
and
in
production
as
well
partially.
C
They
claim
that
that
shouldn't
be
the
case,
because
that
was
already
asked,
but
they
obviously
can't
claim
with
any
certainty
until
they
actually
start
seeing
a
larger
volume
of
data
flow
through
gotcha.
E
While I'm sharing my screen here, let's just take a quick look at the labeled issues. I think we already talked about these. This one we can remove, and then... yeah, I think we're good.
B
Cool, so over to you, Jarv.
E
So here you can see that we have the GitLab webservice websockets service. Now we have three services for webservice: we have git, web, and websockets. Git, just to remind everyone, is used for all git-over-HTTPS requests and also for the internal API endpoint for git-over-SSH. And web is the catch-all: anything that doesn't go to git and is routed to the cluster will go here. Right now that's just websocket connections for the interactive terminal.
E
This
will
eventually
move
to
websockets,
as
we
like
start
changing
our
aj
proxy
backends.
To
point
to
this
you
can
see
it's
not
a
cluster
ip,
but
it's
a
load
balancer
and
it
has
an
external
ip,
which
is
actually
an
internal
ip,
which
is
what
will
tell
hj
proxy
to
use
one
of
the
major
differences
between
what
we
had
previously
and
what
we're
going
to
have
is
that
with
nginx
we're
connecting
on
https
to
the
cluster,
now
we're
going
to
be
connecting
directly
to
workhorse
with
port
8181,
which
is
listed
here.
E
These
other
ports
are
for
unicorn
and
also
sorry
puma,
and
also
the
monitoring
ports.
The
configuration
is
really
simple:
there's
not
much
now
that
we
have
the
charts
change
in.
E
So right now, the way it works is: under webservice you have deployments, and then under there you have the split of the deployments for webservice. You have websockets; we set the service type to LoadBalancer. We have a firewall rule that prevents anyone from connecting to it from outside the internal network. This is just added safety; they wouldn't be able to anyway, but it's just another layer of security. We say it's an internal load balancer, and then for the ingress...
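A rough sketch of the values structure being described. The key layout follows what is said on the call (per-deployment settings under webservice, a websockets deployment with a LoadBalancer service marked internal), but the exact key names and the GCP annotation are from memory and are assumptions, not copied from the actual MR:

```yaml
# Hypothetical sketch of the webservice deployments split described above.
# Key names and the internal load balancer annotation are illustrative
# and may not match the real chart values exactly.
gitlab:
  webservice:
    deployments:
      websockets:
        service:
          type: LoadBalancer            # instead of the default ClusterIP
          annotations:
            cloud.google.com/load-balancer-type: "Internal"
```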
D
No questions, but I have a comment: I wish you luck with the HAProxy changes that need to happen inside of the cookbook. They look daunting.
A
And the one thing that I kept thinking was: I would feel so much more confident if, and I know that this is probably totally over the top, but if in the tests you could almost boot up HAProxy on a config and test the routing against it. But I think it's too hard.
E
There's the removing of the option to toggle the configuration that's now the default, so that will help clean it up a bit. Because we do these migrations, and then we enable a feature, and then we don't go back and clean up the option to turn it off. So maybe that'll help a bit, but yeah, it's going to be messy.
A
I thought you said... I thought Amy said HAProxy, not nginx, so I'm probably getting confused. The nginx logs are not used.
E
Yeah
engine
nginx,
I
mean
they
are
going,
I
think
through
stackdriver
we
do
have
them
and
for
the
vms,
do
we
even
like
collect
nginx
logs?
We
may
not.
I
don't
even
remember
now.
A
Just
doesn't
do
a
whole
lot.
They
we
did
at
one
point
and
they
were
super
unusual.
They
like
their
defaults
sort
of
nginx
format,
which
is
like
horrible
and
so
yeah.
E
I don't think so, because what we can do is run them simultaneously: we can run the internal load balancer and the nginx ingress in parallel. They don't interfere with each other, and then it's just a matter of toggling HAProxy to use one or the other. We'll do this on canary first, run it there for a bit, and then we can roll it out slowly.
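In HAProxy terms, the toggle described here might look roughly like the following sketch. The backend names and addresses are invented for illustration; only the Workhorse port 8181 comes from earlier in the call:

```
# Hypothetical HAProxy fragment: two interchangeable backends for the
# websockets traffic. Flipping default_backend (or a use_backend rule)
# switches between the nginx ingress path and the direct service endpoint.
frontend https_in
    bind :443 ssl crt /etc/haproxy/certs/
    default_backend websockets_via_nginx   # flip to websockets_direct to bypass nginx

backend websockets_via_nginx
    # existing path: HTTPS to the nginx ingress controller in the cluster
    server nginx-ingress 10.0.0.10:443 ssl verify none check

backend websockets_direct
    # new path: straight to Workhorse via the internal LoadBalancer IP, port 8181
    server workhorse-svc 10.0.0.20:8181 check
```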
E
So right now it seems like we're moving away from that; we're putting a stronger dependency on HAProxy now, just because we're not happy with nginx.
C
For... if there's a counter-argument, I guess.
E
Yeah,
if
there's
a
hope
for
getting
rid
of
h.a
proxy,
I
think
it
will
happen
on
registry
first.
We
could
maybe
get
rid
of
it
there,
or
maybe
maybe
pages
as
well.
D
I
would
argue
if
we
could
figure
out
and
sort
out
all
the
necessary
hooks.
We
push
into
aj
proxy
from
the
security
side
of
things
and
all
the
acls
we've
got
and
the
white
listing.
We
have
with
the
work
that
craig
that
you
know
has
been
doing
that
we're
implementing
on
monday.
If
we
could
push
more
stuff
into
cloudflare,
I
would
argue
that
we
probably
could
get
rid
of
a2proxy
at
some
point
in
time.
B
No, okay. I wanted to ask about corrective actions; we have quite a few. Let me just try and share my screen.
B
Cool, so I was just trying to find it in the handbook, but I couldn't. I believe all corrective actions have a set priority. Are they automatically S2s? I feel like this is written down somewhere, but I couldn't find it.
E
I think we should use severity for judging how critical the corrective action is, and then maybe use P2 or P1 for all of them. If we really want to take these seriously and get them done as quickly as possible, then I think we should make them delivery P2 or delivery P1, and maybe keep the lower severities just so that we can get a sense of where each one sits relative to the other corrective actions.
E
Feel
it
depends
on
the
severity
of
the
incident
like
if
it's
if
it
was
a
sub
three
incident,
and
some
of
these
are
corrective
actions
for
sev3
instance.
I
don't
think
it's
a
drop
everything
and
work
on
it.
Maybe
it's
a
priority
two,
but
and
also
if
everything
is
priority
one
then,
how
do
I
know
what
to
work
on?
First.
C
All
p1,
then
you
find
a
a
buddy
hug
him
or
her,
and
then
you
start
like
pounding
on
the
first
item
that
you
find
and
then
go
to
the
next
one
and
the
next
one
and
the
next
one
jarv.
I
I
know
where
you're
coming
from
with
the
severity
tying
it
to
the
severity
of
the
incident,
but
this
is
not
only
about
the
incident
itself,
it's
all
about
what
also
goes
around
the
incident,
so
how
much
effort
did
yester
need
to
exert
to
actually
do
something?
C
Take the one I created two weeks ago. You can clearly disagree with me and say: look, this is not really a corrective action, this is an improvement, and we're going to be seeing that it's not as impactful. I'm happy to remove the corrective action and lower the priority, or change the priority.
E
I think... sure. I mean, I made it a corrective action because it would have helped with the incident, and it's tied to an incident. Maybe we should redefine "corrective action", I don't know. But I guess if we're saying all corrective actions are P1, then I would say it's not a corrective action, because I don't think it's a drop-everything-and-work-on-it.
B
Was
it
something
like
so
from
helping
on
the
incident?
Was
it
something
like
documentation
would
help
like
if
we
had
a
runbook?
Would
that
help.
B
Okay,
let's
just
make
sure
that
fits
in
with
the
incidents
that
threw
these
out
in
the
first
place,
but
yeah.
C
When they start treating them as corrective actions and doing something about them, then we can align; until then, it can be a free-for-all.
E
Yeah, if we go with the label definition, a lot of these aren't corrective actions, because it says: "issue that is born as a corrective measure after an outage to prevent further reoccurrences". Well, some of these don't fit that definition: resolving them isn't going to prevent future occurrences. They're just going to help with the troubleshooting and handling of the incident, and maybe shorten the time to resolution.
B
Yeah, if us not doing this work isn't likely to, you know, throw up a similar incident in the near future, then absolutely fine.
E
Yeah, so I just kind of want to make sure we're still moving forward with ActionCable, since there's a lot of visibility into this. There were two things that came out of the readiness review. One was Prometheus metrics, which I think we're kind of okay with not being a blocker, but maybe it needs more discussion. The other one was additional load on Redis, which is something Andrew raised. It sounds to me like we don't think this is going to be a blocker, but it's concerning.
A
Right
no
heinrich
pointed
out
to
me
that
it's
actually
quite
a
different
mechanism
compared.
A
...other websocket servers that I've used in the past. Redis has got this publish/subscribe mechanism, which is super lightweight, and that's all it's using, whereas Faye has a much more complicated system which puts a lot of load on Redis.
A
So,
oh
and
the
other
thing
that's
really
worth
pointing
out
is
that
if
it
became
a
problem
because
that
published
subscribe
mechanism
is
totally
stateless
like
it
doesn't
store
anything
in
redis
moving
it
somewhere
else
is
trivial
because
you
could
just
you
know,
restart
on
new
server,
and
it
would
be
really
easy.
So
I'm
I'm
really
my
my
concern
is
kind
of
heinrich
convinced
me.
I
wrote
something
like
that
on
the
readiness
review.
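The statelessness being described can be illustrated with a toy in-process model of pub/sub semantics. This is not Redis or the ActionCable adapter, just a sketch of the fire-and-forget behavior under discussion: nothing is stored, so a subscriber absent at publish time simply misses the message.

```python
# Toy in-process model of Redis-style pub/sub semantics (illustrative only).
class PubSub:
    def __init__(self):
        self.subscribers = {}  # channel -> list of callbacks

    def subscribe(self, channel, callback):
        # Register a listener; only listeners present at publish time receive anything.
        self.subscribers.setdefault(channel, []).append(callback)

    def publish(self, channel, message):
        # Deliver to whoever is listening right now; keep no state afterwards.
        receivers = self.subscribers.get(channel, [])
        for cb in receivers:
            cb(message)
        # Redis PUBLISH similarly returns the number of receivers.
        return len(receivers)


broker = PubSub()
print(broker.publish("updates", "lost"))   # 0: no subscriber, message is gone

seen = []
broker.subscribe("updates", seen.append)
print(broker.publish("updates", "hello"))  # 1
print(seen)                                # ['hello']
```

Because no messages accumulate in the broker, "migrating" clients is just pointing them at a new instance, which is the contrast being drawn with a system like Faye that keeps message state.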
E
Yeah, okay. Like, I don't think it's trivial to move; spinning up another Redis cluster is unfortunately not.
A
No, but sort of my concern was that, you know, if it was like Faye, where there are actually messages...
A
...in the Redis server, then if you want to move it, it's a whole lot of work. You've got to, basically, communicate with both, and it's a very complicated process. Whereas if all you're doing is Redis publish/subscribe, then if we spun up a new server, it would be very little effort to migrate those clients, if you know what I mean.
A
It looks funny, because Sean... actually, the Tamland report just came in, and if you scroll up on that issue just a little bit: Tamland says it's going down at the moment, but then actually, when you drill into the server, that doesn't seem to be the case. And so I just got a message from Sean where he's saying...
A
Well,
maybe
we
should
close
this
issue,
like
maybe
there's
no
longer
a
problem
here,
and
I
just
pointed
to
this
and
said
well,
there's
something
strange
going
on
because
timeline
is
saying
it's
going
down.
But
when
we
looked
at
the
actual
data
it
doesn't
look
that
rosy
right
so
where
we
are
right
now
is
is
I
do
want
to
look
into
that?
But
I
don't.
I
don't
think
it's
within
the
scope
of
this
meeting,
but
I
think
we
should
try
to
figure
out
what's
going
on.
A
Also
during
this
meeting
I
saw
that
igor
just
opened
the
epic
to
upgrade
to
reader
6,
and
I
think
that
will
make
a
big
difference
so
yeah
that
that's
probably
the
the
first
step
that
we
can
take
in
that
direction.
E
Okay,
so
we
just
need
to
decide,
then
whether
the
prometheus
metric
issue
is
a
blocking
issue.
A
Yeah, so in that case I think it's fine. The proposal that I put together, I think it's getting a little bit shoved around, but I don't think it's a lot of work to add that. I don't think we have to wait on it, though. Like we discussed in our call on Monday, I think we can just go with the 500s: if ActionCable is returning 500s, that'll give us sufficient visibility. But obviously there's other...
A
...metrics specific to websockets that we can get later. And I was looking at the notifications that you get through Rails, and there are some about, like, rejections; we could probably build up some SLIs around that, which would give us the kind of insight we need into websockets. But it's fine for me to just use the 500s.
A
I don't know if this is the right place to talk about it, but there was some discussion around sending messages back through the websockets to the servers, as a potential future thing, and sort of making changes that way. That gave me some cause for concern, because obviously at the moment we've got logs, and we've got all of our metrics and everything, and when things come in through HTTP it's really nice, because it's logged and we can track what's going on.
A
And
the
model
that
we
used
with
gitter
was
we
we
looked
at
that
and
we
decided
against
it
and
so
outbound
stuff
always
went
out
through
the
websockets,
but
then
on
the
way
back.
We
always
went
back
through
the
api.
For
that
exact
reason,
and
I
think
that
we
should
maybe
push
back
on
that
stuff.
A
That
heinrich
was
talking
about
just
because,
if
they
start
sending
commands
to
rails
via
in
through
the
websocket,
it's
totally
invisible
to
us
and
we
don't
know
like
how
long
the
calls
are
taking
and
if
they're
failing
and
all
of
that,
so
I
don't
think
they're
doing
that
yet.
But
that's
just
something
for
the
future
that
I
think
we
should
push
back
on
if
possible.
E
Okay
yeah,
it
sounds
like
we
don't
know
which
feature
team
would
even
be
working
on
this
right.
B
Which
which
team
was
working
on
this
feature.
E
No, it doesn't... yeah, I think we're fine. So the main blocking items are kind of in our court, which is trying to get this websockets deployment set up, and then we can turn on the feature flag.
C
Awesome
and
I
really
like
your
approach
of
maybe
like
sending
some
traffic
like
rolling
it
out
slowly,.