From YouTube: 2021-08-04 GitLab.com k8s migration EMEA
B
So we are at time, so welcome to the demo. I see you have the first item.
C
Indeed. I wanted to discuss the remote IP situation a little bit and showcase what I had found. There were, I think, emails that mentioned: do we need to fix this? I'm like yes, and Graeme's like no, so I'm like, okay, where's the disparity? But here's our log search. This is of type api, because that's what's accepting traffic. For remote IP, we see that we're getting external IP addresses, so we know that it works just fine.
C
There is an issue in which Hendrik on the infrastructure team is trying to make changes to gitlab-shell such that we proxy the appropriate IP address. That epic is still in progress, so for our use cases, and for moving the web migration forward, we're not blocked on this. So I proceeded to close that issue, because it's not something that we could solve.
C
This
is
something
the
gitlab
shell
needs
to
solve
itself,
which
they
probably
will
because
soon
they're,
riding
of
open
ssh
in
favor
of
moving
to
gitlab,
shell
being
the
demon
itself,
which
is
kind
of
cool.
So
I'm
eager
to
see
that
through
just
time
is
of
the
essence.
I
guess
so
any
questions
regarding
remote,
ip,
okay.
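For reference, the gitlab-sshd direction mentioned above is driven through gitlab-shell's config.yml; a minimal sketch of what enabling the built-in daemon with PROXY protocol support might look like is below. The key names are recalled from gitlab-shell's example configuration rather than taken from the epic, so treat them as assumptions.

    # gitlab-shell config.yml (sketch; key names are assumptions)
    sshd:
      listen: "[::]:2222"
      # accept the PROXY protocol header so the original client IP
      # survives the hop through the front-end load balancers
      proxy_protocol: true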
B
So,
do
you
have
the
issue
just
to
link
into
the
agenda.
D
I think we did something to solve it maybe a year ago or something. I can't remember the exact details, but I think that was a long-standing issue that we didn't have the remote IP in gitlab-shell, and yeah. I see, okay. But okay, if it's just that one, I think you cannot...
C
...block yourself. And right now it's talking to our Git deployment, which it technically shouldn't be; it technically should be speaking to the API, but we'll save that for later. It works, that's what matters.
C
I spun up an issue because I wanted to determine why we're getting an imbalance of traffic. This chart shows the number of requests that the nginx ingress pods are receiving in cluster D for the api service, and you can see that for the most part there's a normal trend line. But then, when new pods come on, they'll see an exorbitant number of requests, and at some point we go back to our normal trend.
C
Even during a time when we're not scaling, we'll see another abnormal set of requests, and then sometimes we'll see a few pods up here, because there's more. I don't know if you can see it, but there are a few lines here and a few lines here. So there are multiple pods serving differing amounts of traffic, which is a little weird to me, and I want to try to figure out why this happens. So I created a list of things that we need to look into.
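The per-pod request chart being described is the kind of thing that can be pulled out of Thanos with a query along these lines; the metric name is the standard one exposed by the nginx ingress controller, but the label names and values (cluster, ingress) are placeholders rather than the exact ones used on the dashboard.

    # requests per second, broken out per ingress pod (labels are placeholders)
    sum by (pod) (
      rate(nginx_ingress_controller_requests{cluster="d", ingress=~".*api.*"}[5m])
    )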
C
We've already determined that this is not specific to a fresh pod. One: this green line here in the above chart. There are three total misbehaving pods that I've outlined here. One of them is this green one, and we can see that for the most part it's been alive for quite a while. It was misbehaving during this period of time, so those are receiving a lot fewer requests.
C
The other two pods noted here show what the misbehavior looks like: it starts at around 300 requests, drops down to less than 50, comes back up to 300, and the pod gets killed off. Same situation for this one: it starts around 300 and misbehaves for a little bit longer. Maybe that's why the green pod was taking more traffic, because things were being shuffled around to differing pods. Then it came back online, and then the pod was killed. So I started looking into this, and I'm still working with GCP support; they responded.
C
I don't fully understand the response just yet, just because I haven't spent enough time thinking about it. But I was looking at HAProxy, because that is where the traffic would be originating from, and we do see a rate of connection errors that kind of occurs, but not necessarily during the times in which we're having issues; it's scattered about a little bit. This is when we had the issue with one of those pods, and we also had an issue with one of the pods during this time period.
C
Another
issue
with
this
pause
during
this
time
period.
So
I
can't
explain
these
connection
errors
that
are
also
scattered
about
and
if
you
open
up
this
window
for
a
lot
larger
period
of
time,
we
see
connection
problems
quite
often
just
go
away
like
if
we
look
at
the
last
six,
okay,
never
mind,
okay,
so
there's
one
spat
of
errors.
Yesterday,
for
example,.
C
So this might just be something that happens often. You'll notice I'm restricting my search down to only the HAProxy instances for cluster/zone D. So we do see backend connection errors rarely, but I don't think that's a good causation; I think it's more like a result.
C
So connections were building up, but for the most part this looks okay. Our connection times look fine. We see spurts of errors, and this kind of aligns with the connection errors to our backends, but I can't see a correlation between this and the nginx behavioral issues that we see. Same with backend errors.
C
Failed health checks are a little bit more interesting, because we fail our health checks quite often. That's actually a little concerning to me, but again, no correlation to the nginx behavior; it happens so frequently that I can't draw a correlation, is what I'm trying to say. So then I started looking at network traffic going into the nodes that are running the nginx ingress pods. I didn't have enough time before I ended my day yesterday, but there are a few things that make this a little difficult to look at. One:
C
The new nodes are going to be receiving traffic because they're downloading a lot of images from the internet in order to accomplish their job, which is starting up our various pods. So some of this is going to be inflated; it's not specific to nginx, it's going to be all pieces of network traffic. So I made a note down here that I want to see if I can find the actual Calico network interface and figure out whether there's any behavior specific to that. But I'll have to see whether I can even find that information.
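One way to separate the Calico side of this from general node traffic, assuming the standard node_exporter metrics are available, would be a query like the one below. The device names (eth0, cali*) follow common GKE and Calico conventions and are assumptions rather than something confirmed in the meeting.

    # receive bandwidth per node, split by physical NIC vs Calico veth interfaces
    sum by (instance, device) (
      rate(node_network_receive_bytes_total{device=~"eth0|cali.*"}[5m])
    )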
C
I don't know if I can yet or not. But looking specifically at the ethernet device, we can see traffic builds up, and then during the time in which that pod is experiencing behavioral issues, traffic kind of drops; the amount of bandwidth drops significantly. And for this pod that was misbehaving here, the red line, it looks like it was failing twice, but I couldn't determine that.
C
I found it coincidental that this other node was showing a similar drop in network traffic, but that one was failing over here in the 1400 range, where we see a drop. But I don't know if that's just a drop in traffic, because it's wiggly all the way across, so I can't glean too much from this information at all.
C
At this moment in time, our mitigation... maybe I should catch all of us up here. The mitigation effort that we decided upon was to increase the number of ephemeral ports across all clusters, and we did decide to push that to cluster D as well. So far, I haven't seen any occurrences of this issue today.
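For context, widening the ephemeral port range is typically a single sysctl on the affected nodes, along the lines of the command below; the actual range chosen for these clusters is not stated in the meeting, so the values here are illustrative only.

    # widen the local (ephemeral) port range available for outbound connections
    sysctl -w net.ipv4.ip_local_port_range="1024 65535"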
C
So we wanted to leave cluster D alone. That way we have the ability to say, hey, this is still occurring here, but it's not occurring on, say, cluster B, where that mitigation of slowing down the pods was pushed into place. But we ruled that out: as we saw in the network traffic graph I was just showing, we had a pod that was living for a very long period of time that suffered this problem. So it's not specific to pods and nodes coming online.
D
Looking at your first Thanos graph, when I include the node in the summary, it looks like the...
D
Right, just looking at this metric you showed before, and I added the node into it. I'm not sure if you mentioned this before, but if I look at the different lines here, it appears that it's always the same node, right? So it's stable for one node: all the pods on the same node here, and the same here, it's the same node, different pods. Yeah, so it seems that the imbalance is per node, right? So for one node we get exactly this amount of requests, and for another node...
D
...we get this amount of requests here. So it seems that different nodes just seem to take different amounts of traffic.
C
That in itself is a little bizarre, so I want to figure out why that would be happening, if our mitigation is working.
D
Yeah, I'm not sure what this is telling us, but different nodes seem to be the cause of the different traffic patterns here.
D
Did it update? Not yet... I...
D
Maybe, if nginx is, you know, just routing round-robin between nodes, and each node has, I don't know, a node-specific IP which is shared by all the pods on the node... I'm not sure how this is working, but that would explain it. If a node only has one single pod... no, that wouldn't explain it well, but...
C
What's happening is HAProxy is sending traffic to the Google load balancer, and that Google load balancer is going to send the traffic to all the nodes it knows about that are healthy and are running the nginx ingress pod. If there's only one pod running on that one node, well, that node is going to see the same amount of traffic the other nodes are, but it only has one pod. So it's going to be blasted with a lot of requests unnecessarily. That's probably exactly why this situation is happening. Yeah.
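That per-node rather than per-pod distribution is the classic behavior of a Kubernetes LoadBalancer Service running with externalTrafficPolicy: Local, where the cloud load balancer spreads connections across nodes and each node only forwards to its own local pods. The meeting doesn't confirm which policy the ingress Service uses, so the snippet below is only an illustrative sketch with assumed names.

    # Service for the ingress controller (name and selector are hypothetical)
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-ingress-controller
    spec:
      type: LoadBalancer
      # "Local": the cloud LB balances per node and client IPs are preserved,
      # so a node with a single ingress pod still receives a full node's share.
      # "Cluster": traffic is spread across all pods, at the cost of an extra
      # hop and source NAT.
      externalTrafficPolicy: Local
      selector:
        app: nginx-ingress
      ports:
        - name: https
          port: 443
          targetPort: 443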
C
We need to figure out if there's any tuning involved that could adjust that situation, and that's what our last conversation with Google is trying to figure out: how we can resolve it. This chart right here shows it perfectly; any time you...
C
So I think this remediation, I mean, so far it looks like it's serving its purpose. I think it would still be wise to figure out if there's a way we could better balance this traffic; that way we're good.
C
Yeah, and trust that, you know, Kubernetes is not going to do that by default. So I think something we could probably do is see if there's any tuning we could accomplish at the Google load balancer. And something I'm trying to wrap my head around is that we have the service IP, and HAProxy is external to our clusters.
C
I think if that were not the case, like if HAProxy were inside of our clusters, it would be using the service IP that's internal to the cluster. That way kube-proxy is the only thing that's involved; we're not going outside to the Google load balancer, which directs traffic to a specific set of nodes, which is what I think is currently happening today.
C
So I think it would be wise for me to, one, continue down the investigation of whether there's anything we can do to balance the traffic better, and also to validate: are we suffering because this one pod saw a tremendous amount of requests while the others didn't? Like, are we stressing out these pods unnecessarily?
D
...would be that we route per pod, right, so that it's distributed evenly. But another thing we were considering, because of tuning for resource usage, is that if you ended up with a solution where one pod fits on one node, that problem would go away, right? Or if we would fit a lot of pods on each node, then it would be mitigated, because it wouldn't be such a big difference anymore. But...
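The "one pod per node" option mentioned here is usually expressed either by running the controller as a DaemonSet or with a required pod anti-affinity; a minimal sketch of the latter is below, with the label names assumed rather than taken from the actual charts.

    # Deployment pod template snippet: at most one nginx-ingress pod per node
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: nginx-ingress   # hypothetical label
            topologyKey: kubernetes.io/hostname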
B
So good, and that's a perfect example of great demoing, right? Nice, cool. Well, let's see how we go and see if this gives us more stuff, but yeah, great.
B
So yeah, it's tomorrow; the APAC ones were switched onto a fortnightly schedule, so we have one tomorrow to follow on from here.
B
I,
although
it's
well
timed
because
actually
he
was
quite
blocked,
so
actually,
I
think,
from
from
what
he's
been
working
on,
this
scheduling
is
going
to
end
up
being
quite
quite
effective,
yep.
C
All right, I didn't...
B
We're going to try. So I chatted with Graeme this morning, and I think the state we're in is probably not too dissimilar from where we were last week, to be honest. Which is: it would be a great thing if we can actually solve this nginx problem, or at least understand what's happening there. As it stands right now, Graeme is confident that it's not actually offering us anything additional for web, but we will evaluate that.
B
So
what
glenn
would
like
to
do
is
roll
out
to
canary
without
nginx.
It
is
all
wired
up,
though.
So
if
we
see
any
problems
and
we
think
they're
nginx
related,
we
can
just
change
settings
in
ho
proxy
so
that
we
bring
engine
x
back
in.
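The HAProxy escape hatch described here might look roughly like the sketch below, where flipping the default backend routes canary traffic either straight at the cluster or back through the nginx ingress; the backend names and addresses are invented for illustration and are not GitLab's actual configuration.

    frontend canary_https
        bind :443
        mode tcp
        # flip this to canary_via_nginx to bring the nginx ingress back in
        default_backend canary_direct

    backend canary_direct
        mode tcp
        server gke-direct 10.0.0.10:443 check      # hypothetical address

    backend canary_via_nginx
        mode tcp
        server nginx-ingress 10.0.0.20:443 check   # hypothetical address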
B
So it's pretty low risk, and certainly at the beginning, when we're going to canary with super small amounts of traffic, it seems safe enough to try and see. What we should see is that Cloudflare is handling the proxy buffering and all of those things now. The benefits of not having nginx are that we've removed some complexity...
B
It's
a
simpler
setup
and
if
you
know
like
if
nginx
like
is
causing
kind
of
instabilities
or
we
don't
get
to
the
bottom
of-
why
it's
causing
them,
then
we've
kind
of
eliminated
that
now
the
only
downside
is
there
is
a
potential
like
graham
saying,
there's
a
potentially
small
performance
impact,
but
we
can
certainly
see
that,
as
we
start
rolling
out
like
if
we
certainly
on
canary,
if
we
start
to
see
things
we
can
reevaluate.
B
So
I
think
we're
at
a
good
stage
right
now
to
do
the
initial
test
without
nginx
and
then
as
we
like
so
graham's
kind
of
hoping
that
we
might
be
able
to
get
like
a
small
bit
of
traffic
this
week.
We
could
dial
it
back
down
again
over
the
weekend
and
then
go
properly
next
week.
But
if
we,
if
we
see
issues,
we
can
certainly
reconsider
that
decision.
C
Okay, so I agree with all that. I see that we still have the issue for auditing the configuration as a work in progress, and I know Graeme has been working on that, because I saw a few updates regarding the production cluster. I don't recall if he's opened any merge requests to resolve any findings yet.
B
I'm
not
sure,
but
he
shared
so
shared
these
three
change.
He
had
trouble
getting
approvals
during
his
day.
B
Approvals
and
if
we
can,
if
we
feel
comfortable
rolling
anything
out,
we
can
we
should,
if,
if
you
don't
and
just
want
to
approve
it,
he
is
happy
to
roll
it
out
in
his
day,
be
great
to
get
all
those
three
like,
so
that
we
can
try
and
get
those
into
production
if,
if
possible,
before
the
demo
tomorrow
and
then,
if
we
can
graham's
going
to
try
and
demo
turning
on
canary
traffic
in
that
demo,.
B
The readiness reviews, I think we're probably okay for now to put a little... well, I mean, I guess I was thinking we'd put a small bit of traffic on canary and then do the readiness reviews.
B
Like, you know, we can certainly... we should definitely do them before we put a big load on, but...
C
I'm
working
on
that
we
just
got
to
change
out
the
staging
grain,
push
it
out
overnight
into
our
non-prod
environments,
so.
B
Cool, yeah. So I think what we'll try to do this week is just literally get a small bit in canary and turn it all back down; next week, make sure we have everything ticked off and checked, and then see if we're ready to go properly.
B
It's going really well, yeah, which is great news; super happy. So one thing, oh, about the nginx stuff actually, that Graeme and I chatted about this morning: the reference architecture stuff is obviously totally relevant to all of this.
B
What I think we'll do, or what we can sort of review in the future, like as we go through web and after web, is where we want to go. I know Jason was talking about the future stuff and changes going into HAProxy and that giving us more options, and Graeme's got a question mark around whether HAProxy is the best ingress to use with Kubernetes.
B
Where
does
nginx
fit
in
and
all
of
these
things,
so
I
think
we'll
we
can
look
to
sort
of
pull
all
that
stuff
together,
like
it
a
bit
in
a
bit
like
probably
maybe
after
pages
when
we've
got
all
these
pieces
together
and
actually
then
like
see.
What
this
looks
in
fact,
graham,
was
mentioning
that
some
of
this
stuff
may
be
very
relevant
to
pages.
We
may
have
to
do
some
extra
stuff
on
routing,
so
it
might
fit
quite
well
together.
C
We need to look into it, because I know they asked the question as to what stuff we should be thinking about that's important for Distribution to be picking up, and I mentioned Pages, you know, and whether there are any problems or issues that they have related to that. I haven't followed up to see if they've created any issues or anything, but...
A
Pages is going to be an interesting one, I think, because in lots of contexts we've talked about Pages being really easy and very small and a quick migration, and then in lots of other contexts we've talked about Pages being very different, with the whole routing side being potentially harder. So I think it might actually end up being one of these interesting ones; we may find some new stuff in this, I think.
B
Right, awesome, great. Is there anything else we want to demo or discuss?
B
There has been a little bit of progress in terms of moving things over, but not a huge amount because of on-call. So I know that is continuing, but it is not complete yet. Last I heard, queue four of eight, or batch four of eight, is in progress or about to start. So it's progressing, but yeah, not done yet.
B
Yeah,
I
think
I've
got
it.
I
think
unless
I
mean
as
long
as
it's
not
blocking
us
on
anything,
I
think
I
I
was
thinking
it's
just
a.
We
want
to
make
sure
it
completes
right.
So
exactly
yeah.
No,
I
think
they've
got
a
good
handle.
I
think
it
was
literally
just
a
vacation,
slash
on
call
delay
and
pcl
delays,
so
I
think
that's
going
to
move
through
like
decently
for
the
next
few
weeks.
B
Super
sounds
good
and
yeah
I
mean,
like
we've
still
got.
I
was
super
close
on
the
web
stuff.
I
guess
for
now
at
least
we've
got
a
bit
of
work
to
do
on
readiness
reviews
tidying
up
these
things,
but
yeah
we're
not
super
fast.
So
you
know,
let's
start
thinking
about
like
pages
and
and
what
we
want
to
do
for
that.
One.
B
Fantastic
good,
thank
you
very
much
for
the
demos
and
discussions
like
really
say
great
progress
and
super
useful
today.
So
good
luck
with
unblocking
diagnosing
nginx
should
I
say.