From YouTube: 2021-11-24 GitLab.com k8s migration EMEA
A: Yeah, pretty good, thank you. Some weeks these demos fall at just the perfect time, so I'm super happy that this week it worked out: we've got lots of discussion on an issue, and then it's like, oh, perfect, okay, we also have all the people together. Here you go, there you go.
A: Awesome, so we have everybody here, so welcome. This is the November 24th Kubernetes migration demo in the EMEA/Americas time zone. Henry, you've got the first demo item.
D: So I don't have great results yet, but at least some pointers that I can show you. Let me share my screen.
D: Whether pods are scaling up or down, or our nodes are scaling up or down: this is what I want to check into right now, and I didn't have much time for that. But the idea was to look into staging, look for those patterns, and then maybe try to use externalTrafficPolicy: Local instead of externalTrafficPolicy: Cluster.
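(For reference, a minimal sketch of the switch being discussed, applied to a Kubernetes Service; the service name and kube context below are placeholders, not taken from the meeting:)

```
# Switch a Service from externalTrafficPolicy: Cluster to Local.
# With "Local", nodes that have no local endpoint pod drop the traffic
# instead of forwarding it to another node, avoiding the extra
# kube-proxy hop that "Cluster" introduces.
kubectl --context gstg patch service gitlab-nginx-ingress-controller \
  --type merge \
  -p '{"spec": {"externalTrafficPolicy": "Local"}}'
```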
D: At the same time, we terminated pods in the cluster, actually, because I think we had a deployment going on in gstg at this time. And if you look at the HAProxy logs at that time, it is fairly easy to look for those events: you just search the HAProxy logs for the termination code "SH", which means the TCP connection was terminated on the server side.
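(A sketch of that log search; the log path is an assumption, and in HAProxy's HTTP log format the termination state appears as a four-character field such as "SH--":)

```
# Find sessions with termination state "SH": "S" means the server
# aborted the TCP connection, "H" means it happened while HAProxy was
# still waiting for the response headers.
grep ' SH-- ' /var/log/haproxy.log | head
```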
D: And you can see we found something around this time, for instance on this backend for the front-end load balancer. I saw this last week as well: we often see those connection terminations happen, and at the same time we see a pod scaling up or terminating. So I need to collect more data, because we don't have many events on staging (we don't have much traffic there), but in the few cases I saw, it always matched up with pod scaling or node scaling. I'm not sure.
D
Yet,
if
load
scaling
or
pot
scaling
is
the
necessary
event
for
this.
I
think
it's
more
pod
scaling
than
old
scaling,
but
this
looks
promising
to
look
further
into
because
the
correlation
seems
to
be
very
strong.
It's
easy
to
reproduce
and
it's
also
easy
to
cause
scaling
up
and
down
by
just
playing
around
with
the
hpa
and
setting
min
notes
to
a
higher
number.
For
instance,
I
tried
this
last
week
and
going
from
there
once
I
I'm
sure
this
is
really
a
correlation.
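(A sketch of forcing such a scaling event via the HPA; the HPA name, context, and replica counts are placeholders:)

```
# Force a scale-up by raising the HPA's minimum replica count...
kubectl --context gstg patch hpa gitlab-webservice \
  --type merge -p '{"spec": {"minReplicas": 10}}'
# ...watch the HAProxy logs for "SH" terminations, then revert to
# trigger the scale-down case as well:
kubectl --context gstg patch hpa gitlab-webservice \
  --type merge -p '{"spec": {"minReplicas": 2}}'
```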
D: Going from there, once I'm sure this is really a correlation, I would go on to test changing the externalTrafficPolicy in gstg and see if we can reproduce the same. So that's my plan, when I find time between those release management tasks, to go on with that. That's where we are currently.
D
There
must
be
something
going
on
which
is
not
related
to
plot
terminations,
but
still
is
causing
connections
to
be.
You
know,
kind
of
forgotten
or
not
being
wanted
anymore
by
some
network
component
in
between
right.
So
not
sure
about
that,
and
I
didn't
get
in
like
like
google
support,
also
wasn't
sure
about
that.
They
just
pointed
to
this
would
be
one
thing
to
try.
E: Do you know if we have some sort of metric within, was it Workhorse, as to the amount of connections it might have open?
D: I need to check that, but it's staging; I guess it can't be that much, right?
D: The thing is, it's kind of disappointing: we never see anything, and Workhorse logs nothing about terminated connections, right? I mean, I would suspect that if something is terminating the connection, we would see something in Workhorse saying: okay, the connection was terminated, or the client hung up, or something like that.
D: The other thing that Google support suggested was switching over to use network endpoint groups, which would mean switching over to TCP/HTTP load balancing. That would be, I think, a much bigger change and harder to accomplish. So looking into the externalTrafficPolicy looks like a very easy and small change in our charts, but switching over to a different load-balancing strategy would, I think, be much more work. It could still be valuable.
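(For reference, the Service side of that bigger change is the annotation GKE uses to provision standalone network endpoint groups; the service name and port here are assumptions, and the HTTP(S)/TCP load balancer itself would still have to be created and pointed at the NEGs, which is the larger part of the work:)

```
# Ask GKE to create NEGs for this Service, so a load balancer can
# target pods directly instead of going through node ports.
kubectl annotate service gitlab-nginx-ingress-controller \
  cloud.google.com/neg='{"exposed_ports": {"443": {}}}'
```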
A: Awesome, thanks for demoing. Igor, you've got the next item.
B: Yes, yeah. I wanted to check in on where we're at with Redis and decide on some next steps. So we had some discussions; I guess the main thing that I want to get here is a decision: how are we going to proceed? I don't think this decision is going to be set in stone; if we try something and it sucks, we can always, you know, change course. But yeah.
E: Looks like we're stuck trying to decide whether we want to continue trying to dogfood our Helm chart, which, based on the initial investigation, is looking like it may not be the best route, or go down the road of using Tanka, which we kind of know, from what Graham has provided, is available to us and is able to create the necessary Redis clusters.
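(A rough sketch of what bootstrapping that with Tanka could look like; the environment path, namespace, and API server address are placeholders, not decisions from the meeting:)

```
# Bootstrap a Tanka project and a dedicated environment for Redis.
tk init                                   # creates jsonnetfile.json and environments/
tk env add environments/redis \
  --namespace=redis \
  --server=https://<cluster-api-endpoint>
tk diff environments/redis                # review the generated manifests
tk apply environments/redis               # roll them out
```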
E
I
think
what
we're
missing,
though,
is,
if
there's
any
feature
inside
of
our
helm,
chart
or
inside
of
helm
that
is
preventing
us
from
using
our
helm
chart
entirely
like.
I
know
that
today
we
don't
currently
support
the
ability,
which
is
fine,
but
I
guess
what
we
would
probably
need
to
have
documented
is
what
is
preventing
us
from
at
least
checking
that
out
or
building
the
functionality
inside
of
our
home
chart.
That
would
support
a
multi-redis
configuration.
E
If
we've
got
customers
large
enough,
that
would
benefit
from
this
in
the
first
place.
So
this
could
be
something
that
just
doesn't
get
off
the
ground
from
the
first
place,
and
I
think
if
we
understood
that
answer,
we
would
have
a
better
idea
of
whether
or
not
we
want
to
investigate
trying
to
figure
out
pushing
this
into
our
home
chart
or
not.
A
Yeah,
I
agree,
and
not
just
in
there
are
self-managed
users.
Also
reference
architecture
is
a
new
consideration.
We
need
to
think
about
right,
because
staging
ref
is
our
first
internal
environment
built
off
of
reference
architecture.
It
definitely
won't
be
our
last
one,
but
the
kind
of
the
the
reason
we're
starting
to
dog
food.
The
reference
architecture
is
because
more
of
our
customers
are
using
it
and
also
looking
for
this
to
become
a
more
mature
kind
of
offer.
A
So
it'd
be
great
to
figure
out
like
where
does
this
redis
work
fit
with
those,
and
perhaps
also
the
urgency
of
scaling
redis
as
an
independent
piece
of
work?
And
then
perhaps
that
helps
balance
out
the?
Do
we
just
go
fast
and
fix
things
up
later,
or
do
we
do
what's
possibly
a
little
bit
slower
but
meets
more
kind
of
more
user
needs
if
needed,.
B: Yeah, I mean, maybe a slightly cynical take, but if we can sidestep some of the bikeshedding and, like, pick Tanka, get that off the ground and not lose traction, that sounds sensible to me. It's separate enough that if we decide to reintegrate it into the chart later on, that seems pretty doable to me. It's not as entangled with everything else.
E
That
if
we
had
that
answer
as
to
whether
self-managed
customers
would
benefit
from
this,
if
we
knew
that
ahead
of
time,
we
could
either
a
figure
out
or
like
pivot,
a
little
bit
to
try
to
figure
out
how
to
reintegrate
into
a
home
chart
or
b.
We
could
then
start
documenting
hey
if
you
need
a
multiredis
configuration,
here's
how
you
could
go
about
it,
maybe
not
using
tango,
but
at
least
we
could
put
inside
of
our
documentation
for
a
home
chart
or
the
documentation
for
a
gitlab
installation
for
self-managed
users
at
that
point.
C: We didn't always, but I am not aware of any customers doing this unless it gets added to something like a reference architecture, because when we split out CI trace chunks, that was the reason we decided not to make it easy. It's not very easy to split CI trace chunks for self-managed, and the reason we didn't make it easy was that it would have been additional work and we couldn't actually find anybody who would need it. So, yeah.
C: Yeah, yeah, no, it was a while ago that we split, so yeah.
C: I think, you know, we have five, soon to be six. Basically every other GitLab instance has one, so yeah.
A: Okay, so our rough proposal (because I think that's the other piece around this) is to go back and make a proposal to Distribution: here's the plan we have, here's what we're going to do; does that vaguely fit with their plan? So it sounds like what we're saying is: based on what we think we know today, going with Tanka, figuring out how the multi-Redis-instance stuff would work for us, feeding that back to Distribution and allowing them to implement it in the GitLab chart on their timeline is kind of our preferred route.
E: I'm thinking that if we could figure out whether self-managed customers would benefit at all, that would determine what kind of proposal we provide to Distribution, and it may be just documentation if there's no benefit. Or, if we do find that self-managed customers could benefit, we would need to figure out what we could do for them, and at that point that's when we could, you know, work with Distribution to figure out how to introduce a multi-Redis configuration. Yeah.
A: Okay, cool, that makes sense. I think for people outside of, well, maybe just outside of Scalability, actually: in the past we've had quite a lot of success figuring out an approach with Distribution and then us implementing the change and just getting it reviewed. So it actually hasn't been a huge sort of overhead in terms of us getting things into the charts, as long as we know what we actually need to do.
E
So
I
think
for
to
address
igor's
concern
of
you
know:
bike
shedding.
I
think
the
two
parallels
here
is
one
amy.
I
think
I
would
rely
on
you
to
try
to
figure
out
how
we
can
get
information
about
other
self-managed
users
and
reference
architectures
to
figure
out
how
much
effort
we
need
to
spend
in
that
realm
and
then,
in
parallel
ahmad,
unless
you
have
other
ideas
within
our
home
chart
or
within
helm
features,
it
might
be
time
to
start
looking
at
tonko
and
getting
that
portion
going.
B: I can pick that up with Tanka, yeah.
A
And
just
on
timing,
as
well
in
terms
of
like
ideal
demo
timings
this
week
tomorrow
morning,
there
is
the
aipac
kate's
demo
scheduled,
which
graham
has
accepted
games
not
actually
actively
doing
any
kubernetes
stuff.
So
I
expect
he's
attending
as
a
we
can.
You
know
like
quiz
him
about
stuff
and
ask
his
input.
So
if
it's
useful
to
use
that
as
a
hands-on
hour
of
hands-on
tanker
like
time,
then
that's
a
great
use
of
these
demos
as
well.
B: Well, let's deploy it without putting any load or any traffic or any data in it, just to, you know, have the chart installed and have something that we can toy with. Awesome, cool, thanks. That's all I had.
A: Super, great. Are there any other Redis things that people want to bring up?
A
No
awesome,
okay
skull
back
over
to
it.
E: All right. So, GitLab Pages: last week I wanted to try to get things into canary, and we ran into two problems, unfortunately. During attempt number one, it turned out we were accidentally running an older version of the HAProxy Helm chart, or rather the HAProxy cookbook, inside of production. So we were missing a configuration, and it took me an obnoxiously lengthy amount of time to figure this out. Thank you to Cameron,
E
If
you
ever
watch
this,
he
helped
me
determine
that
and
then
attempt
number
two
failed,
because
we
have
not
yet
fixed
the
problem
within
our
he
proxy
cookbook,
where,
when
you
make
a
tiny
configuration
change,
such
as
only
changing
the
port,
the
global
state
file
that
h.a
proxy
uses
doesn't
get
changed.
Therefore,
the
configuration
file
that
you
look
at
will
say:
hey
we'll,
send
traffic
to
whatever
port
you
want,
but
the
state
file
says
no
we're
going
to
send
traffic
to
this
port
that
you
didn't
want
to
use
in
the
first
place.
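(For context, a sketch of the two HAProxy directives involved; the paths are assumptions, not taken from the cookbook. The state snapshot written at reload time is read back on start, and its entries take precedence over the runtime state implied by the config file:)

```
# Roughly how state persistence gets enabled in haproxy.cfg
# (in practice the cookbook manages this file):
cat <<'EOF' >> /etc/haproxy/haproxy.cfg
global
    server-state-file /var/lib/haproxy/haproxy.state

defaults
    load-server-state-from-file global
EOF
```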
E: So I think what I'm going to work on right now is the change request for what we would do after we get canary going. Like I said before, I'm 100 percent confident that the third take is going to work.
D: By the way, this HAProxy state file: I think this has hit us a few times already, right? I also got to run into this.
D: It takes time, right, to figure out what's going on, because it's not so obvious. One thing that I think worked the last time I got into that: instead of, you know, stopping HAProxy, removing the file, and starting again, I think I was running the HAProxy reload two times. One time is done by the chef-client, and the next time you can just manually do it yourself, and I think that made a change to the state file.
D: I'm not 100% sure it will always work, but if you just try it and look into the state file, and it changed after a second reload, then we know, and then we are fine, and it's much faster. You just need to be careful that when you run this reload, internally HAProxy will spin up new processes and then the old processes will slowly drain connections.
D: If you do the reload too fast, then maybe HAProxy is still in this draining process, right? So you should have some kind of pause in between the two reloads, I guess.
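(A sketch of that double-reload workaround; the exact reload command, pause length, and state-file path depend on the setup and are assumptions here:)

```
systemctl reload haproxy             # first reload (normally run by chef-client)
sleep 60                             # let the old worker processes drain
systemctl reload haproxy             # second reload refreshes the state file
cat /var/lib/haproxy/haproxy.state   # verify the entry now shows the new port
```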
B: So I also ran into this, and from what I could tell, this looks like a bug in HAProxy; I also linked a blog post there. So my understanding, and this may be incorrect, is that it's because the state file is indexed by the server name only, and it doesn't care about the port number.
E: Just to keep our tooling simple and keep it the same across all environments, unfortunately. But yeah, I mean.
B
I
think
I
do
know
that's
a
workaround
yeah.
I
think
that's
kind
of
the
workaround
that
we
use
for
the
that
we
did
use
for
one
of
the
port
changes.
So
this
was
when,
when
we
rolled
out
proxy
v2,
that's
where
I
had
to
do
that
and
I
added
proximity
to
to
the
server
name.
So
it's
I
think
it's
fine
to
do
that,
but
yeah
it's
up
to
you.
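(A sketch of that rename workaround; the server and file names are placeholders. Because state-file entries are keyed by backend and server name, a renamed server matches no stale entry, so HAProxy uses the new config as-is:)

```
# Rename the server so HAProxy treats it as new and ignores old state:
sed -i 's/server pages-01 /server pages-01-v2 /' /etc/haproxy/haproxy.cfg
systemctl reload haproxy
```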
A: Awesome. Is there any other stuff about Pages, or any other topics people want to bring up?
A: No? Amazing, that was, like, a super rapid, highly productive 27 minutes. So thank you, everyone. Let's continue on the issues, and there is an APAC version of this tomorrow if people want to use it for hands-on or more discussion, particularly on Tanka. Awesome, all right, enjoy the rest of your day, everyone. Thanks very much, speak soon.