From YouTube: 2022-04-27 GitLab.com k8s migration EMEA/AMER
C: I'm doing well. Glad to see such a high attendance today.
B: Hey, just to be clear, we're not that much help for your agenda, though, right?
C: If I recall correctly, we left it in place with canary taking a low amount of traffic during the second attempt last week. So we made a change to HAProxy's load-balancing mechanism: GitLab Shell was a special use case where the load-balancing algorithm was slightly different, so instead of round robin, which we commonly use elsewhere, this was set up with leastconn, or least connections.
C: So any new connection was sent to canary, therefore saturating the Git backend. With that, we made a change to make it round robin; that way it matches everything else, and the weights are better honored, if that's the appropriate mechanism HAProxy uses for determining weight. Then, during the second run, we effectively left it where canary was taking a smaller share of traffic, and if I recall correctly that was rolled back as well; it was rolled back because GitLab...
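
(For reference, a minimal sketch of the HAProxy change described above, assuming a typical canary backend layout; the backend name, server names, addresses, and weights are illustrative, not taken from the real configuration.)

    # Before (per the discussion): least-connections steered new SSH
    # connections toward the canary servers and saturated the Git backend.
    backend gitlab_shell
        balance leastconn
        server git-main-01   10.0.0.1:2222 check weight 100
        server git-canary-01 10.0.1.1:2222 check weight 2

    # After: round robin, matching the other backends, so the server
    # weights are honored and canary takes only a small share of traffic.
    backend gitlab_shell
        balance roundrobin
        server git-main-01   10.0.0.1:2222 check weight 100
        server git-canary-01 10.0.1.1:2222 check weight 2
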
B: Do you know, do we have any other options for doing more thorough testing on staging? We've had three rollout attempts on production, and each has failed in a different way. I think they've all been handled really well, and they were all reasonable failures, but I'd like to hope this is all of them. Is there actually a better way for us to test this on staging?
B: Remind me, what was the first problem, the...
C: ...piece during the first attempt a couple of months ago in the first place, so...
C: ...test that out fully, but I do think that our testing pro-, or excuse me, the rollout procedure, is more refined, and we're also going to do it a little bit more slowly. I've prepared two change requests: the first one is going to roll out to canary, and then we're going to let it sit for a little bit, probably a day, and then roll out to the rest of the clusters afterwards.
C: So we're going to have a slower rollout intentionally, but the rollback procedure won't change.
B
It
makes
sense
yeah,
I'm
happy,
I'm
super
happy
with
the
rollout
and
the
you
know:
detection
and
recovery
and
stuff
like
we
do.
You
know
that
stuff
we're
all
handing
brilliantly
it's
really
just
if
we
could
avoid
some
of
the
overhead
of
us
having
to
continually
open
crs,
stop
deployments
roll
out
incidents.
You
know,
there's
a
lot
of
overhead
of
us
just
sort
of
having
to
keep
on
trying
on
this.
C: ...figure out; maybe additional load testing needs to occur to find this, but I suspect it's either going to be something that's special to production, or it's just a new use case that wasn't hit before, something new that was maybe introduced, because it has been over a month since the last time we tried to roll this out. So it could just be that GitLab Shell has progressed and now there's a new failure scenario that was not previously known. So I'll...
D: One of the things that we noticed right before rolling out was that the scope of the change was pretty broad: it was going to ship the whole thing to all environments over an eight-hour time frame. I pushed back on that to do only canary and have some baking time in between. And once it was on canary, we discovered a complete lack of metrics: one of the metrics had been renamed, and the latency buckets weren't aligned with the Apdex.
D: There wasn't an error rate SLI defined, so a lot of stuff was missing there, and the dev team is working on fixing all of those. That's one more reason why we drained canary. And yeah, as you mentioned, that error message, which I guess was the timeout, the context timeout exceeded or whatever it is: had we had an error rate SLI, we would have seen that too. But I think we fixed a lot of the previous round of issues.
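
(For reference, a sketch of what the missing error rate SLI could look like as Prometheus recording rules; the metric and label names here are hypothetical, not the actual gitlab-shell metrics.)

    groups:
      - name: gitlab-shell-sli
        rules:
          # Rate of failed requests; metric/label names are illustrative.
          - record: sli:gitlab_shell_errors:rate5m
            expr: sum(rate(gitlab_shell_requests_total{status="error"}[5m]))
          # Rate of all requests.
          - record: sli:gitlab_shell_requests:rate5m
            expr: sum(rate(gitlab_shell_requests_total[5m]))
          # Error ratio for dashboards and alerts; an SLI like this would
          # have surfaced the timeout errors mentioned above.
          - record: sli:gitlab_shell_error_ratio:rate5m
            expr: sli:gitlab_shell_errors:rate5m / sli:gitlab_shell_requests:rate5m
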
D: So all of the HAProxy stuff is resolved; not sure if that was covered at all.
B: Renamed? Is that something that the Source Code stage group are able to do? Great, okay, good. I'm sure they're loving this project rollout. I hope we didn't ever promise, am I fair? I have a bad feeling, Igor. I don't even remember the kickoff we had last October, but I'm fairly sure we told them, oh, this is pretty simple, don't worry. I feel like we should not say that to people in the future.
C: Any further questions before the next one?
C: Okay, so now that I'm no longer a release manager (I'm still kind of busy at the moment), I figured I would start picking up some of the Redis migration work, and I'm glad Igor is here. I'm kind of curious where we currently stand, as I haven't looked at any epic or issue aside from what you showcased in the last demo, so I'm wondering where I should start picking up. I assume, Amy, this is where you probably want me: back on the Redis stuff.
B: Well, it's a great question. One thing as we go into Q2, a question for you, Igor, will be around how much support Delivery needs to provide for you to meet whatever the goal is for Q2. So assuming there is one, then for you, Skarbek, the only things you're really balancing against are issues on the board that we do need to do.
B: I think they're all smaller things, so we should drop those in: those corrective actions and a few labeling things, but I don't think there's anything significant aside from that. We will certainly have some bigger things that we need to figure out as we go into Q2 to set up the work, but around that stuff, yeah, absolutely, you probably do have some time if you want to also pick up some Redis stuff.
D: Yeah, so I'd say we're currently blocked waiting on the Omnibus change to go through, hopefully today; it's waiting on final approval. Once that's out of the way, we can start rolling that out to pre and then all the other environments. So...
D: Maybe Ahmad can handle that on his own. I certainly trust him to take care of that.
D: So if you want to focus your efforts elsewhere, Skarbek, for maybe the next two weeks or so, I think that might make more sense. Once we've got that config change out, we'll have hostnames enabled everywhere, and then we get to move on to the next phase, which is going to be actually rolling out to pre, and I think that's where your help could come in handy again.
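
(The transcript doesn't name the exact setting, but assuming "hostnames enabled everywhere" refers to Redis Sentinel's hostname support, available since Redis 6.2, the relevant directives look roughly like this; the master name and host names are illustrative.)

    # sentinel.conf: let Sentinel resolve and announce host names
    # instead of raw IPs (Redis 6.2+).
    sentinel resolve-hostnames yes
    sentinel announce-hostnames yes
    sentinel monitor gitlab-redis redis-01.internal.example 6379 2

    # redis.conf on each replica, so failover announces a name, not an IP.
    replica-announce-ip redis-02.internal.example
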
B: It could also be worth checking with Ahmad about, I mean, depending on when sshd comes back in, because he is on the Redis project, but he's also working...
B: Do you already know, Igor, what do you think? Firstly, I guess, is there a Q2 OKR for the Redis migration, and if so, what's the focus?
B: It sort of does; it certainly fits with our strategy. So yeah, on our strategy page: migrating three additional services into Kubernetes this year. I'm counting rate limiting as one, I'm counting Camo proxy as one, and then we have a slot. Maybe it's another Redis instance, or something else; Gitaly also could be a possibility later this year if we want to go there. There would be firm, great benefits to having Gitaly done.
C: Okay, so there's Todd. Okay, so, Igor, you and I had a conversation about Gitaly one day, and you had some opinions. I'm curious if you want to share those to get this conversation started.
B: The software engineer in test, John, who works on Gitaly, has done some testing, and his view from what he's tested so far is that it should be possible. However, there are a lot of unknowns, and I think the unknowns we have are very similar to this project's: the unknowns are around performance and data, and how would it actually behave?
B: I think we're probably not that far away; when I say that, I mean probably some months, but we're not that far from getting to the point where trying this on pre, and giving John a proper environment where he can properly test, will answer the next round of questions. I think we're running out of ways to really pinpoint what is going to be the big problem if we did this. Gitaly Cluster is definitely out; the Gitaly team don't want that on Kubernetes, that doesn't make sense. For the Gitaly service, though...
B: So there are definitely some benefits; lots of users want to be able to run it in this way. But what we can't do right now: there is nobody who's put together a full benefits-and-risks picture, and actually, you know...
B: Well, that's what we need to test; that's what we can do, right. So I think there is a known use case, and from what we've seen and tested so far, nobody's been able to demonstrate a problem.
B: Even though lots of people have concerns. So I think what that's saying is we don't have a good enough test environment to properly stand this up and properly load test it. I think we're getting to the point where we've, not yet, but we're quite close to having answered as many questions as we can. We haven't yet got to the point where we go:
B: "Oh, this is how it performs, and these are the limitations," and at that point we go, okay. So I think getting it stood up on pre, which we can then hand back to John, who will then have a proper environment he can load test on, will let us answer the questions around: how will this perform, and what will be the risks?
B: Yeah, exactly right, and we will see what happens there. So I think we probably will want to get this testable, basically, as the next step.
D: One question that I had: you mentioned we don't want to do this with Gitaly Cluster.
B: Yes, let me ping you on the comments. So yes, from the Gitaly team: following the issues they had to deal with last year, they are focused on improving Gitaly Cluster as a product and getting that fully functional, and once they've completed that, I think they have a reasonable roadmap. So they don't want to change anything right now; once they...
B: So once that has been solved, then they will review whether they actually want to be able to offer that on Kubernetes.
B: I mean, I think that's kind of a separate conversation, but at the moment we don't know if it's possible, right, so we're not planning to move to Gitaly Cluster right now. But I think the Gitaly roadmap will affect some of this as well.
A: Yeah, but I mean, if a user asks to run Gitaly on Kubernetes, it kind of implies that they expect it to run in a Kubernetes way: it's fault tolerant, it can be rescheduled, and it kind of implies it is HA-capable. But if we are just allowing them to run that thing on Kubernetes as if it's a VM, then there is a mismatched expectation, right, because you can run Postgres on Kubernetes, and then the pods get rescheduled and you lose your data.
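
(To make A's point concrete: Kubernetes can keep state across rescheduling, but only if the workload is set up for it, for example a StatefulSet with a per-pod volume claim as in this illustrative sketch; running a stateful service as if it were a VM, with no persistent volume or replication, is where the mismatched expectation bites. All names and values here are hypothetical.)

    # Illustrative only: a StatefulSet gives each pod a PersistentVolumeClaim,
    # so a rescheduled pod reattaches its data instead of losing it.
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: example-db
    spec:
      serviceName: example-db
      replicas: 1
      selector:
        matchLabels:
          app: example-db
      template:
        metadata:
          labels:
            app: example-db
        spec:
          containers:
            - name: db
              image: postgres:14
              env:
                - name: POSTGRES_PASSWORD
                  value: example  # illustrative; use a Secret in practice
              volumeMounts:
                - name: data
                  mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:
        - metadata:
            name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 10Gi
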
D: This poses the greatest risk to our availability architecturally right now, and what kind of worries me about this plan is that if we're moving towards a design that explicitly does not support Gitaly Cluster, we're kind of closing off that path. That's basically why I'm making a little bit more noise about it, but I'll leave it at that.
B
We
is,
is
there
a
plan
like
is,
is
there
a?
Is
there
already
a
plan
in
place
to
move
dot
com
to
get
a
cluster.
C: It certainly adds color to this conversation. Amy, if you would be so kind as to drop in the testing issue that John...
B: I've linked as well to the comment that Mark, the Gitaly product manager, left.
B: So there is a point about halfway through the issue where it suddenly switches from Gitaly Cluster to Gitaly the service, but everything is just called Gitaly. Have a look through: John has done quite a lot of testing. He did quite a lot originally on the cluster and found some issues, and then, on the service, he did some extra testing and was like, actually, from what I've seen...
B
It
seems
to
work
so
have
a
look
through
there
and
see
if
there's
any
basically
got
to
have
got
to
the
point
where
I
mean
we're
clearly
not
in
a
position
to
take
on
any
new
migration
stuff
like
we
have
plenty
of
work
to
do
to
get
through
camo
and
to
get
to
ready.
So
that's
fine,
I
don't
think
john
has
any
more
scenarios
that
he
knows
of
to
test.
So
if
you
have
suggestions,
then
please
add
them,
and
then
I
think
what
we,
what
we
don't
know
is.
B
Whether
like,
where
this
sits
in
the
gitly
road
map
and
from
a
product
point
of
view,
whether
you
know
how
much
they
want
to,
how
much
time
are
they,
but
so
say,
assuming
we
stood
this
up
somewhere,
did
a
load
of
testing
and
found
problems
on
gitly.
I
don't
know
how
much
they
would
want
to
spend
time
having
to
fix
that
and
take
away
from
other
priorities.
So
that's
the
other
sort
of
product
side
as
well,
but
yeah
have
a
look
through
that
issue.
B
That
certainly
john
did
quite
a
lot
of
testing
and
and
tried
to
reproduce
sort
of
those
sorts
of
problems,
and
I
I
don't
believe
he
hit
any.
C: Excellent. Is there anything else that anyone wants to chat about today?
C: We're still in the planning phases at the moment. I'm currently working with Amy to refine the necessary issues and epics, and then we'll be working to prioritize that for the next quarter.
B: Cool, thanks. The additional piece we're working on: we've got an issue with the proposed team split, and going forward, Delivery will hopefully be able to focus more on how the application runs on the cluster. So for these sorts of things, the question is where this fits between Reliability and Delivery.
B
So
I
think
in
the
immediate,
we'll
probably
look
at
doing
some
of
the
like
short-term
solutions
to
try
and
get
us
out
of
the
immediate
danger
zone,
and
then
maybe
there's
phases
of
like
you
know,
can
we
just
drain
the
cluster
and
get
the
more
ip
addresses
that
we
need
and
then
is
there
another
step
to
like
improve?
How
we
do
that
and
then
a
kind
of
later
step
to
automate
or
something
like
that.
B: But I think we'll have to do something this quarter to get some IPs, so assume that will get done one way or another. Then, longer term, I think automated cluster recreation, later in this year, by either Delivery or Reliability.
C: Excellent. Well, thank you all for showing up today. I greatly appreciate it. Enjoy the rest of your day.