From YouTube: 2021-03-04 GitLab.com k8s migration EMEA
C
The service migration status is as follows: we have the API service deployed in pre-prod and deployed in staging. It is taking traffic in pre-prod successfully with no errors, as far as I know; I've not looked at the errors or the logs recently. But pre-prod does not take any traffic otherwise, and we don't have anything synthetic happening there. So we'll see a deploy today, and we'll see QA run today due to the security release.
C
It is not taking traffic in staging. Currently there's an issue where we are missing a secret that needs to be populated into the Rails service so that the API knows how to authenticate appropriately to the container registry for events. This is a notifications event feature of the container registry itself.
C
We leveraged this capability for sending data to Sisense, but the API needs this secret so that we could have events coming in for Geo replication, since Geo is an enabled service inside of the staging environment. Because that configuration item is missing, Henry noticed in Sentry that there is a crap ton of errors, so currently the staging API in Kubernetes is disabled from taking traffic. We had that error message logged somewhere.
C
So let me see if I can find the error message, just so that we are all aware of where we are.
C
It was released relatively recently, considering the circumstances, and it works. Omnibus is configured to accept a token, but we're abusing that token unnecessarily; I'm not going to touch that portion. I thought this was a major configuration problem, but things work fine: Omnibus is handling this like it's supposed to, which is why we don't see this today in staging.
C
It only happens when the API starts taking traffic. Just to show you (this is nothing super exciting), what it really boils down to is that we're getting to this portion of our Rails code inside the API, and it's failing because this secret is completely missing. Omnibus will automatically fill this out when the gitlab.yml gets built, and subsequently gitlab-ctl builds the file, but nothing like this happens inside of our Helm chart today.
C
So that's my goal. In solving it, my initial MR got some feedback from Distribution that I need to tackle and that I'm trying to resolve, and then late yesterday I ran into another random issue. So I'm hoping to get this out there today; getting something reviewed today, at least, is my goal.
C
I don't really have anything to show, because until we re-enable the staging API in Kubernetes, I'm not really sure what error messages we're going to run into next at this moment in time. But this shouldn't preclude us, shouldn't stop us, from doing additional testing. So we have a few issues, but I guess we'll talk about that during discussion.
D
I just wanted to ask another question, if I may, Skarbek: I see the merge request was created two days ago, and you're getting reviews, you're getting movement, right? It's not like you're waiting for something.
C
Yeah, so far I've gotten some initial questions from Balu, and then I've been working with Jason the past few days on stuff that ends up blocking me, simply because of knowledge and experience of working with the Helm charts.
D
And it might be a good idea for you to start participating there as well, just to get familiarized with what's happening: not necessarily contributing code immediately, but just participating in there.
C
Yeah, this particular change just kind of sucks. Due to the nature of how the initial implementation was completed, it's becoming a little bit more difficult than it really needed to be, but that's outside of what we really care about as far as getting Henry involved. I've also got an issue that I pulled that is blocking Andrew from completing adding the necessary saturation metrics for various components.
C
I've got one final merge request I need to get done for that one, and I figured I could bring Henry into that one as well to hopefully wipe that out, because there's just one last piece: the web services chart is the last thing we need to touch, and then that one is done. It's just a matter of upgrading our chart and the necessary labels everywhere, and then Andrew could take it away with metrics. I'm hoping to get Henry involved in that; I just haven't had time.
C
So from my standpoint of things, I haven't really been doing a lot of work with the API, just because, if we're not taking any traffic, it's not really easy for me to test things until I feel comfortable with getting this issue out the door. But there are things that we could do in the meantime, and I'm wondering, Henry, if you want to take some of these items. I know you're working on the readiness review, and some of this work is directly related to helping you complete the readiness review.
A
Yeah, absolutely, I will try to be of more help. I was reading a lot, trying to understand a lot, this last week, so I didn't do that much here, but I think I can jump in now to spread the load a little bit, and you can focus on your charts, right? Yeah.
C
Yes, definitely. So, the vetting of the deployments: that's the second one there. The goal for that one is just to make sure that we're not going to have awkward problems like we did with the web services deployment process. So it's just a matter of running some sort of test, sending traffic to it, watching the deploy roll out, and making sure we're not going to have awkward issues.
C
There are a few more bits of detail inside the actual description of the issue. The first one listed is probably going to be the easiest one, because we already have the API deployed everywhere, and that's just reviewing our configurations. I did this for staging already (somehow I still missed this registry configuration item), but I have not done pre-prod. I still want to do pre-prod, just to make sure that we're not going to run into any other awkwardness later.
C
The third one will probably be a little more difficult, because you'd have to do a little bit of local configuration setup to send traffic through the NGINX ingress, since the API is not taking any traffic right now. So that one will take a little bit of work, and I'd be happy to work with Henry to see if we can figure out a good way to test that. But I think the first two are good ones that we could test quickly while we're still waiting on the fix for the API.
A
Yeah, sounds good. I'm wondering a bit if there would be some way to bring the Kubernetes config and the Omnibus config (the Chef config) into canonical forms that we could compare somehow, because manually comparing these configurations isn't really that easy, right?
C
I'd love a tool to do this. If I knew how to program better, I would probably suck everything into a huge YAML object and just do a nice diff on something like this, but it's kind of hard, because the configuration files are built significantly differently between the two systems.
C
Names are different, and the structure of certain items is slightly different, because (I forget which one does this) one of them will point to files to read, while the other one says: here's the actual content of that file.
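A minimal sketch of the canonicalize-and-diff idea discussed above. It assumes both configurations have already been exported to JSON (the file names `omnibus.json` and `chart.json` are hypothetical); sorting the keys gives a stable form that a plain diff can compare:

```shell
# Canonicalize a JSON config (sorted keys, fixed indentation) so that two
# differently-ordered exports can be compared with a plain diff.
canonical() {
  python3 -c 'import json, sys; print(json.dumps(json.load(sys.stdin), indent=2, sort_keys=True))' < "$1"
}

# Hypothetical usage: omnibus.json and chart.json are the two systems'
# configurations, already converted to JSON by some export step.
# diff <(canonical omnibus.json) <(canonical chart.json)
```

This only sidesteps key ordering; as noted above, the two systems also name and structure their keys differently, so a real comparison tool would additionally need a mapping layer between the two schemas.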
B
Is there anything we can do now? Some of these are going to be a little trickier to get started on, like vetting the API deployments: Henry's never done a deployment before, so that's probably not as trivial as it might seem, and we should think about that. But is there anything we can go through now, in the next 20 minutes or so?
D
Well, as Skarbek is setting this up: is there any reason why we shouldn't have Graham involved in these issues as well? I'm not expecting him to do work if he doesn't want to do work, but we could include him, or if nothing else mention him in the issue, that this is what's happening here. Because Henry is already going through onboarding anyway, why not do it in one go?
B
Yeah, that's true. Okay, that sounds good. We've also got another blocker issue which we can talk about, but it's another one that will probably be on our list of work. So that's another one, but yeah.
C
Yep, okay. So here I did the back-end method of hacking into our staging environment: here I'm just doing a port-forward to port 8181, which is where we accept traffic for the API service, and here I'm just doing a basic curl. Nothing super exciting.
C
But then what we could do is just run some traffic through it. My goal here is that we don't get 500-style messages. Getting 404s, I don't care about, because I'm hitting the blanket endpoint, where it's not useful for us. I care about the fact that I don't want to see 500 messages, 502s, and 503s. Okay, deploy. I think it's a rollout restart of gitlab-webservice.
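The pass/fail rule being described (404s are acceptable, 5xx are not) can be sketched as a small shell helper. The port-forward target and endpoint in the usage comments are assumptions based on the port 8181 mentioned earlier in the call, not the exact commands used on screen:

```shell
# A 5xx status (500/502/503, etc.) fails the rollout check; 404s from the
# blanket endpoint are expected and fine.
is_server_error() {
  case "$1" in
    5??) return 0 ;;
    *)   return 1 ;;
  esac
}

# Hypothetical usage against the port-forwarded API service:
# kubectl port-forward svc/gitlab-webservice-api 8181:8181 &
# while true; do
#   code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8181/)
#   is_server_error "$code" && echo "FAIL: got $code"
#   sleep 1
# done
```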
C
Api... I could have sworn it was.
D
Can we use logs to double-check that procedure?
C
Yeah, we could look at our logs. I'm going specifically through the service endpoint for this test, so we're not going to have anything for HAProxy, but we should see logs for Rails and so on.
A
I was wondering a bit how we need to set up all these different knobs for doing deployments: resource requests, timeouts, the target thresholds, average thresholds, and things like that. I mean, we have some settings which we've used since the beginning for websockets already. I think we didn't change much for the API right now, right? But also we didn't test much at big scale. So I'm wondering: do we have anything written down on why we set the settings like they are?
C
The other location where you could find certain values is to do a git blame, figure out when those values got populated, and look at the commit message associated with it. Jarv and I have both done a relatively decent job of documenting, either in the commit message or on the actual issue associated with that merge request, some of the details as to how we decided to calculate and set those values.
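A hedged sketch of the archaeology being described. The file name, key, and commit message are placeholders (set up here in a throwaway repo purely for illustration); the point is that `git log -S` and `git blame -L` surface the commit, and therefore the commit message, that introduced a given value:

```shell
# Throwaway demo repo -- values.yaml and its contents are placeholders.
repo=$(mktemp -d) && cd "$repo" && git init -q .
printf 'resources:\n  cpu: 500m\n' > values.yaml
git add values.yaml
git -c user.email=you@example.com -c user.name=you commit -q -m 'set API cpu request'

# Which commit introduced this value, and what did its message say?
git log -S 'cpu: 500m' --oneline -- values.yaml

# Blame the lines around the setting to get the commit directly:
git blame -L '/resources:/,+2' values.yaml
```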
A
Okay, I will check that. It's interesting, because we often set, say, an exact amount of millis of CPU resource, for instance, and maybe taking percentages would sometimes be better, because if we change the node type or have more CPUs and things like that, we need to adjust all of that, right? So yeah, that looks complicated.
C
And that's perfectly reasonable; it's what we've had to do in the past as well, and, you know, it's hard to compare our VM infrastructure directly to Kubernetes, just due to... oh, I see. This is not what I...
C
And we see, I think, what's happening in this particular case: because they're still in Terminating state, they're taking forever to leave. I think this is where we've got the blackout period set in a global fashion, so we're waiting like 240-something seconds before these pods finally get torn down. That needs to happen faster.
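The roughly 240-second wait can be confirmed from the pod spec itself. A sketch, with the deployment name as an assumption, using a tiny JSON extractor so the value can be pulled from `kubectl ... -o json` output:

```shell
# Pull terminationGracePeriodSeconds out of a deployment manifest (JSON on stdin).
grace_period() {
  python3 -c 'import json, sys
print(json.load(sys.stdin)["spec"]["template"]["spec"]["terminationGracePeriodSeconds"])'
}

# Hypothetical usage -- the deployment name is a guess:
# kubectl get deployment gitlab-webservice-api -o json | grace_period
```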
C
Because there was something still existing and taking traffic, we saw the new pod come in, and it looks like I'm still not getting any traffic despite that. Here, let me go back to my shell.
C
And Henry, just in case you are unaware of this: our console server is in the same network as our Kubernetes clusters. So that's why I'm able to hit the service IP address that's external to the cluster without jumping through any awkward hoops.
C
Well, I'm using sshuttle just so that I can connect to the cluster and initiate this rollout restart down here; that's the only reason why I use sshuttle. The console server, which is running the actual test here, is using the external IP address of that endpoint, which resides in the same network, the same VPC, that everything lives in. This prevents me from needing to do a port-forward, which is what I was doing earlier, and I wonder if that was causing me an issue.
C
While this test runs: Marin, I wonder what your knowledge is (or Andrew's, for that matter) on the API and it accepting file uploads from end users.
A
I experienced some of these incidents. I think there are, or at least there were, some problems with uploads, where Puma is trying to write those down to temporary file space, right, and...
A
I don't know if this is fixed (apparently not), but this is really a big issue if we don't have enough space in tmp, right? And sometimes people were uploading hundreds of megabytes or even bigger, and then we ran into this issue.
A
[inaudible]
A
I think, looking into the files, we found that there are some... I can't remember; if you look through the file and the content, you can figure out what it is, and I think in my case it was Puma workers crashing, and then they just leave their temporary files sitting there and they never vanish. Okay.
D
From the conception of the charts, this is how we were thinking about it (and I am almost ready to guarantee that nothing has changed since then): we went with the idea that any buffering that happens goes to a temporary location, and we were taking the best-case scenario that (a) tmp is large enough, and (b) tmp is cleaned automatically, frequently enough, for us to not care about the relatively small files. I think it was five gigs that we were talking about back then, so anything within five gigs goes into tmp.
C
Yeah, and yesterday the deployment had failed, and there were a few more, but, you know, this severely hinders these tiny disks.
C
We only run 20-gig disks on the API nodes, so, you know, five of those files really locks everything up, unfortunately, but yeah.
C
Yeah, so, Henry, I think this would be a good thing for us to capture in the readiness review, because we had to deal with this for Sidekiq as well (project exports being the primary case) where we need to write temporary data, sometimes a lot of it, to a temporary location.
A
I already made a to-do note for myself that this should be in the review I'm writing, because I saw the API issues there. But I mean, the underlying issue, that we shouldn't keep those files around, should be fixed somehow; since we can't be sure of that, we need to find a measure and a way to do this. Yeah.
C
So while this test was running, I'm guessing we performed a deploy, because I saw a few API pods come through with a different number. I did not validate that the deploy's going through, but if you'll notice, we only got 404s this entire time. So I think my original test was flawed. I'm looking at our logs and I'm not seeing any 500-class error messages at all, so I'm kind of happy with how this has turned out.
B
The blackout stuff: Jason has an MR which is almost ready, right? So we can do something with that.
C
So from the standpoint of interrupting users, a simple while loop with a curl proves there's nothing wrong. I'd probably try to redo this with bombardier as an extra validation step.
C
I think the console box has it installed in Jarv's user account, if you want to sudo into his account and use it there. Okay, nice. We allow that on our console boxes; Security could have a field day if they watch this video.
B
Cool, okay, sounds good. So do we have a plan, then? Does that help you out, Skarbek? You're going to keep going on the charts issue that you've got right now, and Henry, it sounds like you're going to go in with the API deployments first. Is that the one? See if you can wrap that up.
B
The 6-3, that's evaluating whether we still need to set the internet stuff; that might need a bit of detail. I see that we would want to test something, but I'm not sure what we're looking for. So it would be good to add a bit of detail into that description of what we're looking for. Okay.
C
Yeah, I'll try to find the old issue that led to this and I'll link it here, and I'll fill in the details; I'll populate it with data.
F
I just went down a little rabbit hole with that maximum file upload size while we were looking here, and the change request went in.
F
But what I didn't realize is that the Ruby side has to tell Workhorse what the maximum file upload size is for each request, and the only place where Rails does that at the moment is for artifacts. So we've got all this cool infrastructure, but we're only using it for artifacts and not all the other things. So there probably needs to be some work on that. It's probably not even a big change, because it just has to say: don't give me anything bigger than a gigabyte, or whatever.
F
No, so what happens is: Workhorse receives a request, and it goes to Rails and says, hey, I've got this request and I'm going to do something with it. But I suppose these are all ones where Workhorse is uploading it to object storage directly, and it's kind of circumventing Rails; so Rails does the auth, and then it says, okay, upload it, but don't let it be bigger than X.
D
Artifacts are being handled by Workhorse, so you have that option, and LFS and a couple of others. So I think it's the things that have the option of direct upload. If you want a fun conversation, start it up with Alessio.
D
I heard that last time, so he will tell you exactly which ones have it. Actually, I'll just link to the blueprint, the object storage blueprint, so you can read up on the differences there.
C
Is that just setting... well, I guess maybe you'll discover this when you find it, but if it's just setting the location where files get uploaded, I care less about that, and I care more about: can we skip writing anything to disk, if possible? I imagine they'll probably just use RAM at that point; maybe that's just as bad, though.
B
Cool, well, we're making super progress. The blockers are all moving along nicely as well, so great progress there. Do we need to have the readiness review ready before we can move to canary?
B
So yeah, my opinion: just start, then, if you're happy with everything else. On readiness: would we normally do the readiness review for canary?
A
I mean, the readiness review should be done before we deploy anything, ideally, I think, because it's also about how we do things, right? But, I mean, normally we don't do this in most cases, right? We start implementing and then we write the reviews, in most cases. So my goal was to get something out for review by the end of this week at least, and I'll try to hit this goal so that there's at least something to let others look over. I mean, finishing...
B
Awesome, good stuff. All right, anything else we want to cover?