From YouTube: 2022-05-25 GitLab.com k8s migration EMEA/AMER
B: Good morning, everyone. Good morning, good morning. Welcome to the Kubernetes demo for May 25th. I don't have anything to demo myself before we get into discussion items. Henry, I see you've got a discussion item; do you have anything you want to demo first? Otherwise I'll just start the discussion.
B: Okay, all right, so I'll get us started: gitlab-sshd.
B: Ahmad has created a new change request, so we have a desire to roll this out later today. We are waiting for a specific merge request which contains some adjustments to the metrics that are being gathered, the ones that contribute to the Apdex alerting. Should that go well, hopefully we'll be in a place where we can keep the change in place; if not, we'll roll it back again and, you know, do a take two.
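For context on the Apdex alerting mentioned above: an Apdex-style score is typically derived from latency histogram buckets, counting fast requests as satisfactory and moderately fast ones as tolerable. A minimal Prometheus recording-rule sketch follows; the metric name, job, and thresholds are hypothetical, not taken from the actual MR.

```yaml
# Illustrative Prometheus recording rule; metric names and thresholds
# are hypothetical placeholders, not the ones from the MR under discussion.
groups:
  - name: gitlab-sshd-apdex
    rules:
      # Apdex over 5m: requests under 0.25s count fully, requests under 1s
      # count half, divided by the total request rate.
      - record: job:gitlab_sshd_apdex:ratio_rate5m
        expr: |
          (
            sum(rate(gitlab_sshd_request_duration_seconds_bucket{le="0.25"}[5m]))
            + sum(rate(gitlab_sshd_request_duration_seconds_bucket{le="1"}[5m]))
          ) / 2
          / sum(rate(gitlab_sshd_request_duration_seconds_count[5m]))
```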
B: I think this kind of highlights a situation where I'm a little frustrated that we are the gatekeepers for this style of configuration change, because this will be take, I think, eight on enabling this in canary, which is a little infuriating. But sorting that kind of situation out would require a lot more effort elsewhere within the organization to figure out what we could do about that kind of thing.
B
But
outside
of
that,
the
other
thing
that
we'll
also
be
doing
shortly
is
creating
a
test
procedure
for
rolling
out
the
actual
proxy
protocol
change.
So
this
would
be
step
two
towards
allowing
the
ip
allow
listing
change
for
this
feature
set
or
for
this
rapid
action
group
related
to
gitlab
sshd.
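To make the shape of the change concrete: in the GitLab Helm chart, switching the SSH daemon and accepting proxy protocol headers is configured roughly as below. This is a sketch under the assumption that the standard chart keys are being used; the actual MR being waited on was not shown in the meeting.

```yaml
# values.yaml fragment (GitLab Helm chart) -- a sketch, not the actual change.
gitlab:
  gitlab-shell:
    # Use the built-in Go SSH daemon instead of OpenSSH.
    sshDaemon: gitlab-sshd
    config:
      # Accept PROXY protocol headers from the load balancer so that real
      # client IPs are preserved, which is what enables IP allow-listing.
      proxyProtocol: true
      proxyPolicy: "use"
```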
B: The first thing that we need to do, however, is reset staging to make it look like what production would look like if gitlab-sshd were fully rolled out, and then actually test the procedure of rolling it out and rolling it back, just to make sure that we understand the implications and that we don't cause any outages as we perform this rollout.
B: There is chatter about potentially testing the use of OpenSSH instead of gitlab-sshd, just in case some sort of significant issue is found after the rollout has been completed.
B
Currently
that's
still
in
discussion
and
I'm
not
actually
sure
where
you
are
with
opensshd
being
compatible
with
the
proxy
protocol.
So
there's
a
little
bit
more
work.
That
needs
to
be
done
that
front
as
far
as
just
discovery
and
then
finding
a
place
to
perform
that
necessary
testing.
So
that's
still
a
conversation
at
the
moment.
B: Yes, so the rapid action... not the rapid action one, but there's another issue that's tied to the rapid action and contains a daily update. That one has a link to our failed CR, and, Ahmad, if we haven't done so already, maybe we should update that issue with a link to the new CR that we plan on testing later today. But as far as testing the procedure for the proxy protocol goes, there's a dedicated issue for that particular line item. In fact, I...
D: And it might be worth, if you haven't already, trying to create a template for this CR, because you're probably going to be creating a few; just to save ourselves some admin.
C: Switching over to Camo proxy on Kubernetes: I want to give an update on where we are with it and also, as the last point, talk about how we plan to switch over, because this has changed slightly.
C
So
I
wanted
to
get
your
opinion
on
that.
But
first,
let's
say
where
we
are
so
we
have
a
camo
proxy
and
creators
working
and
pre,
so
that's
nice
and
done.
But
that
was
easy
because
there
was
no
camel
proxy
before
and
we
just
needed
to
install
it
and
done
no
switch
over
procedure
needed.
We
also
have
a
deployment
in
g-staging
right
now,
which
was
easy
to
create,
but
not
taking
traffic.
C: We did some load testing that showed we have plenty of capacity with just the minimum of three pods. We normally get below 20 requests per second in gprd right now, and with my slow DSL connection I was able to run 60 requests per second for some small images, and the pods didn't really show any big CPU usage. So I guess we could go way higher with just a small number of pods, and I think, with scaling and everything, we should be fine.
C
One
interesting
note
here
is
that
vlad
worked
on
also
scaling
on
custom
metrics
based
on
the
camo
proxy
responses.
So
this
is
even
built
into
the
chart
already
and
could
be
used,
but
we
decided
to
not
work
with
that
yet
because
that
also
needs
a
depend
as
a
dependency
to
have
a
prometheus
plugin
to
be
installed
in
our
clusters
and
which
maybe
is
even
done
already
and
also
that
needs
some.
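As an illustration of the custom-metrics scaling being described: once a Prometheus adapter exposes a per-pod metric to the custom metrics API, a HorizontalPodAutoscaler can target it. A minimal sketch follows; the metric name, target value, and replica bounds are hypothetical.

```yaml
# Illustrative HPA on a custom metric; names and numbers are hypothetical.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: camo-proxy
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: camo-proxy
  minReplicas: 3
  maxReplicas: 20
  metrics:
    # Per-pod metric served via a Prometheus adapter: scale out when the
    # average response rate per pod exceeds 30/s.
    - type: Pods
      pods:
        metric:
          name: camo_proxy_responses_per_second
        target:
          type: AverageValue
          averageValue: "30"
```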
C: We'd need to play around with what the right value to scale on is. And if we just scale on responses, then we might miss open connections, because we might have a lot of requests but no responses coming from them. So this is an interesting thing to look into, maybe for later, because we wanted to play with custom metrics for scaling other deployments as well; maybe we can follow up on this later.
C: Coming to the switchover that we plan to do: our initial plan was to just change the DNS entry for the user-content domain name to point it at the new IP. But for that we would first need to create the managed certificate in Kubernetes, so that TLS connections are, you know, certified with it. The problem is, we can only create such a managed cert once the DNS entry is already pointing to the new IP.
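For reference, the managed certificate in question would be declared as a GKE ManagedCertificate resource, roughly as below; the domains are hypothetical placeholders. This also shows why the ordering matters: Google only provisions the certificate once DNS for the listed names resolves to the load balancer IP.

```yaml
# Illustrative GKE ManagedCertificate; domain names are hypothetical.
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: user-content-cert
spec:
  domains:
    # Old and new hostnames can both be listed during the switchover;
    # the old one gets removed once traffic has moved.
    - user-content.example.com
    - user-content2.example.com
```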
C: So the changed plan is to use a new URL instead, by just changing the config in GitLab to point the asset proxy at this new DNS entry. This requires the pods to be restarted, but that's not a problem.
C: So this is the plan right now. After we have done the switchover, we can then change the certificate to no longer contain the old entry, delete the old DNS entry, and retire the old VM infrastructure. The first question I have here is: is there any problem in using a different URL, like user-content2, for static content in Camo proxy?
C
Like
do
we
have
anything
which
is
you
know,
valuing
that
we
are
using
specific,
especially
this
url
for
static
content,
and
the
second
issue
I
still
need
to
find
out
by
testing
is:
if
we
change
the
managed
certificate
to
not
contain
the
old
dns
entry
anymore,
if
that
would
cause
any
kind
of
blip
where
we
for
why?
I
don't
have
you
know
a
certificate
working
for
this
entry
anymore,
but
I
think
that's
not
the
case,
but
we
need
to
test
this.
B
So
for
your
first
question,
we
would
need
to
make
sure
that
the
content
security
policy,
for
you
know
staging
or
you
know,
dot
com
when
this
hits
production
isn't
blocking
requests
to
this
domain,
because
that
would
be
bad.
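The content security policy being referred to lives in GitLab's application configuration; as a rough illustration of the check that's needed, the relevant gitlab.yml section looks something like the following, with the domain as a hypothetical placeholder.

```yaml
# gitlab.yml fragment -- illustrative; the domain is a hypothetical placeholder.
production:
  gitlab:
    content_security_policy:
      enabled: true
      report_only: false
      directives:
        # The new user-content host must be an allowed image source,
        # otherwise browsers will block Camo-proxied images.
        img_src: "'self' data: https://user-content2.example.com"
```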
C: Do we have any CSP set for the…
C: …listed in the managed certificate, whether that causes any kind of interruption while Google is switching the served certificate. I guess not, but we need to test this in staging to really show that it's not happening.
C: Same here, but let's test; that's the only thing.
C: Yeah, so right now I'm waiting for the MR to be approved so that I can create the new DNS entry in staging, and after that we can make a small Kubernetes change to let the managed certificate be created for that entry.
D: One thing I mentioned to Vlad already, and the same goes for you, Henry: in terms of what we launch with on prod, go big. Go probably quite a bit bigger than all the numbers say, because all of our experience in the past has shown us that some things behave very differently on Kubernetes. It's fine to go large and then, some hours later, if we don't need the capacity, we can just scale it back down again.
A: By the way, our load testing has shown that, right now, with three pods, we are more than able to handle three times the highest rate that we've seen on gitlab.com. But still, yeah, we get your point, Amy: better to go big.
D
I
want
to
say
actually
well
done,
like
great
progress
like
I
think.
Camo
has
been
kind
of
quietly
ticking
along,
so
awesome
that
we
have
pre
up
and
running,
and
that's
all
going
well
fantastic
that
we're
like
heading
into
staging
and
also
talking
about
prod,
so
yeah
well
done
for
all
of
you.
Who've
been
involved
in
this
like
really
great
progress.
C: Yeah, it was awesome how Vlad was exploring a lot of different things, like network endpoint groups and working with custom metrics, so I think we can get some value out of that later.
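On the network endpoint groups mentioned here: in GKE these are usually enabled with an annotation on the Service, so the load balancer targets pod IPs directly instead of going through node ports. A minimal sketch, with hypothetical names and ports:

```yaml
# Illustrative Service with NEGs enabled; name and ports are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: camo-proxy
  annotations:
    # Ask GKE to create network endpoint groups so the HTTP(S) load
    # balancer routes straight to pod IPs.
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: camo-proxy
  ports:
    - name: http
      port: 80
      targetPort: 8081
```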
E: …I had a question. I was thinking about blue-green deployments after the last meeting. In blue-green deployments you have a new deployment to prod, which is the blue deployment, and then you switch over (or the blue is the old one, I get the colors confused), but you then switch over to the new deployment.
E
So
I'm
wondering
how
do
we
do
it
now?
How
do
we
switch
over
to
the
new
version
without
you
know,
causing
inconvenience
to
any
customers
who
are
currently
connected.
E
Yeah,
I
was
thinking
more
about
gitlab,
but
yeah
in
general.
Also.
C: Yeah, I mean, in Kubernetes this normally happens by fine-tuning how many new pods are created first while, at the same time, old pods are destroyed. When pods get destroyed, there is a certain grace period where you still let them run, to maybe let them finish any requests which are still open, so there are some tunables there. Normally you signal that you want to shut down some pods.
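What's being described is Kubernetes' rolling-update strategy combined with the pod termination grace period. A minimal Deployment sketch with the relevant tunables follows; the names, image, and values are hypothetical.

```yaml
# Illustrative Deployment; names, image, and values are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: camo-proxy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: camo-proxy
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # bring up one new pod before removing an old one
      maxUnavailable: 0  # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: camo-proxy
    spec:
      # After SIGTERM, in-flight requests get up to 60s to drain before
      # the pod is force-killed.
      terminationGracePeriodSeconds: 60
      containers:
        - name: camo
          image: example/camo:latest
```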
C: Do you have a good example of something which runs very long? Some SSH sessions, maybe? Then the runners; runners are long-running, yeah. In that case you might even have to break connections after a while, because at some point you need to finish and terminate. But for most of the traffic, it's not visible to our customers.