From YouTube: 2022-06-01 GitLab.com k8s migration EMEA/AMER
B: I'll do it. Welcome to the June 1st EMEA Kubernetes demo meeting. Let's see, on today's agenda I wanted to discuss optimizing our zonal clusters, just a wee bit. I've been working on an issue that introduces some efficiency changes to how we run our node pools.
We've got the situation where, if a single cluster were to fail for any reason, I highly suspect that our additional zonal clusters would not be able to accept all the new traffic that comes to them.
There's one node pool that was missed during the initial evaluation, and the target of this issue was initially the IP address space associated with those clusters, because that was kind of problematic: we were running close to saturation, to the point where, come August I think, we would run out of available IPs.
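To make the saturation concern concrete, here is a rough sketch of the pod CIDR arithmetic, assuming GKE's default of a /24 per node; the /16 pod range in the example is hypothetical, not our actual allocation.

```python
# Rough pod-CIDR arithmetic (illustrative only; the prefix sizes below are hypothetical,
# not GitLab.com's actual allocations). In GKE, each node reserves a /24 slice of the
# cluster's pod secondary range by default (up to ~110 pods per node), so that range
# caps how many nodes all node pools combined can ever have.

def max_nodes(pod_range_prefix: int, per_node_prefix: int = 24) -> int:
    """Nodes supported by a pod secondary range of size /pod_range_prefix."""
    return 2 ** (per_node_prefix - pod_range_prefix)

if __name__ == "__main__":
    # A /16 pod range yields only 256 node slots shared across every node pool,
    # which is the kind of ceiling an overlooked node pool can quietly exhaust.
    print(max_nodes(pod_range_prefix=16))  # 256
```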
This is just a continuation of that work, so that we still have the ability to perform maintenance on targeted clusters in the case that we need to rebuild them. The next thing I'm going to start working on is evaluating where our current saturation levels sit. Since we don't have any metrics or anything related to this specific item, there's going to be a lot of math involved during this evaluation period.
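As an illustration of the kind of math involved, a minimal sketch follows, with made-up cluster and node counts, assuming traffic redistributes evenly when one zonal cluster drops out.

```python
# Zonal-failover headroom check (numbers are made up for illustration).
# If one of N zonal clusters fails, the surviving clusters must absorb its traffic.

def projected_utilization(clusters: int, nodes_per_cluster: int, avg_utilization: float) -> float:
    """Utilization each surviving cluster reaches after one cluster fails,
    assuming load spreads evenly across the remaining clusters."""
    total_load = clusters * nodes_per_cluster * avg_utilization
    surviving_capacity = (clusters - 1) * nodes_per_cluster
    return total_load / surviving_capacity

if __name__ == "__main__":
    # Three zonal clusters, 50 nodes each, averaging 70% utilization:
    # the two survivors would need to run at 105%, i.e. they cannot absorb the failover.
    print(f"{projected_utilization(3, 50, 0.70):.0%}")
```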
I wanted to share this issue specifically because I don't want to sit here and work in isolation. I did tag Graeme and Ahmad on this, and they have provided, well, at least Graeme has provided, a little bit of feedback. But I just wanted everyone on this call to be aware, such that if anyone else has any opinions or questions or thoughts about this, please take, you know, five minutes of your day, read through the issue, and let me know if you've got any thoughts or opinions about it.
I just want to make sure I'm not unintentionally withholding the information that I'm working on, because this is not very widely shared information, since this call is usually just the Delivery team and no one else usually shows up. So I wanted to advertise it, just in case anyone else in Infrastructure sees these recordings.
Remember what you... go ahead and verbalize it.
A: So I don't know if you've ever done something like this before, but would we one day want to manually bring down a cluster, or sort of disconnect a cluster, and test what actually happens?
B: Yes, the epic that this issue falls under is geared towards doing just that. The ultimate goal here: this started because of IP space saturation, and we knew we were going to run out of IP addresses. But, you know, one of the secondary goals was to make sure that during a disaster scenario, where we have a failed cluster, we still have the extra capacity in the other two zones to make up for the fact that we've got a failed cluster.
Time is just short on my side, but the next step will be to bring down a cluster in staging and test our capacity to rebuild a cluster without impacting the Delivery team or operations as a whole, and then, after that, we'll be doing the same in production. We already know that we need the ability to rebuild clusters as is, because there are certain configuration settings which we are not allowed to change on existing clusters, such as our IP address-based configuration and our network policy configuration.
There are a few other things, but having the ability to take down a cluster for maintenance is going to become more important as time passes. So that is currently our goal at the moment.
Currently, I think it's easier if we destroy the old cluster, and the reason why I say this is that our current tooling relies very heavily on the naming of our clusters. All of our repositories, all of our tooling, assume you've got a named cluster: we're going to call the cluster by its name, and our tooling is going to say, give me the information for this cluster, here's the name of that cluster.
I think in the future, so long as we can prove that we can at least build a cluster with minimal interruption, being able to create new clusters as replacements and turn down old ones would be a good future goal to have. But that's not currently my target at the moment.
C: I suspect that this can be done, but yeah, before deleting something for a test I would rather just create something else first, and then delete. Yes.
B: Yeah, because I know that specifically for object storage, those names stick around for a certain period of time, like you can't reuse the name, and I think databases are the same. So I think for testing the Kubernetes infrastructure, that's...
Cool. Does anyone else have any questions about our zonal clusters and my goals here before we move on? Excellent. Ahmad, let's talk about gitlab-sshd a little bit.
E: Sure. We intended to execute a change request today to roll out an increase of the traffic on Canary to 50 percent.
But Skarbek and I figured out that Canary cannot actually handle the traffic, due to the different node pool type and the count of nodes, or instances, that we have. So this needs to be fixed first, because it will also affect rolling out to production.
This will be fixed tomorrow, I hope. I will prepare the MRs, and I would like Skarbek to review them, but this should be changed tomorrow, and I changed the due date for increasing the traffic to tomorrow as well. Hopefully we can do both tomorrow, but yeah, fingers crossed. And for the IP allowlist, sorry, the proxy rollout test, we also executed this, I think yesterday, with Hendrick and Nicole.
B: The one thing I would recommend doing, if you can, before you end your day: I saw that you updated the issue about why we decided not to do the rollout today. Yeah, let's grab some details on how badly we are impacted by that, and maybe link to some of the configurations that you and I were looking at, that way people outside of just you and I understand how bad the impact actually is.
B
Others
have
an
understanding
of
what
we
are
battling
and
how
we
reach
that
conclusion.
So
they
understand
the
situation
and
the
reason
for
pausing
that
work.
I
think
that
would
just
be
useful
for
everyone
who
tries
to
come
by
and
looks
at
it
outside
of
this
meeting.
Cool, excellent. Vlad, let's talk about Camo proxy.
F: Hi Tim. So we managed to deploy to staging, and that went well. There were some minor complications, but the good part is that we have the steps that would apply to prod as well, and we managed to go around, or eliminate, that wrinkle that we observed, and I think Henry made a note in the document about it. We had to invalidate the Markdown cache, because when we did the switchover to user-content, the old images were still pointing to the previous backend.
So that's why we had to do that, and I guess we wanted to bring it to the attention of the entire team, to thoroughly understand what the implications are. Because, as Henry mentioned, there is some kind of, I mean, a pretty big load on the DB, because the caching is done in the DB.
D: I think the steps are too detailed to go through in detail here, but I think the important part is that VV found a way to do the switchover without any interruption.
But the strategy that we need to follow is to switch over to a temporary, different DNS name during the switchover, so that we have a working certificate, and later switch back to the original DNS name for the Camo proxy URLs that we use. And the way GitLab works for Markdown is that when Markdown is generated, when a view is created for the first time, it is cached in the database.
There's a call which handles the invalidation: it increases the Markdown version number in the database, and then, when the next request for some Markdown comes in, the cached copy is considered stale and is re-rendered.
But that means that once we invalidate that, for a few hours or days we will have a lot of database writes, because all Markdown will be updated when it is first viewed again. We had those cases a few times, I think, where we wondered whether we can do this, whether we should do this. We also had some mitigations around issues with Canary and the main stage having different versions, for instance, and...
...you know, high-traffic times. So this is the one complication here. I think it should work fine; when we last did it, as far as I could see in the issues, it didn't really have a very high impact on the primary. We could see a slight increase in writes, but nothing that was really dramatic, so we should easily be able to survive this. But still, it's hard to estimate and to test, so we should be careful and probably do it on the weekend, yeah.
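For reference, the cache bump Henry is describing is, as far as I understand, the local_markdown_version application setting, and a sketch of driving it through the GitLab REST API is below; the instance URL and token are placeholders, and the exact procedure used on GitLab.com may differ.

```python
# Sketch: invalidate GitLab's cached Markdown by bumping the local_markdown_version
# application setting via the REST API. Assumes an admin PRIVATE-TOKEN; the instance
# URL and token below are placeholders. Cached Markdown is then re-rendered (and
# re-written to the database) the next time each piece of content is viewed.
import requests

GITLAB_URL = "https://gitlab.example.com"      # placeholder
HEADERS = {"PRIVATE-TOKEN": "<admin-token>"}   # placeholder

settings = requests.get(f"{GITLAB_URL}/api/v4/application/settings", headers=HEADERS).json()
current = settings["local_markdown_version"]

requests.put(
    f"{GITLAB_URL}/api/v4/application/settings",
    headers=HEADERS,
    params={"local_markdown_version": current + 1},
)
print(f"Bumped local_markdown_version from {current} to {current + 1}")
```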
I think it's important to think about that, and that's why we wanted to mention it here, in case you have any ideas around this. Besides that, I think the steps are more or less sound. I will prepare a change request issue for gprd with the detailed steps for the switchover, and then we should be able to do this whenever we're ready for it.
B: And I saw that we had a security issue spun up for this work. I marked it as a blocker for moving forward. Was that the right thing to do, or is this something that has started being taken care of elsewhere?
F: Yeah, I think, I mean, I haven't looked into it too much, but I think a security policy should fit into this. Basically, we just need to make sure that the pods are not able to use internal IPs, so that no one can do any nefarious things with them.
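As an illustration of what such a policy could look like, here is a sketch of an egress NetworkPolicy that blocks the proxy pods from reaching private and link-local ranges; the namespace, labels, and CIDR list are hypothetical, and this is not necessarily the policy the team ended up with.

```python
# Sketch of an egress NetworkPolicy keeping proxy pods away from internal addresses.
# Namespace, pod labels, and CIDR list are hypothetical; adjust to the real deployment.
import yaml

network_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "camo-proxy-egress", "namespace": "camo"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "camo-proxy"}},
        "policyTypes": ["Egress"],
        "egress": [
            {
                "to": [
                    {
                        "ipBlock": {
                            "cidr": "0.0.0.0/0",
                            # Exclude private and link-local ranges (including the cloud
                            # metadata endpoint) so the proxy cannot fetch internal resources.
                            "except": [
                                "10.0.0.0/8",
                                "172.16.0.0/12",
                                "192.168.0.0/16",
                                "169.254.0.0/16",
                            ],
                        }
                    }
                ]
            }
        ],
    },
}

# Dump to YAML for review or to apply with kubectl.
print(yaml.safe_dump(network_policy, sort_keys=False))
```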
B: Okay. Michaela, I did want to thank you for taking notes. I saw that you were typing earlier, so I greatly appreciate that.
The thing I had was kind of an open question, and the reason for it is just that I had a long weekend, so I was thinking about all of the various things that we've got going on, whether scheduled or unscheduled. I'm kind of curious whether anyone has any specific things that bug them when it comes to working with, monitoring, or watching Kubernetes, or the deployments, or the configurations and stuff.

I know we've got a lot of open issues, but I'm curious if there's anything specific that anyone in here has at the top of their mind. I'm not looking to adjust priorities; I'm just kind of curious what others might be thinking about when it's like, hey, I run into this all the time and it's irritating me, why do we still deal with this kind of situation?
D: I think one of the biggest pain points, which will also keep increasing, is how we need to deal with mixing deployments with config changes, right? This is really painful in Kubernetes right now, and it can easily cause a lot of trouble.
B: So on that note, let's see. I don't know if you are part of this issue; I don't see you receiving notifications on it, so I'm going to link it here, because Graham is working on trying to figure that out. This is just one of many issues that fall under a target epic, which is trying to figure out what to do with our CI pipelines. Since you're about to leave us, Henry...
D: If that's the end, then I just want to say, because I think some of you will be seeing me for the last time here in this video, I just want to say thank you and goodbye. I really appreciated working with you, everyone. Give me a shout over Slack, probably, because I need to take care of, you know, organizing my off-boarding and stuff in flight, and I will not have any access anymore. It was really great to work with you, and I hope to see you again.