From YouTube: 2021-02-11 GitLab.com k8s migration EMEA
D
Yeah, exactly, exactly. Cool. So I invited Henry along, so he may drop in, but just to give him kind of a heads-up of what to start expecting from next week. I also succeeded in failing at calendars, and booked over the last 10 minutes of this meeting for our next meeting. So let's get started. Is there anything?
D
Open floor: is there anything? Even if not as a demo, Scott, you wanted to give a little overview of the API migration, and we can just check in. It's not so much, I guess, that there's a plan for this specific API that we need to work out, but how we go about it: a plan for the planning. But if you want to give us an update on where things are going and, yeah, the current...
A
State: the API deployment is inside of pre-prod, but not taking any traffic. I've got the nodes deployed for the node pool in staging, but I do not have the deployment deployed out there yet. My next plan at this moment in time is to create the deployment in staging.
A
Monitoring has a merge request open that I've got both jarv and Andrew assigned to as reviewers currently. Andrew, there's a CI failure that I don't understand, because when I do make generate it tells me to make those changes, but then CI says the complete opposite. So I don't know how to fix that, but we could address that in the merge request.
A
Drop a note about that and I'll have to check what I've got installed. So, monitoring has a merge request, and then logging is in place and working for pre-prod. So it's just a matter of: I need to check out what changes need to happen in HAProxy to support the API, because I imagine we've done some changes in HAProxy before, that jarv has done, so I imagine I'd probably have to copy and paste those changes over for the API work.
D
You cut out, sorry. Just say it again so the recording's a little bit healthier. So the question is: we should decide early about whether we're going to use the nginx ingress controller or not.
A
Yeah, I was kind of hoping we wouldn't, and we'd just utilize the same method of entry into Kubernetes, just like we do with our git services. jarv, I'm guessing you're asking this question for a specific reason?
B
When you talk about web and API, I think nginx is probably doing more, right? And I'm just worried that moving into Kubernetes is a big change, getting rid of nginx is also a big change, and I'm not sure if we should do them both at the same time. Okay. That said, I think doing what we did with git https, which is to move to Kubernetes first and then get rid of nginx, is kind of a pain in the ass.
B
So I don't know whether that's something... my concern as well is that our HAProxy configuration, or HAProxy cookbook, right now is not flexible enough to have multiple backends, or, no, sorry, multiple servers in the same backend with different ports.
B
The way I did the slow rollout for git https is that I made the chef configuration change, stopped chef on all the load balancers, and just rolled the change out to the load balancers. But if we want to have VMs and Kubernetes running side by side for an extended period of time, it's going to be very painful for us to have only some servers running the new configuration; it's just not really possible.
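For illustration, a minimal sketch of the kind of backend the HAProxy cookbook would need to be able to render to run VMs and Kubernetes side by side; the server names, addresses, ports, and weights here are hypothetical placeholders, not the production configuration:

    # Hypothetical haproxy.cfg fragment: one backend mixing the VM fleet
    # and a Kubernetes entry point on a different port, with weights to
    # shift traffic gradually during a slow rollout.
    backend api
        balance roundrobin
        # existing VM fleet on the standard port
        server api-01 10.0.0.11:443 weight 90 check
        server api-02 10.0.0.12:443 weight 90 check
        # Kubernetes ingress reached via a NodePort on a different port
        server k8s-api-01 10.0.1.11:31443 weight 10 check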
D
Before we dive too far into solving the possible problems for the API migration, could we also... given we've got Henry joining on Monday, to give him a kind of overview of these things, and also to share some of Jeff's knowledge, could we do this one a little bit more?
D
Do we have issues, like in a doc somewhere, just so that we've got a bit more oversight, so Henry can more easily catch up on some of this stuff, before we deep dive into specifically nginx or something like that? So, I guess: make a plan for how we do the plan.
A
Yeah, I could... I'll work on that. I'll utilize the epic for the API migration to hold that information.
D
Excellent, sounds good, great. So, yeah, Henry, hey, welcome. We were just talking a little bit about the API migration and spending a bit of time next week going through the plan for it, so you can get a bit of an overview of how the tasks link together.
D
Yes, sir, you're welcome. Cool, okay.
D
I'll hook up with you on that one to go back over it, so we can actually work out a kind of rough schedule for those things, so that it doesn't hold you up on this as well.
D
Nice, cool. Anything else on the API migration?
B
At least we can, yeah... this is just an FYI: we're waiting for this MR that will hopefully help with the client behavior on reconnect for WebSockets. Once this is merged, or once this is deployed to production, we can scale back the WebSockets fleet and hopefully remove some of the silences we have.
D
Sounds good. Do we have an issue or anything that sort of shows... like, do we know what we're looking for, for when this does get merged in? Like, how will we make that decision?
B
I think what we'll probably do is watch our error rates during deployments and see if we would have tripped our SLOs for WebSockets. I do have an issue open for this, which I'll link.
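For illustration, the kind of check this implies, as a PromQL sketch; the metric name, labels, and the 99.9% availability target are placeholders, not the actual WebSockets SLO definition:

    # Hypothetical 5xx error ratio for the websockets backend over a
    # deployment window, compared against a 99.9% availability target.
    sum(rate(http_requests_total{backend="websockets", code=~"5.."}[5m]))
      /
    sum(rate(http_requests_total{backend="websockets"}[5m]))
      > 0.001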
A
Actually, since we have a few minutes before our next meeting: Joe, we've got that issue open regarding the alerts for node pools running out of nodes, or the ability to scale, and we've created silences for it. And we don't have a way to create an appropriate alert today. Should we just get rid of that alert for the time being? Which I'm shy to do. Well, I don't know, what should we do with that?
A
The best that we could probably do right now is create a custom metric inside of Google, which they've got documentation for, but I've not yet explored it. Then we could hopefully get that exported via our stackdriver exporter to toss it into Prometheus, and then maybe we could create an appropriate alert per node pool. Right now, that alert was initially created when we were on a regional cluster, where we could max out the available nodes.
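For illustration, a sketch of what such a per-node-pool alert might look like once a custom metric lands in Prometheus; the metric names, labels, and 90% threshold are all hypothetical placeholders:

    # Hypothetical Prometheus alert rule, assuming a custom Google metric
    # has been scraped into Prometheus via the stackdriver exporter.
    - alert: NodePoolNearCapacity
      expr: |
        stackdriver_custom_node_pool_current_nodes
          / stackdriver_custom_node_pool_max_nodes > 0.9
      for: 15m
      labels:
        severity: warning
      annotations:
        summary: "Node pool {{ $labels.node_pool }} is above 90% of its node limit"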
B
What we could do is track the number of nodes over time and whether it's increasing, like, more than we expect, right? Like, we could set a threshold for the rate of increase, maybe. I don't know, there's probably something we could do.
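A minimal sketch of that rate-of-increase idea; kube_node_info comes from kube-state-metrics, but the one-hour window and the threshold of 10 nodes are arbitrary placeholders:

    # Alert if the cluster gained more than 10 nodes in the last hour,
    # using a subquery to take the delta of an instant node count.
    - alert: NodeCountGrowingFast
      expr: delta(count(kube_node_info)[1h:5m]) > 10
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: "Cluster node count grew by more than 10 in the last hour"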
C
Can I... like, I think I've discussed this with people in the past, but I think we should have a service... well, we've mentioned this many times: a service called kubernetes, like an actual GitLab service called kube or kubernetes or whatever, and these node pools are saturation resources, just another one of those kinds of saturation things. Obviously we have a problem with figuring out what the max is, but at the worst we could just hard-code it and then update it.
C
Every now and again, you know, as the first hack of that. Because the node pools don't really fit one-to-one with any of the other services, because they're just...
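As a sketch of that first hack, a saturation-style recording rule with a hard-coded maximum; it assumes nodes carry a node_pool label (for example, relabeled from the GKE node-pool label), and the max of 50 is a placeholder to be updated by hand:

    # Saturation ratio per node pool against a hand-maintained maximum.
    - record: node_pool:saturation:ratio
      expr: count by (node_pool) (kube_node_info) / 50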
D
So one thing I was going to just quickly check in on is blockers. We have three blockers, I believe, for the API.
D
I can try... I have got the annoying screen. In fact, now I'm not even gonna try, because I've been through the zoom screen-share thing so many times today already that it's killing me. I'll put... I'll link them in here, and so... where, I guess... hang on, let me stop.
A
So, regarding this first blocker: this was the overall issue to replace extail with the GitLab logger. Since we don't use Gitaly and Praefect in Kubernetes, we don't need to wait for those two issues, but I see gitlab-workhorse still has an open issue.
D
Great, sounds good. And then these two, I believe, both are... and then there's another, which I...
D
Yeah, I think we should definitely wait and see what this looks like on canary. Like, we don't have to not make these blockers; I just want to check that we are... in fact, I suppose, more significantly: are we currently aware of any additional new ones? I know we also have the blackout one, which I'll find, but is there anything else we've come across in the last week?
D
Good stuff, good stuff. So we're using the multi-large working group to help get some visibility on these. They've already kind of been made aware that, as we move through the API migration on canary, we may find some new stuff, we may find some stuff on production, and we'll bring those in. So that's all fine.
D
Fantastic. Is there anything else that people want to cover?
D
Nope? Okay, cool. So, would it be worth... Henry definitely, and anyone else who wants to join, just having a bit of a view on that? Maybe scope out, like, how do we sync up next week on the API stuff? And other stuff, but the API stuff makes sense.