From YouTube: 2021-05-27 GitLab.com k8s migration APAC
B
Not really, I think. Just going through the blockers, and if there are any points that need to be discussed from the Europe/Americas one, that's probably enough content, I think.
C
Sure. I don't think there was too much from the Europe/Americas call. We were talking, well, Jarv may have a better update, but Scarbeck is working on fine-tuning. Is there anything, Jarv, that you think we should either expect Scarbeck to need us to jump in on, or, the other question is, is there anything that as a team we should be thinking about more? Like future-y, you know, bigger scale?
D
I don't think so. I haven't caught up with, well, I didn't catch up with Scarbeck at the end of his day yesterday, so I don't know where we are, and I haven't really caught up this morning, so I'll take a look to see what's going on. I guess on the issue for fine-tuning I'll see if there are any updates.
C
Because the main thing I think we were talking about yesterday, for you Graham, was just that it's a little bit slow, because Scott was saying: you make a change, wait and see what happens, follow up. So it could be that, yeah, it's sort of helpful to have another hand on it, but I don't think it was especially a problem. It was more just that the process is a little bit longer.
D
So the last change that I see changes the memory utilization percentage for nginx: it changes it from 70 to 75 percent. Initially we had that at 50, and we were just seeing a large number of pods, because nginx appears to be memory-bound and it would get to be over 50 percent, so the scaler would just create more pods, and we got up to the max number of pods even though we weren't saturating the nodes.
D
So that's up to 75, and the resource requests for memory for nginx also went up to a gigabyte. I think previously it was set to 500 megabytes, so we're doubling the memory requests and then increasing the memory utilization percentage. What we hope will happen after this is that it'll reduce the number of pods. This wasn't merged, though; I made a comment last night asking to just do it on a single cluster first, but I think that might have been after...

D
Maybe Scarbeck just didn't have time to do that change, so I'll make that change this morning and then we'll get this on a single cluster. I'd like to do all of these changes on single clusters if we can, because I think it'll be a good comparison, you know, to see what the resource utilization looks like.
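To make the effect of that change concrete, here is a minimal sketch of the standard HPA utilization calculation. The 500Mi/1Gi requests and the 50%/75% targets are the values discussed above; the 600Mi average usage and the 20 current replicas are illustrative assumptions, not measured numbers.

```python
import math

def desired_replicas(current_replicas, avg_usage_mib, request_mib, target_pct):
    """Standard HPA rule: desired = ceil(current * currentUtilization / targetUtilization),
    where utilization is average usage divided by the container's resource request."""
    current_utilization = avg_usage_mib / request_mib
    return math.ceil(current_replicas * current_utilization / (target_pct / 100.0))

avg_usage_mib = 600  # hypothetical average memory usage per nginx pod, for illustration
old = desired_replicas(20, avg_usage_mib, 500, 50)    # old: 500Mi request, 50% target -> 48 pods
new = desired_replicas(20, avg_usage_mib, 1024, 75)   # new: 1Gi request, 75% target  -> 16 pods
print(old, new)
```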
A
More deeply, I mean, the one thing is that we need to test for sure what effects we see when we tune things. But on the other end, I think we also had some questions about understanding how the tuning in Kubernetes works: if you have two different metrics you use for scaling, like CPU and memory, how do they affect each other? It would help if we could get a better understanding of how this works in general.
D
Regarding that, yeah, I don't know. I don't know, Graham, if you know how target CPU utilization and target memory utilization play together. I assume they're handled together, so that, you know, both have to evaluate to true, or the HPA tries to make both of them evaluate to true, right?
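For reference, upstream Kubernetes documents the HPA's behaviour with multiple metrics as computing a proposed replica count per metric and scaling to the largest proposal, rather than requiring both targets to hold at once. A minimal sketch of that rule (the utilization readings below are illustrative):

```python
import math

def proposal(current_replicas, current_utilization_pct, target_utilization_pct):
    """Per-metric proposal: ceil(current * currentUtilization / targetUtilization)."""
    return math.ceil(current_replicas * current_utilization_pct / target_utilization_pct)

current_replicas = 30
proposals = [
    proposal(current_replicas, 72, 75),   # memory: 72% observed vs 75% target -> 29
    proposal(current_replicas, 25, 80),   # CPU: 25% observed vs 80% target    -> 10
]
# The HPA scales to the largest proposal, so a memory-bound service keeps adding
# pods even while CPU sits far below its own target.
desired = max(proposals)
print(proposals, desired)
```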
D
So what I imagine will happen after we merge this change is that the number of pods will decrease, CPU utilization will rise, and we'll see whether we are still memory-bound or CPU-bound. It could be that we become CPU-bound after this change, because we're increasing the requests.
B
Yeah, sure. So basically what I've been covering for the last few weeks is: I've had on-call, and trying to close stuff out from that; slowly burning down the residual stuff I've had from core infra and a few corrective actions; assisting with the API migration, like working with Henry and Scarbeck, kind of turning over that; and then...
B
Basically I have been putting together more and more issues now, and really just looking at the pieces, trying to, you know, make sure we've got at least issues and documentation of the things we want to address and track, and then I guess that's pretty much it, really.
B
We
had
oh,
the
gke
upgrade
came
out
of
nowhere,
which
I
just
kind
of
jumped
in
on
to
to
like
you
know
it
wasn't
even
a
lot
of
effort.
It's
just
a
lot
of
bookkeeping.
The
biggest
change
with
the
latest
gta
upgrade
is
a
bunch
of
metrics
changes
which
we
do
not
actually
need
to
action,
so
I've
opened
up
issues
for
those
I've
done
most
of
the
tooling
change.
B
Oh, and the ingress change, stuff like that. Basically a lot of it was just bookkeeping and opening issues, because the observability repo with Tanka has got a lot of references to the old specification, like PlantUML and all these other random pieces that need to be updated.
D
Yeah, sure. And do we have a corrective action for it? This is the second time I've heard someone say the upgrade came out of nowhere. Is it really that it came out of nowhere, or is it something that we can get a heads-up on?
B
It is something we can get a heads-up on. To be clear, I guess it came out of nowhere for us, but Google did display this information; we just had no one actively reading it, because it's a web page or an RSS feed, and I'm not sure anyone uses RSS feeds anymore. Okay, well, there you go, I can give you another one to add then.
B
Yeah, well, there you go. The good news is, and this is typical with Google stuff as well, they change it every few months.
B
They change it, and so you look at something, or you think you understand how it works or what's going to happen, and then they just silently update their documentation and change really important things that you don't know about until you look again. But the good news is they're sending Pub/Sub events. Before, they were sending Pub/Sub events for us to consume when they were doing the upgrades, which is useful but not that useful. Now they're sending us Pub/Sub events to say they're going to: so like six days or a week out, they'll say, hey, we're going to switch you over. The problem is that my solution for consuming them at the moment is a small Cloud Function, and I made the decision to put them in as Grafana annotations, because the events I was consuming were "this upgrade happened at this point in time."
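A minimal sketch of the kind of Cloud Function described here, consuming a Pub/Sub notification and posting a Grafana annotation. GRAFANA_URL, GRAFANA_TOKEN and the exact shape of the GKE notification payload are assumptions for illustration, not the real setup.

```python
# Hedged sketch: turn a GKE upgrade notification (Pub/Sub trigger) into a Grafana annotation.
import base64
import json
import os
import time

import requests

GRAFANA_URL = os.environ["GRAFANA_URL"]      # placeholder, e.g. https://dashboards.example.com
GRAFANA_TOKEN = os.environ["GRAFANA_TOKEN"]  # placeholder API key with annotation permissions


def handle_upgrade_event(event, context):
    """Background Cloud Function entry point for a Pub/Sub trigger."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    attrs = event.get("attributes", {})  # field names are illustrative; check the real message

    annotation = {
        "time": int(time.time() * 1000),
        "tags": ["gke", "upgrade", attrs.get("cluster_name", "unknown")],
        "text": f"GKE notification: {attrs.get('type_url', 'event')} {json.dumps(payload)}",
    }
    resp = requests.post(
        f"{GRAFANA_URL}/api/annotations",
        headers={"Authorization": f"Bearer {GRAFANA_TOKEN}"},
        json=annotation,
        timeout=10,
    )
    resp.raise_for_status()
```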
B
Maybe this is something Woodhouse could do. I'm not sure if that's what Woodhouse's functionality kind of is, and I know it hasn't got the ability to do this stuff yet, but it would be good to sit down and actually get that, so we get a very clear, loud alert that says: in six days we're going to be updating to, like, 1.20 or something.
D
F
Cloud
provider
alerts,
but
that's
mostly
the
incident
channels
from
cloud
providers
and-
and
I
don't
watch
it
very
closely
because
most
of
it's
like
cloudflare
in
buenos
aires,
is
struggling.
B
Yeah, so there's gke-feed, and that's where they're going at the moment, although maybe that's mostly security bulletins. But the problem is it's just going to Slack; it's just a channel with updates every three or four days, and you've got to look at every message on the website and pick out: okay, this is actually going to affect me. Which is why the Pub/Sub messages are tailored to us, because they will say this particular cluster is going to be upgraded soon, with this version.
B
Core infra, simply because, I don't know, I feel like core infra should be owning GKE as a platform more. But I'm happy to move it anywhere. Okay, where would you put it?
F
Like
to
service
just
in
the
service
catalog,
it's
it's
owned
by
delivery.
C
Okay, yeah, we should.

C
Andrew, is there anything? How's the observability looking?
F
It's not very exciting, but it is monitoring, and monitoring that isn't beeping all the time is good. So it's good. If you look over here, you can see there's a bunch of kube stuff here for the API, and if we go down, well, first of all, you can see here we've got nginx as an SLI, so this is the Kubernetes-monitored nginx.
F
I
just
realized
that
this
is
a
bit
scant
at
the
moment,
so
we
could
probably
do
with
a
bit
of
extra
data
in
here
and
a
bit
of
extra
stuff,
but
I'll
come
to
that
in
a
minute.
And
then,
if
you
go
down
to
saturation
details
for
the
kubernetes
service,
we've
actually
still
got
the
old
vm
based
stuff
as
well.
F
We
should
probably
take
this
out
at
some
stage
like
how
many
inodes
are
available
on
the
on
the
vms
that
run
the
api
fleet
and
that
we
will
soon
be
getting
rid
of.
I
guess
once
that
becomes
decommissioned.
F
Yeah
open,
I
mean
it'll,
be
very
quick
it'll,
be
like
a
one
line:
json
change,
yeah
so,
but
but
you're
welcome
to
to
to
add
that
in
and
then
this
is
kind
of
what
the
thing
where
I
was
banging
on
about
the
the
labels,
because
now
you
can
see
so
this
is
hpa
desired
replicas,
and
this
is
telling
us
that
you
know
out
of
whatever
100
is.
We
are
currently
running
at
36
of
of
the
desired
replicas.
So
if
we,
if
we
look
over
this
sorry,
my
computer,
there
we
go
20.
F
Let's
give
this
a
24
hour
range
yeah.
We
have
sort
of
a
single
line
config
which
in
json
now
for
saturation
matrix,
which
is
like
these
things,
are
only
in
vms
and
then
as
soon
as
you
add
api
to
that
all
of
the
stuff.
That's
only
in
not
vms,
it
would
well.
You
know
all
the
vm
saturation
metrics
will
just
disappear,
so
this
is
20.
F
I
mean
that's
actually
remarkably
unexciting
and
that
to
me
tells
me
that
our
autoscaler
is
probably
in
need
of
some
some
work,
because
if
you
look
at
our
traffic
volume
over
that
period,
it
goes
you
know,
we've
got
that
real
sinusoidal
wave
and
we
have
like
absolutely
no
sinusoidal
wave
in
our
desired
replicas
at
all
which
to
me
I
mean
I
I'm
not
a
hundred
percent
on
that,
but
I
mean
there's
lots
of
signals
that
that
are
a
little
bit
off
on
this,
but
yeah.
It's
it's
remarkable.
F
How
flat
that
is
my
my
other
question
is:
do
we
want
to
spend
a
whole
bunch
of
time
tuning
this
with
the
very,
very
coarse
levers
that
are
desired,
cpu
and
desired
memory,
or
do
we
want
to
wait
until
we
get
prometheus
metrics
previous
custom
metrics
for
the
autoscaler
and
then
spend
the
time
doing
it
really?
Well
with
that,
because
you
know,
cpu
is
a
pretty
pretty
course
stick
to
to
to
beat
that
with.
In
my
opinion,
like
we
can
do
better
with
custom
metrics.
F
As far as I know, everyone talks about it as a nice-to-have. I would like to see it, but I don't know what the technical complexities of doing it are. I know it's all the Helm charts that need to be updated, right?
B
I
so
I
thought
there
was
we
couldn't
do
it.
I
thought
now.
This
could
be
completely
inaccurate.
We
needed
119
of
kubernetes
as
first
two-way
to
get
have
hpas.
That
could
even
supports
custom
metrics.
So
I
thought
that
was
like
a
blocker
at
some
point
he
like,
like,
maybe
back
when
this
was
first
discussed.
It
was
like,
oh.
F
I
didn't
realize
it
was
that
I
didn't
realize
it
was
that
late
in
the
in
the
communities
versions
that
that
came
out.
I
think
so
yeah
look
I
I
could.
B
Two
weeks
ago,
so
I
think
we
crossed
that
threshold,
but
I
think
that
that
also
could
be
why
we
didn't
do
this
earlier
because
we're
like
oh
we
needed.
Maybe
it
was
only
g-
maybe
I
think
maybe
the
kubernetes
supported
or
the
hpa
supported
it
earlier,
but
gk
said
they
weren't
gonna,
like
put
it,
you
know.
Basically
they
take
all
these
upstream
components.
Yeah.
F
Just a side note, but I mean, look at this. I've just asked Thanos for 30 days. Even then, it should be able to display this in 30 days, but Thanos is in need of some love as well. But that's another discussion. Sorry, you were going to say something, Amy, I interrupted you.
C
Just
going
to
say
that
I
I'm
pretty
sure
scarbeck
is
thinking
of
doing
a
sort
of
good
enough
tuning
now,
based
on
the
fact
we're
quite
over
provisioned.
And
then
I'm
sure
yeah
later
on.
We'll
have
to
do
a
full
review.
And
maybe
that's
when
the
custom
like
scaling
stuff
comes
in.
F
Yeah
yeah
yeah,
possibly
I
mean
even
that
look
is
thanos
down
anyway,
so
yeah
we've
got
that
and
then
I've
just
noticed
here.
We
should
have
the
cube.
So
this
is
api
main
stage.
I
bet
you
that
that's
not
labeled
properly,
but
if
we
go
back
to,
I
know
that
the
the
canary
stage
was
labeled
properly,
and
so
we
should
see
the
canary
stage
here.
F
So,
oh,
I
think
it's
my
computer,
so
I'm
just
going
to
change
this
over
to
the
canary
stage,
maybe
yeah,
and
then
here
we
should
see
this
tracking
the
and
we
need.
We
need
to
figure
out
why
we're
not
seeing
it
for
main
stage,
but
it
was
added
after
that
work
was
done.
So
I'm
guessing
that
the
the
setup
to
connect
that
node
pool
to
the
to
say
that
this
node
pool
is
running
the
api
main
stage
that
that
work
wasn't
done
and
I'm
hoping
if
thanos
returns.
F
Anything
that
we
see
this
metric
here,
but
it
looks
like
thanos
has
decided
that
it's
calling
it
a
day.
B
So
I
just
double
checked
what
I
was
talking
about
earlier,
so
I
I'm
wrong,
but
I
figured
this
is
worth
putting
into
the
discussion
as
well.
So
what
we've
just
got
now
with
the
upgrade
to
119
is
multi-dimensional
pod,
auto
scaling,
so
there's
a
lot
of
there's
documentation
here
on
it,
but
basically
you
can
go
down
to
scaling
by
number
of
pods,
but
also
scaling
the
limits
and
requests
and
all
those
settings
to
be
adjusted
on
the
pods
dynamically
as
well.
Oh.
B
Yeah, and that's what they're trying to push towards; Google have got a whole page on it, and they've put that in beta now for 1.19.
B
And basically, how it works, it looks like it's GKE-specific functionality, or at least their version of it, because I guess it might be tied to their platform. So if this is something we're interested in, we can do this by just dropping CRDs into the GitLab extras; we don't need upstream Helm chart work. We can do this ourselves, putting the objects just directly into our extra deployments. But I don't know if this is... if... because...
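As a rough illustration of what dropping such an object in directly could look like, here is a sketch using the Python Kubernetes client. The MultidimPodAutoscaler group, version and spec layout follow Google's beta documentation and should be verified against the current docs; the namespace, deployment name and targets are placeholders, not our configuration.

```python
# Hedged sketch: creating a GKE MultidimPodAutoscaler object directly, instead of via
# upstream Helm chart changes. Field names follow Google's beta docs at the time and
# should be double-checked; names and numbers below are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

mpa = {
    "apiVersion": "autoscaling.gke.io/v1beta1",
    "kind": "MultidimPodAutoscaler",
    "metadata": {"name": "nginx-mpa", "namespace": "gitlab"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1", "kind": "Deployment", "name": "nginx-ingress"},
        # Scale the replica count on CPU utilization...
        "goals": {"metrics": [{"type": "Resource",
                               "resource": {"name": "cpu",
                                            "target": {"type": "Utilization", "averageUtilization": 70}}}]},
        # ...while letting the autoscaler adjust memory requests on the pods dynamically.
        "constraints": {"containerControlledResources": ["memory"],
                        "global": {"minReplicas": 5, "maxReplicas": 50}},
        "policy": {"updateMode": "Auto"},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="autoscaling.gke.io",
    version="v1beta1",
    namespace="gitlab",
    plural="multidimpodautoscalers",
    body=mpa,
)
```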
F
This is working, right? So we're probably not optimized correctly, and we're probably using too many pods. And the other thing, which is actually related to exactly what you saw just now, is that I think we're spawning and shutting down pods far too often, and that is actually having a downstream knock-on effect on Thanos when we're trying to get this data. Because instead of having a stable number of pods, we're starting them up and shutting them down, and so there's a massive...
F
You know, if we have a metric that's got a hundred thousand things, a histogram metric in the application, because it's kind of got too many already, and then we're starting up hundreds of pods, we're multiplying those huge metrics by that stop and start. So actually, you know, this Thanos problem is possibly related to that. But mostly, to me at least, it's kind of a low-burner problem, not a high-priority problem, and yeah.
F
So
it'll,
it's
like
a
nice
to
have
and
it's
something
we
should
look
at,
but
it's
not
pressing
the
yeah
I
mean
I
think
we've
actually
got
pretty
serious
thanos
problems
now
by
the
looks
because
these
are
not
complicated
metrics
that
I'm
looking
for
here.
F
The
second
thing
that
I
was
going
to
say
is
that
I
am
getting
dragged
in
like
lots
of
different
directions
and
I've
been
asked
to
work
on
this
daily
stand-up
saturation
for
petroni,
and
so
that's
really
kind
of
taken
over
my
time
for
the
last
week
and
I
haven't
been
able
to
make
nearly
as
much
progress
as
I
hoped
to,
but
the
one
thing
that
I
wanted
to
talk
about
was,
I
was
kind
of
trying
to
see
if
I
could
shop
out
some
work.
That
is
like.
F
There's
really
no
need
for
me
to
do
it.
I
don't
know
if
anyone
else
has
got
time,
but
the
first
one
is
the
gcp
quotas.
So
I
think
like
this,
this
is
like
really
important
to
get
done
it
would.
It
would
really
help
with
that
incident
that
we
had
the
other
day
where
the
auto
scalers
were
unable
to
auto
scale
and
there's
even
some
stuff
in
here.
F
If we just stand this up in the monitoring namespace and we set up a service key, this should be pretty easy to do, and it will get us past that production incident. It's something we've also spoken about for a long time, and occasionally it does bite us, and if we put it into the saturation framework we'll have long-term forecasts, and, you know, everything will be hunky-dory.
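For illustration, a minimal sketch of reading regional Compute Engine quota usage, the raw data such an exporter would expose as Prometheus metrics. The project and region are placeholders, and credentials are assumed to come from the service key mentioned above.

```python
# Hedged sketch of reading GCP Compute Engine quota usage; an actual exporter in the
# monitoring namespace would expose these values as metrics rather than printing them.
from googleapiclient import discovery

PROJECT = "my-gcp-project"   # placeholder
REGION = "us-east1"          # placeholder

compute = discovery.build("compute", "v1")
region = compute.regions().get(project=PROJECT, region=REGION).execute()

for quota in region.get("quotas", []):
    limit = quota["limit"] or 1
    ratio = quota["usage"] / limit
    if ratio > 0.8:  # the interesting ones: close to the ceiling the autoscaler hit
        print(f'{quota["metric"]}: {quota["usage"]}/{quota["limit"]} ({ratio:.0%})')
```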
F
Yeah, I mean, I don't know if there's... yeah, so that's the thing: a Prometheus scrape, and then I can help with setting up the saturation metrics. But that's, like, you know, 10 lines of JSON, most of it copied. And then the second one that I've been talking to Google about, and thanks for your help, Graham, was the cluster autoscaling metrics. You know, Google are like, "you don't need that, just be like us," and it's just the difference between working with Amazon sometimes and working with Google, but that's a moan for another day. Basically, what we need to do with this now is get the Stackdriver logs parsed into a metric, which is not great, but I think we do need to do it.
F
We
we've
already
got
stackdriver's
exporter
running,
so
we
would
add
that
custom
metric
to
our
existing
stackdriver,
but
the
other
part
is
we
need
to
set
up
the
the
the
the
search
that
basically
does
the
counter
bit
a
dot.
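A rough sketch of that "counter bit": creating a log-based counter metric with the google-cloud-logging client, which the existing exporter could then pick up. The metric name is a placeholder and the filter is only an approximation of the cluster autoscaler visibility logs; it should be confirmed in the Logs Explorer first.

```python
# Hedged sketch: log-based counter metric for GKE cluster autoscaler scale-up decisions.
from google.cloud import logging

client = logging.Client()

autoscaler_filter = (
    'resource.type="k8s_cluster" '
    'AND logName:"cluster-autoscaler-visibility" '  # approximate; verify the real log name
    'AND jsonPayload.decision.scaleUp:*'
)

metric = client.metric(
    "gke_cluster_autoscaler_scale_up",   # placeholder name
    filter_=autoscaler_filter,
    description="Count of cluster autoscaler scale-up decisions",
)
if not metric.exists():
    metric.create()
```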
F
So, I mean, it would be nice if we get the scale-up events, because, you know, in the same way that I think we're scaling up pods too much, we're probably, as a result, also scaling up VMs too much.
F
Yeah, you know, and so how much time are we spending just initializing? Probably stupendous amounts of time. But with that, it would be really nice if we got not only the failures but also the scale-up events on the nodes and the scale-down events on the nodes. So those would be nice, and maybe the time between the start... or, I suppose it's not that important with a node, because they're pretty generic machines, right?
B
Oh yeah, that would be interesting as well, actually. This is interesting because, as you said, I think we're scaling way too much. When we were looking into the console problem, we discovered we were scaling up and down nodes within, like, a 10-minute time frame, and a lot: five up, four down, three back up. It was...
F
It's
it's
no
java,
I
mean
you'll,
see
it
at
like
three
utc
and
and
you
can,
you
can
take
any
period
of
time
where
there's
and
I
I
tend
to
do
it
outside
of
deployment
times.
Because
of
that
and
it's
it's
it's
like
10
up
12
down
5
up
it.
It
really
is
all
over
the
show,
it's
it's
quite
surprising,
to
see
and
it's
and
it's
like
3
utc,
which
I
don't
think
we're
going
to
be
doing
that
many
deployments
at
that
time.
F
Yeah, so we do already have it, remember, in terms of, well, if we'll ever get the number... so this kube pool maximum.
F
But
the
the
other
thing
to
keep
in
mind
is
that
that's
almost
like,
following
on
from
the
pod
autoscaler
right,
because
the
first
thing
that's
driving
that
is
is
the
is
the
pods
and
the
number
of
desired.
You
know,
and
then
obviously
that's
following.
So
I
think
it's
more
important
to
like.
I
wanted
to
see
if
there
was
a
way
like.
F
If,
if
if
the
lifetime
of
a
pod
is
less
than
10
minutes,
then
it
was
probably
a
bad
decision,
and
then
we
can
start
optimizing
on
that
that
that
would
probably-
and
I
think
you
start
with
the
pods
and
then
when
you
fix
the
pods,
the
nodes
will
smooth
out
as
well.
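A minimal sketch of how a "pods younger than 10 minutes" signal could be pulled from kube-state-metrics via the Prometheus/Thanos HTTP API, as a rough churn proxy. PROM_URL, the namespace selector and the threshold are assumptions for illustration.

```python
# Hedged sketch: count pods younger than 10 minutes as a proxy for pods that may not
# have been worth starting. Assumes kube_pod_created (kube-state-metrics) is scraped.
import requests

PROM_URL = "http://thanos-query.example.com"   # placeholder
QUERY = 'count(time() - kube_pod_created{namespace="gitlab"} < 600)'

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=30)
resp.raise_for_status()
result = resp.json()["data"]["result"]

young_pods = int(float(result[0]["value"][1])) if result else 0
print(f"pods younger than 10 minutes right now: {young_pods}")
```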
F
Yeah, this is really bad, but I did want to show you that this should be here.
F
So yeah, those... the kube jobs, when you've set those up, do you just set them up in the UI or do you set them up in Terraform, the log exporters?
C
Awesome, those sound like great ideas to add in. We have a great list of possible things to work on through June; we'll do a load of paying down tech debt before we move into the web migration, so I have put together... yeah, go for it.
F
I
just
before,
while
you
say,
while
we
on
the
so
so
skybeck
raised,
why
don't
we
just
send
the
web
traffic
to
the
api
as
a
as
an
option,
and
I
thought
about
it
quite
a
lot
and
and
I'm
not
that
comfortable
with
it.
But
then
I
started
thinking
about
a
bit
more
like
how
much
extra
work
is
it
going
to
be
for
the
web.
Now
that
the
api
is
done,
is
it
it's
not
that
different
right
and
that's?
I
think
where
the
source
of
his
question
was
is
like.
We
just
send
the
traffic.
B
It's not things like... it's more user-noticeable quirks, like if, you know, a content type doesn't get set correctly on a response and then it doesn't render. And none of this should be an issue, right? This shouldn't be a problem, yet somehow it typically is, because now we've got so many pieces in the stack: you've got HAProxy, you've got ingress-nginx and all this other stuff.
B
We
just
keep
finding
these
gotchas.
More
than
anything.
I
would
be
interested
in
maybe
like
some
kind
of
weird
controlled
experiment
where
we
like
enable
send
all
web
traffic
to
api
and
pre
and
then
like
run
a
qa
suite
or
something
to
start
like
I'm,
not
sure.
If
there's
an
option,
maybe
not
the
full
option
of
let's
just
turn
on
web
traffic
into
api,
but
maybe
in
staging,
we
do
it
or
and
then
like
like.
Let's
try,
because
either
either
it
will
go
without
issue
in,
in
which
case
we're
in
the
same
spot,
right.
F
Again, the other thing is that the pods are going to be the same, like the layout is the same as the API, which is the first time that's happening, which is really nice, or maybe with the web service as well, I'm not sure. But it's not like Git, which is kind of different and has got GitLab Shell and everything like that. But it's not the...
G
It's not the same, though, right? Like, if I'm not mistaken, for the web you also have things like ActionCable that needs to connect one way or another, and a couple of those. ActionCable is WebSockets, though, right?
D
But
I
would
say
from
an
infrastructure
at
the
infrastructure
layer
things
are
identical
between
web
and
api
as
far
as
we
know,
right
now
like
they
are
now
on
vms.
But
andrew,
like
are
you
sure,
like
one
of
the
benefits?
Is
you
know
having
isolated
node
pools?
I
would
I
would
start
with
that.
Like
do,
we
think
that's
a
benefit
or
not.
If
not,
then
yeah.
F
Oh, so from a node pool point of view, I would definitely argue that it's probably fine to share it, especially because we've got the monitoring. That's my... because I'm less kind of... but definitely from the traffic side.
D
Just until we're more comfortable with it. There are two things. First of all, we have node pools and deployments, right? So we could have the same node pool in two different deployments, which would mean that they would have different HPA settings, or we could have the same node pool for the same deployment, from a labeling perspective.
D
I
I'm
a
little
bit
worried
like
if
they
share
the
same
node
pool.
What
would
the
label
type
be
for
the
nodes?
It
could.
D
So I think the separate node pool is cheap; I don't think that is a lot of overhead for us. I think the expensive thing about these migrations is really configuration, and making sure that we have all of the environment variables and config in place.
F
He asked about it, and my take is that I don't think we should do that yet. For one, you know, the automated traffic that goes to the API, we need a way of separating it out, particularly because the biggest part of the traffic on the API at the moment is the API v4 jobs request, and there's a part of that which is a big query, which is super sketchy and blows up all the time, and people...
F
If,
if
you
get
delays
on
the
api,
people
don't
generally
notice
it,
but
if
that's
causing
puma
saturation
on
the
web,
that'll
be
bad
and
at
least
until
that
situation's
calmed
down.
I
think
we
should
just
keep
them
separate.
B
Basically,
we
can't
rely
on
ingress
engine
x
to
do
any
functionality
for
us.
That's
outside
the
specification
of
the
kubernetes
ingress,
which
is
pure
traffic
routing
and
that's
what
our
proxy
layer
does.
So
we
shouldn't,
technically
speaking,
we
shouldn't
really
be
using
ingress
engine
x
at
all,
because
h,
a
proxy
is
our
our
ingress
technology,
routing
piece,
but
we've
kind
of
like
jammed
it
in
there
just
for
api,
because
we
have
these
settings
which
we
had
to
have.
So
I
guess
what
I'm
trying
to
say
is
like.
B
If
we
keep
it
around,
if
we
get
rid
of
it,
then
you
know
we
control
things
at
proxy
layout
but
orthogonal
to
this
whole
discussion.
I
guess,
but
it's
it's
another
point.
I
would
like
to
see
further
understood
before.
Maybe
we
make
the
decision
for
web
and
api.
F
But
it
sounds
like
a
proxy
nginx
evaluated
or
because
I
don't
really
know
the
state
of
it,
but
sorry,
the
aj
proxy
engine
ingress
was
that
evaluated
as
part
of
this
process.
B
Oh, we had to add it back in, and we had to put in a bunch of random settings for different URL paths as well, like proxy buffering here, not here, and all this stuff. And so when we did that, it caused us a bunch of tech debt, because now we're basically relying on a bunch of, not hacks, but bad configuration data to wedge ingress-nginx into doing what we want. But the thing is, we don't...
B
I want to identify what problem that configuration is solving. Why does it need to be there? If it's something that every single GitLab installation needs, then it should go into the GitLab installation, because ingress-nginx is not technically part of it; it just implements the ingress spec. You could use GKE ingress, you could use Amazon's ingress, you could use anything you wanted with the GitLab chart and it's supposed to work. But if we're saying these are configuration settings GitLab needs to work, then we shouldn't just be relying on the fact that you're using ingress-nginx to have them, and so either they need to go into the application somehow, or we say no, we need them, but the only reason we need them is because we've got HAProxy or because we're using ingress-nginx.
B
And if, when you use anything else, you don't need them, then you should push it into the ingress layer, which for us at the moment is HAProxy. Then later on, probably after the web migration, I would like to treat HAProxy as another service to be migrated: we sit down, we talk about what the requirements are, what it does, and then we can go through all the options. There's ingress-nginx, there's HAProxy, there's NGINX themselves, the company, who make a different nginx ingress that's actually much better, but you've got to pay for it.
B
There's GKE ingress, there's Envoy proxy, which is also a contender; there are so many different things we can choose from, but we can sit down at that point and ask: what's the future of the proxy VMs? How do we want to handle that? But for the moment, I would just love to get rid...
G
Not
requirements,
though
it's
it's
not
fully
isolated
from
the
application,
necessarily
because,
if
you're
talking
about
you
know
like
what
do
people
need
to
have
in
order
to
use
gitlab,
it
is
part
of
the
application,
but
you're.
G
I
I
fully
agree
with
graham
here
that
h,
proxy
slash
nginx
could
be
like
the
next
service
after
we
migrate.
Folks
just
keep
the
eye
on
the
ball
right
like
we
need
to
make
as
minimum
possible
changes
between
the
application
that
we
currently
have
and
the
application
in
the
new
environment.
G
In
order
to
be
sure
that
we
can
function
right,
we
can
address
technical
debt
like
we
are
making
room
to
address
some
of
this
technical
debt,
and
if
that
means
we
are
going
to
prolong
some
of
the
migrations
will
prolong
some
of
the
migrations.
We
just
need
to
be
safe,
and
we
need
to
also
think
about
how
this
is
going
to
relate
to
customers
later
on
right,
because
it
doesn't
even
have
to
be
external
customers,
we're
not.
G
We
will
now
have
internal
customers
that
will
have
to
leverage
this
whole
thing
that
was
built
right
so
eye
on
the
ball,
one
step
at
a
time
and
we'll
get
there
right.
F
So, with that, if we could move that into Workhorse: Workhorse knows all of these routes, right, and it knows which ones need buffering and which ones don't, and it is a really good place for it. And then it could also have potential, like, you know, because we have the nginx that ships with Omnibus, and it's always kind of this unloved stepchild of Omnibus.
B
That we're... I guess, for that issue, I still need to go back and write that up. I'll do that tomorrow. As in, I need you to call me on it.
B
I'll
do
it,
I
spend
just
one
hour
and
I'll:
do
it
tomorrow
morning?
That's
fine,
but
I
think
what
I
would
like
to
do.
Maybe
I
need
to
rephrase
that
issue.
Then
we
need
a
plan.
I'm
okay,
if
we
don't
have
the
solution,
but
I'd
like
to
understand
what
the
plan
is
or
even
and
it
doesn't
even
have
to
be
us,
like
you
know,
andrew's
right
like
if,
if
we
can
convince
people
yeah,
actually
this
should
go
into
workhorse
and
you're
gonna.
Do
it
by
14
point
whatever
0.6
or
whatever
that's
good
enough.
B
I
just
want
to
know
the
plan.
I
guess
I
think
that's
important
and
and
going
back
to
what
you
were
saying
java
about
configuration.
I
I
think
correct
me
if
I'm
wrong,
for
enabling
apis
like
kind
of
walking
back
up
this
conversation
thread
enabling
api
like
in
terms
of
like
actually
turning
it
on,
like.
Let's
turn
it
on
in
the
helm
chart,
it
should
be
like
a
few
lines
of
code
right,
like
the
I
think,
the
dragging
out
of
these
things
never
seems
to
be
creating
the
node
pulls
doing
the
deployment.
B
I
think
we
could
probably
whip
that
up
in
under
a
week,
so
so
then
it's
like
well,
if
it's
only
going
to
be
a
week
of
work
to
do
this
whole
separate
deployment.
Is
it
still
work
I
feel
like?
Maybe
it
is
worth
then
still
keeping
the
separation,
because
it's
because
it's
not
that
difficult
to
keep
separated,
but
I
could
be
convinced
otherwise.
I.
D
I think it's worth keeping a separation for monitoring for now, just because we want to keep parity between the VMs and Kubernetes as far as monitoring is concerned, and we want to use our existing work, the work that we did to have, you know, labels and everything. I just feel like it's going to be simpler if we just have a separate pool, separate deployment.
C
I was just thinking: would it save us a load of work? It sounds like no, and potentially some new complexity, so let's stick with the current plan. Just before we wrap up, because we are out of time, I just wanted to mention planning the tech debt for June. Thank you, Henry and Graham, for adding a bunch of your ideas for this. I've moved everything into a spreadsheet.
C
I
want
to
just
share
it
here
because
it's
not
exclusively
for
just
the
three
of
you
to
to
look
through
so
jeff
andrew
marin,
like
if
you
want
to
also
take
part
in
adding
comments
and
things
here.
Please
do.
C
I
do
want
to
make
sure
that
in
june
we
actually
come
out
of
this
feeling
like
great.
We
we
achieved
some
stuff.
We
have
less
tech
debt
and
we're
in
a
better
place
for
the
web
migration,
so
we
definitely
won't
manage
to
do
everything.
C
Some
of
these
things
like
are
just
too
big
or
we
they've
fit
better
after
the
web,
and
we
can't
we'll
it's
not
a
one-time
tech
debt
paid
out
right,
so
we
can
prioritize
those
later
and
then
the
idea
is
if
we
could
get
sort
of
our
ranked
list
of
ideas
that
we'd
like
to
tackle
for
next
wednesday.
So
if
everyone
completes
before
wednesday,
then
we
can
actually
go
through
and
actually
make
a
decision
like
which
stuff
do
we
actually
start
working
on.