From YouTube: 2021-08-05 GitLab.com k8s migration APAC
C
Hello everyone. Awesome. So, the demo: let's kick off, Graham.
B
Yeah, well, I realized today that for about four hours the k8s-workloads gitlab-com pipeline was going back to back with deploy configuration changes; they were literally all backed up, one after the other. So there were a lot of changes that had to happen today. At least that's good. Let me share my screen.
B
So if I just look at the logs for one of the staging clusters, gstg-us-east1-b, and look at the logs for all of the ingress-nginx pods, we can see that we have a lot of things coming through.
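A minimal sketch of the kind of log check being demonstrated here; the context name, namespace and label selector are assumptions, not necessarily what the staging cluster actually uses:

```sh
# Sketch only: context, namespace and labels are assumed, not taken from the real cluster config.
kubectl --context gstg-us-east1-b \
  -n gitlab logs -l app.kubernetes.io/name=ingress-nginx \
  --since=15m --prefix --tail=200
```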
B
Now let's see the webservice pods. We can see that they're definitely taking a good amount of traffic, mostly to the readiness endpoint, but also other bits and pieces. Readiness, liveness... come on, now that I say that it's not going to show me any other traffic, is it?
B
I think we saw some other stuff floating past. Prometheus, liveness... well, I thought staging would have more traffic, unless this is some real traffic. Yeah, that's from a real user. So anyway, the point of the demo is essentially that I'm demonstrating that the webservice pods are taking traffic and that the nginx pods are not taking traffic, so I've successfully bypassed it. If I stop sharing... I probably shared the wrong thing. If I stop sharing and start sharing my actual screen.
B
Go back maybe one hour; that is when I did a rollout, so that would have been all the CI jobs running all the CI tests, I assume. And that's probably earlier today, when I was doing a bunch of deployments with CI tests as well. If we add a filter on json.status for the 500s...
B
There's no noticeable huge increase of anything being broken. In fact, I'm pretty sure all the errors being reported are the same error. Filter out liveness and readiness... yeah, see, if you filter out the liveness and readiness checks from the deployment, it's a very, very small number of errors, and it's usually something about Gitaly, "no repository", or something like that.
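As a rough sketch of the filter being applied here (field names like json.status and json.uri are assumptions about the log schema; the health-check paths are GitLab's standard /-/liveness and /-/readiness endpoints):

```
json.status: [500 TO 599] AND NOT json.uri: ("/-/liveness" OR "/-/readiness")
```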
B
So yeah, that's more or less what I really wanted to demonstrate, because...
C
Have we had those changes in place... so we've had a deployment since we took nginx off staging, is that correct?
C
Great, okay. Which, I mean, I suppose that's not where the... oh yeah, I don't suppose we have a good test on staging, right? I doubt we exercise proxy buffering very often on staging, but yeah, the config and all is set up well.
B
I think that's the thing at the end of the day. I'm very confident, and I think we can all be fairly confident, that there's nothing that's outright broken, which is what we already suspected. It's just about performance profiling, and, you know, between Cloudflare and us and GKE I would imagine there's pretty good, reliable networking. I'm pretty sure Cloudflare does a lot of work to make sure all the cloud providers have good network connectivity.
B
So yeah, I'm pretty confident that everything's fine. If we have a few minutes, I can talk about where we are in terms of the web migration. So I had a whole bunch of change requests, various bits and pieces; I rolled them out today. Some important things of note: I found things during my configuration audit of web that were missed in other migrations, so I did a change request today to basically bring a whole bunch of components into sync.
B
Not just web; web was the most fun, but there was also Sidekiq and some others that just had one outstanding change. That has gone through and been applied. There was an incident, or sorry, there was an alert I just saw which is making me worried that we may have broken something, but hopefully not. And the other changes that I brought through that were outstanding were related to the content headers for JavaScript, which is obviously very important.
B
Otherwise JavaScript won't load and things will break. And the other one, and this is an important one, is the allow list of users that get to bypass rate limiting for the API. When I first set that setting up and migrated it, or synced it, from Chef like 18 months ago, I had a spelling mistake in the field, so the setting was never getting set at all in any of the Rails stack that we're running in Kubernetes.
C
Interesting, okay. I might mention that to Scalability just as a heads up, because I know they monitor that stuff. Sure.
B
So yep, the configuration audit is completely done now, everything is all in sync and we're all good to go. Removing nginx is all the way through, we've got load balancers at every level, we're all ready to go. The next step we basically need to do is go into HAProxy and add web Kubernetes as a backend, so that we can start adjusting weights to send traffic to it. And the final point, unfortunately, is that our cookbook for HAProxy is not flexible enough.
B
Because we have to have our current VM backends using nginx on port 443 and the Kubernetes backend using Workhorse on port 8181, and our cookbook just takes the one port setting and wraps them all with the same port. So I need to do a tiny bit of work on the cookbook to make it more flexible, and then I can add the backends and away we go.
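As a sketch of the mixed backend the cookbook needs to be able to render (server names, addresses and the check URI are illustrative assumptions; the ports are the ones discussed above):

```
# haproxy.cfg sketch -- names and addresses are made up, ports are the ones under discussion
backend web
    option httpchk GET /-/readiness        # one health check per backend, shared by every server
    server web-01      10.0.1.11:443  check weight 100   # existing VM, fronted by nginx
    server web-02      10.0.1.12:443  check weight 100
    server kube-web-01 10.0.2.21:8181 check weight 1     # Kubernetes, straight to Workhorse
```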
B
Note that if we hadn't removed nginx, we would have had the same problem with readiness checks. We're not flexible with readiness checks: we say every server has one readiness check, whereas the readiness check for Kubernetes and VMs would be different even if we kept nginx. So either way we had to cross this bridge, and I knew about it. I knew about it, forgot about it again, got distracted by everything else, but now I've come back and realized it's just Chef cookbook stuff.
B
You know, which is not the most fun thing in the world, but it's simple and easy; I can see what we need to do and adjust it. I just need to roll that out. Then I will basically add web Kubernetes as a backend in canary and we'll scale it up, so scale it up to...
C
Traffic. Awesome, that's great progress. And alongside that, I know Skarbek made good progress with making the temp directory volume available. I've been...
C
Oh great, great. As I said, that'd be a good one for us to just keep an eye on. I assume that if we have problems with that, it'll be visible in a deployment at some point, so it would be good to have a few deployments pass through for this.
B
I was having a chat with Matt Smiley about this as well, because he's looked into this issue a lot on the VM side, and he had a good point, which is that we're actually not going to be able to see this very much in Kubernetes. We saw it on VMs because it was failing deploys: it would fill up a disk when we were trying to put our GitLab package in. But because it's the whole container being cycled, the volumes go away with the container.
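A minimal sketch of the kind of per-pod temp volume being described (the volume name, mount path and size limit are assumptions for illustration; the point is that an emptyDir lives and dies with the pod):

```yaml
# Sketch: an emptyDir-backed temp dir that is discarded when the pod is cycled.
# Name, mountPath and sizeLimit are illustrative, not the actual chart values.
spec:
  containers:
    - name: webservice
      volumeMounts:
        - name: shared-tmp
          mountPath: /srv/gitlab/tmp
  volumes:
    - name: shared-tmp
      emptyDir:
        sizeLimit: 10Gi
```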
B
I haven't looked into it enough, and I probably should. I suspect... the fact that we're not seeing alerts about this... there are two ways: either Kubernetes cleanly shuts the pod down or it uncleanly shuts it down. If it's doing it uncleanly, there's a potential for us to see errors, like our error ratio taking a hit, something noticeable to us, I suspect. I don't know; we've got a lot of graceful shutdown periods and we send signals to Puma.
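For context, a rough sketch of the graceful-shutdown knobs being referred to; the hook command and grace period here are assumptions, not the chart's real values:

```yaml
# Sketch only: illustrative values, not the deployed chart configuration.
spec:
  terminationGracePeriodSeconds: 60   # time Kubernetes allows before SIGKILL
  containers:
    - name: webservice
      lifecycle:
        preStop:
          exec:
            # e.g. stop taking new traffic, then let Puma drain in-flight requests
            command: ["/bin/sh", "-c", "sleep 15"]
```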
B
I don't know if that's being done when the disk fills up underneath it. That being said, we've given it such a large temp volume that it would probably take a long time to fill it up, and the average life cycle of any one of our pods is less than six hours because we deploy so often. I just don't think we're going to hit any condition where it's going to cause problems, fortunately.
A
Might it not be changing in the way that, if you have a certain request which is always trying to upload a very big file, which always would lead to crashes, those types of requests would always fail now, and we would not really notice it because we would just have pods being recycled? We would need to look.
A
It was a timeout issue: if we have a request which is taking too long, we're reaching the timeout of Workhorse. Then that would cancel the request, and then...
A
At that point it's failing to clean up the temp file where the big upload is landing, and that's typically around two gigabytes when the timeout is reached. So it's a timeout issue there, and that's why we are not getting much more than two gigabytes, I think, because the timeout is hit first in most cases. If you had a very fast upload, maybe we could reach more than two gigabytes, but I can't remember ever seeing this, so probably it will not be the case. Yeah, well...
B
That's interesting as well, circling back to proxy buffering and Cloudflare: you can't upload anything to GitLab bigger than five gigs, because Cloudflare just won't let you, and it is proxy buffering it. So in theory, the connectivity that we see on the Workhorse side, the speed we're seeing, will be between...
C
I don't think we are, because we generally see these... we generally know they're full when we fail a deployment. So I don't think we have any; it's fine, right? We know they're full because we fail a deployment, and we do quite a lot of those. So it's not ideal, but we would know fairly quickly.
B
I could try. So this is my thinking, for those of you who know more than me. I can try and do a whole bunch of complex logic, because it's basically one line per server and there's a bunch of logic in there for generating that. Or I'm thinking I add just a loop underneath where you can set an extra setting, like "raw backends" or something, and it'll literally just take the string
B
you give it and put it in there, rather than trying to auto-generate it from a bunch of different inherited options. And then we do that temporarily, because this is only a problem while we have mixed backends; once we cut over completely, we can just go back to what we did before. So I was kind of thinking of having a way to manually insert backends instead of trying to automatically generate them. Alternatively, I can look at the generator code and try to improve it.
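A rough sketch of the "raw backends" escape hatch being proposed for the Chef template (the attribute name and surrounding template structure are assumptions, not the actual cookbook):

```erb
<%# Sketch: hypothetical escape hatch -- emit any raw server lines verbatim
    after the cookbook's generated ones. The attribute name is made up,
    e.g. node['haproxy']['raw_servers'] = ["kube-web-01 10.0.2.21:8181 check weight 1"] %>
<% Array(node['haproxy']['raw_servers']).each do |line| -%>
    server <%= line %>
<% end -%>
```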
D
So what you're trying to do is slow-roll the backend change, right? And the problem is that it's sort of all-or-nothing with the backend configuration. The way we've done this before is to slow-roll it by stopping Chef on HAProxy and doing it on select HAProxy nodes, or you can use a node override.
B
...side, that's the problem. We have a global setting for what port to use, but some backends are going to use a different port: the VMs use port 443, Kubernetes uses port 8181. And we want this for weeks, probably, while we slowly increase traffic and migrate. So that's kind of the place I'm stuck at.
B
Alternatively, I do something weird like just change it to TCP; we can go down this path of changing the whole check type and stuff. But the simple thing is, I mean, at the end of the day, I know the configuration works if I were to edit it in by hand. It's just making our Chef module flexible enough to take that dual configuration.
D
Another idea would be just to have a node override for HAProxy, where some HAProxies are sending all of their traffic to...
D
Yeah, you could do it like this. I'm trying to remember how we did this... I know that when we got rid of nginx for git over HTTPS, we had already migrated to Kubernetes, so that wasn't a problem when we did the slow rollout, and we used weights.
D
I think what we... yeah, I remember now.
D
We used canary for this, right? We had all of the VMs, and then we had the canary server, which is also put into the main server block, because we send a small percentage of traffic to canary. And then what we did on the transition: we just lowered the weights of the VMs, which increased the relative weight of the canary.
D
If you look at the server line for canary, we use the canary port, not the main port. Okay, I can kind of walk you through this if you want, after the call maybe, and show you what I mean, but this is how we did it, so that should work here as well. Okay, I think yes, as long as the health check is the health check, the...
D
Yeah, it's really quite awful, isn't it? So we want backends, and web, right?
D
So here we have the canary web.
D
Yeah, let's take a look. So, backend https... okay, yeah, see, this is why. I don't know if you're able to parse this; I'm so used to looking at it that it's totally clear to me. You have "server", then you have the name, then you have the IP, a colon, and then the port. See how this is the canary HTTPS port here? So we're able to specify a different port for the canary server, even though it's in the main backend.
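Roughly the pattern being pointed at on screen, as a sketch (the name and address are invented; the point is the per-server port and weight on the canary entry inside the main backend):

```
#        name          address : port    per-server options
server   canary-web    10.0.3.31:8443    check weight 1
```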
D
If we look at backend web... we just never... maybe this was a change that Skarbek did, I don't even remember. But if we made this the canary weight, and make sure that we have that as a setting, then I think we're good. That's all we need.
D
Yeah, in the past we never mixed VMs and Kubernetes on canary, and I don't think we needed to.
D
...because if we look at the canary issues, which is down here... yeah, if you look here, we also use this one, yeah.
B
Okay, I'm okay with that. You know, I can do a change request and we can have the revert ready to go.
D
Well, the revert is really to disable canary in Chef. I mean, that's easy, but you're right, all of the... I mean, we can, or...
D
Sorry, let me just say: we could also do this even more carefully if we want. We have these paths configured in Chef for routing to canary. We could just say no special paths are routed to canary, so everything is going to go evenly to the main stage, and then we can use weights to just take a percentage of traffic.
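For reference, weights can be nudged either in the rendered config or live over HAProxy's runtime API; a sketch of the runtime route (the backend/server names and socket path are assumptions, not our actual values):

```sh
# Sketch: adjust the canary server's weight relative to the other servers in the backend.
echo "set weight https/canary-web 10" | sudo socat stdio /run/haproxy/admin.sock
echo "get weight https/canary-web"    | sudo socat stdio /run/haproxy/admin.sock
```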
B
So let me clarify here then. You're saying we would take away the special routing to canary for things like gitlab.com, www.gitlab.com and stuff. Then we would add Kubernetes as canary exclusively, so that means even in the main setup canary would be one of the servers amongst all the others, with a different port, because we can support that by changing the cookbook. And then we would just adjust weights, and canary would no longer be, for web at least, exclusively canary nodes.
B
I'm fine with all of these options; it's really what people feel most comfortable with.
D
What you could do is, first of all, configure Kubernetes for canary in Chef and not even bother with the weights, and then do a slow roll of the HAProxies, right? And I guess the question is: how long do you want this change request to be for, just canary? Are we talking hours? Are we talking days?
D
Yeah, and I think we can slow-roll canary by just using Chef runs on HAProxy. You know: stop Chef on all HAProxy nodes, merge your MR, and then just do the first HAProxy node, which will take a very small amount of traffic. And I mean, we could also use weights as well, but I think...
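A sketch of that slow-roll mechanic (the Chef search query and service name are assumptions, not our actual roles):

```sh
# Sketch only: role/search names are illustrative.
knife ssh 'roles:haproxy' 'sudo systemctl stop chef-client'   # pause convergence fleet-wide
# ...merge the MR, then converge a single node so only it picks up the new backend:
knife ssh 'name:haproxy-01*' 'sudo chef-client'
```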
A
Then, when we did the API switchover, we tried to be clever with weights, and in...
D
My next meeting, sorry guys.
C
I'm happy, yeah. I think that makes sense; it gives us a chance to double-check we have all the pieces in place and that everyone knows what's happening, and then yeah, let's just go for it.
C
Great, perfect, sure, perfect. What do you think... so we talked about the console readiness review as well. I'm assuming we should get that... I mean, I guess it's not quite so blocking for canary, but we should also maybe see if Skarbek has time to get that moving.
B
Yeah, that sounds fine. Look, you know, I'm happy with that. If that's the direction we want to choose, it's not a problem. So I'll just make the readiness review top priority now, which is fine. Then, when we get that to a good state, we can continue to see where we want to go.
B
And then it sounds like what we'll do is get the readiness review in a good state, and then we will basically do... I mean, I'm happy to do it during APAC hours, where it's quiet. I'll just, maybe over four or five hours, slowly rotate the HAProxy nodes over from VMs to Kubernetes at an entire VM level. And then, if something goes horribly wrong, just drain canary instantly and roll it all back. Otherwise, after that change request is complete...
B
...we are hard canary using Kubernetes, and then from there we can decide the plan for production. It sounds like, actually, we may have better options in production with what Jarv was saying about, not faking canary, but using that canary mechanism; it sounds like we might have some better controls to slowly roll it out in production.
C
Cool, okay, yeah, that sounds good. Just as we go through... I see there's a bit of discussion on the issue, so we can certainly continue there, but I was just wondering: have we already decided how many pods we're going to start off with?
B
Yeah, so I had a quick look. The shorter answer is I'm kind of using API as guidance, leveraging the work we've already done there. I had a look, trying to compare the two, and it's unfortunate now, because it's been so long; basically I can't see the VM data any more. I can go through issues, and I've got little bits of data from the VM-to-Kubernetes migration of API.
B
But since then we've done further tuning work, so I can't follow it 100%, but I think what we've got latest in Kubernetes for API is a pretty good starting point for web. I don't even know... I was trying to find some information about whether API in general is less traffic than web, and I couldn't find anything too conclusive. So, as I said, as a starting point it's probably fine, but you know, as always, we need to... I mean, we...
C
In some of the previous migrations, we've tried to really work it out and go in with what we think is the right size, and have had incidents just because it was too small. I would much rather, for the first couple of days, we are massively over-provisioned and then bring it back down. So if you're not sure, just please go super big and we can scale it back.
B
Sure. Well, I think at the moment, even the scale for API, we can still allow up to 150 pods per cluster, I think, and so I'm going to do the same for web. Hopefully, I don't think we'll get...
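A sketch of what that ceiling looks like if it is expressed as a horizontal pod autoscaler limit (the names, minimum and scaling metric are assumptions; only the 150-pod cap comes from the discussion):

```yaml
# Sketch only: names, minReplicas and the scaling metric are illustrative.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: gitlab-webservice
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: gitlab-webservice
  minReplicas: 10
  maxReplicas: 150          # the per-cluster ceiling mentioned above
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
```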
B
So I covered that quickly in a comment. I think at this point in time it was an interesting idea, and as Jarv and I bounced it backwards and forwards, I think it's more... the appetite for changing things now is probably...
B
It's definitely a technical debt thing. I actually think it has merit in terms of trying to squeeze more performance out of a single machine, but obviously it's got a lot of downsides. So maybe it's not as good as we hoped, but it's something we'd have to test.
B
Sure, no, yeah. So I think we're going to try and, at least to start with, unless something comes out that's very different, basically keep web and API the same, and I think that's kind of what we mostly expect. Back in VMs, I think the fleet counts were similar: a similar number of VMs, the same VM size. So I'm hoping that will provide us good guidance.
A
...for tuning, but I definitely can improve there, because we still use...
B
Yeah, sure, yeah. No, it's definitely important, we should do it, but I think we can just add web to the list of things to look at, hopefully, and yeah.
A
By the way, not sure if you've heard about it: yesterday we figured out that the imbalance of traffic to the pods is related to the fact that each node gets the same amount of traffic, but if it has just one pod instead of three, for instance, then that one pod needs to deal with triple the amount of requests, right? So that also needs to be taken into account for how many pods you put on a node, and things like that. Yes.
B
That's another discussion... yeah, no, it is, but very quickly: I replied on that issue and put some more detail in there, so this is done actually. Basically, the short answer is that kube-proxy is supposed to prevent that, but there's a certain setting that can be set to override kube-proxy doing that, at the risk of unbalanced load balancing; Kubernetes says if you change this, you're going to have this problem, and when a new node comes online it's going to get all the traffic. And the reason ingress-nginx changed this...
B
...is because otherwise they lose the originating IP. They get the bounced IP, because normally traffic bounces through every single node and then goes to the pod. But with this setting, traffic only goes to the nodes running pods, and the load balancer will just balance it between those nodes instead of spreading it out, with kube-proxy then rewriting it in a more consistent manner. That is fair.
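The behaviour described matches Kubernetes' externalTrafficPolicy on the ingress-nginx Service (that the setting in question is this one is my assumption, as it isn't named in the meeting); a sketch:

```yaml
# Sketch: illustrative Service, not the deployed chart values.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
spec:
  type: LoadBalancer
  # Local: traffic only reaches nodes that run an ingress pod, so the client IP
  # is preserved, but load can be uneven (a node with one pod gets as much as a
  # node with three). Cluster: kube-proxy spreads traffic evenly but SNATs it.
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: 443
```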
B
So yeah, I'm not really sure if there's a good answer, but yeah, it was really good that you found that. As soon as I saw the note where you said that, I remembered that setting, and I looked, and I was like, yep, there it is.
C
Cool, great. Is there any other stuff we want to cover today?
B
Yeah, look, I think the plan is solid. I think: do the documentation and that readiness review now, make sure a lot of people look at it, and make sure all of us are in agreement on the direction we want to take for the next step.
C
Awesome, sounds good, great stuff. Thank you very much for the demos. Good luck with the next bits, and yeah, I hope you have a good rest of your day.