From YouTube: 2021-02-25 GitLab.com k8s migration APAC
D: Have you got plans for the long weekend, Graeme?

E: I know, and I... there.
D: Yeah, nice. Okay, so welcome everyone. Skarbek has left us item number one, so let's see who would like to pick this up. Last night he got traffic going through Kubernetes on staging for the API service, so he suggested we could demo things. He specifically suggested I could demo them to you, but maybe, Henry, I don't know if you want to have a shot at that, and we can dive in if you get stuck or that's too much.
E: It's a question of what to demo here. I mean, I could just have a look at the dashboard, maybe, and how it's looking.
A: I'd even be happy if you could, you know, share your screen, show the pods running, and just show the proxy config sending traffic to it. That would pretty much be good enough for me.
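For reference, the kind of check being asked for might look like the following; the context, namespace, labels, and resource names are illustrative assumptions, not taken from the meeting:

```shell
# List the webservice/API pods (namespace and label are assumptions)
kubectl get pods -n gitlab -l app=webservice

# Show the Service endpoints the proxy layer would be sending traffic to
kubectl get endpoints -n gitlab gitlab-webservice-default
```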
E: So, looking at the... I just zoomed out to a 24-hour view, and it seems to be working, which is nice. And gstg... oh, that's gprd; let me have a look at gstg.
E: It's a little bit slow to load because it's over 24 hours, but it looks like we started yesterday to shift traffic over to the API pods. I guess at 20... standalone higher error rate... 40. That's not really that great. Let's have a look at the kube detail.
E: I'm... so I just need to have a look here. Let me look at the weights, I guess, for HAProxy.
E: So, I've got my console here. Am I right that this should give me the weights for the HAProxy for gstg right now?
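As an aside, querying weights over the HAProxy admin socket might look like this; the socket path and the backend/server names are assumptions, not the real fleet config:

```shell
# Dump per-server state, including effective weights (socket path assumed)
echo "show servers state" | sudo socat stdio /run/haproxy/admin.sock

# Or ask for a single server's weight (backend/server names illustrative)
echo "get weight api/kubernetes-api" | sudo socat stdio /run/haproxy/admin.sock
```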
E: So I wonder if we would have, you know, somebody in between generating some traffic. And, I mean, it changed since we changed the deployment.
D: Okay, maybe we should move along. I think the view on staging is: we know we have a few blockers, and we're talking with Distribution about prioritizing those, around improving... reducing the logs.
D
We
get
dropping
out
some
of
the
duplication
as
well
as
looking
at
what
we
can
do
for
the
giving
some
more
control
over
the
blackout
options
for
the
api
service,
so
those
things
are
all
kind
of
coming,
so
we're
probably
unlikely
to
move
beyond
staging
before
we
have
at
least
some
of
those
things,
particularly
on
the
logging,
so
we'll
probably
sit
here
for
a
little
bit
anyway.
A: Just something else to be aware of, for everyone interested, with API and web as well: there was an incident in the last week relating to disk usage on API nodes, with files getting buffered onto disk. I've opened an issue in the Delivery queue, basically to say that it looks, from what I can determine, like we are going to have this problem with these files in the API pods when we migrate.
A
So
I
need
to
reword
the
issue
a
little
bit,
but
at
the
very
least,
I've
looked
at
where
those
files
are
going
to
go
and
we
probably
need
to
set
a
maximum
size.
So
we
don't
basic.
Basically
at
the
moment,
it's
quite
possible
for
someone
to
kind
of,
I
think
upload
enough,
that
it
will
fill
up
the
disk
of
the
kubernetes
node
and
cause
a
whole
node
to
be
recycled
and
everything
get
evicted
out.
So
there's
a
deeper
question
there
on
the
application
side
of
like.
Why
are
these
files
buffered?
Is
there
a
limit?
A
How
do
we
clean
that
up
and
what
have
you?
But
in
the
meantime
we
should
just
use
some
kubernetes,
just
basically
it
needs
some
changes
in
the
helm
chart
to
basically
say:
okay,
every
pod
can
only
use
this
amount
of
disk
space
and
make
sure
that,
however,.
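What's being described maps onto Kubernetes ephemeral-storage requests and limits; a minimal sketch of the kind of chart change meant here, with illustrative values. Exceeding the limit evicts only that pod, rather than filling the node's disk and recycling the whole node:

```yaml
# Per-container cap on node-local disk (emptyDir, logs, scratch files).
# Values are illustrative, not from the meeting.
resources:
  requests:
    ephemeral-storage: "1Gi"
  limits:
    ephemeral-storage: "4Gi"   # kubelet evicts the pod above this
```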
B: We... is it... it's using the shared storage for buffering, right? And that's...
B: Yeah, but we don't have any size limit on the API VMs, and I don't think it's been an issue there.
B: You have possibly fewer nodes servicing requests, I see, and...
A
I
think,
while
it
may
not
have
been
in
the
issue
in
the
past,
it's
certainly
it's
not
causing
like
slo
alerts,
but
it's
certainly
causing
some
alerts
now.
So
I
I
don't
know,
maybe
maybe
this
is
just
a
purely
an
application
bug.
Maybe
it's
like
yeah.
We
found
a
use
case
with
these
aren't
getting
cleaned
up,
but.
A: Yeah, let me... they're mostly disk space alerts, but I believe there was something deeper.
A: Look, I'll be perfectly honest: I haven't looked at it in depth, but I'm just raising it with this migration for the API nodes. It sounds like there's something going on. Maybe it's just an application bug and that's all this is, but we just need to be aware of it. That's all.
D: Cool, okay, great, sounds good. Thanks for showing us that stuff, Henry. Shall we move on to the discussion?
E: So it's mostly about the internal Pages API endpoint, and I see a lot of those messages coming up here related to missing credentials or something, so...
A: They're going swimmingly; they're absolutely perfect, without issue. Every one I've done has actually been great; they're just slow, which I discuss a bit further down. I haven't enabled auto-upgrades in production yet, but I think we're confident enough now; they've been running in the other environments long enough that I will, and that will alleviate a lot of the pressure.
A: I guess because the upgrades would just be happening in the background. But they've really been fine; they're just slow. I could even change to just bursting up, doubling the nodes; basically there are different strategies I could use, but I'm doing the slowest strategy, so I can't really complain, but yeah.
A
I
know
they've
been
fantastic,
hopefully
wrapped
up
early
next
week,
but
like
the
main
gprod
one
is
going
to
take
me
a
few
days
at
the
rate,
I'm
kind
of
going
so
it's
like
yeah,
but
I
might
try
and
just
like
burst
up.
New
node
pulls
like
just
brand
new
node
pulls
and
see.
If
we
can
just
shift
the
workloads
over
through
natural
deploys
or
something
different.
B
I
see
so
this
is
there's
two
types
of
upgrades:
there's
the
kubernetes
upgrade
and
the
node
upgrades
right.
A: Like five or ten minutes, rock solid, not even a blip; and then it's just the node pool upgrades. It's perfectly valid and supported to have Kubernetes node pools a minor version behind, so you can have 1.17 node pools; so there's nothing pressing for me to go, oh, I need to upgrade all these node pools straight away now that I've upgraded the masters, kind of thing. Yeah.
A: The reason I do it is more procedural, because I have to do a change issue, and the change issue, process-wise, feels like a bounded thing. I can't just say, oh, this change will happen sometime in the future; I've got to box it. So I put it all together as one change request and just power through it all. Yeah, but...
A: I don't know, maybe... but it'd be great to do something like either I spin up new node pools, or I just set them to auto-upgrade. If they're just set to auto-upgrade and we leave it, they'll do a couple of nodes over the coming weeks, and actually our autoscaling will probably just take care of the problem as they scale up and scale down. Yeah.
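A sketch of the GKE knobs being discussed; the cluster and pool names are placeholders. Surge upgrades are the "burst up extra nodes" variant, where new nodes come up before old ones drain:

```shell
# Let GKE upgrade nodes in the background (names are placeholders)
gcloud container node-pools update api-pool \
    --cluster=gstg-gke --region=us-east1 \
    --enable-autoupgrade

# Optionally speed node upgrades up by surging extra nodes first
gcloud container node-pools update api-pool \
    --cluster=gstg-gke --region=us-east1 \
    --max-surge-upgrade=2 --max-unavailable-upgrade=0
```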
B
I
I
think
we
should
just
do
that
personally,
like
the
work
of
spinning
up
a
new
node
pool
and
migrating
workload,
I
mean
this
is
like
a
full
day
a
half
a
day,
maybe
maybe
a
full
day
of
of
effort
just
to
go
through
all
the
change
issue,
documentation
and
I
think
it's
it's
more
risky
to
do
that
than
just
to
let
it
go
happen
automatically,
especially
if
we
do
it
on
the
zonal
clusters.
First
and
we're
happy,
have
you
done
this?
Have
you
done
the
node
upgrades
on
the
zono
clusters
already.
A: Yeah, sounds good. Okay, semi-related; I won't go into it now. If you check out the infrastructure readme, the interesting-links channel, it seems Google have just made a bigger announcement about even more self-managed... sorry, Google-managed Kubernetes, where they do all the upgrades and everything for you, it looks like. I don't think it's something we need to look at right now, but it's something interesting we could consider in the future, and it seems like they're really pushing customers towards "let us do everything."
B: Cool. What should be on our radar for us in 1.18 that we didn't have in 1.17?
A: So, two things. There's a fix coming in for Mailroom, I think, and possibly a lot of other pods, where the TCP settings were not set to match Google Cloud's; the TCP timeout settings, I think, weren't matching Google Cloud's underlying settings. I don't know exactly how that manifests itself. We did see it...
A: ...I think, manifest itself with long-lived connections and Mailroom, but it may have other unknown side effects that would be fixed, or at least might get better, with 1.18. The other big thing is that the Ingress spec has finally been finalized, so all our ingresses are going to change: they've moved some settings from annotations into real fields, and they've basically made incompatible changes.
A: 1.18 will still happily accept the old spec; it's 1.22, so four versions from now, where they'll stop accepting it. But considering how many ingresses and how many charts we have, I'd like us to try and get ahead of that before we get caught. And the other thing is, there was an issue with a job that was now unblocked, related to pod autoscaling, that needed 1.18.
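For context, the finalized Ingress shape being referred to, as it ships in networking.k8s.io/v1: pathType becomes a real field and the backend moves under service. The host and service names below are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webservice              # placeholder
spec:
  rules:
    - host: example.gitlab.net  # placeholder
      http:
        paths:
          - path: /
            pathType: Prefix    # previously implicit, annotation-era behavior
            backend:
              service:          # was serviceName/servicePort in v1beta1
                name: webservice
                port:
                  number: 8181
```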
B: Yeah, this is... it's related to the HPA, right?
B: Yeah, this feature, which I was really looking forward to, allows you to prevent the dithering we see on scaling: when we scale up, let's wait a little bit before we scale down. This is something we should add. We should probably get issues in for all this stuff, for the nginx ingress, or all ingresses.
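The 1.18 feature in question is presumably the HPA behavior field; a minimal sketch with illustrative names and numbers. The scale-down stabilization window is what damps the dithering described above:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: webservice            # placeholder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webservice          # placeholder
  minReplicas: 10
  maxReplicas: 100
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleDown:
      # after scaling up, wait five minutes of consistently lower load
      # before scaling down again
      stabilizationWindowSeconds: 300
```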
B
I
guess
like
is
this
something
we
should
do
now
before
we,
because
right
now
we
don't
depend
on
at
least
the
engine
x
ingress
anywhere
until
as
soon
as
we
do
the
api
right,
then
I
don't
know,
I
don't
know
how
much
market's
going
to
be.
A
Yeah,
I
hopefully
it's
not.
I
I
haven't
looked
closely
into
the
differences
and
how
they
expect
you
to
like
migrate
the
settings
and
what
have
you
there's
all
yeah?
We
do
need
to
look
into
it
short
answer.
Is
we
should
look
into
it
and
figure
out
and
probably
have
to
do
a
quick
order
of
what
ingresses
we're
using,
because
it's
not
just
a
the
gitlab
chart
it'll
be
everything
in
gitlab
helm
files
like
we
use
it
for
all
the
monitoring
stack
and
things
like
that,
there's
probably
a
few
in
there.
A: I don't know if we use one for PlantUML... we do, yeah.
B: Okay, I think I have an issue open for the HPA setting. I don't know where it is; I think it's in the Delivery tracker. But could you open up issues for the things that you know about, just so we keep them on our radar?
E: Yeah, I just wanted to ask, Graeme: where are you looking for information about Kubernetes news? Do we have some good trackers where you get all the information?
A: Yeah, really I just look at the Kubernetes project release notes; that's usually my best go-to.
A: I am in the Kubernetes Slack as well, kubernetes.slack.com, a few different channels in there. But by far the best is their release notes; they do really good release notes. So, the short version: read their release notes as soon as things are released, because as the versions get released there's still a few months before GKE sees them, while Google gets them, certifies them, and patches them.
A: So we can get a full picture of what's coming even before it's ready for us in GKE. They're trying to actually close that window; obviously Google wants to be able to deliver it quicker. But yeah, the upstream release notes are perfect.
B: All right, Graeme, you've got the next one, yeah.
A: Sure. So, there are a few of these which are just more, you know, questions, taking this opportunity we have to chat in person, just trying to fill out some information and bits and pieces. You've already put a lot of information down there, jarv, so I won't waste too much of everyone's time. I was just interested in where we do SSL termination for Pages, considering the fact that we have custom domains and all that. It's interesting to note we actually terminate in the application; makes sense, yeah.
A
I
just
I've
been
looking
at
ingresses
recently,
especially
of
the
back
of
the
work
for
cass,
so
ingresses
cloudflare,
how
it
all
fits
together
for
our
stack-
and
you
know,
it'd
be
great
to
push
that
further
out
the
stack,
but
obviously
with
pages
I
don't
think,
that's
possible.
So
so
it
sounds
like
we're
always
going
to
have
a
layer,
4
load,
balancer
right
up
to
the
pages
application.
B: Change the... yeah, we could just change the endpoint to use Cloudflare; there's probably an issue out there for it. Yeah, but you're...
B: nginx or whatever, yeah, the nginx ingress. To be honest, I don't know if that would be a smooth transition; we might have to schedule a little bit of downtime if we did something like that.
A: Yeah. So, the other point there, and you've already put a lot of information in there: once again it was just coming off the back of the upgrade and thinking further. If we're going to have a hundred webservice pods, 200, they take a very long time to cycle one at a time. So what I'm throwing out there is a suggestion, or idea: is there any way we can try and shorten our turnover times on the webservice pods?
A: I'm not even sure if it's possible, but if there is any way we can, it would be really good to try and do that now. Failing that, I think we need to look at ways, like pod disruption budgets, over-provisioning, something to speed up the cycle of turning over those pods. Because, correct me if I'm wrong, if we're going to do a helm upgrade, it's going to wait; a helm upgrade will wait for every single pod to be cycled.
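The levers usually reached for here, sketched with illustrative numbers: a more aggressive rolling-update strategy cycles several pods at once, and a PodDisruptionBudget bounds how many can be down during node drains. Names are placeholders:

```yaml
# Deployment rollout strategy: replace pods in batches, not one by one
spec:
  strategy:
    rollingUpdate:
      maxSurge: 25%          # extra pods brought up before old ones stop
      maxUnavailable: 10%    # pods allowed down at once during the rollout
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: webservice           # placeholder
spec:
  minAvailable: 90%
  selector:
    matchLabels:
      app: webservice        # placeholder
```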
E: We just can't make Puma spin up faster, so I guess you...
A: But if we start shutting down and spinning up more pods at a time, especially when we shut down more pods at a time, is that tiny amount of errors we cause going to start becoming a bigger amount of errors and hurting apdex? Because if, during a clean pod shutdown of a webservice pod, we know there are essentially zero errors, like we do it correctly and everything, then it's probably not an issue; but I'm not sure that's the case. It could be; I just need to have a look at it, but yeah.
B: We need more control over that for API, and that will speed things up; I think we'll see much quicker pod cycles on API than we currently do on git-https. Oh, okay, of course, because if we were able to set the blackout seconds...
B: Actually, let's see... yeah. The problem is that the termination timeout is currently set longer than the blackout window, and the reason for that is that Puma takes a very long time just to shut down cleanly, because of the long blackout-seconds window.
B: If we have to keep that long, then our options are either we wait for that blackout-seconds window to expire, or we give it a SIGKILL; like, we tell Kubernetes, yeah, don't wait, kill it. But that's not very clean, so I think we'll probably have to live with these long shutdown cycles for now; but I assume this is going to change soon.
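The relationship being described, as a hedged sketch with illustrative values: the pod's grace period has to outlast the application's blackout window, otherwise Kubernetes escalates to SIGKILL mid-drain:

```yaml
spec:
  # must exceed the blackout/drain window; once it elapses, Kubernetes
  # follows SIGTERM with SIGKILL
  terminationGracePeriodSeconds: 360
  containers:
    - name: webservice       # placeholder
      lifecycle:
        preStop:
          exec:
            # illustrative: let load balancers stop sending traffic
            # before Puma receives SIGTERM
            command: ["sleep", "15"]
```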
D: Does it also mean, if we have more pods and this takes longer... there's a risk, right, that at some point we just run out of pods? Is that right, if we can't cycle them fast enough?
A: I started looking around, just in general, did some bits and pieces relating to our front door, and I had some thoughts about how we could, if we wanted to... there's a very good, very healthy and quite strong HAProxy ingress project, and we could basically use that to replace the proxy layer we currently have in VMs, if we feel there's still benefit to keeping it; and I actually do think there might be benefit to keeping it, at least a little bit. But yeah, I put down some...
A
What
I
feel
are
some
pros
to
doing
that.
You
know
just
by
converting
our
h
proxy
layer
into
cut.
You
know
into
kubernetes
pods
how
I
could
split
that
out
with
different
controllers
for
pages
api
cards
and
the
main
one,
how
I
think
that
could
all
work
and
what
could
be
some
of
the
benefits
of
that
better
dynamic
canary
every
single
pod
can
become
a
back
end
in
h.a
proxy
can
do
some
pretty
cool
stuff
with
that
and
traffic
routing.
A
But
you
know
this
is
all
com
conditional
on
if
we
think
this
is
something
worthwhile
tackling
now
worth
while
tackling
later
I'm
pretty
flex.
But
I
do
think
there's
some
interesting
ideas
there,
which
I
could
turn
into
something
like
a
demo
or
something,
but
it's
obviously
incredibly
risky
to
actually
do
that
because
that's
our
front
door-
and
you
know
something-
goes
wrong-
we're
in
a
lot
of
trouble.
D
Is
there
any
kind
of
is
there
something
which
it
would
be
good
to
have
this
done
before
we
tackle
like
you
know?
Is
it
something
that
like
stateful
services
or
is
it
just
a
it
would
be
good
to
have
as
a
sort
of
general
setup.
E: I guess after we've moved the majority of things into Kubernetes, it would be great to also look into the ingress, I think, right? And I don't know how easy it would be to handle the same thing when we move it over to the HAProxy ingress, probably.
A: You can also do things like adding what is essentially a Kubernetes backend that's actually a VM network endpoint group in Google, which allows us to basically still send traffic to virtual machines and things like that. So you can do some pretty good stuff, and, this is what I was thinking with this, I think I could pull together something that would work quite well. But at the same time I'm very aware this is probably not the highest priority, so I might even just put some notes on the issues that Skarbek linked there, just to say, you know, this would be a project I'm interested in.
A
This
is
how
I
think
would
work,
and
then
you
know
whenever
we
choose
to
tackle
that
issue,
it's
there
but
yeah,
I
think
definitely
web
and
api,
and
things
like
that
are
way
higher
priority.
But
you
know
at
the
same
time
I
I
personally
I
I'm
open
to
you
know
I
I'm.
I
think
proxy
would
be
maybe
easier,
maybe
more
important
than
doing
our
stateful
services
to
kubernetes.
I'm
not
sure
if
you
know
what's
involved
with
that,
but
things
like
italy
and
stuff
and
kubernetes.
I
feel
like
the
pros
and
cons.
B
I
I
think
it's
interesting
to
think
about
what
would
how
we
would
replace
h
proxy,
because
in
in
the
kubernetes
world,
once
we
have
the
migration
fully
complete
the
two
things
that
we
needed.
Three
things
we
needed
for
our
cookie
routing,
which
maybe
we
could
do
a
cloudflare.
I
don't
know,
but
that's
that's
one
thing
second
thing
is
doing
waiting
for
like
weighted
traffic
between
the
clusters
and
the
third
thing
is
similar,
which
is
just
draining
an
entire
cluster.
B
So
so,
if
we
think
there's
value
in
those
items
canary,
I
don't
know
like
we've
already
shifted
a
bit
with
canary.
We
used
to
do
this
request.
Well,
we
still
do
this
request
path
routing,
but
lately
we've
been
starting
just
to
take
a
percentage
of
traffic,
and
if
we
do
that,
then
maybe
aj
proxy
is
not
helping
a
whole
lot
like
I
I
don't
I
don't
know-
and
for
draining
and
shifting
the
weight
of
traffic.
What
would
be
the
technology?
A: So how it actually works, how they get you to do it, is there's basically a bunch of annotations and different ConfigMaps you can use to say: this service backend is weighted this way. And you can weight it just traditionally, as in, you know, 10/20 or whatever, or you can weight it by number of pods; because the controller itself runs in Kubernetes, it can do more intelligent things. Essentially all it's doing is... it's a controller.
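As a hedged illustration of annotation-driven weighting, the community haproxy-ingress controller exposes blue/green balancing by pod label; the annotation keys below are from its docs as best recalled, and every name and weight is illustrative rather than our config:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webservice              # placeholder
  annotations:
    ingress.kubernetes.io/balance-algorithm: roundrobin
    # nine parts of traffic to pods labeled track=stable, one to track=canary
    ingress.kubernetes.io/blue-green-deploy: "track=stable=9,track=canary=1"
    # "deploy" weights per group; "pod" interprets the weight per pod
    ingress.kubernetes.io/blue-green-mode: deploy
spec:
  rules:
    - host: example.gitlab.net  # placeholder
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webservice
                port:
                  number: 8181
```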
A: So you would put in what is essentially a GKE-specific object which says: this is a backend, it's a network endpoint group, and my API nodes are in that network endpoint group, and suddenly it can route to them. Basically, network endpoint groups are Google's big... their technology piece for doing routing to VMs and Kubernetes, and they've made it a first-class Kubernetes object, so you can do it in YAML, the same way we do some of the custom Google stuff, like Google managed certs and Google BackendConfigs.
A
You
can
drop
something
in
into
the
kubernetes
land.
That
basically
says
I've
set
up
a
network
endpoint
group
with
my
vms
behind
it,
and
then
you
can
put
some
additional
settings
on
top
like
what
port
and
what
health
checks
and
all
that
or
whatever
that
ads,
and
that
means
proxy,
because
it's
a
kubernetes
native
object
can
pick
that
up
and
then
just
be
like
okay.
This
is
the
ips
of
the
you
know
the
back
ends
I
need
to
do.
A
I
I
do
actually
think
that
I
I
definitely
don't
have
all
the
answers
right
now,
but
from
an
initial
glance,
I
feel
like
there's
a
possibility
of
a
solution
to
to
basically
be
equivalent
to
what
we
do
now,
but
that's
from
very,
very,
very
bri,
not
briefly
looking
at
it
kind
of
play
with
it
a
little
bit,
but
the
the
way
you
would
do
it
would
be
slightly
different
and
it's
definitely
more
towards
how
google
kind
of
wants
you
to
do
it.
They
want
you
to
like
set
up
these
networking.
E: I think we also want to consider looking into Istio, I guess, and other things. So maybe we should think about that first, and then look into whether we want to spend effort on going with HAProxy or Istio or something else for traffic routing.
A
Yeah,
so
I
definitely
know
that,
that's
obviously
a
downside
is
we
haven't,
had
the
discussion
in
general
about
you
know,
service
measure
or
just
ingress
proxies,
and
what,
because
there's
lots
out
there?
I
I
just
kind
of
was
looking
at
the
ho
proxy
one,
because
we
have
it's
a
known
quantity
to
us.
You
know
it
would
be
easier
to
directly
replace
atria
proxy
with
another
proxy.
We
could.
You
know
we
knew
the
configuration
settings.
We
need
to
make
sure
and
things
like
that,
but
certainly
you're
right.
A: And to keep this meeting on time, the only other thing I was going to point out is that I've noticed, with the API migration, Henry and Skarbek are doing more and more to try to bend the current helmfile template situation to their will.
A: So I'll have some time in the next week or so, and I'm happy to try and kick-start at least some of the work involved in what I believe would be a process of moving the helmfile generation into Jsonnet, and then basically using helmfile to just run Helm, doing what it does best, running Helm and that stuff, while actually moving all of the logic around what values go into Helm out into Jsonnet.
B: I would like to move GitLab CI to Jsonnet, personally, because that's where there's really a lot of copy and paste, and it seems like it'd be a pretty easy thing for us to do.
A
I
think
doing
conditional
logic
is
you
know
it's
trick,
there's
not
there's
nothing.
We
can't
do
but
yeah.
A: You know, conditional logic is hard. Being able to do functions and a bit more in-depth string translation and stuff is, I think, a win. The value validation is a lot stronger too; and that's not necessarily even tied to moving to Jsonnet per se. So there are a few different things...
A: ...I've kind of bundled together. There's converting it to Jsonnet, but also changing the process from evaluating at execution time to an external tool, like our rules files: you just run make generate, it spits out the YAML, you can look at the YAML, and the YAML gets committed. So it's decoupling us from executing Chef, talking to Chef, every time.
A
And
then
it's
kind
of
you
know.
Helm
file
becomes
a
lot
more
simpler
and
it's
about
because
it's
not
evaluating
things
over
and
over
yeah
sure.
B: I was just going to say, yeah, I think the hardest part for me is reasoning about the inheritance chain of YAML files in helmfile. I just recently added the zonal ones: we can now add a file for each zonal cluster, and then I had to include the file only if it exists, and you can't do that with helmfile. With Jsonnet it would just be, you'd have one YAML per environment, right? So that does make it a little simpler, and...
E: I can just say, not being an expert in a lot of this, but trying to read and understand our configuration there right now, I've noticed it's not easy for a newbie to follow where things land and where they come from. I mean, I get it after a while of looking into it.
B: I think it's an interesting idea. I think the biggest pain point for me is all the shelling out we do; and also, we need to start thinking about what we're going to do when we don't need the Chef JSON files anymore. I guess in some ways this becomes simpler, right? We'll just move configuration out of the Chef JSONs into the values YAML.
B: Yeah, because it means we aren't running Rails at all on virtual machines, except for the lingering Sidekiq catch-all, but that should migrate as soon as Pages transitions fully to object storage; then, I think, we are good to go there. We'll move Sidekiq, so Sidekiq might be the last thing to move. As soon as Rails is completely off of virtual machines, Kubernetes becomes the source of truth for config.
A
Yeah
so
so
yeah
the
work
I've
done
so
far.
I've
actually
covered
some
of
the
yeah.
I
I
think
I
because
once
again
moving,
I
I've
basically
moved
the
execution
and
pulling
stuff
out
of
json,
because
jsonic
can't
do
it
like
helm
template.
Does
you
can't
shell
out
and
change.
A
I
think
it's
nice
as
well,
because
yeah
I'm
just
having
even
the
ip
addresses
for
the
load
balancers,
I
run
one
g
cloud
command.
I
get
all
the
tell
g
cloud
to
output
it
in
json
and
then
I
can
just
manipulate
that
I've
got
a
function
in
json
where
I
just
go.
Get
the
vip
for
this
service.
Get
me
the
ip
for
this
service
and
just
pull
walks
the
tree
walks.
The
json
object,
grabs
it
grabs
anything
I
need,
and
then
I
can
just
pop
that
you
know.
Basically
wherever
I
want
but
yeah
look.
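A minimal Jsonnet sketch of that pattern; the file name, field paths, and VIP name are illustrative, not the real repo layout. The gcloud output would be exported once, e.g. gcloud compute addresses list --format=json > addresses.json, and the generator imports it:

```jsonnet
// addresses.json: committed output of a gcloud command (name assumed)
local addresses = import 'addresses.json';

// Walk the gcloud JSON and return the address for a named VIP
local vipFor(name) =
  local matches = [a.address for a in addresses if a.name == name];
  assert std.length(matches) == 1 : 'expected exactly one address for ' + name;
  matches[0];

{
  // drop the looked-up IP wherever the chart values need it
  global: {
    hosts: {
      externalIP: vipFor('gstg-api-vip'),  // illustrative VIP name
    },
  },
}
```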
A: ...I just had to put it out there. If we feel it's worth working on, I'd be happy to work on it; if we don't, that's also fine.
B: Great. And the last, well, not the last issue, but the next one: I just wanted to highlight again the net capacity issue. This is something we really need to be aware of as we create more nodes for the API deployment; we're already sort of at the limit here. I'm not sure how much it's affecting us, but I'll bring it up in the next...
B: I think this is a core infra issue, so... I have a meeting with the managers today, so I'll bring it to their attention to see what we should do, or who can own it. And, Amy, you have the last one? Yes.
A: Yeah, so it's a good question. Basically, looking at everything holistically now: I've got an MR up to get the network policy in place, and...
A: But beyond that, I actually can't see anything where I'm like, oh my god, this is critical, we need to get it done before it goes to a wider group. I think they've done a really good amount of testing now that these safety mechanisms in KAS are in place. We've got it behind Cloudflare, albeit rudimentary, so we've got the coverage there. For the problems we can see, I feel we're pretty well prepared.
A: Obviously there are problems we can't see, or may not see, but I think it's looking pretty good for a general rollout. It's got its own ingress, its own path in; it doesn't go through any of our existing infrastructure. It would be really hard for it to cause trouble; even if it were to be abused or attacked or something, I isolated it and did all the work specifically to try and make sure there was no way it could affect the rest of the infrastructure.
A: No, they're basically happy to defer to my judgment. I guess it's a partnership, right? So they're looking at it: they've started getting people using it; what bugs do they find; is it ready from a usability perspective? And at the same time they're really receptive to working with me, or the greater infrastructure group, on how we feel about whether it's ready, whether there are things outstanding there.
A: I don't know what you've been hearing, but what I'm seeing at the moment is that we're still barely getting one request a second; there's hardly anyone using it. So I think there's not going to be that much interest in it yet. They've just started feature-flagging people on, so it's going to take, I think, a week or so; we're not going to see a good amount of traffic for a while, and that might give us more information.
A: So I would expect they're not... they're not chomping at the bit to roll it out to everyone straight away; but when they do, they're happy to get our collective feedback on whether we think it's ready, and I'm actually pretty okay with it. Nice.
D: Fantastic. Is there anything else we want to cover today?
D: No? Fantastic. Well, thanks everyone; that was a super great discussion. Thanks for the demo, Henry, and yeah, enjoy the rest of your day and the long weekend.
A: Thank you. One last thing I'll mention before I go, okay, even if there's not a discussion now: I'm happy to move these meetings back an hour if that works better for everyone in Europe; I don't know how early people get up. All I'm saying is, if you want to move it back an hour, although that's probably an hour later for Craig Miskell, and I don't know how well that works for him, but if you want to in the future, fly it by me.
D: Cool, okay. We could definitely review it; I think, for me, certainly this time's fine, but we can work around it. It'll get interesting when the clocks change, right? When the clocks change at the end of March, then we can review it; that's when the times start shifting around. Right, so...
D: Cool, okay! Well, we can certainly review this going forward, but I appreciate all your input today. Have a good rest of your day. Yep.