From YouTube: 2021-06-30 GitLab.com k8s migration APAC
A
So I think we actually got... Jeff said he might. Job has accepted, so.
A
Well, I just quickly mentioned the discussion points. I haven't seen if Job dropped in. These are not necessarily for discussion, more just creating visibility. So on the observability work, Henry gave a demo of some of those items in last week's demo, so that's nice to see. And I saw Skarbek has got the console observability as well as the nginx stuff in progress.
A
It makes sense. And then number three was really just highlighting Skarbek's request for feedback on the internet stuff, so, just in case anyone missed it.
A
Sure, cool. So yeah, over to you, Graeme, on the demo.
C
Sure, so I'll share my screen. Cool, should work. So I am gonna hijack pre temporarily. So basically, what I'm gonna do here is I'm actually just gonna deploy.
C
So I'm just doing a deploy job now to actually turn the web pods on in pre, and I'm just going to watch this deployment. So we can go to... hopefully I'll bump this up, and I will look for webservice web. So it's showing nothing at the moment; this is just showing me any webservice pods that are going to come up.
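[For reference, a minimal sketch of the kind of command being described here; the namespace and label selector are assumptions, not taken from the demo:]

    kubectl get pods --namespace gitlab --selector app=webservice --watch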
C
But while we're doing that, I can also talk about... so basically I did this today: I put it in pre, and I ran a QA pipeline job against pre, just to try and see what I could see. I got a few failures. It's the sanity tests I would have expected to all pass; I mean, I'd expect most of these to pass as well. The failures I had were in the smoke tests, and I'm not convinced this isn't a permissions issue with me. So basically the permission errors I'm getting... and I will drop this.
C
But essentially, in the smoke test I just had this error from Selenium saying: misconfigured Sauce Labs, authentication error, you used username 'none' with access key 'none' to authenticate. To me (I'm not an expert) this sounds like there's a username and password that's not being set, and I wonder if there are CI environment variables that I don't have access to, or something like that. I'm not sure, but basically that's the only reason.
C
This test failed; it was just trying to run this test, and I mean, I could probably have a poke in here and see. I assume it uses your GitLab username; I don't know if that means it's trying to use me and I don't have access to something. But it was just interesting that both of these two failed with that exact same error; all the other smoke tests seemed to pass. I also tried to run some of these ones, which were browser UI, anything with kind of browser UI.
C
They all passed except this last one, and this last one failed, saying: create pull mirror for a repository over SSH with a private key configured, the 'syncs a pull-mirrored repository' test. So that test failed; it timed out. So I don't know, once again, if this is an actual problem, or if it's just something where, if I retried, it would work, or it just takes that long, or what have you.
A
Trying to run them again... like, do we have the web running on the VMs on pre, that we could run a kind of comparison against?
C
I could, I could totally just, yeah, if I just launched them again against the web VMs. That's probably what I should have done. I didn't, to be honest; I didn't spend too much time on it. I was just like... I was going to wait until I actually had the MR properly merged and everything, but yeah, first thing tomorrow, while I can run it on VMs, I probably should, and just double check.
C
I can just ask QA to figure it out. So you can see webservice web is running; we can now actually do... so I should be able to have this. So this is just tailing the logs from the webservice pod. It's passing it to a tool called angle-grinder, which, basically, because they're JSON logs, I can take those logs and pipe them to angle-grinder, which will read every line as a JSON object and then allows me to manipulate it. In this case, I'm getting a count by status, so HTTP status code.
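[For reference, a minimal sketch of the pipeline being described; the namespace, label selector, and the exact field name are assumptions rather than what is on screen:]

    kubectl logs -f --namespace gitlab --selector app=webservice | agrind '* | json | count by status'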
C
So you can see some of them come back with none, which is when we hit the metrics endpoint, and then everything else is 200s. So you can see that all looks fine at the moment. Now, if I actually go on the load balancer, I'll just do a chef-client disable... oh good, 'ggillies test'.
C
And then, hopefully, there's still a lot of stuff hitting that, that's okay. If I go to pre.gitlab... whoops, I'll open this in a new tab.
C
pre.gitlab.com... see, there's me: here's the name, Graeme Gillies, right there, so you can see it's hitting the pods. Essentially, the demo proves it's hitting the pods. You know, I haven't got much data in here, that's why I was using QA tests, but you know, we can see there's me again, going to the groups/new page with my username, from my Fedora Linux box. So we can see that, you know, the website, from what I can tell, works. So I'm fairly confident things are mostly working.
C
There's nothing hideously broken; images, JavaScript seem to be loading, and what have you. But yeah, that's more or less what I really had to demo, unless anyone has anything else they would like me to show them in particular. A fairly boring demo, but it is, you know... it should be all relatively straightforward.
C
Yeah, yep. So for those who aren't aware, we have nginx now, and so API is under nginx and web is under nginx. So this has actually taught us to kind of rethink the way we do health checks a little bit, because before we would just send a health check to, like, /-/readiness, and that would go to, say, the webservice pods, and because the back end is the same...
C
It's the same nginx, so we tell HAProxy we're checking the readiness for, say, web, but we're not checking the readiness for, say, API, independently of each other. So what we had to do is a little bit of creative Kubernetes ingress object work, using extra values to basically create a new ingress object. It's a little bit hard to read here, but essentially what happens is we take two paths, so this one and this one, to basically rewrite those URLs.
C
So basically what we have is: if you go to nginx at /-/k8s-api, then that will go directly to the API pods, and the same way for web, that goes to the web pods. So that means we can set health checks independently of each other. And the reason we couldn't do that before is that, if we try to set the readiness checks... basically, API lives under /api and web lives under /, and so any request that wasn't /api, like the readiness checks, would always go to web.
C
So that way, because what happens is the path up to the /-/k8s-api part gets chopped off and then the request is sent directly to those API pods. So it's a little bit awkward, but that way we guarantee, when we're doing back ends that are both behind nginx and HAProxy, that the readiness checks are going directly to the pods we actually care about, health-wise.
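[For reference, a rough illustration of the behaviour being described; the exact paths are approximations, not verified against the actual ingress:]

    curl -s https://pre.gitlab.com/-/k8s-api/-/readiness   # prefix is stripped, so this reaches the api pods as /-/readiness
    curl -s https://pre.gitlab.com/-/k8s-web/-/readiness   # likewise for the web pods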
C
We're also changing these at the moment. So HTTPS Git and websockets used to be TCP health checks, and that means a checker could just do a TCP connection and that was good enough; it did nothing more. So even if the back end was returning 500 errors, like it was just crashing constantly, that health check would still pass, because it could still establish a TCP connection. It wasn't looking at the HTTP code; that's changing as part of this work.
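[For reference, the difference being described, sketched with generic tools; the host and path here are illustrative only:]

    nc -z pre.gitlab.com 443                      # TCP-level check: passes as long as the port accepts connections
    curl -sf https://pre.gitlab.com/-/readiness   # HTTP-level check: also requires a healthy (2xx) response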
C
We would be in a very bad situation if these were failing in such a way that entire back ends in Kubernetes would be taken out, because you've got to remember, a back end gets routed to a random pod, so all the pods would have to be in a bad state for this to actually return a problem. But still, you know, there's always that chance, so we try to, you know, protect ourselves as best we can.
B
I do have a question about the first thing you showed, the /-/k8s-api, whatever. So, if I understood correctly, this is for the HAProxy health check, so we are actually rewriting URLs so that at the proxy level we route them to the right place.
B
This actually means that, from the product perspective, we are shadowing that namespace, so we will never be able to have a Rails /-/k8s route. Okay, can we work around this by going straight to the pods instead? Because it will probably never happen, but I'm kind of scared that someone is going to work on some crazy new Kubernetes integration and decide to use that path.
C
I'm kind of thinking this should be addressed at the same time as the epic where we want to address HAProxy in general. I think there's a... I've mentioned this before, I think, a discussion on Cloudflare, HAProxy, ingress-nginx. The amount of things we've got, the load balancers we've got sitting in the stack now, and the amount of extra, you know, extra ingress objects we've got even just to support some of the settings we need, is starting to get...
C
It is starting to get a little bit crazy. So I agree, I would like this to change and to get rid of it. I would like to see us replacing, hypothetically, HAProxy and ingress-nginx altogether with one kind of single front-door solution behind Cloudflare.
C
That does everything we need, properly Kubernetes-native as well, so that, running inside of Kubernetes, it can check the pods properly, and we just have Cloudflare straight on top of that, rather than, you know... But yeah, I think we have an epic to do that. So that's kind of my thinking at the moment: I do think this should be changed, but I want to try and see if I can push it into 'let's talk about HAProxy in general, and this is one of the problems with it that we have'.
C
Yeah, so I think we have... I think it's literally called something like 'migrate HAProxy to Kubernetes', or something like that, because as a lift and shift we could just put HAProxy in Kubernetes, but I'm not convinced it's the best solution. But yeah, there has been [discussion], and I've put a few notes on there about different ingress controllers and a few different... basically, any time I find something that's interesting to that discussion, I have been putting links on there.
C
[We keep] nginx around to do proxy buffering, which obviously is what we discovered with the API migration, which is fine, but Cloudflare can also do that for us. So that saves us money if we don't have to run nginx. Like, it's definitely not... as part of our migration with API and web, now is not the time to be changing things up too much, but I think post that, there's a lot of simplification possible for the stack.
C
[There's been] a lot more work done in Rack Attack now, like a lot more work. So a lot of the rate limiting we used to do in HAProxy... I remember, as an engineer on call, I used to be doing a lot more rate limiting in HAProxy, and I think that's become a lot less now. So I just think there's a lot more that we've been able to shift away from that. You know, we can figure out something better, and, you know, how we do canarying, traffic routing, all that stuff, you know, it can...
A
Cool, sounds good. Great, thanks for demoing that stuff, Graeme. I just wanted to share: so yesterday we got a new web blocker, so I've pinged... and that's 11548, I've linked it in here. So there was an incident that Matt came across, and Stan's comment links off to that incident.
A
So it's not a new problem, but it will be... it will be a much bigger problem with the migration. So with that in mind, we might want to consider, like, how far we want to push. Well, I hope we will get a response on this today around prioritization, but we might want to consider just the kind of ordering of events, or, like, the timeline, because I guess we'll see this on...
A
Potentially, I guess we'd see this on staging, depending on what's triggering it, but obviously it doesn't matter so much there; it may just cause some noise for infrastructure and sort of deployments and things. But it will, I guess, be a problem for canary, so we might want to consider... just as we have a lot of other sort of smaller tidy-up tasks.
C
Yeah, that makes sense; I'll keep an eye on that. And obviously, as to timing, I guess the big concern would be: if I put it into staging and then somehow tests fail, would that mean it would start blocking auto deploys? That's really what I'm worried about, I guess, yeah.
A
It would do, and I mean, I get it: it potentially gives extra work to whoever's on call, as they kind of have to help us, so there's a little bit of a risk. It's also, I guess, a question of how much value we gain from having... well, let's see what the estimate comes back as on this, but if it's not a short piece of work, we may want to just reorder things a little.
A
Yeah, agreed. So I'll see what I can find out on this one today, so I can leave an update for you later. I know we can do the ordering. If we do decide to push this out a little bit, we have got other things we could do: we could do with doing a Helm upgrade, and we've also got Pages that could be migrated. So it's not like we'll ever run out of work, but let's maybe review whether we want to wait on this one before going to staging or not.
A
Awesome. And then, as we covered briefly, two and three are just mostly FYIs, if people are interested. Is there anything else anyone wants to ask or discuss about the demo today?
A
Yep, okay, fantastic, awesome. Thanks so much for all the demoing, Graeme, and thanks everyone for joining. Enjoy the rest of your day.