From YouTube: 2021-03-25 GitLab.com k8s migration APAC
B
If we just switch a small setting on the registry buckets that we have right now... yeah, we have versioning enabled on our registry buckets, which means that we more or less double the amount of storage that we already use, and we can easily turn this off and just delete all deleted objects, and we'll end up with half the usage that we have now within a few days, probably.
A
Are you sure? Do we delete any objects?
B
Yeah, because we upload each blob and then move it from a temporary place into the real place, which is a deletion; we tested it. So the deleted object just stays there, because it's versioned and it's not deleted, and yeah. So ego is taking over to do the setting, so that we have a policy which just deletes old deleted objects from the bucket, and that should free up a lot of money, I guess.
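(For illustration: a lifecycle policy like the one described, deleting old noncurrent versions from a versioned bucket, might look like this with the google-cloud-storage Python client. The bucket name and the seven-day window are assumptions, not the team's actual values.)

```python
# A sketch only: delete objects that are no longer the live version,
# i.e. "deleted" objects that versioning kept around.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-registry-bucket")  # hypothetical bucket name

# is_live=False targets noncurrent (deleted/overwritten) versions;
# age=7 waits a week before they are permanently removed.
bucket.add_lifecycle_delete_rule(age=7, is_live=False)
bucket.patch()
```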
B
And we also need to look at the other buckets that we have for uploads and things, because if they do it the same way, like uploading and moving to another place, it could be that we could also halve the size there. I'm not sure about this, but at least for the registry, I'm sure it's the case. So that's really interesting.
D
Awesome. Morning, everyone, or evening. Graham, what have we got for demo today?
C
I was hoping to have something to demo, but unfortunately this week has very much got away from me. Everything I was going to demo is only even a little bit there, so I was going to demo maybe some different pieces; I just unfortunately haven't had a chance to put them together. So really, in terms of demo, this might be a bit of a short meeting.
C
That being said, you know, I'm interested in going through the blockers we have for the api migration, at least even quickly, just to understand what they are and what have you.
D
I'm not aware we have any, actually, at the moment. Sorry, I haven't updated this agenda. Okay, as far as I am aware... so I spoke to skarbek yesterday, and certainly for the next step, right, he is looking at logging, and that's the biggest question mark for moving to canary, so he was...
D
He is thinking probably this week, which might be a stretch, but early next week he'll have all the logging checked out, and then we need to do the vetting of the deployments, and then, if we get those two pieces done, we're good to go to canary.
B
Yeah, my last impression was that we already had the changes for logging, at least in an mr, which should fix what happened last time, but I'm not sure if you tested it right now. It's...
C
I think there's another issue. It could be the same, it could be something slightly different, in that obviously we're trying to put logs from two different sources into the one elasticsearch index, and I think we had some kind of questions, or... yeah, I'm not sure exactly, but I know there's another issue that's open where we're still... basically, we need to be sure that we're not losing any observability, like we're not dropping any logs, but it seems like that.
D
Yes, that's right, yeah. We had two gaps: one was around some logs being missing, like gaps in the logs, and then the other was a total loss of logs. So, I'm just pulling the issue up: do you want to just summarize that summary that you put in this logging issue yesterday, and the investigation you did?
D
On enabling the gitlab logger.
C
Whoops, sorry. It's okay, I was just talking away to myself. Okay, so I think... so I'm not completely across it all, and I was just kind of going off what skarbek's written there and my understanding of it. But it seems like we've identified at least places where we're missing logs, and once again, we believe, I think, at least at some level, we're missing those logs now anyway: certain types of requests, or certain fields from some requests.
C
So the question is, you know: do we try and fix that before we go to kubernetes? And what I kind of summarized as my personal feeling is: if we're not going backwards, in other words, if we know we're missing logs, but we're missing logs on vms and we're still going to miss logs on kubernetes, let's not make it a blocker and let's just move, because I think changing the logging will be easier when we're in kubernetes, rather than trying to fix things for both vms and kubernetes. Because I think, when we look deeper, it seems like logs coming out of vms...
C
...look like one thing, logs coming out of kubernetes look like a different thing, slightly different formats, and then elasticsearch obviously expects things in one format; the index we're trying to push them into expects them in one format. I suggested a few different other options, like we create a whole new elasticsearch index just for the kubernetes api pods, rather than pushing them to the same index as the vm api pods.
C
That's not great, especially if we're running both, because you don't want to have to look through two indexes to see them, but maybe for the migration period it's not going to be the end of the world. I don't know; I'm just trying to feel out what the quick option is here, just to get us moving, and if it's like, hey, we've got to live with something that's not perfect for, what, one month while we migrate, then you know.
C
So yeah, I think we just kind of... yeah. You know, I'll see what skarbek says, but he's got a better understanding of what we're missing. I think that's something we'll cover in the readiness review: either we'll highlight what we're doing, or... I don't know, we need to come to some decision about that. But I'll see what skarbek says, because he has just a slightly better, a bigger, better understanding as to what we're missing.
D
Yeah, I think that makes sense. As long as we know how to use them, and I think we've got that with adding in runbooks, then yeah, that makes sense, at least getting to canary, right? Because then that hopefully gives us more information and we can make better decisions.
A
So any blockers that are like this are not blockers; they are technical debt that has to be assigned to the epic named technical debt. The two of you, I don't know if you know about it, but there is an epic called technical debt, and at the end of every migration we take some time to clean up the most important technical debt. Again, you...
A
This is the first time you're actually going through with this in the team, but every migration we did so far had at least two weeks of technical debt cleanup, right? Because moving gives us new information that allows us to re-evaluate where we are at, and if we think we are in a good enough situation, that buys us time to actually clean up some stuff that we left behind, right. Amy will give you... well, you can find the technical debt epics, right, I'm just...
D
Yeah, exactly. Sort of related to that, actually, graham, just to follow on from something we chatted about yesterday: the other one is nginx, right? So we talked a bit about... we had a kind of a... we should make a decision on nginx for the api service. So we bypassed it for websockets; on the api service we haven't, we're using nginx, to minimize how many things we are changing all at once.
D
So I chatted to skarbek yesterday, and again, similar: we should push this way out, like after api service, probably after web nodes and review, and, you know, come back to that. So for now we are using nginx, it is upgraded, and we can just keep going on that one.
C
Yeah, okay, that makes sense to me, and that's fine. I kind of see it... to be clear as well, to everyone, taking a step back: what we need to be clear on here is that kubernetes defines a specific concept called ingress that is supposed to be pluggable. So in the specification for an ingress you say, I want to listen for this, and I want certain http urls to go here and there, and I want certificates, or whatever. So they've defined an implementation-agnostic specification for ingress.
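(For illustration, here is what that implementation-agnostic specification looks like when created through the kubernetes Python client; the host, service name, and port are hypothetical, and nothing in the object says which controller will implement it.)

```python
# A sketch only: the ingress spec is implementation-agnostic, so nothing
# below names nginx or any other controller.
from kubernetes import client, config

config.load_kube_config()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="api"),
    spec=client.V1IngressSpec(
        rules=[
            client.V1IngressRule(
                host="api.example.com",  # hypothetical host
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="webservice",  # hypothetical backend service
                                    port=client.V1ServiceBackendPort(number=8080),
                                )
                            ),
                        )
                    ]
                ),
            )
        ]
    ),
)

# Whichever controller the cluster operator installed (gke's, nginx, haproxy,
# a service mesh, ...) watches for this object and implements it.
client.NetworkingV1Api().create_namespaced_ingress(namespace="gitlab", body=ingress)
```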
C
We should always... we have that in the chart, we should always have that, we should always be using that. But at the moment, and perhaps this is not a bad thing, it's like that omnibus kind of philosophy: not only did they design the specification so that you, as the operator of your kubernetes cluster, pick the implementation. If you're on gke you get a gke ingress controller, or an implementer, automatically; amazon's got their own.
C
There's a million companies out there, there's like 50 of them at least, and they all implement the specification. They all have their pros and cons, and some are good and some are bad for a lot of things. ingress-nginx, which is one of the most popular ones, you know, is used by a lot of people, and we use it because it is one of the first ones and everyone knows nginx; we use nginx on the vms and everything.
C
But to be clear: we, gitlab.com, and we, gitlab as a company with our helm chart, are not, to me, 100% bound to ingress-nginx. We are 100% bound to the specification, and, you know, we can say we recommend using ingress-nginx to fulfill this. But the whole point of the kubernetes ingress specification is so that we as vendors don't have to ship something that implements it; it's up to...
C
You know, we certainly can, and we certainly are happy to, optionally, but we don't have to enforce ingress-nginx, and I think we already have support notes about using it with different ingress providers. Where it's been tricky is: while the specification has been there, and the pluggable implementation, the specification was never finalized until gke 1.19, 1.18, so it never even hit version 1 of the spec. So it's gone through a lot of changes.
C
There are a lot of differing implementations, where people extend the specification to do their own thing without it being supported by everyone. So it's actually... I wouldn't say it was bad, but it wasn't one of the great specifications that's gone through the kubernetes working group. But it will be finalized, and I think when that happens you'll see a lot of the implementations kind of come back a bit and be a bit more uniform. Like, at the moment, when you change between them...
C
There are some bits and pieces that are kind of different between them, but I think, as the maturity of the specification comes along, we should see all of the people that implement the spec be more uniform. And so for us, what that means is: maybe after web and api, and honestly this really folds into the haproxy discussion as well, because really we want to eventually rip out our haproxy nodes and just have an ingress specification, with something implementing that, in front of, behind cloudflare, and basically underneath that our pods, because that's what ingress is designed for.
C
So, you know, as I said, I think everything makes sense now, and I think there's a larger discussion, not so much of "should we replace nginx"; I would phrase the discussion as: for the ingress specifications that we are using, what is the implementer of those? What is the technology we're going to choose to implement them? And evaluating ingress-nginx and all of the other ones, and the pros and cons, for that. You know, I deployed kas on gitlab.com, running alongside everything else; it's using the gcp ingress.
C
So it's really nice. And that's the other thing too: you don't have to use the same one for everything. You could have a hundred different pods, or a hundred different services, using a hundred different ingresses, and they could be implemented by completely different technologies. So it is pluggable. That being said, you wouldn't want to have a hundred different types, for trying to operate it, but we have the flexibility there.
C
Yeah, so all the service mesh technologies usually implement the ingress spec as well, so yes, it kind of folds into that as well. Oh, and it's also worth noting, funnily enough: they've just finalized the ingress specification, and they've already said, oh man, we know so much more now, we could do it so differently. So they've started throwing out ingress as a concept, and now they're making a whole other new specification that's going to start again and be ready in, like, four years or something. So it's, you know...
C
As I said, it's a little bit of... it's not the best spec that they've done. But you know, it is what it is.
C
I think what I was thinking is: the one we have for haproxy could probably be rewritten to support this. I think we would flesh that out. As you know, we use haproxy more or less as a vm-based ingress concept. What are we actually... because I think part of it as well is not just, oh, should we be using ingress-nginx and everything; it's also looking at, well, we have haproxy, what does that get us? Like, do we...
C
We might do ip blocks, or, you know, how we do canary is very much tied to haproxy, whereas, you know, there are a whole bunch of technologies, like service meshes and all these other things, that can implement the ingress specification and that can do better canarying: quicker canary, blue-green deployments, fully different canary. Like, we can actually change the model we want to use depending on the tech.
D
Yeah, that sounds like it makes sense; definitely a good one. We go through a nice milestone as we finish the stateless services in the next few months, and that might be a good point to pick up or review these things.
D
Well, one thing I realized I was going to ask you about, graham, was the database load balancing issue, which I don't know is so much of a blocker as, I guess, a corrective action at this stage, right? Like, it's clearly causing us pain and probably the cause of some recent incidents.
C
Yeah, so basically... I should have... well, the only issue we have for tracking that at the moment is a gitlab issue, so I'm not sure whether we need a delivery-side issue for it, or what's going on, because I was a bit confused about that as well, like how to assign myself or how to track that work. The short answer at the moment is: I think we want to, you know... as I mentioned yesterday, jason's identified changes to the codebase to make debugging this easier.
C
So I think we want to get that into the app first and straight away, because basically the problems we're seeing, we're seeing them commonly enough, like we see them every day, multiple times a day, kind of thing, that what we're logging and what we're currently seeing gives us an idea of where the problem is. So, for those of you who aren't aware: our database load balancing relies on consul, and we're seeing webservice pods having problems talking to consul to get load balancing information, which is obviously bad.
C
There are two parts to the dns story with that: there's the kube-dns side, which is a component of kubernetes, essentially, that's run by google, that's completely hands-off from us; and then there is the consul service itself, right? So when we do dns requests, or look things up, we have to talk to kube-dns to service-discover, which gives us dns records, as a service discovery, for where consul lives. And then we go to consul and we look up the db replica.
C
Whatever the record is, we say "get me the replicas", basically, and then that returns us back. We are seeing a breakdown in that process end to end, but we haven't got good enough logging to determine if it's on the kube-dns side or if it's on the consul side, and looking at the logs for both of those components, we couldn't determine accurately where we think it is.
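(For illustration, the two-step lookup described above, with each step labeled so a failure points at kube-dns or at consul; the service names are hypothetical, and 8600 is only consul's default DNS port.)

```python
# A sketch only: resolve consul via kube-dns, then query consul's DNS
# interface for the replica records, labeling each step.
import socket

import dns.resolver  # dnspython

def lookup(label, fn):
    try:
        return fn()
    except Exception as exc:
        # Surface *which* lookup failed instead of a generic "dns lookup failed".
        raise RuntimeError(f"dns lookup failed at: {label}") from exc

# Step 1: kube-dns tells us where consul itself lives.
consul_ip = lookup(
    "kube-dns -> consul service",
    lambda: socket.gethostbyname("consul.consul.svc.cluster.local"),  # hypothetical name
)

# Step 2: consul's DNS interface returns the database replica records.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = [consul_ip]
resolver.port = 8600  # consul's default DNS port
replicas = lookup(
    "consul -> db replicas",
    lambda: resolver.resolve("db-replica.service.consul", "SRV"),  # hypothetical record
)
for record in replicas:
    print(record.target, record.port)
```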
C
We have some suspicions, so jason's working on that section of the actual gitlab codebase to make it error very clearly on which dns lookup it's doing, because at the moment it just wraps it up and it's like "dns lookup failed", but we don't know which, because there's like two or three dns record lookups to get there. So once we have that... sorry, who's...
E
Can we double-check that? Yeah, it started in an mr, and then stan, I saw, added a comment to it, and I didn't see what happened after that. But I'm not sure if he is driving it, or whether he was just suggesting a change and hoping someone else would drive it.
A
His manager... so yeah, let's just double-check where that is, right, like whether we need to escalate somewhere, hand it over, or maybe jason is working on it. So let's just get the confirmation, sure.
C
If it's the kube-dns, google side, I will probably, almost certainly, bring google support in as a first step and say, hey, what's going on, this shouldn't be happening. If it's on the consul side, on our side, that's a little bit trickier, but I think, you know, we'll just have to start digging and understanding how consul is working in kubernetes at the moment.
E
That's great. I added a couple of questions here. One is: are we seeing this on staging, do you know? Or...
E
I mean, I would expect, even with what I know about the issue, I would expect to see it on staging, because what I see on production is that when pods start, we see these errors, and it always happens as rails is coming online, right? I think we also occasionally see these errors in other situations, but the last I looked, this is where we saw the errors mostly, and that's why I was thinking this had something to do with initialization, yeah.
E
But if it is something that we can see on staging, then maybe we can kind of narrow it down, because, yeah, I'm...
C
That's why I was kind of hoping the extra logging would basically point us completely in the right direction, but, you know, looking at...
C
It certainly is another good heuristic to look at. We were noticing... so there's three kinds of parts, right: there's when the webservice pods cycle, there's when the consul pods cycle, and we see them cycling actually probably more than we should be, and then there's when node cycles happen, which you'd think would be the same as the other two, but it's actually slightly different. So when we looked at this, and I do apologize, my brain is mush at the moment, we were noticing certain different errors, right.
C
So this is the thing: there are actually two different classes of errors we're seeing. We're noticing a certain type of error when pods were cycling, and then a different class of error, which was this unknown one that I think we couldn't directly track to any event yet, so that might be another piece of the puzzle. At any rate, yeah, I agree; I think it's... obviously it's the kube-dns or consul question, and knowing my experience with kube-dns...
C
It could very well be kube-dns doing something weird. But it would be... google actually have written it, so google automatically will scale kube-dns for us, but they don't actually use just a generic horizontal pod autoscaler; they've written their own custom autoscaler that just runs as a pod in the kube-system namespace, and it basically does, I assume, some magic behind the scenes to scale it up.
C
Suspiciously, we don't capture any metrics at the moment for how many dns requests are going to kube-dns, I don't think, or anything like that. So, right, I can't say it's that for sure, but it's certainly another gap we have in observability, where we can say this is a critical component for us and we really don't understand what's going on.
E
Sure. What are the... this is a blocker, right, for probably moving api past canary? So is that what we're saying, amy?
D
Well, I'm not sure; I think it looks more like a corrective action, right? Like, it's not new to the api service, right, it's already happening in production. We think it's linked to incidents, though, right?
E
I think the concern here is that it's putting extra strain on the primary database. We're already living with this for rails, for git https, but to introduce this problem for api could cause more strain on the primary, right? And I think that's the risk we need to weigh, so maybe it'd be better to understand this a bit more before we move past canary.
E
I think it's... yeah, I wouldn't say it's a blocker to canary, personally; it feels like, you know, we should be okay. Although lately, with all of the incidents related to the database, I think we...
E
Yeah, but soon we'll have sidekiq talking to the replicas, so that will relieve some pressure.
E
Yeah, I mean, I think for canary it will be fine, because we'll probably run with... I don't know how many pods we're going to run for canary to start, but probably around 10, maybe. And so I don't think it's going to be that problematic for canary; I wasn't thinking that before. I think just being aware of this before we move to the main stage is important.
C
Yeah, I do agree, like, I really do, don't get me wrong. As I said, it is high priority to me; it's just that I haven't spent as much time on it as I would like. I do want to understand what's going on. I've had kube-dns kill me, in that, like, it's the most problematic component I've had for a very long time doing on-premise installations, so it really wouldn't surprise me if it's an issue. I still... I never got back to it; I still have to open...
C
I cut down some of the dns traffic, remember, jarv, that mr? I need to push that into production; now it's been in staging and it's baked long enough, so I'll try and get that up tomorrow, to merge that into production. That's not going to be a bulletproof fix, it's just some tiny little fixes. I probably should put a ticket into observability to see if we can start capturing more metrics out of kube-dns.
C
I think just some general hit rates, of like how many requests we're doing and things like that, could be done quickly regardless, and I think it would be really useful just to understand, you know, if there's anything we're doing that we can improve, or to track any sudden changes in that as we bring new services on.
C
Yeah, so, as I said, I really apologize, I haven't even had a chance to really look. I had a chat with henry earlier this week, and I wanted to chat with skarbek as well; I just haven't had a chance to really close the loop on that. But it's, from my...
B
Getting those numbers together, by looking also at what we came up with as numbers with the other kubernetes deployments, to get a feeling for what we will end up with for api eventually. And this is coming together with the vetting-api issue, because that's also about, you know, testing what happens if you, you know, stop traffic on a node, or crash a node, for instance, and seeing how it reacts coming up again, and how we set these different tuning settings for scaling and blackout periods.
C
Piece... so, just perhaps for my benefit, some very basic questions, I guess, just to make sure my head space is clear. So we're going to run api in the zonal mode, because we don't want cross-zonal traffic, I assume, and they're going to be, obviously, webservice pods that are exactly the same as, like, the git https webservice pods and everything?
B
Okay, yeah. So if you look at the readiness review, as it is in the branch of the mr that I put in there, it's basically down to the details of which kind of node type we are using currently, and from there, what kind of load we have, and then how many requests and how many nodes and vms. And from there we now need to get to the conclusion: what would we need in kubernetes to, you know, serve the same kind of traffic? So that's the thing that needs to be done.
D
Now, cool, okay. Well, obviously, like, incidents, you know, have to come first, so it's completely fair that that's where we are right now. Is there anything anyone wants to usefully use the last 10 or 15 minutes for, or is it more useful to have the time back?
C
So I guess the only other thing I'll briefly mention, because it's kind of tangentially related to the kubernetes migration, and I'll try and be very brief: I put a little note in the delivery channel. I had a discussion with the security product manager for gke today; he's in sydney, australia, hence why, I guess, dave pushed him my way, because I was at least in his time zone. It was a good conversation, actually. It seems, you know...
C
They're very keen to work with us, to, you know, see what they can do to help us with basically anything. The big takeaways from there that I think are usable or interesting to us, moving forward, to solve problems we have: they're going to basically be introducing proxying to google container registry. At the moment you can use it as, like, a replacement registry.
C
So, like, you push things to it and it just acts like a hosted registry, but they've had a lot of feedback, which I think we would echo, of "we like our own registry, but we would like you guys to proxy it and provide a bit more speed and availability for our registry". So they're releasing functionality to do that coming soon, and I think that will help us break that dependency on, like, charts and images on dev.gitlab.org.
C
So helm, we'll say, has experimental support for it, and google's, like, committed to it, basically. So there's either that... I couldn't get a straight answer out of him. I think that was the way he was leaning, but he also possibly indicated that they will actually run, like, an actual chart registry, if you know what I mean, like an http...
C
But either way, you know, they're trying to position it now as that model of: have your own registry and we'll just proxy through, and you can just leverage us for availability and speed.
E
But this... but it doesn't necessarily make our configuration less complicated, though it does allow us to have, like, one single funnel to dev. So we would configure their registry proxy to point to dev, or something like that, right, and then we would just use it.
C
Yeah, so I think how it works, actually... so, once again, I don't know the final implementation; the devil will be in the details. But I think, and this is what they might even be doing: so typically you can use it now by just, like, literally writing your image to, like, something gcr, with some extra, like, paths in the url, so that it can figure out where to proxy to, like with the actual full name later in the url or something. But I think they might also use a mutating webhook.
C
So when you send the manifest to the gke instance, it rewrites it and then actually changes it to basically use their mirrored copy, if that makes sense. So you're still sending dev.gitlab.org, but you configure gke to say: any time you see dev.gitlab.org, actually, like, point that to your proxy, and tell your proxy to proxy it, kind of thing. Does that make sense?
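(For illustration, the mutating-webhook variant could look roughly like this: an admission webhook that rewrites dev.gitlab.org image references to a mirror. The mirror prefix and route are made up, and google's actual mechanism may differ.)

```python
# A sketch only: a mutating admission webhook that patches pod image
# references before the pod is admitted.
import base64
import json

from flask import Flask, request, jsonify

app = Flask(__name__)
MIRROR = "mirror.gcr.example/dev.gitlab.org"  # hypothetical proxy prefix

@app.route("/mutate", methods=["POST"])
def mutate():
    review = request.get_json()
    pod = review["request"]["object"]
    patches = []
    for i, container in enumerate(pod["spec"].get("containers", [])):
        image = container["image"]
        if image.startswith("dev.gitlab.org/"):
            # JSONPatch that swaps the registry host for the mirror.
            patches.append({
                "op": "replace",
                "path": f"/spec/containers/{i}/image",
                "value": image.replace("dev.gitlab.org/", MIRROR + "/", 1),
            })
    # AdmissionReview response: allow the pod and attach the patch.
    return jsonify({
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": review["request"]["uid"],
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patches).encode()).decode(),
        },
    })
```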
C
That would be interesting, because then we don't change urls or do anything; we're just configuring gke with the things it needs to know, and it will just, quote unquote, do the magic. But I guess we'll see how the implementation looks. So: they're improving the notifications we get about auto-upgrades, which is nice. At the moment, when we get auto-upgrades, I'm just like, I have no idea why this upgraded, but they're going to tell us the actual real details, like, this is why we've upgraded your cluster. Very nice.
C
I can't remember the other thing. We had a bit of a discussion about gitlab ci. He was very, very excited about gitlab ci, probably of no interest to anyone in this meeting, which is fine, but basically he was saying, more or less, they've invested heavily into gvisor in the last year. They themselves used to run all their untrusted workloads, so cloud run, cloud functions, anything that was purely untrusted, on gce, gcp vms, obviously same as us, because we trust... they now run all of that on gke.
C
So they have a lot of experience and a lot of know-how, and they're absolutely happy to talk to anyone and everyone about how to run purely untrusted workloads on gke. So he's basically going to introduce me to some people, and I might put some updates on some tickets or something for...
C
No, that's good. And I don't know where the company's overall appetite for this is, actually. Basically, I'm kind of putting the ball in their court. My opinion is: if they can convince us that this can really just run untrusted workloads, and it's in a state where it can do that, yeah, maybe.
F
But, you know, if that's their primary sort of thing, then I think it's good enough for us, because having that would just bring in so many advantages. You know, we've got massive egress expenses again in the last month, and no one really knows what it is, and gvisor would stop ping floods, for example. And also just being able to reuse vms, and not boot up a vm every time we run a container; I mean, that would be a huge cost saving.
C
He also gave me a bit of insight into google autopilot, which they're very much pushing at the moment, which I thought, yeah, that's fine, it doesn't really interest us. But he actually pointed out that that is their push towards general kubernetes as a hard multi-tenancy, untrusted-workload platform. So he was saying that a lot of the work they did on that they're now bringing back, like, we can leverage work on that, because what they do on the master side is...
C
They've actually got, like, gatekeeper, which is an open-source project; they've forked that and done a bunch of work so that, basically, at other levels, not just using gvisor but, like, at the network level, at podspec level, they've got this layered defense system now. He was saying that they're pushing that through, so yeah, we'll see what comes out of it. But I agree. The other thing is, at the moment...
C
We probably could... I mean, as part of the ongoing incident work relating to ci, I believe we're rebuilding runner managers, right? We've got to split out the google project; we're going to try and split by... yeah. So I think we're going to rebuild them and try and do it right.
E
No, actually, the first thing we're doing is launching vms in alternate projects, but we're not moving the shared runner managers; they're going to stay in the single project. So that's a fairly small change, and that will get us past the quota issues. I think where this could help us is to save cost, right, like we'll be more efficient launching runners in kubernetes, but this is a fairly big shift in how we manage the runners. So I think I would say it's a medium- to long-term project.
A
What is... cool. What did I want to ask?
A
The reason, yeah, the reason why I was asking is because we have a request from the registry project to provide new runners for pre, because pre doesn't have runners. What we did in staging is we just connected the existing managers there, but it doesn't work, like it doesn't scale; we overload those runners on staging as well, periodically. So, like, I was looking into it yesterday...
A
I don't want to add the shared runners again to pre now, because that is going to add another level of, right, like, another hidden level of load. So we will need to create new runners, new managers, for pre, which will also mean, well, it's time to create new runners for staging as well, to separate those workloads out. And this is not just cost now; like, we are not talking about just pure manager management...
A
You know, we can't continue this way, because it takes forever to build them out. So I'm wondering if there is a shortcut we can take in... whatever this might be. But apparently not, because that's a medium to long term thing.
E
I mean, I would say the benefit is for cost and for isolating untrusted workloads. For internal use, I think we could just use kubernetes to run the runners, right? And we've done that before; I know that we have had some issues with that, but we could take that route for pre: instead of using a runner manager, or a dedicated runner vm, we could use kubernetes.
C
Yeah, I think we would just toggle some more things in the helm chart to install the runner manager, and then, you know, create a new namespace in pre, like a kubernetes namespace for the actual jobs, and maybe even a node pool, who knows. And then we would just configure that runner manager through the chart, in the exact same pipeline we do now, and then just only turn it on for pre, just like we turn on everything else in that chart for pre, or for staging, or whatever.
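(For illustration, toggling a runner on for one environment could be as small as a helm release driven from the same pipeline; the release name, namespace, and values below are assumptions rather than the actual pre configuration.)

```python
# A sketch only: install the runner manager for one environment via helm,
# the way a per-environment pipeline job might.
import subprocess

subprocess.run(
    [
        "helm", "upgrade", "--install", "runner-pre",
        "gitlab/gitlab-runner",                       # assumed chart
        "--namespace", "ci-runners",                  # new namespace for the jobs
        "--create-namespace",
        "--set", "gitlabUrl=https://pre.gitlab.com",  # hypothetical instance url
        "--set", "runners.tags=pre",
    ],
    check=True,
)
```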
A
Right. So it was basically like, one of the registry folks said, oh yeah, and we need runners. Wait, what? Now we also need runners. So basically this is going to have to happen, and, I mean, now you're finding out in the meeting with everyone else: I looked into this, and no, we are not going to be adding what we already added to staging. We need to separate those workloads; we can't have this hidden cost that exists there.
A
So if this is the shortcut we can take, and it also allows henry to, yeah, check out something new, it could also help us out in staging, if it ends up working, right?
D
Great, I'll open up an issue then, so we've got that in delivery, and ping you both on it, henry, graham. And then that sounds like a... it sounds like a great place to test out this setup as well, right, so that we can hopefully do something good with staging.
D
We lose an hour this time, graham; it's worse, it's the bad time of the year. So, awesome, super. Is there anything else anyone wanted to share? Like, thanks so much for sharing that stuff, graham, that's all incredibly interesting. I've put it in the agenda, just so we don't lose it if slack clears, but at some point we should get these into, like, issues, and, you know, see where we can actually take this stuff. But yeah, really interesting stuff.
D
Nope. Awesome, all right, thanks everyone; hope you all have a good rest of your day.