From YouTube: 2021-01-21 GitLab.com k8s migration EMEA
A: Cool, hey everyone! Let's get started. To switch things up a little bit this week, we'll do the demo and discussion bits up front, and then afterwards we can do the admin, next steps, and blockers, so if anyone wants to drop off, they can do so. jarv, over to you.
C: Just one second, getting the agenda up onto the discussion.
C: So, let's do the discussion first, if you don't mind. I just wanted to get some things out of the way with regard to this nginx bypass, and just highlight our discussion so far.
C: Thank you. I'll share my screen just to go through this quickly.
C: So, we're proposing bypassing nginx, and we're going to start doing this for websockets, with the hope that we can do this for other backends as well.
C: The reason we want to do this is to reduce complexity, and also to deal with some availability issues that we've seen around the nginx controller when cycling those pods. We do have draining configured for the nginx controller in the config, but it doesn't seem to work very well, especially for these long-lived HTTPS connections, so we're hoping this helps with that. We're also spending a lot of money on the nginx controller, because in order to prevent cycling we decided to over-provision it.
C: So we set the resource requests very high so that we're not scaling them as traffic comes in and out. That helps, because we noticed there were some errors when we were scaling up and down. So we decided: okay, let's over-provision this, with the intention that maybe we'll get rid of the nginx controller soon. I sketched out what the current request path looks like, from your web browser or your git client to workhorse, and you can see we go through a lot of hops.
C: We go through Cloudflare, to the external LB, to HAProxy, to the internal LB (which is layer 4), to nginx, to workhorse. This is essentially the same in the proposed topology, except that we eliminate the nginx controller.
C: The concerns that I have, and the reason I created this issue, were to get feedback on what nginx is doing and whether it's okay if we remove it. There's some discussion in the notes; feel free to read it, and if you have additional comments, just add them there. There's also the security aspect here, which is that we are HTTPS all the way up to the nginx controller.
C: Instead, if we remove that, we'll be HTTPS only up until HAProxy. I don't think the internal LB really counts, because it's a pass-through layer 4. So we'd be going HTTP from HAProxy to workhorse, where before we were going HTTP from the nginx controller to workhorse.
C: This led me to think: why don't we support TLS for workhorse? I didn't see an issue open for that, so I opened one. I don't know if this has been discussed before; I think we haven't discussed it to date because the omnibus default is to use a local socket to talk from nginx to workhorse, so it's just never been something that's been brought up. With cloud native, it's no longer the case that we use a local socket: the default chart configuration for the nginx ingress is to use the network.
C: So I think this is something that we may want to consider. It doesn't seem like it would be too difficult to implement if we just use a self-signed certificate, and it might be beneficial for others. So, just something to consider.
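As a rough illustration of how small the certificate side of that idea is, a self-signed certificate could be minted like this (a sketch only: the file names and CN are hypothetical, and workhorse has no TLS flags today, so this covers just the certificate half of what was discussed):

```shell
# Hypothetical paths and CN; this only sketches the self-signed
# certificate that a TLS-enabled workhorse could serve.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout workhorse.key -out workhorse.crt \
  -days 365 -subj "/CN=workhorse.internal"

# Sanity-check what was generated.
openssl x509 -in workhorse.crt -noout -subject -enddate
```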
F: So this is the traffic that workhorse would be handling, right?
C: Yeah, and we use a local socket on omnibus, which is what we do on VMs, but with cloud native this is no longer the case.
F: I mean, go to the workhorse channel and read the topic: what is it? "nginx wannabe". So, uh-huh, yeah, that's pretty much it, right? We didn't build out these types of things, mostly because workhorse was supposed to be a very simple, very non-smart component that just routes the traffic, without thinking about all the bells and whistles.
F: That was, what, six years ago, five years ago, and now it's doing a bit more than Jacob originally wanted. But it's worth discussing with the team that is now dedicated to workhorse.
G: I'm just going to throw this out there, because obviously there are loads of other places where we're building TLS, and I'm a bit of a stuck record on this, but I would much rather we used a single component to do that TLS everywhere, and the one that I'm always banging on about is Consul Connect. I don't really care what it is, as long as it's the same.
F: Well, not that part, but, you know, do raise these concerns.
G: And that would run as, like, a sidecar then? Yeah, and then it's doing what nginx is doing, I guess, but all it's doing is the TLS bit of it. So yeah, I suppose in a way you're trying to simplify things and I'm talking about adding one. But the reason I like it is that it's the same sidecar that we'd be running there and on Gitaly, and Redis as well, so we don't need another thing for Redis and then another; otherwise everything's got its own TLS mechanism.
G: I'd just rather go with one. But I understand that you've got deadlines that you're trying to get to, and that might not be the quickest route. I'm not too familiar with service meshes.
G: Isn't there something underneath Istio that... yeah, that thing, yeah, they do that as well. I mean, the reason why I've tended towards Consul Connect is that, from what I can see, Istio is much more focused on Kubernetes, whereas Consul Connect can be in Kubernetes but it's equally at home outside it. But yeah.
G: ...expires, and so having someone like Istio or Consul, who's thought about it, and thought about it in a good way, and it's built on Vault and everything, is probably better.
F: You can then go and say: well, we are using service meshes, we are smart. But, jarv, for your purpose here...
C: Yeah. I pulled the entire generated nginx configuration from staging, off the nginx controller pod, and just scanned it to see if there's anything that stands out, anything where, if you remove this, we may run into problems. That's the discussion that we have here in the notes.
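For reference, pulling the rendered config off the controller pod looks something like this (illustrative only: the namespace and workload names are placeholders, not the real staging values, and this obviously needs cluster access):

```shell
# Dump the full rendered nginx config from the running ingress controller pod.
kubectl -n gitlab exec deploy/nginx-ingress-controller -- \
  cat /etc/nginx/nginx.conf > nginx-rendered.conf

# Then scan for directives of interest:
grep -nE 'client_max_body_size|proxy_request_buffering' nginx-rendered.conf
```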
One thing that came out of this (I'm still, I haven't dug into this too deeply yet): we set some annotations on the nginx ingress controller.
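In current ingress-nginx terms, those annotations look roughly like this (a sketch; the exact keys and values on our ingress objects may differ, particularly on an old controller version):

```yaml
# Illustrative ingress annotations (ingress-nginx), not the exact staging manifest.
metadata:
  annotations:
    # "0" disables the client body size limit, i.e. unlimited uploads.
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    # Disable request buffering for long uploads such as git push over HTTPS.
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
```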
C: For example, we set the client max body size to zero, which disables it, meaning an infinite size. Yet when I look at the nginx configuration, I see client_max_body_size set to one megabyte, which is really strange. Skarbek, maybe you can double-check me on this. It also just doesn't jibe with the fact that I'm able to push, because as soon as I saw this I got really paranoid, and I was like:
C: am I able to push a large file over git HTTPS? I tried that and it worked fine, so I don't understand that. Another thing is that we have an annotation that turns off proxy request buffering; this was from an issue a while back. We set this annotation, and you can see here, nginx ingress proxy-request-buffering off, and sure enough the generated config has it set to on. Is this a version issue with our ancient nginx ingress, like, is it just not supporting these annotations?
C
I
haven't
dug
into
it,
but
I
you
know
this
is
part
of
the
reason
why
I'm
going
to
be
happy
if
we
get
rid
of
nginx
ingress,
because
there's
just
like
a
lot
of
config
here,
that's
kind
of
hard
to
reason
about.
C
Years
ago,
yeah
yeah
so
as
far
as
the
feedback
I've
gotten
so
far,
it
seems
like
yakobe
is
fine.
Like
see,
he
thinks
it's
a
good
idea
for
git
http,
because
we
go
out
of
our
way
to
turn
off
all
of
these
nginx
features
anyway,
it's
so
it's
a
good
thing
to
do
for
git,
http
and
websockets
for
api
and
web.
We're
going
to
have
to
do
a
little
bit
more
analysis
to
see
whether
it
causes
any
issue.
F: Or, you know, a process of elimination, right? Go through that and see what it actually does. Like I said, a lot of these settings were added in an emergency, or because someone requested them somewhere because something broke years ago. It might be really hard to track down now, but it might be possible, and I could...
F: ...go through my items, because I'm certain half of these have blame on me, in omnibus specifically. So I can try to track some of these down and connect the issues, but in the meantime you can also check things out.
C: That's a good idea. For example, I don't think gzip compression matters, especially for git HTTPS, so we could just turn that off on the nginx ingress. For web and API we have gzip turned on at Cloudflare anyway, so I don't anticipate any issue there, but we could start just turning these features off.
F
Wasn't
that
security?
I
know.
G
Go
ahead
so
jesus,
if,
if
we're
doing
the
gzip
at
cloudflare,
is
that
not
going
to
have
a
cost
implication?
G
Yeah
two
cloudflare
yeah,
I
think
just
because
we've
got
to
send
more
traffic
to
cloudflare.
C
Yeah,
I
think
I
mentioned
this
and
I
think
what
we
can
do
is
aj
proxy
supports
gzip
in
recent
versions,
and
I
think
our
version,
so
we
can
turn
it
on
there.
So
everything
north
of
aj,
proxy
and
everything
south
of
h.a
proxy
is.
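For what it's worth, enabling gzip in HAProxy is a small config change, roughly like this (a sketch, not the production haproxy.cfg; section names and paths are placeholders):

```
# Illustrative haproxy.cfg fragment.
frontend fe_web
    bind :443 ssl crt /etc/haproxy/ssl/site.pem
    # Compress common text responses on the way out.
    compression algo gzip
    compression type text/html text/plain text/css application/json application/javascript
    default_backend be_workhorse
```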
A: Are we still... so, this sounds like it possibly is a slightly bigger thing. Not to take away from it, we definitely want to do it, but is it still... so, a couple of weeks ago we discussed upgrading nginx versus bypassing it, yeah?
C: I'm still feeling good about the bypass for websockets and git HTTPS. I don't know, maybe... I think we were putting off the nginx controller upgrade because it's a high-touch operation that requires us to go cluster by cluster, and if we could get rid of the nginx controller for git HTTPS and websockets, then we can just move forward without having to do it cluster by cluster.
C
So
I
I
don't
know-
maybe
maybe
see
maybe
revisit
this
decision
on
monday
to
see
like
where
we
think
we
are
and
if
we're
feeling
like
this
is
going
to
drag
on
another
week,
then
we
make
plans
to
upgrade
the
engine
control
because
I
don't
like
where
we
are
because
right
now
we're
not
on
charts
master
we're
on
our
own
fork.
Because
of
this-
and
I
don't
want
to
stay
like
that
for
another
week.
If
we
can
avoid
it.
F: Okay, jarv, another question about this: how do we ensure that whatever people do in charts to change settings actually ends up propagating to our own environments? Because I know the developers will likely go through the regular process, which means submitting an MR in wherever, and then omnibus, and then charts, to change certain settings. How do we ensure that actually ends up on our plate?
F
Well,
right
now
we
implicitly
get
it
whether
we
like
it
or
not,.
F: ...the distribution team, to give you some ideas. I'm absolutely fine if it ends up becoming part of a process rather than increasing technical complexity, but I'd like you to chat about that with distribution.
C
Yeah
sure
yeah
they
definitely
should
be
involved.
In
this
conversation,
I
think
jason,
and
he
said
he
was.
He
has
a
to-do
to
kind
of
look
through
this
as
well
so
yeah,
let's
bring
them
into
it.
C
Okay
for
the
for
the
demo,
what
I
was
hoping
to
demo
was
a
working
websockets
dashboard
for
staging.
I
submitted
an
mr
yesterday
and
tried
to
clean
it
up
a
little
bit
and
I
can
what
I'll
what
I'll
show
is
like
the
skeleton
of
the
dashboard,
but
it's
I'm
looking
andrew.
I
know
you're
like
extremely
busy,
but
oh
so
you
even
just
walked
away
doing
other.
C
I
didn't
just
meeting
so
yeah
okay,
so
I
need
to
go
through
them
and
what
I
don't
like
is
just
that.
It's
like
a
massive
copy
and
paste
and
maybe
there's
something
better.
We
can
do
there.
C
Yeah
but
I
yeah
so
I
I
need
to
go
through
go
through
that.
I
don't
know
if
there's
really
much
to
demo
like
other
than
I
can
maybe
just
show
the
the
dashboard
the
main
dashboard
as
it
is
right
now.
C
Sure
so
yeah
we
don't
have
any
data
and
what
I'm?
What
I'm
going
to
have
to
do
is
create
artificial
websocket
traffic,
and
I
have
some
ideas
for
how
to
do
this,
if,
even
if
it's
not
real
websocket
traffic,
at
least
regular
https
requests
to
staging
so
that
we
can
start
seeing
because
the
websockets
backend
is
just
rails,
so
it
can
serve
any
kind
of
https
traffic
as
well.
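A crude way to produce that kind of baseline load is just a curl loop, something like this (a sketch; the URL below is a placeholder, not the real staging endpoint):

```shell
# Fire N sequential requests at a target URL to produce baseline traffic.
generate_traffic() {
  url="$1"; count="$2"
  i=0
  while [ "$i" -lt "$count" ]; do
    curl -s -o /dev/null "$url" || return 1
    i=$((i + 1))
  done
}

# Example (hypothetical endpoint):
# generate_traffic "https://staging.example.com/-/health" 100
```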
C: So I just want to generate lots of traffic, so that we can start seeing metrics before we go to prod. Andrew, since I didn't read your comments: what's the big overview? Are we okay?
G
With
generally
yeah
I
mean
it's,
it's
there's
just
going
to
be
one
sli,
which
is
like
requested
in
500
effectively,
which
is
like
not
very
good.
We
need
that
change
in
the
product
for
the
action,
cable,
metrics
that
everyone's
fighting
over
and
it's
kind
of
frustrating,
because
it's
a
very
small
change
and
so
to
see
it
going
backwards
and
forwards
about
who
owns
it.
It's
just
a
little
bit
frustrating
because
it's
it's
actually
quite
a
small
piece
of
work.
What
is
it
but
the
link?
G: It's the one where the memory team are saying someone else is owning it, and then someone else is saying, yeah, it's actually quite a small change. But it doesn't really matter. I think, to get going, just have requests per second and 500s, and some saturation alerts as well, because you'll probably need to add a few for that, and then that's probably fine for now, and we can just get going with that, I think.
G: Just put in, like... yeah, I think if you put in TBD, you just leave it as TBD, yeah.
G: Leave it out? Okay, just leave it out, yeah. I mean, they don't make a lot of sense, and I said this at the time, when this was done. You know, I'm not changing my mind here: it was so overbuilt. Was it for compliance, or is this all Marin's fault? No, it was Ammar; he took it on as a project and he had these big plans. Oh yeah.
C
Okay,
that's
that's
all
I
got
scarbeck.
What
do
you
have.
E
Hi,
I'm
just
going
to
walk
everyone
through
what
I've
been
trying
to
work
on
for
the
last
like
nine
years,
in
attempting
to
resolve
a
issue
that
marin
opened,
I'm
just
going
to
share
my
screen,
I'm
just
going
to
show
you
the
same
stuff
that
jarv
is
eventually
going
to
review
and
tell
me
john.
E
You
need
to
fix
x,
y
and
z,
so
if
I
could
figure
out
how
to
share
my
if
zoom
has
been
doing
weird
things
since
I
upgraded
so
if
you
guys
see
something
weird
just
let
me
know.
E
Okay,
we
see
your
screen
like
okay,
so
I'm
not
gonna
show
you
the
diff
you
I
linked
it
all
there,
so
you
can
see
everything
there,
but
the
important
thing
is
that
the
goal
here
is
that
we
need
to
figure
out
how
to
try
to
cache
our
dependencies
such
that
we
aren't
rebuilding
them
for
every
single
time.
We
do
a
diff
and
every
single
time
we
do
a
deploy,
so
I'm
using
our
ci
caching
strategy.
So
in
this
particular
case
I
created
a
small
script
that
builds
a
cache.
E: All it does is clone our chart, insert a random file that I can use to validate the cache, and then create the package of our helm chart, which includes all of our dependencies from the get-go. Then I leveraged the caching feature of GitLab CI to lock it in here. So there are probably some things I could clean up, but this is what a CI job looks like when it's building the cache.
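The shape of that setup in `.gitlab-ci.yml` terms is roughly the following (a sketch: the job name, stage, paths, and cache key scheme here are all hypothetical, not the actual pipeline):

```yaml
# Illustrative .gitlab-ci.yml fragment.
build-chart-cache:
  stage: prepare
  script:
    # Vendor the chart dependencies and package the chart once, so that
    # later diff/deploy jobs can reuse the packaged chart instead of
    # re-fetching every dependency on each run.
    - helm dependency update gitlab/
    - helm package gitlab/ -d chart-cache/
  cache:
    key: "chart-$CHART_VERSION"
    paths:
      - chart-cache/
```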
E: Maybe one of you could correct me on that, because I really don't want the cache job to run at all if the cache already exists, but I don't see a way to disable that capability, and there's no API call I can make to ask whether a cache with a given key is available to us. So I don't know. But the key point here is that during a deploy (in this case a dry run) we downloaded a chart that has all of our dependencies baked into it, so there's nothing we need to do here. More importantly, we don't see stuff like adding our kubernetes chart, or adding various chart dependencies inside these statements here, which is where those dependencies get added when helmfile runs. My approach to doing this is not entirely complete; I've essentially broken our ability to run local development right now, and...
F: Quick, real quick... no, continue sharing, please. Don't we have a version inside of the pipeline itself?
F: ...for a sec. It would never matter in our case, because we always run a diff, so you will always see the diff anyway, right? So that on its own is not a problem; it's just a tiny bit confusing that it's 0.0.0, and I'm afraid that someone else is going to get really confused about it. So is there a way to just not show 0.0.0 and show something...
E: So if we always leave it 0.0.0, this will never show up as a diff in the future. The question that I have, and I need to raise this with the distribution team, is whether there's any sort of requirement in anything that relies on that version, such...
E: ...as the check for various testing capabilities prior to a chart being installed, for example, because I know we throw out lots of warnings, like "your gitlab version may be incompatible with the chart version". I don't know if we have the same thing specifically for a chart version. But the one thing I do like about this: if we stay this way and we upgrade our chart, we'll never see this again, which is wonderful, because this is just garbage in my opinion, because it doesn't have any...
E
As
I
know,
but
I
just
worry
of
implications
regarding
that,
the
other
problem
that
I
have
currently-
and
I
don't
know
how
to
solve
this-
is
we
sometimes
deploy
differing
chart
versions
to
differing
environments
as
we
test
and
upgrade
our
chart
and
we
may
upgrade
pre
and
staging
before
we
upgrade
prod.
I
don't
know
how
to
do
that
with
the
approach
that
I've
taken
to
solve
this
problem,
so
I'm
open
to
feedback.
C: I have one question: why did you decide not to take the route of using an artifact, and use the cache instead? What I was thinking was that with an artifact you can query it through the API, and we could see if there's been a change or not.
A: You worked several of the ones that we were all off on, so yours probably is nine days. Yeah, thanks for showing that. Cool, is there anything else anyone wants to demo or discuss?
F: One question just before we continue: I didn't have the time to check with either jarv or Skarbek. What is happening with the action cable readiness review on the development side? Is there any movement on that? Do we need to push any of that?
C
Don't
think
there's
really
anything
left
for
development
that
hasn't
already
been
discussed.
You
know
we
just
have
the
issues
that
are
opened
as
part
of
the
readiness,
I
think
I
mean
I
think,
we're
in
a
place
now
that
if
we
can
get
through
this
nginx
bypass
work,
then
we're
gonna
start
prepping.
The
change
issue
to
start
turning
this
on
slowly.
F
Okay,
my
suggestion
would
be
give
sufficient
time
for
people
to
actually
participate
with
you
on
that
issue,
with
the
nginx
controller
but
beginning
of
next
week,
start
planning
for
testing
various
of
those
config
options
and
seeing
what
we
need
to
do,
because
I
don't
want
us
to
pause
this
migration
for
for
a
long
discussion
with
developments.
A: Cool, thanks a lot. If you don't mind, let me just go through the boards. We don't actually have any new blockers, but just to prove it...
A: So we've just got these two, and no great movement on those. I'm going to follow up on this one, the feature request to trim the supplement log data scope, because there's been no movement on that one. And our generic labels one is assigned to you, Skarbek, so shout if you need help on that one.
E
Still
a
work
in
progress,
I
think
I've
got
one
more
chart
that
I
need
to
create
a
merge
request
for
which
is
conveniently
the
more
difficult
chart
to
deal
with.
But
after
that
this
issue
kid
will
remove
the
migration
blocker
label
when
I
get
our
stuff
merged,
at
least
in
the
master
it'll
be
up
to
us
to
get
those
changes
version
to
our
stable
branch
or
upgrade
our
helm
chart
to
use
what's
on
master.
At
that
point,.
A
Cool
okay,
great
sounds
good,
any
anything
else.
I
want
to
mention
our
blockers.
A: Cool, so then corrective actions. We still have a few outstanding. External repository dependencies: yours, Skarbek, "keep up to date on all tools". We haven't made a move on that one yet, but it sounds like it'd still be useful to at least know what the tools are, and then we can start thinking about what we need to do for them, so that would be a good one to pick up.
E
I'm
struggling
to
understand
it
a
tiny
bit.
This
came
out
of
one
of
the
corrective
actions
from
an
incident
due
to
a
deploy.
Failure
because
of
a
helm,
3
upgrade
had
not
yet
been
completed.
We
blocked
ourselves
and
deploys
when
helm
turned
over
control
of
a
certain
fqdn
over
to
the
cncf.
F: As someone who already had to deal with this thing inside of distribution, I'm telling you, this is a lost battle. So what you need to consider, instead of doing it all, is to focus on the one or two tools that are the most important for us at the moment, maybe the ones that came out of this corrective action, and figure out how to fix one or two of those. Once you've got that, move on to the next one, and the next one, and the next one. It's a whack-a-mole game; you will never catch it all if you try to create the whole list.
A
Yeah,
even
as
a
first
step,
it
might
just
be
words
for
us
to
identify
like
what
are
the
tools
that
we
most
care
about.
So
at
least
we
have
that
list
right,
because
then
we
can
find
a
way
to
track
that.
C
I
was
talking
to
amy
about
this
and
I
think
one
of
the
biggest
takeaways
from
the
helm3
upgrade
was
that
we
didn't
keep
on
top
of
the
prometheus
operator
chart.
So
I
would
put
that
right
at
the
top
of
the
list.
Is
that
somehow
bake
this
into
our
workflow
process,
where
we
were
looking
at
chart,
updates
and
keeping
it
regularly
updated,
because
we
we
jumped
so
many
versions,
and
that
was
just
a
huge
pain
point.
F
That
sounds
like
a
perfect
target
jar,
because
that
is
going
to
pull
all
of
the
other
dependencies
as
well
right
if
they
are
ahead
and
we
are
not
we'll
have
to
upgrade
others,
and
that
is
slowly
is
going
to
expose
others.
That's
a
great
suggestion.
A
So
related
to
an
incident
on
proxy
errors.
C: I can... I'll take an action to update this, to see what the next steps are. I'm not sure. Or close it.
E
Do
we
want
to
discuss
the
revamp,
kubernetes
metrics
item?
I
see
that's
still
in
planning
but
has
been
assigned
a
jarv.
A
Yes,
so
this
goal
of
this
one
is
to
have
fast
painless,
metrics,
I'm
gonna
paraphrase
that
one.
It
is.
A
So
this
is
like
chained
back
so
yeah
this
one.
I
think
we
should
once
we
have
the
new
labels
in
and
those
charts.
We
should
probably
review
this
one.
A
Cool
okay,
so
the
only
one
we
haven't
that
we
haven't
touched
on
is
this:
remove
includes
from
ci
from
common,
which
you
opened
last
week,
job.
A: Okay, so let's pick those up as well, right. And then, linked to this, here's what we have in progress and coming up next.
A
Just
filter
this
down
to
kubernetes
so
blocked.
We
have
the
saturation
metrics
one
so
in
progress.
C: I think after the websockets work is done, which I expect will take most of next week, the next plan will probably be for me to either hop on some of the other stuff, like maybe helping Skarbek out, depending on how things are going with the helm chart work, or I'll pick up readiness for API. I think that's probably the next thing.
C: I think we... yeah, I think we decided it's not at corrective-action level, but it is high priority, something that would be nice to do. So this could be something I pick up next week, depending on how things are looking.
F: Just drain everything, yeah.
C
That's
a
great
idea,
I
think
I
don't
have
anyone
specific
in
mind
right
now,
but
we.
F
Can
think
of
someone
I
mean
there
are
a
lot
of
people
who
are
complaining
about
everything.
Tooling
wise
might
be
good
to
give
them
a
chance
to
improve
it.
C
Yeah
also
amy,
I
forgot
to
mention
for
next
week
I
we're
doing
the
patch
dry
run.
I
I
opened
up
an
issue
for
that,
but
it
might
not
be
in
the
right
column.
Oh.
A: So the only way it's going to end up on the board at the moment... yes, agreed. Okay, so rough priority: we've got the repo dependency failure stuff, we've got the bypassing nginx, and we've got websockets plus corrective actions.