From YouTube: 2022-03-17 GitLab.com k8s migration APAC/EMEA
A
So, just waiting for Graham, as he has the discussion items, unless there's anything anyone else wants to add to the agenda and start with.
A
Hey, so now we're all here, welcome. This is the 17th March 2022 APAC/EMEA-timed k8s meeting. So, Graham, you have the first item.
B
Yep, so just making people aware we've got the issue.
B
There's an issue open, basically, to handle how we're going to do chart bumps in the future. In the power of iteration, the first thing we can immediately do is just have a scheduled job that's going to run once a week (and I'd actually want to discuss the schedule of it with people here and now). That will basically open up merge requests automatically to do chart bumps, requiring manual review and then just merging. The hope is we do them starting, say, once a week.
B
Cool, easy. So the merge request is up to basically add that automation, well, the scripts and the CI jobs, and then once that's reviewed and merged, I will set up the schedules to do that, and, yeah, essentially that'll move us forward with that. So at least then we should be doing regular chart bumps, you know, once a week, as I said. With the power of iteration, at the moment I'm just keeping the workflow that is there and just doing the automated chart bumps.
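A minimal sketch of what such a schedule-gated job could look like in GitLab CI; the job name, script path, and variable here are illustrative, not the actual release-tools setup:

```yaml
# Hypothetical weekly chart-bump job, gated on a pipeline schedule.
chart-bump:
  rules:
    # Only run when triggered by a pipeline schedule with the toggle set.
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $CHART_BUMP == "true"'
  script:
    # Generate the bump and open an MR that still requires manual review.
    - ./scripts/create-chart-bump-mr.sh
```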
B
But in the process of discovering this, I did get a little bit sidetracked, as usual. I did realize that, for the charts that are being offered to us now: GitLab has a Helm package repository built in, and the distribution group is offering us Helm charts both on GitLab.com, in the security mirror, and on dev.gitlab.org, all through that, and they're actually giving us the versions, where they weren't before, I'm pretty sure.
B
I didn't realize this, and I think it might be a relatively recent development. But the good news is, I think we want to change in the future from chart bumps being based around git SHAs (like the SHA of the git commit) and actually move towards using auto-deploy version numbers. You know, even if we still do it manually as a first step, that kind of paves the way for us doing it through auto-deploy in the future.
A
So you mentioned that we've switched over, like distribution has made some changes; what's the actual difference between those?
B
Sure. So, well, it's not even that they made some changes. I mean, they're offering us the packaged Helm charts. So, Helm charts: we can pull them either through git, so you can directly just git clone them down, or they can be packaged and deployed through the package registry of the GitLab product. When they're packaged, they're actually a little bit easier for us to consume.
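For reference, consuming a packaged chart from GitLab's built-in Helm package registry might look roughly like this; the project ID, channel, and chart name are placeholders:

```yaml
# Sketch of vendoring a packaged chart (subcharts included) in a CI job.
vendor-chart:
  image: alpine/helm:3.8.1
  script:
    # Register the project's Helm package registry as a repo.
    - helm repo add gitlab-charts "https://gitlab.com/api/v4/projects/<project-id>/packages/helm/stable"
    - helm repo update
    # Pull the packaged chart into the vendored path.
    - helm pull gitlab-charts/gitlab --untar --untardir vendor/charts
```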
B
So I still think maybe fetching them from the package repository and vendoring them, so we still vendor them and keep a local copy, makes sense, because there's less cleaning and less massaging I have to do with the chart to vendor it, and it includes all the dependent subcharts. The other reason I think it makes sense is, at the moment, if you want to see what's in our GitLab chart, you will look at our deployment and you'll see a SHA, which is the git commit in the charts repo, which is fine.
B
That's easy enough to follow and go and figure out. But, as I said, if we can, we could switch that to pulling from the dev Helm repository, and switch the version number that we're using to actual auto-deploy version numbers, so like 14.8-point-whatever, and that matches to an auto-deploy. It's not like it's arbitrary; well, honestly, this is semi-arbitrary.
B
As I said, if we can start doing that, then obviously it means when we do a chart bump we're pinning it to... I mean, when we do a chart bump now it's arbitrary, we just pick the latest commit, but at least we'd be matching auto-deploy numbers, and then in the future, as I said, that paves the way for us to be on track to add charts as an auto-deployable component.
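As an illustration of the idea, pinning the vendored chart to a released version rather than a commit SHA could be expressed as a Chart.yaml dependency; the version number and repository URL here are placeholders:

```yaml
# Hypothetical Chart.yaml pinning the GitLab chart by release number.
apiVersion: v2
name: gitlab-com
version: 1.0.0
dependencies:
  - name: gitlab
    version: "5.8.3"  # a released chart version, mappable to an auto-deploy
    repository: "https://gitlab.com/api/v4/projects/<project-id>/packages/helm/stable"
```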
C
Yes, sure. I have one question, because I think the current situation is that most developers can't see the ops pipelines for Kubernetes workloads, right? So it's mostly us, I guess, who can review the actual diff jobs, for chart bumps but also for other changes. For instance, if the Registry team does a chart change and then we do a chart bump, and then we need to review how it looks in the diff jobs, they can't see it, right?
C
They don't know what's happening, so we need to assess if that is maybe what they wanted. But we are not always that much involved in chart changes, right? I'm also not sure, for the distribution team, if they can see those pipelines. And maybe it wouldn't be good that we are the only reviewers of that, because we don't do most of the chart changes, right? And when reviewing chart changes, it's often hard to assess: okay, is this exactly what we are expecting and want with this change?
C
Maybe it would be good to have the actual developers and the distribution team also look into the diff jobs to verify that it's okay, because we end up being the only reviewers of the diff jobs. At least, I think it could help in general to have more visibility into this, but I don't know if this is possible from a security perspective.
B
I think getting developers to review the diffs would be good. I definitely would not want to block us on that, though, like saying we need their approval, because I just think their interest, and potentially their ability to assess confidently, especially at the start, might be not that strong. But yeah, getting visibility on the diffs, I think, would be good. But I think Emmy was about to go into what the ramifications, or the reasons behind us hiding that, are.
A
Yeah, I think so. Reuben's been sort of looking into this stuff, and there are risks associated with opening all this stuff up. So it's certainly not a trivial change.
A
I wonder if we have an option, or whether it would be worth exploring whether we do have an option, of somehow making this work more closely with how a code change works, right? Because I think it's, in a way, a very similar problem, where we need someone outside of the team to say "here's the change I want to make, it's the right change", and then we are kind of the coordinators of when that happens. And at the moment... it was interesting, actually.
A
Yesterday I was reading through some of the GitLab personas, and it's interesting to look at where in Delivery we fit inside the personas, because we actually hold multiple roles. We call ourselves release managers; we're not really a release manager in the GitLab persona definition of a release manager, but we are the definition of a developer who wants to get a change out, and we're also the platform engineer who does the coordination of getting that change out. So maybe on this one...
A
That might help us start to think about where we can start splitting some of these responsibilities, because having the people who've made the change actually verify it would be a great improvement.
D
Yeah, that's so good, man. So I wrote the initial iteration (yeah, I'll use that word) of that, and it's a shell script that runs on ops that does some fuzzy matching of the helm diff output and posts the result back to GitLab.com. Later, someone who doesn't work here anymore started this Woodhouse project, and we've continued to maintain that, and now Woodhouse has the ability to post back to the GitLab.com MR.
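As a rough sketch of the direction being discussed (posting the diff straight to the MR rather than fishing it back out of job logs), a CI job could compute the diff and call the merge request notes API directly; the variables, release name, and values file here are illustrative, and the helm-diff plugin is assumed to be installed:

```yaml
# Hypothetical job: compute the helm diff and post it as an MR note.
helm-diff-comment:
  script:
    # Render the pending changes with the helm-diff plugin.
    - helm diff upgrade gitlab vendor/charts/gitlab -f values.yaml > diff.txt
    # Post the diff directly to the MR instead of fuzzy-matching job logs.
    - |
      curl --request POST \
        --header "PRIVATE-TOKEN: ${API_TOKEN}" \
        --form "body=<diff.txt" \
        "https://gitlab.com/api/v4/projects/${TARGET_PROJECT_ID}/merge_requests/${MR_IID}/notes"
```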
D
Most projects are using that, but it doesn't have the contextual details of "this is a helm diff, so I'm going to do some fuzzy matching"; it doesn't do that, so we left the janky shell-script approach there. What's... sorry, I was looking at something else. What's the context related to those numbers?
D
I think... so, I mean, if a secret leaks, it's too much trouble for us, so I don't think... I mean, what we could do is... I mean, yeah, yeah.
D
That's a good point. I mean, I think, yeah, I think our job logs... I don't know; on ops, I assume they aren't visible, but we would need to check that.
A
Okay, I'm going to suggest... actually, a couple of things on what you're showing us so far, Graham. I'm going to open an epic we can put on release velocity, because I think the change you've made is a great first iteration, right?
It certainly helps us, because, for a little context for everyone who hasn't enjoyed chart bumps: we don't tend to do them very often, so oftentimes when we need to put something out that has a chart bump, we'll find we have like a month of changes that need to go out in one go. So doing these a little bit more regularly hopefully just reduces that batch size and makes things a bit easier. But actually, I think the maybe stronger, longer-term thing is: how do we get this to auto-deploys?
A
Deploys?
How
do
we
switch
it
so
that
it
fits
in
the
model
of
all
the
other
code
changes
we
push
out
to
production
and
we
maybe
need
to
do
a
little
bit
of
kind
of
investigation
into
options
there.
So
I'm
going
to
open
this
as
an
epic
so
that
we
actually
can
pull
this
stuff
together
and
make
it
visible
that
we're
looking
at
this
and
then
we
can
like
open
up.
A
Maybe
some
issues
for
what's
the
next
piece
to
discuss
like
this
sounds
like
a
good
one
problem
to
try
and
solve,
and
then
how
do
we
like?
Do
we
switch
over
to
the
the
kind
of
like
packaged,
helm,
charts
and
pull
those
yeah?
We
can
actually
have
issues
for
these
changes.
C
I have one more comment on this: if we want to go to auto-deploys for chart bumps, then we need to make sure that a chart bump, as it is, doesn't change any default configuration, right? Mostly you do chart bumps to add a new configuration item, which then wants to be set, right? But we need to make sure that this configuration item doesn't change something in the same run when we bump a chart, right?
B
We would need a discussion with distribution, right, to make sure that's right, and to really make sure everyone is thinking about: this is going to be auto-deployed, what's this potentially breaking? It needs to be at the forefront of everyone's mind, because, you know, it's not at the moment, which is fine, because we don't auto-deploy it, so that's not needed.
E
I had one question; this might not make sense, because I don't really know how this whole system works. But we can't have a tool that sort of does any kind of fuzzy matching and pulls the diff out of the job logs and posts it somewhere, because then you have the risk that it doesn't match it properly and pulls out something that should not be posted. But can whoever is generating the diff and adding it to the job logs, can that same resource post it directly to the MR?
D
On ops, everyone with a GitLab.com email address, I think, can see this, although I'm not sure what the permissions are set to on that project. Maybe someone who doesn't have... who's not in infra, can try and see if they can access the job log.
D
This would be the k8s-workloads projects. Yeah, they're set to internal, right, not private? I can check.
D
Yeah, they're private. I guess we made that decision a while back, probably for compliance, but it's...
A
...other problems are much bigger. So I think we should put some of this into an issue, because I know we've got another discussion we want to get through in time. But I do want to just make sure, in terms of this first iteration: Graham, have a think about what sort of process this will look like and how people will know about that. Like, how will someone know, who's going to do the MR?
A
Let's assume not. Like, if we're saying it's someone who needs to evaluate chart changes, then I think we need to think about, yeah, who is going to do that; who do we have that can do that, or do we need to train more people?
B
So at the moment it's just going to assign it to every SRE we have in Delivery, and then whoever picks it up, picks it up and merges it. If we want something more formalized than that, I'm happy for any idea. It could randomly pick an SRE, or...
B
If one doesn't get merged one week, it's probably not the end of the world either, right? The automation will work and just make a bigger one next week. So it's probably no worse than what we're currently doing, really, which is: we generally do a chart bump and get every other SRE to review it, pretty much.
A
Oh okay, good stuff. So I'll open an epic, and maybe I'll just catch up with you, like, early next week and stuff, so we can figure out what the issues are, to start moving this forward so that we can get closer to auto-deploy. It has a lot of similarities to what we've had to do on everything else going into auto-deploys. So, you know, I think it's certainly an interesting and useful project.
B
Yeah, so this is a really interesting issue. So I noticed there's this flag that we can pass on release-tools pipelines, which gets passed down to, I think, the k8s-workloads pipelines, I believe; or, sorry, the deployer, so that basically we can allow k8s-workloads pipelines to fail, I think (someone can correct me if I'm wrong). The history behind this is, when we started the Kubernetes migration for services, you know, we wanted...
B
Sometimes, I guess, they wouldn't work, and it was okay to set this flag and just basically allow Kubernetes deployments to fail and continue moving forward. And I initially thought that now, with everything in Kubernetes, there would be no scenario in which we would ever want Kubernetes deployments to fail, or think it was okay for them to fail. But the discussion on the issue below (and I do thank everyone for contributing) has kind of changed my mind a little bit on that.
B
Maybe there are scenarios in which we should use that flag. But I guess the interesting thing, especially in light of the incident where I found this out and what it was used in: it's worth noting that when you use this flag, it just means that the Kubernetes apply job can fail; it could fail, and the pipeline continues.
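To make the mechanics concrete, this is roughly what such a flag boils down to in GitLab CI terms; the job name, variable, and deploy command are made up for the example:

```yaml
# Hypothetical apply job whose failure is tolerated when the flag is set.
k8s-apply:
  script:
    - ./bin/deploy.sh  # placeholder for the actual apply command
  rules:
    # With the flag set, the job may fail and the pipeline still goes green.
    - if: '$ALLOW_K8S_FAILURE == "true"'
      allow_failure: true
    # Otherwise a failure here blocks the rest of the pipeline.
    - when: on_success
```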
B
What that can mean, though, is if the Kubernetes deployment pipeline fails... and in the case of this specific incident, it was failing before it even tried to deploy. It was just, "I'm trying to get a lock on the system so I can deploy, and I can't get the lock". So bypassing that, continuing the deployment, and deploying to another environment actually had the effect that that environment was never deployed to. I think it was staging or something, and it was like...
B
It might have been valid in that specific case, but I also then discovered that the documentation around this flag is light, and, you know, once again, it's not immediately clear. It's not just, "oh, I'll just toggle this flag on and, you know, I can continue on". I guess it was just highlighting the fact that, in this case, you did actually skip deploying to an entire environment, so you're running QA tests and everything afterwards, but you weren't actually running them against the code you were deploying. And it was because Helm was stuck for a stupid reason, whatever. So yeah, if anyone has any thoughts on how we kind of reconcile this... it might just be that I need to do some documentation and a runbook, just saying, "please be aware that if you don't actually see certain messages, then the deployment never actually made it to the cluster", and just be aware of that. Maybe that's the simplest solution.
C
I think we are in the same situation with config changes. When we do a config-change MR, the pipeline starts, and it takes a long time until it runs through all of the environments, and sometimes you just don't watch it, because it starts and runs fine. And then you, you know, close the tab, and then you don't check whether it really finished, right? And it might be that it just didn't. Whether it really succeeded, you will never know. And the same is true if you apply this flag, right? If you set this flag for auto-deploys, to skip or to allow a failure in a Kubernetes deploy, then you should know you need to manually check whether what you are doing there really worked; you are in a special situation. I think we need to really make this clear in the documentation: if we are using this flag, you manually need to make sure that everything worked as expected by checking for it.
C
So we are kind of in the same situation as for config changes here, just that it's for a whole deployment, which takes even longer. And, I mentioned this in the issue, I saw one reason why we maybe might need this flag: if we, for instance, have a problem with one zone in GCP which makes it impossible for us to deploy there, then we might still want to drain this zone, maybe in HAProxy, but we need to deploy.
C
And maybe this situation keeps on going for a few days until Google fixes it, and we need to be able to deploy, right? In this case it would be cool to ignore errors for this zone, and we need deployments to be able to deploy, and we'd use this flag, maybe. So I think that's why we shouldn't take it away, because we would be stuck in this situation, right? I don't see other reasons to use it; this, I think, is the most urgent one where we might need it.
B
So yeah, that makes absolute perfect sense; you're right on the money: the case of a cluster being down that we know is down, and we need to continue. This is my personal opinion: I really get a bit funny about running pipelines with different flags and different inputs into them and getting different results, and I'm really starting to feel like that's causing us a lot of these kinds of... making more problems than it's solving.
B
You know, shouldn't that be what we're really doing: tracking that we're disabling it in git? We could rearrange the CI file so maybe there's just one line you've got to toggle, of which clusters are available or aren't available. It's just interesting, because we are very pipeline-focused, but once again, we just build these pipelines with these inputs, and changing the inputs gets different outputs, which makes sense, but, you know, it makes every pipeline hard to predict.
B
What is it going to do? And I'm always kind of like: should we actually enforce the change in git? You know, I know it's longer, and I hate that, with the k8s-workloads stuff especially, the pipelines take forever, it's so long. But if you want to disable a cluster, to me that is a state change that should actually be, you know, reviewed, merged, and in the git log. Then later on...
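A minimal sketch of how that "one reviewable line" could look in the CI file, so taking a cluster out of rotation is a merged, git-logged state change; the variable, cluster names, and job layout are illustrative:

```yaml
variables:
  # Clusters currently taken out of rotation; changing this line is an MR.
  DISABLED_CLUSTERS: ""

deploy-cluster-c:
  script:
    - ./deploy.sh cluster-c
  rules:
    # Skip this cluster entirely (rather than allow-fail) when disabled.
    - if: '$DISABLED_CLUSTERS =~ /cluster-c/'
      when: never
    - when: on_success
```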
A
...the pipeline would be green. I think in that scenario, that's a great suggestion, because actually, I think in that case we're talking about two emergencies, right? Because Henry's got great use cases, like a whole zone is out, right? That's an emergency, and we also have to do an emergency deployment on top of it. I would hope that we probably don't have two emergencies on top of each other too often, so yeah, that might be a great suggestion.
B
The other option would be to change the flag so you actually have to pass the cluster that it's allowed to fail on, so at least you're kind of targeting it. Because, you know, right now you can say, "oh, cluster C is down, I'm going to toggle this flag", and it just allows every cluster to fail. But...
A
I think it's more... I guess what I more want to avoid is separating out the use case where we've gone "this has failed" and people mistaking that for "oh okay, I will allow that", versus an actually deliberate scenario where a zone is unavailable and I still want to make a decision to deploy. So, the naming: I don't think it's super clear from the current name.
B
Because, correct me if I'm wrong, basically all it does is set the allow-failure flag on the deployer job that kicks off k8s-workloads. That's all it does, so it just allows that CI job to fail, so the whole deployer pipeline will go green even if you never actually deployed anything, yeah, I believe.
C
Yeah, I think that's the problem: it's not occurring very often, and then you need to ask yourself, "okay, what do we do in this case?", right? Because you don't have the experience, and then you start looking through runbooks or prior examples. So having a very prominent runbook example of how to come up with an MR to disable running in a specific cluster, for instance, would be very important to have.
A
Because we may have other considerations as well if a whole zone's down, right? Like, we may also need to check we have capacity on the remaining zones. So I don't think it's just isolated to "we skip the deployment and we keep going"; it feels like that would be a bigger infra decision. Like, you know, do we still have capacity in order to do a deployment, for example, right? Because when we deploy, we reduce capacity. So I think there's probably some extra stuff.
B
It's worth mentioning that both of these issues, interestingly enough, have very much solidified my wanting to actually improve the gitlab-com repo this year. I really want to get us away from Helm, so we don't have these problems of Helm just failing on us for no reason, and, actually, thinking about the diffs problem as well:
B
If we go back to... if we get all the secrets out of that repo, and we get it to actually use raw manifests that we generate and commit to the repo, then the git diff, like we see with the chart bumps, becomes the diff. We don't need the diff output from ops, and therefore that whole problem goes away as well, because it'll be up to people who can... basically, it just becomes code that you commit. So, anyway.
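A rough sketch of that "commit the rendered manifests" idea: a job renders the chart to plain YAML inside the repo, so a chart bump's review diff is just the git diff. Paths, release name, and values file are placeholders:

```yaml
# Hypothetical job rendering raw manifests for review as ordinary code.
render-manifests:
  image: alpine/helm:3.8.1
  script:
    # Render the vendored chart to static YAML committed in the repo.
    - helm template gitlab vendor/charts/gitlab -f values.yaml > manifests/gitlab.yaml
    # The reviewable change is now simply the git diff of manifests/.
    - git diff --stat manifests/
```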
A
Yeah, great. Let's get some of this stuff into issues as well, so we can prioritize. Like, Henry, you made a really great point earlier about when config pipelines fail there's no output; that feels like something we should change, right? Because it's kind of a model that we actually want to adopt for other things, where someone pushes a change, it's almost a self-serve deployment, but we give them zero visibility, which seems a little unfair.
C
Yeah, so we see if a pipeline fails if we check the MR. I mean, we see that the MR had a failed pipeline, but because it takes so long, sometimes you just close it because of the emergency.
C
I can open an issue to look for a solution for this, for the k8s-workloads pipelines.
A
Well, we could mention a name, and people could set a Slack notification rule if they wanted to get them. Well, let's discuss it on the issue, because I think there are lots of options we could pursue to improve that stuff as well.
B
I think so. I think it's documentation and a runbook, honestly, and then probably getting everyone's feedback on that runbook, and making it clear that either this is what the flag does, or we are actually going to get rid of the flag and replace it with a different process. And I think, as long as everyone looks at that and feels confident that they understand what that process is and why that process is there, that'd probably be the way.
A
Awesome, okay! Well, thanks very much for the discussions, and enjoy the rest of your Thursday. Take care everyone, bye.