From YouTube: 2020-09-09 Delivery team weekly
B
A
So this meeting will be back on Monday. I just moved it this week since pretty much everybody was out, so I'm not sure if Robert's gonna be able to make it; he was the only person who did have a clash. Is he on?
A
Oh, thanks for joining, Robert. I was saying you were the only person who had a clash on your calendar, so I wasn't sure if you'd make it, but awesome. So let's kick things off. I wrote this on Monday and realized I'm giving you super tons of notes. If you don't know this already, today we have an AMA about releases. There's nothing in the agenda yet, but if you're available, please come along and we can chat release stuff. Cool, Alexia.
C
There are a couple of effects if we set this manually. The first one, which is the most important, is that if we forget to remove it, like happened last week, then as a release manager we may think that the current situation is okay to deploy and we set the variable. But then another incident may be triggered, or another situation may arise, and the old override will still be there.
C
So
it's
better
that
every
deploy
fails
and
that
we
have
to
manually
check
why
it
failed
and
and
state
why
we
are
overriding
it
instead
of
just
blanking
out
all
the
error,
because
we
check
at
once-
and
maybe
situation
may
change
in
the
in
the
aggressive
delay.
So
that's
just
enhanced
the
other
one
is
that
we
had
a
test
failure,
but
this
is
a
development
problem,
because
test
also
runs
in
ops
and
because
the
check
was
set
in
oops
expectation
was
failing
because
yeah,
that's
the
detail.
D
One comment: if you do the ChatOps command, you're going to be initiating a new deploy via ChatOps, and it means you would also need to specify a single environment, which is a bit different, right? You're not going to have the chained environments.
B
D
C
D
Yeah, that sounds good. Maybe we could even output the workaround, the ChatOps command, in the check failure or something.
B
D
And this is sort of a general problem we have with setting CI variables and forgetting to unset them; it's happened to me before with other variables. Besides this, I wonder if we should just have, I don't know, a dashboard or a ChatOps command that kind of gives you a summary of all the variables and what their current values are.
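A variable-summary command like the one described might be sketched as follows. This is a minimal illustration only: the variable names, and the shape of the data passed in, are assumptions rather than the actual release-tools configuration.

```python
# Hypothetical override variables that should normally be unset
# outside of an active incident (names are illustrative only).
OVERRIDE_VARIABLES = {
    "DEPLOY_OVERRIDE",
    "SKIP_OMNIBUS_ROLL_CHECK",
    "IGNORE_PRODUCTION_CHECKS",
}

def summarize_overrides(project_variables):
    """Given {name: value} for a project's CI variables, return the
    subset of known override variables that are currently set."""
    return {
        name: value
        for name, value in project_variables.items()
        if name in OVERRIDE_VARIABLES and value not in (None, "", "false")
    }

# Example: only the active override would be reported.
current = {"DEPLOY_OVERRIDE": "true", "IGNORE_PRODUCTION_CHECKS": ""}
print(summarize_overrides(current))  # {'DEPLOY_OVERRIDE': 'true'}
```

A ChatOps handler or dashboard would fetch the project's variables first (e.g. via the GitLab API) and then render this summary; that fetch step is omitted here.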
D
Yeah, I was thinking that another option would be to just have a pre-check that fails if a variable is set where we don't expect it to be set, like at the beginning of the deployer pipeline; it could be the first check or something. But anyway, we can make an issue for that. I don't know if we can solve it.
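The pre-check idea could look something like this minimal sketch, run as the first job of the pipeline. The variable names are illustrative assumptions, not the real ones.

```python
import os
import sys

# Hypothetical override variables that should not be set when a
# fresh deploy pipeline starts (names are assumptions).
UNEXPECTED_VARIABLES = ("DEPLOY_OVERRIDE", "SKIP_OMNIBUS_ROLL_CHECK")

def stale_overrides(env):
    """Return the override variables that are set in `env`."""
    return [name for name in UNEXPECTED_VARIABLES if env.get(name)]

def main():
    stale = stale_overrides(os.environ)
    if stale:
        # Failing here forces someone to unset the variable (or state
        # why it is needed) before the rest of the pipeline runs.
        print(f"Stale override variables set: {', '.join(stale)}")
        sys.exit(1)
    print("No stale overrides found.")

if __name__ == "__main__":
    main()
```

The CI job would simply run this script and fail on a nonzero exit code.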
B
Yeah
kind
of
adding
on
to
that
I
thought
starbuck
was
gonna,
have
my
same
idea,
but
I
had
an
idea
last
night
where,
for
the
for,
like
the
skip
omnibus
roll
check,
if
that
job
fails-
and
we
say
set
this
variable-
I
wonder
if
we
could
just
have
a
manual
job
kind
of
sitting
next
to
that,
one
that
has
that
variable
set.
So
if
you
need
it.
A
B
E
D
E
What if we didn't set any variables, but instead we say "ChatOps, forcibly run this package through its deploy pipeline, please", and it figures out which pipeline it is, where it's at, sets whatever variable is necessary, and hits the play button? Potentially. I don't know; maybe that's thinking too far, because...
C
D
Robert, with regard to your idea: how would that work, though? Because you have one job that would fail, and then you would have another job that would run with the override set, and that would pass. But you still have the failing job, right?
D
C
B
For a thing like that it wouldn't matter, but then the problem is: you see it immediately, so you just cancel it, I guess, and then set the variable and then rerun it. It'd be nice if we could set the variable as it was running, like if it's constantly polling, but that's not how the CI works.
D
A
Yeah, that would be good to make clear; I'm never sure when I can take it off either. Also, I'm gonna give you an MR about updating the documentation, as I thought I'd try it out yesterday and, as I proved, I was a bit lost. So that was also kind of confusing, because I ended up yesterday with ignore-production-checks plus needing the skip-omnibus one, and you have to set them differently. So that also gets a bit funky.
A
Also, I was thinking of something kind of related to this. I should write this stuff down, but I was thinking one thing: I'm specifically thinking about the fact that we've had this big chat, and Yorick's gonna come back right before he goes back onto being a release manager.
A
I know we have documentation, but I wonder: is there a good way of basically just capturing significant process changes that we make when you're a release manager, so that the next release managers can kind of get the highlights? Because I'm going to assume that nobody reads the full release management documentation at the beginning of every shift, right? So I think we also risk, because we don't rotate that frequently, that we embed something like this and then next month...
A
I might have a play around with just writing down some notes, specifically, in this case, thinking of me handing off to Yorick in a few weeks, and how we could do more of a handover thing for release management. Is that something we've done in the past?
C
Yeah, I also agree on this, also because if you're doing this with Yorick it will be one-on-one, and everyone goes on release management every six months, more or less. So if we do handover on a one-by-one basis, you still have to close the gap for all the other shifts that you haven't been part of in the meantime. So if you have something written down, even just, as you probably said, kind of a changelog, a release management changelog, it would be easier to catch up with what changed in the meantime.
E
He belongs to a friend of ours who is currently in South Korea (let me see if I can adjust for white balance), so we're just watching him for the time being. It's not a foster or anything; he's an older cat, what, four or five years old, so he's an adult. We're slowly letting him out, because he's been an only cat where he lives, so he does not like the foster kittens or any other cat in this side of our house. Currently, anyway. Continue.
C
Okay, should we move to the next item? Okay, so I want to bring back this old discussion point about having the shadow releases process. It's kind of a complete QA system for release-tools because, yeah, it happened again: during the security release we released the wrong thing, right? So this is the problem that we have with release-tools: it interacts with too many external systems, and basically all of them are mocked out during regular unit testing.
C
So this issue is about creating a full QA system that runs releases on basically a sub-namespace of release-tools, where we clone everything, workers included, so we can do real end-to-end testing. We just patch our project classes so they point to the fork, and we run everything there.
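The fork-patching approach might be sketched roughly like this. The class name, attribute, and QA namespace are hypothetical, not the real release-tools code.

```python
from contextlib import contextmanager

class Project:
    """Stand-in for a project class that release tasks operate on
    (hypothetical; real release-tools classes will differ)."""
    path = "gitlab-org/gitlab"

@contextmanager
def shadow(project_cls, namespace="release-tools-qa"):
    """Temporarily point a project class at a clone under a QA
    namespace, so a full release run only touches the shadow copies."""
    original = project_cls.path
    project_cls.path = f"{namespace}/{original.rsplit('/', 1)[-1]}"
    try:
        yield project_cls
    finally:
        project_cls.path = original

with shadow(Project) as p:
    print(p.path)        # release-tools-qa/gitlab
print(Project.path)      # gitlab-org/gitlab (restored)
```

The context manager guarantees the original path is restored even if the shadow run raises, which matters if the same process later performs a real release.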
C
B
Yeah, this is not a reason not to do it, but my only concern with this one was that we would have like a matrix of various tests, because this specific failure was a feature flag plus a security release. So you'd have to do, like, a normal release with the feature flag, a normal release without the feature flag, a security release with a feature flag, and so on.
C
I don't know, maybe we can do something like: if you want to flip a feature flag, you have something that helps you run a QA on the feature flag flip. So to say: "I'm not changing anything, just run it with these flags and check if it works." But I don't know, I mean, it's... it's clearly...
A
Is this something... it feels like a reasonably large task, is my sort of first thought, yeah. Is it something that we think we should prioritize as, like, the next big thing we tackle, right? So we've got a few things in play at the moment, like security releases and assisted deployments, as well as the Kubernetes stuff. Is this something we should pick up as our next big project?
C
D
I feel like I might be missing something: when would we tag these releases for the shadow release?
C
So, let's see, for instance: dev is outside, we don't control dev directly, we just mirror things and the magic happens. So we can stop and say, yeah, this is a fork; we do everything we should do on the fork and then we verify that the fork has the right things in place. For instance, I don't know what...
D
What if we don't need to use forks? Instead, it seems like we have a lot of logic for auto-deploy releases and non-auto-deploy releases based on a regex of the semantic version, right? What if we create another thing that is not auto-deploy, that has the same logic, but, like, a tag that won't interfere with regular releases, so that we can test it end to end?
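The tag-classification logic being described might look like the sketch below. The exact tag formats are assumptions for illustration (including the shadow tag, which is entirely hypothetical), not the patterns release-tools actually uses.

```python
import re

# Assumed tag shapes (illustrative only):
MONTHLY = re.compile(r"^v?\d+\.\d+\.\d+$")            # e.g. 13.4.0
AUTO_DEPLOY = re.compile(r"^\d+\.\d+\.\d{12}$")       # e.g. 13.4.202009091200
SHADOW_TEST = re.compile(r"^\d+\.\d+\.\d+-shadow$")   # hypothetical QA-only tag

def classify(tag):
    """Route a tag to the right release logic without the shadow tags
    ever matching the patterns that trigger regular releases."""
    if SHADOW_TEST.match(tag):
        return "shadow"
    if AUTO_DEPLOY.match(tag):
        return "auto-deploy"
    if MONTHLY.match(tag):
        return "release"
    return "unknown"

print(classify("13.4.0-shadow"))  # shadow
```

Because the shadow pattern is checked first and is disjoint from the others, a QA run tagged this way would reuse the same pipeline logic end to end while regular releases ignore it.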
A
C
A
C
A
D
C
Yeah, this covers just one aspect of the failure, which is what happened, for instance, during this security release. But what I'm thinking more broadly here is that maybe you are making some change because of the auto-deploy work, you are changing something, and this breaks how we, let's say, pick a security fix MR, and you don't know, and you don't realize this until the 28th (I don't remember when we start working on the security release), which is what happened in the past.
C
When I wrote this issue, basically I was release manager and something had changed in between, and when it was time to do the security release, which is a high-stress, high-pressure moment of the release, release-tools was not working, and we had one full month's worth of changes to go through to figure out what broke it and why something changed.
A
That's how we solved the problem, but maybe it does make sense for our next project to actually be: how can we have more confidence, ahead of a security release or a monthly release, those sorts of scheduled ones, that the release tooling is working? And maybe that's just more checks in place, or maybe it's more of an end-to-end thing, or something like that.
A
Okay, I'll make sure we pick that up when we complete one of our current projects, so that we can actually make some progress on this.