From YouTube: 2020-07-01 Delivery Team Weekly
Description
No description was provided for this meeting.
C
Awesome, hello everyone, welcome to Monday — well, it's Wednesday, but let's pretend it's Monday, then you'd have a super short week. So let's kick off. First of all, I want to give a shout out to John Skarbek, Urich and Mayra: thank you so much for your help last week with the incidents. It was great that we jumped in quickly, identified the issues and were able to get those resolved really quickly. So well done. I should say, John will be out of office.
C
We could try and, you know, do loads of manual stuff to make it faster, but actually it would be much better if we take some time to work out how we do that with automation, right? So the next step, before we reduce the target, is working out what we can do — hopefully something around those Sentry logs to make it a bit easier, and that sort of thing can make these things smoother. But yeah, good progress there. Cool, so I wanted to follow up on the incidents.
C
We had these two incidents last week — slightly less relevant now that it's moved a couple of days, but we had these two incidents. One of them was an interesting edge case in the way Ansible handles batches, which was something we were going to hit at some point, so this was actually a decent way to find it — it wasn't too problematic.
C
We fixed it really nicely and quickly. The other one was around the Sentry logs not being very clear — or issues in Canary not being as obvious as we'd like them to be — causing issues. So I really just wanted to highlight those two things and give a little bit of context on those; you've added a...
B
So, like Amy said, we would have hit this bug sooner or later. It had to do with an empty directory, which caused an error, which then caused us to drain more VMs than we thought we should have. The fix here was to halt the deploy if any server within a batch fails, and this came as a bit of a surprise — everyone I talked to, or at least myself and, as far as I can tell, others as well, all thought that failures halted the Ansible run, but that was actually not the case.
B
It's not something we saw on staging, because our batch sizes there are small. You would think this would be the default behavior, but it isn't. So we updated that behavior so that if any host fails in a batch — instead of only when all hosts in the batch fail — the run will halt immediately. That change was rolled out right away.
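As an illustration of the behavior described above, here is a minimal sketch of a batched Ansible play that halts on the first host failure. The play name, inventory group and task are placeholders, and using the `any_errors_fatal` directive is an assumption about how the change could be expressed, not the team's actual playbook:

```yaml
# Minimal sketch: roll out in batches and abort the whole play as soon as
# any host in a batch fails. By default Ansible only removes the failed
# host from the run and keeps going, which is the surprise mentioned above.
- name: Deploy to the fleet            # placeholder play name
  hosts: production                    # placeholder inventory group
  serial: "25%"                        # deploy in batches
  any_errors_fatal: true               # halt immediately if any host fails
  tasks:
    - name: Run the deploy step
      ansible.builtin.command: /opt/deploy/run.sh   # placeholder command
```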
B
We also fixed a kind of long-standing problem where we were deploying to the canary hosts together with the production fleet. This will be much nicer now, because previously, when we promoted to production, we were including canary — so it was effectively a superset of production and canary. This meant we always had the potential of rolling canary back unintentionally, because we could have a production deployment and a canary deployment happening simultaneously, and they could interfere with each other. So now that change — that merge request — is done.
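To illustrate the separation being described, here is a minimal inventory sketch — the group and host names are invented for illustration and are not the real inventory:

```yaml
# Minimal sketch of disjoint inventory groups: canary is no longer nested
# under production, so a production deploy (or rollback) can no longer
# touch the canary hosts, and the two deployments can't interfere.
all:
  children:
    canary:
      hosts:
        web-canary-01.example.com:
    production:
      hosts:
        web-01.example.com:
        web-02.example.com:
```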
B
There is another comment here about the HAProxy script, which actually goes all the way back to Takeoff. It's been modified over, I guess, years now. I would love to just get rid of it, but I'm a bit hesitant to, because it's a pretty essential part of the deploy.
B
We do have a Ruby version — I converted it to Ruby for ChatOps, so the code is there, it has tests, and it's much nicer. It's just that we haven't brought it over to the deployer because we already have Python in the image: if I were to use this Ruby script, I would also need to include Ruby, which means having Ruby and Python in the same CI image.
B
That's going to make the image much larger, so for now maybe we'll hold off on it. I don't know what the impact of having a very large image would be — I assume the image itself is cached on the runner, so maybe it's not that big of a deal; maybe we only have to pull it once, depending on whether we're using a static runner or autoscaled runners. That's pretty much it.
B
Yeah, I'll open up an issue for converting the HAProxy script to Ruby, or at least so we have it, but I think this is a pretty low — maybe a lower priority item. The rest of the corrective actions are already finished. We finished the last one today with the deploy to production; we confirmed that the canary hosts weren't included, so I think we can pretty much wrap this up.
C
Awesome, and yes, thanks for your fast turnaround on those things — really good. I wanted to raise that, obviously, a couple of these were sort of risks around pipelines. This is probably more for me, just as I'm super new: is there anything else that anyone has in mind, any kind of risk or concern they have about our pipelines, that we should be thinking about? Jeff?
B
Things that use serial, like doing a rolling deployment in batches? We have the command executor, which is another one — it's basically a way to run ad hoc commands across the fleet, and it does the HAProxy drains. I made sure that this change also applies to that, so that's covered. I think we're pretty good here.
D
So this is just a heads-up: we are moving the early version bumps outside of the auto-deploy branches, because this was causing trouble with understanding when there are breaking changes, and it was also making the security release more complex than it needs to be. So there's an issue with the discussion, and I have a merge request for changing this behavior — someone will receive it for review maybe today or tomorrow. And to your question: no, this doesn't impact the work being done on the API for releasing — the release API. Back to you, Amy.
C
Awesome, thank you. So I wanted to talk a little bit about the dev escalation process. It's come up a couple of times recently — last week, and a few others mentioned it. Are you all comfortable that the dev escalation process, as it stands right now, is meeting all of our needs? Do we get the support we need?
D
I think the biggest problem we have with the escalation at the moment — at least from my side, but I don't think I'm the only one — is the fear of retaliation for using it. Every time you trigger it, especially as a release manager, you may get questioned about why you did it, and then you just find out...
E
Of course, of course. So let me just repeat what you all just said — or what Amy said, sorry. There is a reason why we have this process: if delivery is blocked — whether before a deploy, or because a quality pipeline failure is preventing us from deploying — all bets are off. We don't have to know the priority or severity of it, because you can't know it, right — you can't know the whole system inside and out. So the fear of retaliation that you've experienced, I hope, is going to...
E
...be fixed by this change. So if anyone — literally anyone — says you shouldn't be using the process, ping me immediately, page me, and I'll work on resolving that or on dismantling the process, one way or another. We are either going to get help, or we are not going to use the process — or have the process at all. So I'm absolutely happy to help.
A
I think it's worth adding there: right now we might get questions simply because we don't use it that often — which I guess is a good thing or not, depending on the reason why we don't use it — but I would imagine that if we start using it more often and more people are familiar with it, they'll probably ask less. That, of course, remains to be seen a bit, depending on how people react to it.
E
So maybe a small explanation from our side: not everyone knows the people from the delivery team, or what the release managers are doing, especially because it's a rotating role and so on. Maybe we prefix any request with something like "we have a blocker with a deploy" or "we have an unknown blocker for a deploy or for a release" — that should provide a sufficient heads-up to everyone there, and if they don't know, we can always remind them with the link. That saves time.
D
In the channel, in the topic, there is a link to a Google document with the schedule. So basically you open the schedule, you find who is on call, and then you ping them in the channel explaining what the problem is. It usually helps if we already have an issue, so you can use the issue for discussing it, because otherwise you end up with three, four, five Slack threads where people are discussing the same problem. And then, because the dev escalation shifts are shorter than, for instance, on-call shifts...
E
One more thing to highlight here with the dev escalation process: I know some of you have a backend engineering background, so you feel the obligation — or feel that you can actually debug the problem yourself. I have to repeat: this is not your job when you're release manager. When you're release manager, you're coordinating multiple things at the same time, so the fact that you actually have the background is a disadvantage.
E
Try to block that out during that period of time, because you want to make sure that all of the other things around it are in sync, or that you can sync them. If you go deep yourself to investigate, to try to help fix problems: (a) you're going to delay your work — we won't be able to get it done if you spend three hours debugging and then still have to do something else — and (b) you are going to create a huge amount of pressure for yourself.
D
Yeah, I want to double down on this: it also applies to backend maintainers, because it happens that people assign you the fix as a maintainer, and it's definitely worth saying "no, I'm release manager in this situation, so I cannot review this as a maintainer — please assign someone else."
E
I think this goes for Jarv and Skarbek as well, by the way. It doesn't have to be backend maintainership — it can be rolling out a change. It's the same thing: the fact that you know how to do something and are able to do it doesn't necessarily mean you should be doing it, because you have other jobs to do at that moment. So engaging with the on-call to actually roll out the change is better than doing it yourself. It applies to everyone.
F
Is it the fact that we need to wait for the specific approval from AppSec? They need to review all of the items on staging — like this one they had: it took them basically a day and a half to do it, and that is annoying, because yesterday I had nothing to do other than waiting. Yeah, so that shouldn't happen anymore after this proposal: the proposal changes it to require explicit approval from AppSec on the security merge request.