From YouTube: 2020-07-13 Delivery Team Weekly
A: I'm going to go ahead and get started. Announcements first: Marion's announcement is that he'll no longer be able to join these calls. It clashes with the working group for the multi-large initiative, so I'll make sure we get input from that group, but we'll have to do without him for now. I'm also going to be out on Friday and Monday.
A: So I can give you the super-brief version. As far as I understand it, I'd have to get you a better answer; it would be a good discussion for next week, and I'd need to do more research. As far as I understand it, it's a strategic thing that we want to do to be able to allow, essentially, I think, white-labeled customers. So you'd have a large customer on a kind of isolated .com instance, in a sense. Quite interesting, and it would definitely mean some interesting work for releases, but nothing this year.
A: It's been tracked for a while now, and it's stable and, you know, nicely tracking, so that's being proposed to be upgraded to a KPI. And then our Q3 OKR will be around removing manual points within our pipeline. So, theoretically, right now we could game our MTTP OKR by just starting deployments a good long while earlier, but that's not really the essence of what we're trying to drive for, right?
A: That stuff will all be finalized for the department by next week at the latest, maybe even this week, and then what we'll do early on is work out, you know, exactly how we'll track this and what sort of work we'll be doing. So that'll be around early August, as we go into Q3. We'll also have a look back and see how we got on with the Q2 goals, so we'll be tracking progress there as well.
A: Cool, so yeah, the next thing I wanted to bring up was around hot patches. So today's been interesting. Super awesome job on your hot patch pipeline work, and really great feedback on the documentation and stuff. What I wasn't expecting was just how, once the hot patch is out, it invisibly, I guess to everyone else, blocks our pipeline, which was surprising, I think, to everybody who was involved and to everyone outside of delivery. So my question really is: how do we handle that?
C: It's always been discussed that maybe we could just try to apply patches to future, you know, versions, and oftentimes this is okay. I think the reason why we don't do that is that we're always a bit nervous that we may cause problems. I mean, we are in essence just applying patches sort of blindly and trusting that they, you know, apply to the definitive version. If we're not applying the patch to the same version that the patch was generated from, I think we open ourselves up to a risk there.
C: So this was less of an issue before. I mean, a lot of this comes out of when we were doing deployments only four times a month. You know, we'd be sitting on a version on production for a pretty long time, so it made a lot of sense to slam these patch files onto production, because everything was a lot slower. Now it seems like it's...
C: It's almost... we could almost get a build to production as fast as we can hot patch. Well, not quite, but it's much better than it used to be. So I don't know. We've always thought that this whole patch-file business would go away when we go to Kubernetes, but I don't know; that could depend on how fast we can get new images built and...
C: ...deployed to the cluster. I think the deployment part is fairly quick. The building part, you know, maybe we'd have similar issues there. So I'm not sure; I don't really have a good answer here, other than, like I suggested in this comment, the input to the patcher could be an MR or a branch, and then the patcher could have more intelligence and just create a patch file more dynamically.
A: I guess I was wondering if it might even be okay. So the thing that really worried me, I guess, was that we actually won't necessarily be involved in hot patches, right, which in some ways is amazing and great; in other ways it's a bit scary. So I wonder if it's almost just some visibility, like: hey, we can't promote for now, because of these reasons, something like that. So, yeah.
C: I guess I'll open up an issue for that. As far as us not being involved, yeah, I think we're headed in that direction. It seems like we're really close to not having to be involved at all in hot patches, and to only an SRE needing to be involved, but to get fully there I think we're going to have to do some patch game days with the SREs.
A: I think that's definitely right. Yeah, okay, well, I'll create an issue where we can start capturing some of the issues around this, particularly, I think, on the process stuff, and then we can spin out issues from that based on what actions we actually need. But yeah, it was certainly an interesting one, and it's hard to say. I think it was really great that, you know, a critical security vulnerability could be hot patched, but on the other hand it's caused us a lot of other problems.
C: This can cause problems. Usually it kind of works itself out in the end, but I honestly don't really have a good solution for this. You know, resource groups would be really nice to use, which is a CI feature that prevents simultaneous jobs, but it doesn't work if you have a lot of jobs in a single stage. Maybe we could use a DAG and have a job that remains running until the environment is done.
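A rough sketch of the `resource_group` idea being discussed, assuming GitLab CI; the job name, group name, and deploy script are hypothetical:

```yaml
# Only one job in the "production" resource group runs at a time,
# so two pipelines can't run this deploy job concurrently.
deploy_production:
  stage: deploy
  resource_group: production
  script:
    - ./bin/deploy production
```

As noted above, this serializes individual jobs rather than whole pipelines, so when an environment's deployment is spread across many jobs in a stage, a single resource group doesn't protect the environment end to end.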
C
That's
kind
of
a
hack
and
I
think
the
ultimate
solution
is
going
to
be
to
do
this
release
tools
trigger
where
you
now
release
tools
as
a
coordinator.
It
triggers
the
deployer
pipeline
for
single
environment
and
then
I
think
then
I
think
the
trigger
job
could
just
run
for
the
duration
of
the
environment,
and
if
we
set
a
resource
group
on
that,
then
then
you
can't
have
two
triggers
that'll
run
simultaneously.
Does
anyone
have
any
other
ideas
for
this
I?
Don't.
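A sketch of the trigger-job approach being proposed, assuming GitLab CI; the downstream project path and resource-group name are hypothetical:

```yaml
# The coordinator pipeline triggers the deployer for one environment.
# "strategy: depend" keeps this job running for the duration of the
# downstream pipeline, and the resource group ensures only one such
# trigger can be in flight at a time.
deploy_gstg:
  stage: deploy
  resource_group: deploy-gstg
  trigger:
    project: gitlab-org/deployer
    strategy: depend
```

Because the trigger job's lifetime covers the whole downstream deployment, the resource group effectively serializes entire environment deployments rather than individual jobs.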
C: God, yeah, that's really bad, because what happens is, usually, the last one to finish will always win, because the last one to finish always sets the Chef version. So eventually Chef runs will just start rolling back to whatever the last pipeline to finish set. Oh yeah. So it's pretty bad.
D: Georgia, I do have an idea. I don't know how feasible it is, but I'm wondering if we could just set a feature flag and have deployer look at that feature flag: if a deploy goes out and it's not yet done, flip that feature flag to true, and then the next deployment that comes through says, hey, we need to check.
C: This kind of sucks. I like the feature flag idea, but I don't know if we can do this from Python. Last time I tried, it didn't work because of our client library; the public client library just was never updated to use Unleash the way that we've implemented that wire protocol. So I don't know, but maybe we could just do this.
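A minimal sketch of the kind of check being discussed, assuming the server speaks the standard Unleash `/api/client/features` response format; the URL, header, and flag name here are hypothetical, not the team's actual setup:

```python
import json
from urllib.request import Request, urlopen


def flag_enabled(payload: dict, name: str) -> bool:
    """Return whether a flag is enabled in an Unleash-style
    /api/client/features response body."""
    for feature in payload.get("features", []):
        if feature.get("name") == name:
            return bool(feature.get("enabled"))
    return False  # unknown flags default to disabled


def fetch_flags(base_url: str, instance_id: str) -> dict:
    """Fetch the raw feature list from an Unleash-compatible server."""
    req = Request(
        f"{base_url}/api/client/features",
        headers={"UNLEASH-INSTANCEID": instance_id},
    )
    with urlopen(req) as resp:
        return json.load(resp)
```

Deployer could then gate a promotion on something like `flag_enabled(fetch_flags(url, instance_id), "block_deploys")`, sidestepping the incompatible public client by speaking the wire format directly.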
D: I like it. I think the next challenge is going to be what happens when there are three deploys being attempted, because you've already got one that's waiting, and you've got another one coming in that's also going to wait. So what's going to happen next? You need to somehow cancel that second one in favor of the third one, or we go through one promotion at a time, maybe.