From YouTube: 2020-12-21 Delivery team weekly
A: Welcome everyone, final delivery weekly of the year, so yeah, make it a good one, right? No pressure. This has to be our best one yet. So, a couple of announcements. First one, this one came up today: it's been recommended you redact all personal information off your receipts before you upload them to Expensify. Cool. And I opened up an issue for the Q4 OKRs, so the rough thinking on this one is we're going to need to get started on Q1 OKRs when we get back after Christmas.
A: So if you've got any thoughts about things we can continue to do well, or that we should be doing better, stick them in, so that we can make the next round even better than this one. But I'm hoping that it won't be quite so difficult to get the Q1 OKRs, because I don't think we'll be changing direction so much. I think we've got a lot of stuff in progress and it's likely we'll kind of just continue from there.
A: Awesome. So, discussion: there's quite a lot of changing and handing over of release management responsibilities for the next few weeks, so it would just be helpful, in case anyone needs to know, who is currently in charge over the next few weeks. Here is the rough plan.
A: What I need to — I'm assuming, but I need to confirm — I'm assuming that on the 24th, when I sign off, I will pause all of the auto-deploy stuff, since we have a hard production change lock on the 25th, but I'll confirm that with Marin. And then, yeah, we'll kick off the new year with a security release, so it'll be fun. But yeah, over to you, Jarv. The one bit I was going to ask about, so yeah: when do you want to squeeze in the Helm 3 upgrade?
D: So there's going to need to be some coordination. I'm not even sure — we may even want to consider doing it over the weekend if we're going to be really busy that week with the security release. The problem here is that we don't really know how long we'll need to block deployments, because we don't know what issues we're going to run into. I would say probably the worst case would be four to six hours, which is a long time to block deployments.
E: We should try to coordinate as much as we can with Graeme when he's back next year, because we don't do any deployments during APAC timing. Well, we do some deployments, but they don't continue forward. So APAC would be a prime time to get that work done, because it doesn't impact us when we're doing release work.
D: We could postpone it, but maybe — I think, how about we say that we'll leave GitLab alone, we'll leave the GitLab release alone, so we'll keep that on Helm 2, and then we'll start with the gitlab-helmfiles, which is all the monitoring infrastructure and logging infrastructure. We do that during the week of the fourth.
D: It's sort of like we're committing once we start; it's kind of a pain to clean up once you start migrating data. The tooling just makes it really easy to do this migration, but it doesn't make it easy to undo the migration and start over. So I would prefer if we just try to move forward instead of going backwards, if we can avoid it.
D: At least with pre-prod, the problem was not with the GitLab release as much as it was with the gitlab-helmfiles releases — the logging, monitoring, and Prometheus operator especially. So I think if we get that out of the way on the fourth, we'll probably be in good shape to do GitLab the next week and then be done.
D: Yeah, we actually could, yeah — that's an option too. Skarbek, what do you think about that?
E: In my opinion, it is too close to the holiday schedule, where a lot of people aren't going to be around. I'd hate to get something started and then you're working till midnight fixing staging, rolling backwards if something went wrong. I don't want to—
D: Do that, yeah. I think, with this week kind of being a short week, I would prefer to wait as well.
C: Yep. So since we basically have sort of a day between Alessio's time and mine to wrap up the first patch release, I wonder if we should start that after 13.7 is completed. So as a broad plan, I could start the patch in my afternoon, because we normally wrap it up at seven, you know, European time — so I can start it in my afternoon and then have the whole Wednesday to wrap it up.
A: Yeah, makes sense. We'll probably just need to make sure on Wednesday morning we double-check whether there are any new fixes. My fairly big concern — actually, a lot has already been merged in, so I think a lot of people finished last week and merged patch fixes in already — but my concern is we'll release tomorrow, something will get detected afterwards and fixed, and obviously, once we've got this patch out, we won't be patching again until January. But yeah, it sounds sensible. There is one P1/S1 that got flagged up today; the fix is merged and ready to go, so we will need to patch this year for sure.
F: Sure. So I was thinking that maybe we should add, along with you, Amy and also Marin to the schedule for this release, so that you all get made part of the release managers Slack handle, and then once you are no longer covering, we can just remove you — because every day we refresh the handles, so that, in case of a Slack notification, it will reach the right people.
A: Yeah, that's a great idea. I'll send that over for review, but yeah, that's an idea. Cool, awesome. So, Alessio.
F: Oh yes, it was still me, yeah. So this morning, Amy was actually looking at the stable branches, and there was this question about whether the pipelines were actually green or not, and whether we could continue. So I spent a bit of time trying to understand what happened with [unclear], and the thing is that we have an optimization for the pipelines.
F: So if we only change version files or QA tests or whatever, they only run a subset of it. This means that, for instance, on the stable branches, we were able to have a full pipeline only because we also tagged RC42, and tags have the complete pipeline. So the same commit had basically an empty pipeline — just a skipped job and nothing else — plus the full pipeline for the tag. So yeah.
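For context, the optimization Alessio describes is typically implemented with change-based job rules in GitLab CI. A minimal sketch of the pattern — the job name, script, and path globs here are invented for illustration, not the actual configuration:

```yaml
# Hypothetical sketch of the optimization described above: a job gated on
# `rules:changes` is skipped when a commit only touches version files or QA
# tests, while a tag (such as an RC) always gets the complete pipeline.
build-and-test:
  script: ./run-full-suite.sh
  rules:
    # Tag pipelines always run the full job set.
    - if: '$CI_COMMIT_TAG'
    # Branch pipelines only run this job when real code changed, so a
    # version-only bump yields an almost-empty pipeline on the branch.
    - changes:
        - "app/**/*"
        - "lib/**/*"
```

Under rules like these, the same commit can produce a near-empty branch pipeline alongside a full tag pipeline, which matches what was observed on the stable branch.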
F: This is not only about the problem at hand — being able to run a real pipeline on stable branches before we check — but also, as a more general approach: how can we keep up? Can we be kept updated on what is happening in terms of Engineering Productivity changing the pipelines, and the effect that this may have on stable branches? Because usually we don't touch stable branches, and then when it's time to tag, maybe all the logic around our CI has completely changed.
F: Yeah, this is what was proposed, because making sure that on stable branches we run everything is kind of a huge overhaul of all of our pipeline configuration. So to resolve this on master branches, the Engineering Productivity team made a scheduled job, so every two hours.
D: I think periodically running pipelines on stables might be nice to do, maybe at a lower frequency because of backports — maybe once a week or something — if they already are doing this on master.
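A periodic run like the one suggested could be expressed with GitLab's pipeline schedules plus a rule on the pipeline source. A minimal sketch, assuming a schedule is configured under the project's CI/CD settings — the job name and branch pattern are invented for illustration:

```yaml
# Hypothetical sketch: force the full job set when a scheduled pipeline
# (e.g. weekly, as suggested) runs against a stable branch.
scheduled-full-run:
  script: ./run-full-suite.sh
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $CI_COMMIT_BRANCH =~ /-stable$/'
```

The schedule itself (cron expression and target branch) lives in the project settings rather than in the YAML, so the frequency can be tuned without a configuration change.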
E: So maybe, prior to starting — when we know a patch release is coming — maybe we add a step where we go ahead and kick off a set of CI jobs that runs the pipeline for the targeted stable branches ahead of time. So before starting the rest of the procedure, we know what we're starting off with.
A: Yeah, that makes sense. Thanks for bringing that up, Alessio. I'll also follow up with Quality as well about how we can get visibility of this stuff. As a general request, what are we asking for? Are we asking for an update on anything on any stable branch, or something broader?
F: I think the problem is more on our side, because they are doing a great job of updating everyone in Engineering Week in Review. Maybe we should just force ourselves to think about it in terms of stable branches and the effects that this will have on our work, because I usually read it, but sometimes I don't even think about this type of effect.
A: Yeah, that's true. Okay, cool! Let's try and maybe wrap that into delivery weeklies going forwards then, because on a Monday, hopefully everyone's read Engineering Week in Review, and we can have a quick chat about whether there's anything we should think about. That makes sense. Cool, okay, because yeah, there are quite a lot of changes coming, I think. One of the big projects is running tests in parallel and speeding things up, and also then starting to run only the tests that you need to run — the impacted tests — which we need to make sure we don't get affected by, because obviously we always want to run everything. But it will also be worth keeping in mind that it'll be interesting to see what happens when that does come in, because engineers will be committing code without running the full suite of tests, so it'll be worth us keeping track of whether we're actually seeing more test failures in our pipelines as well. So yeah.
A: Awesome. Anything else on that topic?
A: Cool. So then one final thing: there are quite a lot of things going on at the moment. So, just for a little bit of visibility for early January, one of the big ones will be doing the work that Mayra has put in to start preparing us for the renaming of the master branch — if you want to share any more on that one, Mayra, or there's something people can read.
C: Yeah, so just very quickly: I think it is the Create team, or the Source Code team, that is planning to do a rename from master to main, and I think it is going to be in, perhaps, February or March — I am not clear on the end date. But for that we will need to be prepared on the release tools side, so that when the transition is made, nothing is, like, corrupted on our side. So yeah, there is an epic for that, which I can link in a moment.
A: Thanks for that. So that's going to be one. There is also, Skarbek, the work that you're doing with Geo, no?
E: I made sure that the issues I created associated with that epic are on both Alessio's and Jarv's radars, but yeah, I don't suspect we'll be scheduling work anytime soon. I asked the team when we expect the work to be, you know, done, and no one really had a straightforward answer. So at this point everything is backlogged, or unscheduled, or planning, or whatever we call it.
A: Ah, thanks for putting that in. And we have another one that's sort of related, which is KAS, the Kubernetes agent service. Similarly, it's going to need a little bit of work from us at some point to deploy the service, but it's not ready right now — we're waiting on some changes; there's some work to do from Observability to make sure there's actually visibility of this thing once it goes to production. And then Registry is also a little bit similar: at some point in the future they'll need some deployment stuff. They're adding a database, so we need to help them get deployments through to their database, but there's no request on that right now. So all of those things are kind of in the pipeline at various points in the future.
F: I was thinking about this in the past, when the new Registry thing first came out and then KAS also came out more or less at the same time. So what I was thinking is, as the delivery team—
A: Yeah, that's a really great idea — that makes a lot of sense, and then hopefully it'll help people kind of self-serve what they're going to need. Because, for example, Registry is quite a straightforward one, where we require them to have some additional tests; maybe there are other things like that, that we always ask for. So yeah, that's a really great idea.
A: The thing I think we need to be concerned about is that we will need to make sure we're prioritizing. So I think it's a case of: there will certainly be work that we'll need to prioritize over what we planned, but maybe not all of it. So yeah, let's make sure we keep asking those questions, and then I'll make sure that I keep feeding that back and updating on the OKRs — I'll be doing another update on all the OKR issues this week. A lot of it is trying to highlight that we are doing a lot of work, and we're doing a lot of kind of foundational things; however, we're also doing a lot of other things. So yeah, I'll make sure we keep on sharing that. We definitely can't do everything, certainly, so if we do have to pick up other things, we will put down something else. So I think, if you hear of anything, or if you suspect there are other things that we may need to be involved in, then flag them up as soon as you can, really, so that we can actually work those in. But yeah, I'll make a first pass at the checklist, let's see, and then we can all kind of contribute on that one. So I'll open up an MR for that.