From YouTube: 2021-02-15 Delivery team weekly
A
Awesome, so welcome. Firstly, and most importantly: Henry, welcome to Delivery, and welcome to your first Delivery weekly meeting. Please, everyone, could you schedule a coffee chat with Henry over the next week or so, so that you can all start to get to know each other.
A
Awesome. So initially Henry will be focusing on the Kubernetes work and getting up to speed there to help out with the migration, and then we can bring him into the release management work as well. Cool. Was there anything you wanted to add about your announcement about the DNA meeting?
A
Awesome, cool. And I've added in the details for the General Assembly as well, which is this week; full details are there. Most importantly, it's in Hopin, not the usual place, so you do need to find the details for that. Cool. So, discussion points: Alessio, over to you.
D
Thank you. So, basically, last week I got a ping because, a long time ago, I wrote this epic about bringing auto-deploy to components, and basically someone from GitLab Shell just woke up: yeah, maybe we could get some advantage here and have the auto-deploy features. I went through it; it was very long. The Kubernetes migration was, I mean, very far away at the time.
D
So the point is that we have a disparity between what you can get today with VM deployments and what we can have on Kubernetes. And the thing is that, now that Yorick did all the hard work in automating the public release with the APIs, with, let's say, a couple of code changes it's easy to take something that is handled by a version file in the GitLab repo and add it to an omnibus package, or to an alternative omnibus package.
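The VM flow Alessio describes could be sketched roughly like this, assuming the usual GitLab convention of pinning a component's version via a `*_VERSION` file in the main repo; the file name, directory, and version number below are illustrative, not the team's actual setup:

```shell
# Hedged sketch of the VM auto-deploy flow: the main repo pins a component's
# version in a *_VERSION file, and the omnibus package build reads that pin,
# so a one-line bump is enough to roll the component forward.
cd "$(mktemp -d)"                      # scratch dir standing in for the repo
echo "13.16.1" > GITLAB_SHELL_VERSION  # the "couple of code changes": bump the pin
cat GITLAB_SHELL_VERSION               # the package build would read this pin
```

On Kubernetes, by contrast, the chart's image tags are not driven by this file, which is the gap being discussed.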
D
So if that component is deployed on a VM, it's just trivial to give them auto-deploy. But if it's in Kubernetes, I'm not quite sure we are on the same page, because the charts are more complex, right? So I don't think we are bumping, or building images for, GitLab Shell based on the content of the version file in the GitLab repo.
D
So this means that, for instance, GitLab Shell cannot get the advantage, because it's completely deployed in Kubernetes. So even if they get onto auto-deploy, so that the monthly release will be automated, we will not be able to deploy this just out of the box.
D
I
mean
this
is
my
understanding,
so
I
would
like
for
us
to
think
about
this
in
general,
but
more
importantly,
thinking
about
how
we
can
have
some
kind
of
guidelines
for
for
new
components
and
things
like
that,
because
we
have
cass,
we
have
the
new
registry
database
edition,
and
so
there
are
several
moving
parts.
I
would
like
that
we,
as
a
team
have
some
kind
of
streamlined
narrative
and
say
if
you
need
to
deploy
something.
These
are
the
rules,
and
then
we
give
them
all
the
automation
not
only
for
them,
but
also
for
us.
A
Do you think that this is holding us up now? I know, as we're working through how to get the registry there, it's the same question: do you think that if we work this out for one of those, that will give us the guidelines we need, or do we need to work this out ahead of one of those?
A
Okay. This ties in, I think, Scott, with the sort of thinking you were doing as well for registry, right? Cool. Okay, let me put all these pieces together. It's going to be a mini project, right, so we should prioritize that around other things we have. But yeah, let's make sure we actually have got some guidelines and know what we can advise people for that stuff. Cool.
A
Was there anything else you wanted to raise on that, Alessio? Oh cool, great. So, kind of related to point A: I'm really trying to help us reduce the number of projects we have in flight at any one time. We have a few coming up to sort of wrapping up, I believe. We've got the default branch testing, where it looks like we're making good progress, so hopefully we're seeing a natural end to that. And we've been closing out some of the epics around coordinated deployments.
A
Changelog is progressing really well. So I'm going to do a load of stuff to try and help give us a bit more visibility, so we can actually contain that a bit more this quarter, mostly so that we have more than one person on each project. On rollbacks we're making good progress having everybody involved.
E
Thanks, Amy. I was talking to Brent this morning about root-cause labels for incidents, and I'm curious about your opinion on adding another root cause for deployments. I realize we don't want to add a lot of these categories, and I mentioned here: I think we're seeing some alerts that are based on symptoms of deployments. We had one today where there was an HAProxy error-rate increase, and I noticed, okay, there's a deployment ongoing.
E
I don't think an incident issue was opened for that; maybe it should have been. And we do have this category like saturation, which is very generic, but I would like to start tracking this because I think it's helpful. It's helpful for us to see: okay, are we continually seeing problems during deployments? Are we tripping our SLOs during deployments? I don't have a good way to measure that. What do you all think?
A
Do you mean a root cause from our deployment code, or a root cause from the thing we were deploying?
E
Yeah, I think you could say it's our deployment code, but I think it's more like errors during deploys that we can't otherwise explain, except to say it was due to a deployment. And that's not a great answer, but it seems like we've been saying that a lot with certain alerts, and I want to track that, because if we are getting numb to "oh, there's a deployment in progress, of course we're going to have a page", I want to avoid that, right?
E
So part of the answer is that we need to be creating incidents for more of these issues, but I think on-call is sort of inundated. I mean, today was pretty quiet, knock on wood, but in the past they don't open up incidents all the time for alerts that fire and clear.
A
So we do have a label for deployment-related incidents; it's not a root cause. And that's kind of why I ask: if the root cause is that code we're using to deploy stuff caused an incident, I think that's the root cause, but it is not the root cause if it was the thing we were deploying that caused the incident, if that makes sense. But we can track those; we have the deployment-related-incident label for anything that we…
E
Yeah, but are we following that, generally? I just wasn't really aware that this was a thing, yeah.
A
Is
they
would
have
like
a
deployment
related
incident
label
which
says
we
saw
something
related
to
an
incident
that
was
related
to
a
deployment?
There
was
an
incident
and
then
ideally
there'd
be
like
root
cause
software
change
or
something
so
we
can
actually
tie
it
back
to
like
the
gap
was
qa
tests
or
something.
C
But I think what Jarvis is trying to get at here is that the root cause might be a configuration issue, related to how, say, our web service is configured, which causes error rates because we kick off thirty thousand people that are all connected to our WebSockets, or something. Or it could be, you know, the deployer did something wrong and didn't properly drain a node from HAProxy, causing a high error rate.
D
Something here is not backward- or forward-compatible: yes, it's because of the deployment, but it's because you changed something that is not compatible. I mean, if we have an extra label that says this incident happened because of a deployment, then the root cause can map onto any of those. A software change can even be in the deployer itself, right? We changed something in the deployer that is broken, and it gave us an outage.
B
Here I think it's about ownership, right? I mean, the question is: when should we, as a team, take ownership of incidents, if it's maybe related to things that we do and our responsibility? And we could have saturation because, you know, we don't have enough API nodes when we do a canary deploy, but it could also be because we, I don't know, rotated too many nodes at the same time, and that should maybe be fixed on our side.
E
I say we move on, because I don't want to take up too much time, and I'm happy to move this into an issue if there's more to discuss. But I wasn't really aware of this other label, I guess. So if this label is being used, then maybe this is the right way to track it.
A
If you have an issue, I'll ping over some so you can see it. Not against a root cause of, you know, "some deployment-related code caused an incident"; if that is the root cause, then great, but I don't want to end up with us assigning root cause for things that were, like, application bugs, for example. Yeah.
E
And the next item is just an FYI for the event logging. We have a lot of events being logged now, for all configuration, deployment, and feature-flag-related changes, in one place. I put this link in; when you declare an incident, you'll have a link to this immediately, so you can see what happened up until the incident occurred. And let me know if you can think of anything else we should add to this.
E
You know, we can; it's pretty easy to add them. We're basically just curling events to Elasticsearch, so it's…
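The "just curling to Elasticsearch" flow might look something like this minimal sketch; the index name, field names, and endpoint variable are assumptions for illustration, not the team's actual schema:

```shell
# Hedged sketch: each change event is a JSON document POSTed to an
# Elasticsearch index. Fields and index name are illustrative.
payload='{"type":"deployment","env":"gprd","note":"deploy started"}'
# Against a real cluster this would be something like:
#   curl -X POST "$ES_URL/delivery-events/_doc" \
#        -H 'Content-Type: application/json' -d "$payload"
echo "$payload"
```

Because each event is just one HTTP request, adding a new event type is as simple as curling another document with a different `type`.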
E
Well, I guess the first thing is that if there is an incident and we want to narrow down "okay, is this related to a deployment?", right now it's very hard to do that by looking at pipelines: you have to see when the web deploy started and when it ended, or something like that. So for release management this is going to be very helpful, to see whether an incident was caused by a particular fleet deployment, or a Kubernetes deployment, or a Kubernetes configuration change.
A
Do we have a runbook to explain that? Because if I'm the release manager, I'm not going to make that link; I just want to know in advance.
E
Yeah, I mean, I put the link in; the link is right on the incident issue when you open it up, so you'll see it right there. There is a README for this, but I don't know, we don't have a runbook; let's see. Maybe we'll create a runbook for it. Okay.