From YouTube: 2023-08-07 Delivery team weekly EMEA/AMER
D
But let's get my agenda open. So welcome everyone, today is the 7th of August and this is Delivery Weekly. I have put a couple of announcements in the async section of the agenda for you to read, but I'm going to jump to the most exciting part of the discussion, which is to welcome our new joiner to Delivery, and to GitLab actually. So I would say...
D
I can't hear you at the moment. Is it just me?
D
Yeah, I'll loop back to you. I'll reorder these things so you've got a few minutes, rather than us all awkwardly watching you whilst you try to sort it out. But I will come back to you, and it's great to have you on the team, so welcome! All right. Oh, hey, go ahead, go ahead! Tell us a little bit about yourself.
E
Hi everyone. So I'm from Vietnam, but right now I'm in Germany. Today is my first day on the Delivery Orchestration team, so I'm still struggling a bit with the account setup and so on. So I don't think I can do or start anything with the team today; properly from next week, I'm sure about that. But yeah, really looking forward to working with all of you.
D
Awesome. So, discussion items. Aaron, welcome, welcome to Delivery Weekly.
C
I haven't been to one of these in a couple of years, actually, because I always have an overlap at this time, and I actually jumped from another meeting to come here to talk a bit about this.
C
So yeah, I wanted to talk a bit about Cells and GitLab Dedicated, because you have probably all heard about it somewhere, but we need to start thinking more about it ourselves, on the delivery team side as well. And I wanted to share a bit of a direction change for delivery that is going to play out in this quarter.
C
To start, first a note that on the orchestration side, the two OKRs that currently exist, around security releases and the change of the monthly release date, still need to happen; there is no change on that side. But as I was reviewing the work that is remaining for the blue-green deploys, specifically from the systems team side, and also aligning some of the other direction items that we have on the Dedicated side and elsewhere...
C
It became apparent that we are probably not going to get the most out of the work that all of you would be doing on the setup that is necessary for the blue-green deploys, or shifting traffic, or... I forget how you named the project. And the challenge with that is:
C
We specifically have to start thinking at a different scale when it comes to Cells. For example, right now we're talking about one to two environments at any given point in time. When Cells get introduced into the gitlab.com architecture, the scale changes to n number of cells all of a sudden, and doing any work on existing tooling seems a bit wasteful. I'm really concerned that if we continue in that direction...
C
...we'll get ourselves into a situation where not only did we not manage to finish all of the work that is necessary to support the goal that we wanted, but we will also need to do a major shift later on, when Dedicated on GCP becomes a thing and we need to start thinking about how we will do the same thing within this new architecture. So as part of that, I've been talking with Sam, Amy and Michaela about starting to focus more on building out the new tooling...
C
...on top of what currently is GitLab Dedicated, and then learning...
C
...on that side of the architecture, and being prepared for when the Cells architecture does come in. I don't know how much detail you all have when it comes to Cells; Cells is relatively fast moving, but I think there are two dimensions to the Cells work. One is the application side, and that's something that the Tenant Scale group is working on, which is detangling how the application actually works internally so that it can operate in an isolated fashion.
C
Right, like one unit can operate in an isolated fashion. But on the other side we have the infrastructure side, and the infrastructure side is imagined to be on top of GitLab Dedicated, where we have sufficient automation and also sufficient isolation for each cell that is being deployed. What we currently don't have is the tooling and orchestration necessary to deliver GitLab to those instances in... I'm trying to avoid saying "an orchestrated way", but I can't come up with...
C
...another word. Like, I don't want to tie a team to this coordination... in a coordinated manner, yes. And I think delivery is best positioned to start investing time in understanding how we can actually make that pattern work there, and through proxy that will land on gitlab.com eventually. So instead of doing some short-term work right now, we're aligning more towards the north star that is being set by York, and not only York but also the company, which is making sure that the Cells architecture is doable for gitlab.com.
C
This would mean that instead of us focusing on building functionality for rolling out traffic, or for moving traffic between different instances of gitlab.com, we would need to focus on that same thing, but within a smaller unit, which is GitLab Dedicated in this case.
C
The benefit of that is direct to our Dedicated customers, because right now we do not have any guarantees around GitLab version rollouts, whether there is going to be any downtime or not, and this work could help us out there directly. So, immediate impact on our customers and, like I said, through proxy it's going to land on gitlab.com eventually as well.
C
But then also there is quite a lot of orchestration, or coordination, that needs to happen between the Delivery and Dedicated teams to automate some of the actions that are currently manual, and to satisfy some of the requirements that exist specifically for Dedicated. So there is quite a bit to unpack in what I'm saying here, but the gist of it is that the OKR for Q3...
C
...will need to adapt for the systems team, and then, in reviewing the capacity we have, we will also need to blur the lines between systems and orchestration a bit and work all together on all three targets that we currently have. Previously this was set on systems only, specifically the traffic routing and the release changes.
C
Orchestration changes were on orchestration only, and because of the capacity that we have in the team, and the release manager shifts that exist in the next couple of months, we will need to all work together. So it's not going to be a case of a specific team working only on their specific task; we're going to have to mix and match to actually work with the capacity we have.
C
All right, I'm going to stop talking. I see a lot of concerned-looking faces here, so I'll take any questions.
F
So just to clarify: you said we need to build tooling that will help Dedicated first, and then eventually that will also be used with Cells, correct? Okay. So the tooling for Dedicated, will it be for stuff like traffic shifting, or will we also be building tooling for what we currently have for gitlab.com, like regular deployments, or both?
C
We're starting with something that is already a topic for Q3 for systems, which is traffic routing. A single Dedicated instance does not have that; we depend on the GitLab Environment Toolkit and whatever they claim to be a zero-downtime deploy, which in most cases it is not. Which means a single Dedicated instance can experience some downtime, which is not something that we want.
C
The traffic routing capability we would add in Dedicated now would immediately help the customers we have on Dedicated, and then, through the work that is happening on a completely separate line, which is adding GitLab Dedicated to GCP...
D
And maybe just to expand on that a little as well, to add another bit to your question, Reuben: when we deploy to .com, we are creating and validating a package as well as doing the rollout. I think for this one we'll, at least initially, be focusing on how an existing package will be rolled out safely while we do the traffic routing, and then there's probably a phase a little bit further along where we work out exactly what is in the package and validate the package; for now Dedicated is on N minus one.
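(Editor's aside: the traffic routing being described is essentially a blue-green cutover, where the new GitLab version is brought up next to the running one and requests are only shifted across while the new side stays healthy. The Python sketch below is a minimal illustration of that idea, not GitLab's or Dedicated's actual tooling; the Backend class, the readiness URLs and the weight stepping are all hypothetical placeholders.)

import time
import urllib.request

# Hypothetical blue-green traffic shift; none of these names refer to real
# GitLab Dedicated tooling.

class Backend:
    def __init__(self, name, health_url):
        self.name = name
        self.health_url = health_url
        self.weight = 0  # share of traffic, 0-100

    def healthy(self):
        try:
            with urllib.request.urlopen(self.health_url, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

def shift_traffic(blue, green, step=25, pause=30.0):
    # Move traffic from `blue` (current version) to `green` (new version) in
    # steps, rolling back if the new version stops answering its health check.
    blue.weight, green.weight = 100, 0
    while green.weight < 100:
        if not green.healthy():
            blue.weight, green.weight = 100, 0  # roll back to the old version
            raise RuntimeError(green.name + " failed its health check; rolled back")
        green.weight = min(100, green.weight + step)
        blue.weight = 100 - green.weight
        print(blue.name, blue.weight, green.name, green.weight)
        time.sleep(pause)  # real tooling would watch error rates here, not just sleep

if __name__ == "__main__":
    old = Backend("blue", "https://blue.example.internal/-/readiness")
    new = Backend("green", "https://green.example.internal/-/readiness")
    shift_traffic(old, new)

(The weight values would map to whatever the load balancer in front of the instance actually supports; the point is only that the cutover is gradual and reversible.)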
C
Yeah, so in the short term that's going to be the focus. But again, in the long run the delivery group will have to focus on functionality that will allow us to observe the cells that exist and ensure that a rollout is safe, not only within a single cell but also rolled out across n cells, and for that I think you are all best positioned to drive those initiatives.
C
That means all the tooling that we either currently have or will need to build will probably come from your organization, the delivery group in general, right? Like all the dashboards that we need, all the views around which packages are where, what version of GitLab is where, and how...
C
...how the system is reacting to the package change. All of that will remain within the delivery group.
G
And also to clarify, this is going to be work that goes beyond Q3 as well, correct? Like, until it's done.
D
And I think we've got the very, very, very first bare-bones bit of what the project will look like. So we have quite a few things we already know. We know that we don't know anything about Dedicated, so we know a big piece of our project will be understanding Dedicated, understanding the current infrastructure architecture and looking at how we can make use of it. We learned quite a lot in Q2, so it'll be working out how we bring that knowledge across to Q3.
D
What tools do we like the look of? What problems can we see? And we also know quite a lot about the pain points of being a release manager. So it would be a really good idea, I think, given the scale of Cells, that we try and eliminate some of those manual points, or the pain points that we currently experience day to day. So we have a few pieces like that that will give us enough to get started, and then, yeah, we'll shape up the project in the next few weeks.
F
And the tooling that we'll be building, will that be part of the product or not, or is that open for discussion?
C
You all have the experience of working within GitLab the product, to varying degrees, but I think between SREs and backend engineers we can actually take some of these tools that are currently outside of the product and make them inside of the product, and make it easier on ourselves as well. So, just for full context:
C
Some of that tooling was built at a time when there were huge product sections missing or not fully formulated, so we needed to do something to actually get ahead. But now that we kind of have a full outline of what our release process looks like, and we're not going to be changing things significantly, there's no reason why we wouldn't be starting with a product-first approach.
C
And if we are blocked anywhere with that, you have an escalation path.
F
Because with Cells it sounds like, you know... I suppose cells are similar to environments. I haven't read much about the Cells architecture, but with n number of cells you'll have to treat environments as cattle rather than pets, correct?
C
All right, I have a final remark. So, first of all, if you're wondering why delivery is going through this direction change, and if you feel like you're alone in this, you're not. That's number one: the Scalability team is already shifting some of their deliverables towards Dedicated as well.
C
So, for example, some of the capacity planning tooling that was developed for gitlab.com is already being planned out for Dedicated this quarter, because we need to figure out how to plan for capacity within Dedicated, seeing every single instance, and that will, through proxy, mean that this lands back on gitlab.com through Dedicated as well. So we're moving from this single large instance view towards many, many smaller instance views. So what has applied previously to one big instance...
C
There is no reason why it shouldn't apply to n number of smaller instances; it's just that the scale is different. So that's already changing, and with the changes around the general direction of Cells and the timelines that are being set (we're talking about an 18 to 24 month timeline for Cells on gitlab.com), I've realized that we do not have much time on our side to get ahead and build this tooling...
C
...that will allow us to support Cells as well. So instead of taking our time to understand the concepts on a single instance at this scale, which is a different problem from a single instance at a smaller scale... That is why this is happening now. Why is it late? Why couldn't I have done this three weeks ago?
C
The simple answer is because it simply played out that way. The engineering offsite was two weeks ago, the planning for OKRs was three weeks ago... I guess two weeks ago, actually; OKR planning happened after the fact. And it was only when I stepped back and looked through all of the OKRs last week, when the quarter started, and understood all the moving pieces, that this became apparent. So, my fault that this didn't happen earlier.
C
All right, that's it for me, I mean.
H
I am okay... actually, maybe not yet. The work on migrating Dedicated to GCP, do you think it's going to impact the work we are going to do in this quarter for Dedicated?
C
The only annoying thing about this is going to be that you'll all need access to AWS sandboxes to set up Dedicated, because it's very much tied to that provider until the porting to GCP happens. So I think when it comes to the tooling you'll have at your disposal, it's going to be a bit different to what we have in GCP, which might actually force you all to think in more generalist terms about how this would work across clouds, rather than tying it to a specific one.
C
We can work on that for sure. Right now I think we can just create a bulk access request; I can help you out with that and get all the access provisioned, and share some documentation around what it looks like to provision a sandbox environment within your environment, and then we can go from there.
D
Great, thanks for sharing. For kind of next steps, then: Sam, Michaela and I have already started talking about an updated OKR, to reword the existing systems OKR, so we'll have that to share very shortly. We also have a call already scheduled with the systems people; if anyone else wants to jump in, that is tomorrow, and we'll talk about the first iteration of the existing OKR.
C
You have to understand what Dedicated already has, which is why the sandbox environments are going to be important, and you also need to follow the documentation the GitLab Dedicated team has for setting up sandbox environments; that will allow you to follow what exists there.
D
I think one thing I want to just mention, because we haven't talked about it much, because we haven't yet got a plan, is that Marion mentioned us blurring the lines between the teams a bit to achieve the three goals. Michael and I haven't chatted about this exactly yet, so I think we will take a look at that. What we will most likely do, I think, is try and have a more fluid sort of arrangement.
B
Yeah, I'd just say it'd be, you know, great if we get the Dedicated tooling working. I think we have to understand it really well, but I wouldn't say it's an absolute hard requirement that it must stay a hundred percent Dedicated's tooling as it stands; we can add, adapt or change it to kind of get some of this done.
D
Cool, okay, great. Well, this I'm sure will be the first of several conversations about this, so please just shout when you have questions or any concerns, or you're not sure what happens next. But let's hand over to release managers.
F
Yeah, last week wasn't too bad. We just had a few incidents, one big one, but otherwise it was a pretty easy week, I think.
D
Yeah, from that, there is, I think, a ChatOps command that turns failing migrations into no-ops. I am trying to track down the documentation for that, because I think there is some, but I just wanted to mention it, because Myra mentioned last week that this was undocumented. My understanding was it was documented, so I'm trying to find that, but just to mention that this tool does exist. So if you get a failing migration, it should now be possible to switch it to a no-op and continue on.
F
Yeah, I think someone mentioned it. So in one of the incidents we had last week, we didn't know that there was a ChatOps command, so the SRE on call manually SSH'd into the Rails console and ran a command. In that incident someone mentioned the ChatOps command. So if...
F
Lead time also is not bad. For the lead time, on Monday we added a new metric that might be able to help. Over the... let me just...
F
So
this
is
the
standard
lead
time
metric
which
basically
just
counts
the
time
between
deployment
of
a
merge
request
and
merge
offer
much
request.
So
the
problem
is
if
something
gets
merged
on
Friday
and
it
gets
deployed
on
Monday.
It
gets
counted
as
a
three
day
lead
time,
but
we
don't
Deploy
on
weekends,
so
we
should
not
be
counting
weekends.
So
this
is
a
metric
that
does
not
count
weekends,
so
you
can
see
there's
a
big
difference
in
the
average.
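(Editor's aside: to make the weekend adjustment concrete, a merge request merged on Friday at 17:00 and deployed on Monday at 09:00 has a raw lead time of 64 hours, but only 16 hours once Saturday and Sunday are excluded. The Python sketch below illustrates that calculation; it is only a sketch of the idea, not the implementation behind the dashboard being shown.)

from datetime import datetime, timedelta

def lead_time_excluding_weekends(merged_at, deployed_at):
    # Walk from merge time to deploy time one calendar day at a time and only
    # count the hours that fall on Monday through Friday. Purely illustrative.
    total = timedelta(0)
    cursor = merged_at
    while cursor < deployed_at:
        next_midnight = (cursor + timedelta(days=1)).replace(
            hour=0, minute=0, second=0, microsecond=0)
        step_end = min(deployed_at, next_midnight)
        if cursor.weekday() < 5:  # Monday=0 .. Friday=4
            total += step_end - cursor
        cursor = step_end
    return total

if __name__ == "__main__":
    merged = datetime(2023, 8, 4, 17, 0)     # Friday 17:00
    deployed = datetime(2023, 8, 7, 9, 0)    # Monday 09:00
    print(deployed - merged)                               # 2 days, 16:00:00 raw
    print(lead_time_excluding_weekends(merged, deployed))  # 16:00:00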
D
I feel stupid. Like, MTTP is accurate in the sense that, as a developer, if you merge on a Friday your change is not landing till Monday, so in that sense it feels accurate, but it will be interesting. I do think MTTP is a realistic number, because I think the other problem it's indicating is that we don't have reliable enough deployments that they can be running without a release manager online, and that's why we don't deploy at the weekends.
D
But I think this will be... this is an aside, but I don't think we should consider MTTP to be inaccurate. It's just that you lose a little bit of granularity.
F
Yeah, so one thing that I found interesting was, if you look at the MTTP through the week, so, for example, say the first to the fifth, you get something like 10 hours, and then Monday comes around and it jumps to like 20 hours or something, yeah.
D
And it's also unfortunate because it's probably reflecting a kind of working habit, in that we probably get a lot more things merging on a Friday afternoon compared to, say, for example, Monday mornings. And it's why I don't necessarily think... Steve and I were chatting about this the other week.
D
Just doing a lot of deployments in a day probably doesn't shift MTTP too much, because some of those deployments may not have very many changes, and then there are times we get a lot of MRs and we don't deploy right after that. For example, Friday evening we get loads of merges, we then don't deploy those till Monday, and you lose... you know, quite a lot of them end up with slower numbers.
F
Yeah, it would be nice if we tracked the size of each set of changes that gets deployed, so then we could see when the maximum number of changes gets deployed. Yeah.
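(Editor's aside: one lightweight way to get the size of a deploy set would be to count merge commits between the previously deployed revision and the new one. The snippet below sketches that idea with hypothetical repository paths and SHAs; it is not an existing delivery metric.)

import subprocess

def deploy_set_size(repo_path, previous_sha, new_sha):
    # Count merge commits (roughly, merged MRs) between two deployed revisions.
    # Hypothetical sketch; the real pipeline may define "size" differently.
    out = subprocess.run(
        ["git", "-C", repo_path, "rev-list", "--merges", "--count",
         previous_sha + ".." + new_sha],
        check=True, capture_output=True, text=True,
    )
    return int(out.stdout.strip())

if __name__ == "__main__":
    print(deploy_set_size("/path/to/gitlab", "abc123", "def456"))  # placeholder SHAs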
D
Do you mind updating the numbers, just on the table at the top and also on the one linked over, so that when I pull those up into the release manager metrics I get the right numbers?
A
But that one's only blocking staging. Are we counting the staging ones as production blockers?
A
I did want to add that the one thing that's been noticeable is that, at least during my time zone, instead of regularly getting out two to three deployments we've been able to get out three to four, due to all of these improvements on the overall timeline of the deployment. So I'm used to it being two and a half hours, and now it's one and a half hours for each deployment for staging and production. So it's quite nice.
D
That's amazing, that's really good. And as part of the availability work in July, Quality switched a bunch of test suites on the merge pipeline to be blocking, when previously they hadn't been. This was the direction they were working towards, but they made use of the availability kind of focus to turn it on, so I'm hoping that we will start to see a lot fewer test failures in our deployment pipelines, because things should be failing on the merge pipelines.
D
So that will be interesting for us to see as well over the next few months.