From YouTube: 2021-08-16 Delivery team weekly
A
All right, so there are some announcements. Amy is out today; I think she is back tomorrow. Hiring update: we have approval to hire SREs. There is another one about bringing back staging canary to reduce incidents; that is going to be interesting. And on to the first discussion, regarding something we also discussed in the last weekly session about setting up a recurring rollback practice: I wonder what everyone thinks about automating rollbacks to staging.
B
I think yesterday we had a good example of why we need a way to easily disable the automatic rollback. Last week we had several issues where we needed to push something forward to production as fast as we could, and then we needed to drop the production rollback again and even the staging one, which we couldn't do because of that. So I think it's good to have automation, but we need to make it very easy to stop it during our releases.
B
Maybe that's not that important, but yeah.
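A minimal sketch of what "very easy to stop" could look like, assuming a simple flag-file kill switch in front of the rollback automation; the file path and function names are illustrative, not the team's actual tooling:

```python
# Minimal sketch, assuming a flag-file kill switch; paths and names are
# illustrative placeholders, not existing tooling.
import os

ROLLBACK_DISABLED_FILE = "/tmp/disable-auto-rollback"  # assumed flag location


def staging_is_healthy() -> bool:
    # Placeholder: a real check would query monitoring or alerts for staging.
    return True


def trigger_rollback(environment: str) -> None:
    # Placeholder for the actual rollback command or pipeline trigger.
    print(f"Rolling back {environment} to the previous release")


def maybe_rollback_staging() -> None:
    # Release managers drop a flag file (or flip a feature flag) to pause the
    # automation while they push an urgent change straight through.
    if os.path.exists(ROLLBACK_DISABLED_FILE):
        print("Automatic rollback is disabled; skipping")
        return
    if not staging_is_healthy():
        trigger_rollback("staging")


if __name__ == "__main__":
    maybe_rollback_staging()
```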
C
I don't see that concern on the issue, so I think it'd be good to bring it up if it hasn't been already. But if we're going to be bringing canary back into staging, I'm worried about how much time this is going to add, because we're already adding a lot of time to the process with rollbacks, and staging canary is only going to increase that. The items that Amy mentioned above about potentially reducing that time, yes, they will help, but I feel like we're going to add so much more time.
A
How is the staging canary going to work? Is it going to mimic the same thing we have in canary and production, like it's going to double the time to deploy to staging?
C
I think it's going to work in a similar fashion, with some tweaks. So instead of waiting the full hour, we might wait enough to run QA, and then once QA at least passes, we'll proceed with the deploy to the main stage of staging immediately afterwards. So we won't have the full baking time like we do today, but we're still going to have two very distinct stages of deployment.
C
And we kind of need that for QA to run and to show us the problems that come out of code that's not compatible with rolling forward or backward.
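A rough sketch of the two-stage staging flow described above, assuming promotion to the main stage is gated on QA passing against the staging canary instead of a fixed one-hour bake; all function names are placeholders for the real pipeline steps:

```python
# Rough sketch, not the team's actual pipeline: deploy to a staging canary,
# gate on QA, then promote to the main stage of staging right away instead of
# waiting out a full bake period. All functions are illustrative placeholders.

def deploy(stage: str, version: str) -> None:
    # Stand-in for the real deployment trigger (e.g. a pipeline job).
    print(f"Deploying {version} to {stage}")


def run_qa_suite(stage: str) -> bool:
    # Stand-in for kicking off the QA suite against the given stage.
    print(f"Running QA against {stage}")
    return True


def deploy_to_staging(version: str) -> None:
    deploy("staging-canary", version)
    # Promotion is gated on QA passing, not on a fixed one-hour bake time.
    if not run_qa_suite("staging-canary"):
        raise RuntimeError("QA failed on staging canary; not promoting")
    deploy("staging-main", version)


if __name__ == "__main__":
    deploy_to_staging("v1.2.3")
```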
A
Yeah, it is going to be fun. So what is the estimated time for staging canary?
A
Yeah, yeah, I guess it is something that we have to think about, and probably something that rollbacks should consider if we decide to automatically roll back, and also, when we decide to integrate canary into staging, to consider rollbacks as well. So probably ongoing communication. Okay, well, thank you! Scarborough, you have the next point.
C
Thank you. I am going to be attempting to migrate the canary web fleet into Kubernetes today. So Henry, thank you for locking canary. However, I'm still waiting for approval to begin the change request, because it's quite intrusive. I think, if I do get approval to move forward, I'm going to ask you to hold off on the production deploy.
C
That way we just don't have two competing things happening at the exact same time, because I'm touching HAProxy and stopping Chef on those nodes, and I don't want to interfere with the production deploy at the same time. If I don't get approval in the next few minutes, I think I'll just say: let's continue forward, and then we can touch base later. That way I'm not blocking the deploys.
B
You know, Scovic, this is touching my point D. I think Robert will be out today because he's not feeling well, and he would be the release manager for Americas. Normally I would now hand over to him, but I'm wondering who will cover release management for Americas today. I mean, we did some production deploys today, so we are fine, I think, and we can just leave it at that.
B
Yeah, I mean, because that would at least give you the trigger in your hand for when you want to do the next deployment, and so there's no coordination needed for your change. And I think there's not much more to do for release management today anyway, and I hope there are no problems.
B
Yeah, this is tricky. I deployed to pre today, because a change for the package team that we need was still missing, and we had anyway deployed to pre two weeks ago, so it didn't matter if we did this again, I think. But since then I see strange behavior: we maxed out our database connections, and they are maxed out all the time right now. I can see this in the cloud console, and after looking for a while I opened an incident for that.
B
I asked development and database for help, but nobody showed up in the last two hours for that, and I'm failing to diagnose what is causing those connections. It took a while for me to get a connection to the database, but on the database side it looks like we see a lot of idle connections coming from Sidekiq there, and I can't find anything matching in the Sidekiq logs so far.
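One generic way to see where the maxed-out connections come from is to group pg_stat_activity by application name and state; this is standard Postgres diagnostics rather than the team's tooling, and the connection string below is a placeholder:

```python
# Summarize connections by application_name and state from pg_stat_activity.
# Generic Postgres diagnostic sketch; the DSN is a placeholder, not a real
# environment from the meeting.
import psycopg2  # assumes psycopg2 is installed

QUERY = """
SELECT application_name, state, count(*) AS connections
FROM pg_stat_activity
GROUP BY application_name, state
ORDER BY connections DESC;
"""


def summarize_connections(dsn: str) -> None:
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(QUERY)
            for app, state, count in cur.fetchall():
                print(f"{app or '<no name>':30} {state or '<none>':10} {count}")


if __name__ == "__main__":
    summarize_connections("host=localhost dbname=postgres user=postgres")
```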
B
So I'm a little bit out of ideas here, and pre is very flaky because we are constantly maxing out our database connections, which also made the QA tests fail. And I don't know how to best escalate this, because it's not worth the dev escalation, because they will not support pre, as far as I understand. And yeah, if any of you has other ideas, let me know; otherwise I will just send this over to the SRE on call and ask them to help find somebody to look into that.
B
I just get an application name in the database logs, but I can't find anything in our Elastic logs matching that, so it's really strange.
B
Yeah, with that being said, I think for point D we already agreed that we don't do more deployments for now, and if we want to, then Scovic is doing it as he needs. And then I think we are through the discussions.