From YouTube: 2021-03-09 Delivery team weekly rollbacks demo
C: So I would start. I have a lot of agenda items, and the first one is more something that we should do, but then we can move on to discussing the other items. So I will start by attempting to demo the check-mode staging rollback. The reason being is that in the future we want to test a check-mode production rollback before attempting a real production rollback, and so what we want to test today is whether check mode is really a dry run. So we will just check if we can roll back.
C: Because omnibus changes, I suppose.
C: Yeah, I mean, this makes sense, right? Because, if we are... I'm back here. This is comparing Rails code changes, so we are assuming, as we wrote in the runbook, that in this case we want to roll back an application change and not an omnibus change, so the most recent one is supposed to be the previous version.
C: We have the version; we need to replace the plus with a dash. But we cannot run the rollback as described here, because we want to run in check mode. So we go to the manual procedure, which is here, and this should explain it to us. Okay. So basically we need to run the deployer project pipeline manually, and we need to provide this information plus check_mode: true, or yes? That is a good question; it's not written here.
C: We'll start with... let me set the deploy environment.
C: This should run a check-mode rollback on staging with the version that we identified here. Okay, so I'm going to run it. If someone wants to stop me: on the count of three, I will hit run pipeline.
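For reference, the manual procedure described above boils down to triggering the deployer pipeline with the check-mode flag set. Below is a minimal sketch of doing that through the GitLab pipelines API; the instance URL, project path, branch, and the DEPLOY_ENVIRONMENT / DEPLOY_VERSION variable names are assumptions for illustration, and only the check_mode flag comes from the discussion above.

```python
# Minimal sketch: trigger a deployer pipeline in check mode via the GitLab API.
# Instance URL, project path, branch, and all variable names except CHECK_MODE are assumptions.
import os
import requests

GITLAB_API = "https://ops.gitlab.net/api/v4"  # assumed instance
PROJECT = requests.utils.quote("gitlab-com/gl-infra/deployer", safe="")  # assumed project path

payload = {
    "ref": "master",  # assumed default branch
    "variables": [
        {"key": "DEPLOY_ENVIRONMENT", "value": "gstg"},  # assumed variable name for the environment
        {"key": "DEPLOY_VERSION", "value": "13.9.202103081220-abcdef12345.67890abcdef"},  # placeholder version, "+" already replaced with "-"
        {"key": "CHECK_MODE", "value": "true"},  # the check_mode flag discussed above
    ],
}

resp = requests.post(
    f"{GITLAB_API}/projects/{PROJECT}/pipeline",
    headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Pipeline started:", resp.json()["web_url"])
```

As mentioned next, checking the command syntax printed at the start of the first deployer job is still the moment to confirm the flag actually took effect before anything destructive runs.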
C: So there is an extra thing that I usually do here, which is checking this: every deployer job outputs this information at the beginning, and because I was working on dry-running tests on production and things like that during my merge request, I developed this habit of checking the syntax here. So you see that you are still in time to cancel something in case you messed it up with the check-mode variable. So this is okay, and this has basically started. So let's just check one.
C: That's it. So, Andreas, I see you're writing things here. Do you want to verbalize that before we move to the discussion point?
A: Oh, I just thought, I'm looking at the numbers and the dashboard, and I just thought maybe this Thanos query is easier for tracking that we have version changes or something, because otherwise I need to memorize numbers and look. I don't know, I'm not really very familiar with this versions dashboard. I thought maybe having something with the history of changes and versions would be easier to read; maybe you can add this later to this dashboard.
C: Okay, so regarding this: that dashboard is not even entirely what we need. There's this issue that I linked, because we don't track versions; we don't easily track versions on Kubernetes. So if you have ideas of how we can improve this, maybe we can comment in the issue, 12770, and, instead of just doing what I propose, we can figure out not just the best version of the dashboard but the best way of understanding what is running in all the environments from a release manager's point of view.
C: I started with this idea, so I will ask: I would love to suggest using the epic 411. So, what I want is to write about the rollback pipeline in there, as a journal for our discoveries. The reason being is that there are two problems here. One is that I think it's a bit hard to follow up on action items in this agenda, because we tend to rush action items at the end of the meeting, since rolling back takes a long time, and then it's really hard to figure out if someone got assigned to something.
C: What I'm thinking here is that there will be a lot of eyes on us when we start rolling back production for real, so I think we'd be safer showing that we did the due diligence, right? That we tested things, that we thought through possible problems and things like that, and that we wrote down what we did and the decisions we made, so that we can then extract things out into some kind of blueprint, or it can be the production change issues, or whatever format we decide.
C: Because the point is that if we are in an incident, we are already in a broken state, so rolling back maybe moves us from a broken state to a more broken state or a less broken state, but that's fine. But when we attempt the first rollback from a working state, it's kind of, yeah: we are in a working state and we may end up in a broken state.
C: My second point, which I would like to discuss with all of us, is what can go wrong: thinking about things that may break things, things that may halt the rollback, and whether it makes sense to have another dry run, a test on staging, where we reproduce the problem in some way so that we can try to figure out if our procedure can survive that type of failure, and things like that; and then we can also write this in the document that I was mentioning.
C: And then there are other situations, like what happened last week, basically, where one machine went missing during the rollback.
C: So you mean scanning through the new index? Oh no, you're talking about Prometheus. But Prometheus tells me the current state of the system; it tells me nothing about the past. So maybe Elasticsearch, the new index, the one whose name I don't remember, the new index we created, the one where we send logging information like "we are deploying this", "we are changing this". Yes, but I was thinking more about something else.
C: There are too many things that we touch when we run a deployment. So, for instance, we are assuming that ops is online, because if we don't have ops we are screwed, right? So what I'm thinking here is: can we move some of that information from the canonical and security repos to ops? For instance, if we mirror the security mirror on ops, without pipelines?
C: I don't care, I mean, it's just for having the same code. Then we can run the release tracking on ops and also on the canonical and security mirrors, so that developers get all the nice tracking features in their merge requests, but we have a, let's say, more reliable source of information, so that when we make all of our decisions about tagging something, finding the latest deployment, or things like that, we rely on the same instance, which is also not usually subject to... I mean, it's not the one being deployed, right, so it's a bit more stable.
F: I'm fairly sure we have an issue for this; I'll see if I can try and find it. It would certainly be an interesting failure scenario, a good one that we should have a bit of a plan for.
C: I'm thinking about exactly the same format that we're already using, so yeah, the environments feature and deployments feature of GitLab itself. So we mirror security, we mirror omnibus security, Gitaly security, and security GitLab, with the same code that we have right now, but instead of tracking two times we track three times: ops, canonical, and security.
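Since the tracking described here would sit on top of GitLab's own environments and deployments feature, a minimal sketch of reading it back is shown below, for example to answer "what is running in this environment right now?". The instance URL, project path, and environment name are assumptions for illustration.

```python
# Minimal sketch: list the most recent deployments recorded for one environment,
# using GitLab's deployments API. Instance, project, and environment are assumptions.
import os
import requests

GITLAB_API = "https://ops.gitlab.net/api/v4"  # assumed instance
PROJECT = requests.utils.quote("gitlab-com/gl-infra/deployer", safe="")  # assumed project path
ENVIRONMENT = "gprd"  # assumed environment name

resp = requests.get(
    f"{GITLAB_API}/projects/{PROJECT}/deployments",
    headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
    params={
        "environment": ENVIRONMENT,
        "order_by": "created_at",
        "sort": "desc",
        "per_page": 5,
    },
    timeout=30,
)
resp.raise_for_status()
for deployment in resp.json():
    print(deployment["created_at"], deployment["status"], deployment["ref"])
```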
E: That issue is not really useful anymore. That report is no longer as useful as it used to be; arguably, I don't know whether it was ever useful, but it was there. What if we create a new report that would be something in the format that we actually need right now, which is: I'm coming from this old deployment and deploying this new one, then the links to what I'm deploying, and then a report of everything that is going into it.
E: All of that in a nice format, in an issue, and we can then do that in two places, right? On .com, maybe, "oh, I didn't manage to update it there, it's fine", but I always want to update it on ops in some location. So you're going to have a report that we all can use, that we can share with others publicly as well, and we don't have to go through multiple levels of...
E: That needs to continue, it's really important, but we didn't build anything for ourselves. We built the Grafana dashboard, which is useful, but it has the same problem you mentioned earlier with Prometheus: it gives you the current state of the situation, it doesn't give you the history. So the closest we have to this historical overview is what jarv created with that Elasticsearch event stream, yeah.
E: But the point I was trying to make is that it's then trivial for us to say: all right, I'm generating this report and I'm pushing it to two locations in one go, done, you don't have to think about it again. This is only if we think this is something we need. I just know that we didn't do anything like this so far, because we were always rushing to serve someone else's needs first.
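A minimal sketch of the "generate the report once, push it to two locations in one go" idea is below, using the GitLab issues API. The instances, project paths, version numbers, and report layout are assumptions for illustration, not an agreed format.

```python
# Minimal sketch: file the same deployment/rollback report on two instances in one go.
# Instances, project paths, versions, and the report layout are placeholder assumptions.
import os
import requests

REPORT_TITLE = "Rollback report: 13.9.202103090320 -> 13.9.202103081220"  # placeholder versions
REPORT_BODY = "\n".join([
    "Coming from: 13.9.202103090320 (old deployment)",
    "Deploying:   13.9.202103081220 (new deployment)",
    "Links: merge requests / changes included in this deployment",
])

TARGETS = [
    ("https://ops.gitlab.net/api/v4", "gitlab-com/gl-infra/delivery"),  # assumed ops project
    ("https://gitlab.com/api/v4", "gitlab-com/gl-infra/delivery"),      # assumed .com project
]

for api, project in TARGETS:
    resp = requests.post(
        f"{api}/projects/{requests.utils.quote(project, safe='')}/issues",
        headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
        json={"title": REPORT_TITLE, "description": REPORT_BODY},
        timeout=30,
    )
    resp.raise_for_status()
    print("Created:", resp.json()["web_url"])
```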
F: Okay, yeah, and that might solve our other problem, which we've kind of been going around, which is all the different bits of information we want to show and where: like, how many comments do you put under different threads, how many threaded comments you have. Maybe we just have one report that has everything. Okay.
C: Okay, and yeah, the other point about the failure scenarios was exactly more about problems that may happen. And this is where I would like to have help from everybody, thinking about what can go wrong. I think maybe the best option here is: if someone has ideas right now, we can start collecting them; otherwise we can open an issue, bounce ideas around, and then try to move from those ideas to real tests that we can run in one of these meetings.
C: Yeah, also the help page is still showing the current version, so we didn't roll back. So that was it.
C: The longest time is spent on the omnibus part that reconfigures things and waits for processes to be killed, but in this case it just tells you, "I will run this."
F: So, since we have some time left over, I think it might be worth just getting some kind of starting points that we can take into an issue to discuss. So, in terms of it being a rollback pipeline, how could this go wrong? What do people think of? Use the chat as well if that's quicker to just drop ideas, but generally, when you think about that, what are all the things that could go wrong?
F: So, one thing we haven't talked a lot about is the application not being backwards compatible. We know we have QA tests.
C: So one of the failure modes that we have in our regular deployment is that we basically have stages, and things should happen in order, right? You first run the migrations, then you upgrade Gitaly, then when that's done you upgrade the fleet, and then the post-deployment migrations.
C: The warm-up will download the same image, so it's already there; then the fleet will be upgraded again, and most of the machines will already be running the right version, so we just do nothing. And because the cache is an artifact, it will not be there in the new rollback pipeline, so we will generate it again at the beginning; so if that was the problem, it would complete the rollback.
C: What scares me as a failure is our application changes: the same problem that could cause an outage rolling forward could cause an outage rolling backward.
F: So, in terms of if that happens: that's a use case where we see something, our pipeline goes through and the QA tests fail, which is one visible way of catching it. I guess there's a worse one, which is that the pipeline goes through, the QA tests pass, and there's an unknown problem. What...
F: So, to ask two questions: one is, do we have any way to test that? Probably not, right. And what will happen if the QA tests fail? Would we be able to... we could rerun them, right? What would we actually do? Yeah.
C: We were already running that code with that database schema, because when we upgrade canary, we run the database migrations, but production is still running with the old code. So in theory we know that, let me use the right terms, the previous code can run with the current schema.
C: Something that can be different here is, for instance, Redis state, and by Redis state I mean both the cache and Sidekiq parameters, so job parameters can be tricky here. But I want to reiterate what I said before: we have extensive documentation on how to implement Sidekiq workers and how to make them forward and backward compatible.
F: Okay. Scarback, do you want to go through the ideas you've got listed out?
G: We do this to speed up the actual deploy, because that warm-up happens during the baking time. So at that point you've downloaded a one-gigabyte package to our servers. If we stop the deploy, the cleanup job does not run, and when the cleanup job does not run, we still have that one-gigabyte app package cached on that server, and if we do a rollback, we need to warm up again to re-download the older package.
C: Let's talk about this for a second. I think it's a good example of something that we should test: we could fill the disk during a rollback or something like that. But my point here is that this is not specific to a rollback, right? This could happen even with a roll forward. It's just that it's, let's say, more likely, in the sense that maybe we stopped something and we didn't clean up the previous installation.
C: Maybe we are in a worse starting situation, but what I want to say here is that we have procedures in place right now. They are manual, but, I mean, it can already happen that the disk is full during a deployment, and we just ping the on-call and say: yeah, we have a deployment failure because this machine is running out of disk space.
C: We see this from the failure on the installation, and maybe this is different for you or jarv or henry, but for the rest of us, without production access, we kind of say we can't do anything; we just ping the on-call and ask for help. So I don't see a special risk in rollback here, but I think it is worthwhile to figure out if the same solution we have for a regular production deployment can apply here as well.
C: Scarback, I have a question: could we add the package download, the warm-up, as part of the regular deployment? Because, I mean, usually with Ansible, if something is already there, it will do nothing.
G: Precisely, and there might be an improvement we may be able to make. Again, I would like to test to validate that this is the actual scenario that we would run into, but there may be a flag we could set in the Ansible pipeline where we could tell apt to run an update prior to an install, just to get that cache.
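The "if something is already there, it will do nothing" behaviour being relied on here is ordinary idempotence; a minimal, deployer-agnostic sketch of the same check-before-download idea is below. The URL, cache path, and checksum are placeholders, not the deployer's actual logic.

```python
# Minimal sketch of an idempotent package warm-up: skip the download when a file with
# the expected checksum is already cached. URL, path, and checksum are placeholders.
import hashlib
import pathlib
import urllib.request

PACKAGE_URL = "https://packages.example.invalid/gitlab-ee_13.9.0.deb"  # placeholder URL
CACHE_PATH = pathlib.Path("/var/cache/deployer/gitlab-ee_13.9.0.deb")  # placeholder path
EXPECTED_SHA256 = "0" * 64  # placeholder checksum

def sha256(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

if CACHE_PATH.exists() and sha256(CACHE_PATH) == EXPECTED_SHA256:
    print("Package already cached; nothing to do.")  # the idempotent no-op case
else:
    CACHE_PATH.parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(PACKAGE_URL, CACHE_PATH)  # download only when needed
    print("Package downloaded.")
```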
G: Yeah, so we all know about this; we've been having this happen rolling forward in the past few weeks quite often. Sometimes it's been due to a failure of some sort related to the disk being full, so Ansible is forced to stop, and the next time we retry the job, Ansible doesn't want to, because it detects that nodes are stuck in drain, which is not the appropriate setting we need them to be in. I'm curious if, instead of leaving a node in drain during a deploy, maybe we cycle from drain into maintenance mode during the deploy.
G: But the problem with that, though, is that we've explicitly made maintenance mode the option where, if you were there to begin with, you would be left in maintenance mode after the deploy completes. So we'd have to revisit that logic if we want to change that behavior. Otherwise, I'm not really sure what to do with this, because leaving a node in drain during a deploy could be a legitimate situation where we may want to have an SRE investigate a problem.
C: So this is a good point, Scarback. I think that, so far, we always focused on the case where a deployment was completed and we want to roll back, so we were starting from, say, a clean state. It can be broken, in the sense that the application is not working, but it's still a clean state. We never tested or discussed, I mean, we discussed sometimes, the case where we had a broken deployment: we figured out that something is wrong while rolling forward, and then we want to stop and roll back.
C: So we mentioned this: that there are those manual jobs, like putting things back in and out, putting things outside of the drain state, and things like that. So, I don't know, maybe it's worth writing something about it, or even a runbook, or talking a bit more about this. Because what I'm thinking is that, in terms of backward compatibility, I'm sure we are safe, because we didn't reach the post-deployment migrations.
G: Nodes that are left in drain will just simply stay in drain after the deploy completes. That way we get the rollback completed, and then at some point we need to maybe revisit; maybe this is part of our runbook, making sure that all the nodes are not left in a drain state, so that we have all of our capacity available to us.
A: Yeah, or maybe S2 or S3, but if we want to roll back, we probably are in an incident, right? And so I think when we roll back, maybe we should generally ignore incident checks, because I don't see us needing to stop rollbacks when we are in an incident; normally we do it because of an incident, right? So I don't know if we do check for incidents, but if we do, then we should ignore it, I think.
F: It's a good point. It actually makes me realize that one of the other things we should add to our rollback runbook is how we announce that we're about to do the rollback, right? Because that's the other thing: if we've got multiple incidents running, it's super hard to keep track of things. If we're rolling a deployment back, we should make sure that's really clear.
F: Before we actually just kick things off, we should announce it somewhere, or build the announcement into the rollback, because it's really hard to distinguish it from a regular deployment right now.
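One way to build the announcement into the rollback itself is for the pipeline to post to a chat channel before it starts. A minimal sketch is below; the webhook variable name, the channel it points at, and the message wording are assumptions rather than an agreed mechanism.

```python
# Minimal sketch: announce the start of a rollback from the pipeline itself.
# The webhook CI variable and the message wording are placeholder assumptions.
import os
import requests

message = (
    ":warning: Rollback started: the environment is being rolled back "
    "to the previous version. Pipeline link to follow in this channel."
)

resp = requests.post(
    os.environ["ANNOUNCE_WEBHOOK_URL"],  # assumed Slack-style incoming webhook, set as a CI variable
    json={"text": message},
    timeout=10,
)
resp.raise_for_status()
```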
C: Yeah, but what I'm thinking is that, at least in the beginning, I don't expect us release managers to just decide to roll back and then we roll back; at the very least, the on-call will be involved. Well...
F: Maybe. I'm thinking of when we have multiple incidents, right? Like the other month, where we had four or five, and we didn't necessarily have the SRE on call; we had an SRE, but there were other people investigating other things. So I think it would be worth making sure that announcements, or one of our channels, has something really clear, like "this rollback has started".
G: I don't think this is an issue. I think it's not the prepare nor the warm-up job that performs this check; there's a job called gprd, and we don't run that during the rollback.
C: It's not that one; there are also the production checks. So you're right, there are two types of checks. One is the one that you mentioned, but the other one is the production check, and the production checks are checking for change issues and incidents, as well as metrics. So it's kind of expected that metrics will be in a bad shape, because if we are in an S1 incident, probably we have an outage. The one that I'm not sure about is change issues, because I would like to make sure that... yeah, I really don't know.
A: If we are not rolling back without an incident, I think we should be fine, because we should always first have an incident, or declare an incident, before we roll back, and then we have the right people in a call, or interacting with us, to announce that we will roll back now. And if we have multiple incidents, then we need to find the central place where this is all handled.

A: Maybe the incident Zoom room or some other channel, but that is how we coordinate this, and then we just need to make sure to announce it there. But we should just be sure that we have an incident for the rollback, and then I think it's okay.
C: Yeah, it makes sense, because for the first test we will have a production change issue, which is kind of the same thing as an incident. I mean, it's not the same, but it's a synchronization point, right? So we know that everyone involved is aware that we are testing this, and when we have to use it for real, there will always be an incident.
G: Currently, the only way to break that is to literally wait for one hour, or maybe delete the artifacts; I don't know if that's possible in our API. But maybe we could make an improvement to that inventory script that allows us to provide it some variable or some method of breaking the cache, forcing it to re-download as necessary.
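For what it's worth, deleting a single job's artifacts is possible through the GitLab API, which could be one way of breaking the cached inventory on demand. A minimal sketch is below, with the instance, project path, and job ID as placeholder assumptions.

```python
# Minimal sketch: delete the artifacts of one job so the cached inventory has to be rebuilt.
# Instance, project path, and job ID are placeholder assumptions.
import os
import requests

GITLAB_API = "https://ops.gitlab.net/api/v4"  # assumed instance
PROJECT = requests.utils.quote("gitlab-com/gl-infra/deployer", safe="")  # assumed project path
JOB_ID = 123456  # placeholder ID of the job that produced the cached artifact

resp = requests.delete(
    f"{GITLAB_API}/projects/{PROJECT}/jobs/{JOB_ID}/artifacts",
    headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
    timeout=30,
)
resp.raise_for_status()
print("Artifacts deleted for job", JOB_ID)
```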
F: Okay, easy. Does that mean we should review... who wants the action to check that the runbook has the right words? And we need to update the ChatOps command, I assume.
C: Yeah, maybe we need to make sure that we have a place for documenting this, so that we can reference it and explain why we are using these terms.
F: Awesome. Is there anything else we need to go over?
C: I don't think so. I will do the exercise of summarizing what we did today in the epic, and then I will try to figure out action items and things like that, and try to bring people in and see if we can assign things asynchronously.