From YouTube: Developing Anticipatory Awareness and Common Ground Jabe Bloom (Red Hat) OpenShift Commons Briefing
Description
Developing Anticipatory Awareness and Common Ground:
Working with Uncertainty and Complex Systems
Jabe Bloom (Red Hat)
OpenShift Commons Briefing
#TransformationFriday #OrganizationalLeadership #OpenShiftCommon
September 4, 2020
A: All right, everybody, welcome back to yet another OpenShift Commons briefing. Today, as we like to do on Fridays, we're going to talk about some transformational issues with folks from the Global Transformation Office at Red Hat, and one of my favorite guests, Jabe Bloom, is here with us today to talk about developing anticipatory awareness and common ground. I'm going to let Jabe introduce himself and the topic, and we'll have live Q&A at the end.

A: It's always an interesting conversation, so stick around for the last 15 minutes or so, and hopefully we can have a good, in-depth conversation on the topic. So Jabe, take it away, and thanks again for coming.
B: Thank you for having me; it's always fun. So I wanted to talk a little bit. The last time we talked, we were talking about social practice, and one of the things we discussed was the way in which social practice orchestrates a sense of shared behavior.

B: So people have expectations about what's going to happen, and things like that. I really wanted to open that box up and play with what's inside: this idea of interpredictability, or anticipatory awareness, and the relationship between some of these ideas and transformation.
B: So, I'm Jabe. I work at Red Hat in the Global Transformation Office with a bunch of yahoos: Andrew Clay Shafer, John Willis, Kevin Behr. We're working with Red Hat clients to help them with their transformations, to improve their outcomes through socio-technical change. Super fun, interesting stuff. So if you want to chat, you can find me on Twitter; I'm happy to talk about all these topics and ideas there.

B: I enjoy it a lot, so find me there. So I wanted to start really quickly with clocks. My dissertation is on time, so I love clocks, but one of the reasons I want to talk about clocks is this idea of joint action, or interpredictability, or working together, cooperating: all these ideas in a sociotechnical system.
B: I think a clock is probably literally the first machine intended to create the conditions for coordination between humans. The whole point of a clock is to create another coordinate system that allows you to say, "I will meet you at the office at noon," and the "at noon" part is enabled by this machine,

B: this device called the clock. Initially, these clocks would have been on a tower in the middle of town, and therefore they would have been a shared common resource (something we can talk about later). Eventually they become watches, things that people can carry around with them. But the machine becoming a personal machine, the watch becoming a personal clock, doesn't change one thing:
B: Time is still a shared construct. It's still an inter-subjective thing that we're sharing in order to coordinate. So just because I have a watch doesn't mean I have the time; the time is something that's given to me by other people or by other systems. And here's an interesting bit about the original watchmakers in London: the wife of the owner of the London clock tower literally sold the time to watchmakers.

B: She would set a watch based on the clock in London, then walk around to watch vendors, people who were selling watches, and sell them the correct time. She would let them see her highly accurate watch, set according to the official time in the London tower. And the interesting thing about that is twofold. One is this idea of a distributed system of machines that are being synchronized in some way.
B: So I think you should think not only about distributed systems, the problems of distributed systems and distributed socio-technical systems, but also about the problems of synchronicity and things like that. And finally you get this weird thing, which is that all of the clocks are in a state of what we would call continuous partial failure.

B: The watches in the watch shops can more or less keep time, but the reason the vendors are paying this woman to come and give them the time is that the watches constantly need to be corrected, and they need to be corrected by a human exchange of information. And this plays out over a long period of time, right up to the current day.
B: So if you go to NIST and you try to find out what the official time is (what's the official time right now, what is the universal standard time?),

B: what is it? The answer to that question, while NIST can give you an official time, is really a question of how they calculate that time. And the interesting thing is that the way they calculate the time is this: there is a distribution of atomic clocks all over the planet, and they all submit their current time, I believe monthly, to a central server. Obviously there's some correction for the submission cycles and things like that.
B: But then there's a formula by which those clocks are averaged together in order to create the official time. The average time then determines how much any one of those little atomic clocks is incorrect, and therefore a correction is sent to the owner of that clock: you need to move your clock forward or backwards slightly. So what you get is this very Bayesian update of time, but the official time doesn't exist in any one clock.

B: In fact, all of the clocks are assumed to be incorrect, but at the same time, all of the clocks are assumed to be predictable within a range of correctness, and that range of correctness is what allows the averaging to happen, so that you get this concept of a universal time. So no one clock can hold the time, but the average of the predictable variance of all the clocks allows us to calculate a kind of standard time.
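As a minimal sketch of that ensemble idea (hypothetical clock names, readings, and weights for illustration, not NIST's actual algorithm): the official time exists only as a weighted average, and each contributing clock receives a correction steering it toward that average.

```python
# Toy ensemble time scale: no single clock holds the official time; it
# emerges from a weighted average, and each clock is then sent a
# correction nudging it toward that average.

def ensemble_time(readings, weights):
    """Weighted average of clock readings (seconds past some shared epoch)."""
    total = sum(weights[name] for name in readings)
    return sum(readings[name] * weights[name] for name in readings) / total

def corrections(readings, weights):
    """Per-clock steer: how far each clock must move toward the average."""
    avg = ensemble_time(readings, weights)
    return {name: avg - t for name, t in readings.items()}

# Every clock is assumed slightly wrong, each within its expected range.
readings = {"clock_a": 1000.002, "clock_b": 999.998, "clock_c": 1000.001}
weights = {"clock_a": 1.0, "clock_b": 1.0, "clock_c": 2.0}  # steadier clock counts more

steer = corrections(readings, weights)
# e.g. clock_b is told to move forward slightly, clock_a backward.
```

Note the design point the talk makes: the "correct" time is nowhere in `readings`; it only exists as the output of the averaging, and every clock is corrected relative to it.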
B: So I think this is a distributed system in which all of the components are reliable (in other words, they operate within a predictable range of performance), but they can only produce a resilient time, a time which can be thought of as standard, by combining themselves; no one thing can hold that official time. So this is what I want to play with today: these ideas of continuous partial failure, communication, the creation of some common thing, and how that works in sociotechnical systems.
B: So one of the things I think we need to talk about in order to get there is this idea of expectations. We'll explore why we need expectations in a second, but here's just another opening concept. When I talk about expectations, I often like to talk about music. So: music isn't a note, right?
B: There's no sense in which one note is what we would normally consider music. Music is in relation to other notes; it's in relation to a set of potential notes that one might hear. And even in this kind of coordinate system, where we have the amount of time the note is playing and its relative pitch in relation to other potential notes, this also isn't music.

B: Music has to do with a relationship between notes, and not only between notes like a chord, where the relationships are simultaneous, occurring at the same time. Music is always about the relationship between what notes you're currently hearing, what notes preceded those notes, and what expectations you have about
B: the next set of notes that you would expect to hear. In this way, music is like a form of entrainment, and you get pleasure from two things when you listen to music. One is pleasure from the entrainment:

B: in other words, you get a sense of pleasure from being able to predict what's going to happen. It's pleasurable to predict what the next set of notes is, and to have the musicians, as if magically, produce the sounds that you predicted in your mind. Those predictions are based on what you've already heard, what you're currently hearing, and what you might expect to hear. But you also get a second pleasure, an occasional one:
B: it can't be all the time, but there's the occasional way in which your prediction is not quite right, and the music produces a hook or a shift that causes you to wake up and notice that your predictions were not correct at that particular moment; then the music tends to resolve back to predictable patterns again.

B: So what we can see, I think, is that music, and temporal experience generally, has to do with this relationship between making predictions, the pleasure of those predictions coming true, and the pleasure and surprise of those things not playing out. And so, in phenomenology,
B: we call this the difference between retention, protention, and the immediate present. Retention is this idea that the way in which the previous notes inform your expectations, your protentions, is not direct. You don't mathematically calculate the next notes; you use those notes, and the sense of them, to form an idea about what you expect to happen. And protentions work in the same way,
B: in that your hearing of what is currently being played is colored, let's say, by what you expect to hear next. So there's this way in which the future and the past color, or shape, the present. And as Merleau-Ponty points out, our experience in time is not like a slideshow; we don't see one slide at a time.
B: We have this kind of smeared, or extended, sense of how we experience things through time. And when we think about longer periods of time,

B: this has to do with things like your sense of identity, your sense of how you got to work, and so on. These are all related to the way in which you knit together what you've done (the retentions that you have) and what you expect to do, and the way in which those things you've done and the things you want to do are related in some way and cause action to occur in the present.
B: So one of the things we could say about this, I think, is that human experiences, including joy and meaning, emerge from the intersection between what I expect and either its resolution or its novel, surprising result. That's one of the things that gives us pleasure in the world: we like things because they either become meaningful or they become surprising.
B: So when we think about creating meaning, and about sharing meaning with others, these expectations about what's happening come into play.

B: How our performances (if we were the musicians) produce pleasure in others has to do with whether we're able to reproduce the results that were expected, in an entrainment kind of way, or to produce pleasant surprises: surprises that allow people to understand that there are interesting, novel ways of moving from whatever system they're currently in to another one. So that's maybe music.
B: Then, one other thing before we get into the meat of it: I want to talk about uncertainty and complexity a little bit, because I think the words are hyper-overloaded, especially complexity, and I want to point at some specific versions of them in this particular case, so we can grasp onto them and think about them a little better.
B: First of all, we have a concept called the problem of future knowledge. One of the ways in which the systems that we work in are complex is based on this idea: there is future knowledge, knowledge that will only occur in the future (because if it occurred in the present, it wouldn't be future knowledge), and if we had that knowledge now, we would do things differently.
B: But in fact, because of the way in which systems evolve and co-evolve and change, we will come to know things in the future that we cannot know now. This is the source of things like technical debt: we make the best decisions we can in the present, and then, in the future, those best decisions don't look like good decisions anymore. But there would have been no way to make those good decisions in the present, because we don't have the knowledge that the future will create.

B: So this is the idea that there's no stability to some of the knowledge that we have. The knowledge that we have is emergent from the system itself, and if the system changes significantly or evolves in any way, then we will have new knowledge about the system, and then we will have different expectations about what that system should or should not be able to do.
B: The second thing is the way in which longer, extended periods of time expose components to other forces that are not exposed in short periods of time. Those are novel configurations: the longer a component is around, the more likely it is that someone will try to use it with another component in a new or novel way. And second, the components themselves could be exposed to novel stimuli.
B: In other words, the environment could have new expectations about what the component should or shouldn't be doing. So in this way there's a complexity that isn't just the stability of a complex network at any one moment: the network is modifying itself over time, and therefore there's this extra layer of complexity.
B: I call it temporal complexity, but the idea is that over time things change, and that creates complexity. We get this idea of environmental change, and of environmental change creating complexity, creating frictions between the current components and the way they're supposed to work.
B: We also get the idea, in modern socio-technical systems theories, that the actors are part of the system they are modifying: every action modifies not only the system but the actor themselves. So there's this feedback loop that feeds back on the actors themselves.
B: This is something that, in the past, has been called ontological design, but the idea here is that the actors themselves are not stable in relation to the system, because they are being modified by the system as well. So again, the actors are enmeshed in this temporal complexity themselves.
B: And finally, we get this idea of contingent contingencies. What contingent contingencies means, roughly, is this: physics and science, correlation theories of science, have to do with understanding the contingencies between two physical, natural objects (atoms, rocks, things like this), and with the fact that we can create a set of rules, like Newtonian physics, that determine what we should expect out of those interactions.
B: But when we look at sociotechnical systems, what we are looking at is a system where at least some of the decisions were not driven by physical laws or mathematical laws. The decisions were driven by context, by an attempt to respond to an event or a moment in time, and they were made in a satisficing manner, as opposed to by a complete analysis. In other words, the people who were designing the system had a certain amount of time
B: in which they could make those decisions, and so they made the best decisions they could in that period of time. If they had been given an infinite period of time with an infinite amount of information (if we could literally freeze time), they might have made a completely satisfactory decision, in other words, a complete analysis of the situation. That almost certainly doesn't occur in most systems that we're working with, because of temporal compression and the competitive nature of markets.
B: Often the decision criteria can't be reproduced, and therefore you're not able to use normal scientific thinking, which only thinks about a single contingency, because you're dealing with contingent contingencies. In other words, you're dealing with decisions that were made about physical-law contingencies in a contingent manner: based on what was happening in that context.
B: So you get this double kind of complexity. Dave Snowden would refer to this as a form of anthro-complexity, where the complexity is created not simply by physical laws, which create the normal complexities we might think about, but also on another level, on which humans are negotiating decisions based on policies and opinions and things like this. So you get a double level of complexity, and so the result of this is that we get
B: this idea that the systems we're mainly working with these days are not really based on solid, stable decisions that are atemporal. The decisions don't stay the same; they change over time. And because of these ideas of evolution and feedback loops and the actor being involved in the system,
B: what we get, instead of the idea that the system we're working with is just a fixed form, is that the way we work with the system is a constant, dynamic rebalancing of forces. Every time we do something, we probably overdo it or underdo it a little bit, and therefore we're constantly having to re-engage with the system in order to rebalance the forces. And then, of course, because of competitive environments, and because nature wants to do things like throw COVID viruses at us,
B: the forces that stabilize the system in its environment are also changing all the time. So we're rebalancing not only the internal relationships; we're also constantly rebalancing the environmental relationships, the ecological relationships. Finally, we get this last thing, which I started the talk with: even simple machines, like clocks, don't actually operate for long without human intervention.
B: So it's not just that complex systems require, or rely on, human interventions to stabilize themselves through this kind of dynamic rebalancing; any form of mechanism always requires humans to interact with it in order to stabilize its performance over a period of time.

B: Systems in and of themselves, mechanistic systems, can't fail by themselves. They have to fail in relationship to what we expect them to be doing.
B
There's
this
idea
that
systems
are
always
kind
of
not
quite
doing
what
we
expect
them
to
and
there's
some
interesting
tensions
that
we
want
to
play
with
there,
because
what
we
end
up
with
is
kind
of
this
idea
where,
when
we're
trying
to
engage
in
complexity,
we're
trying
to
engage
in
the
management
of
complexity
or
the
wrangling
with
complexity,
is
this
idea
often
stated,
as
we
should
move
from
fail,
safe
to
safe
to
fail
and-
and
I
actually
think
that
that's
not
really
the
the
right
frame.
B
So
fail-safe
is
those
decisions
that
we
can
make
about
an
architecture
or
a
system
using
physical
laws
right,
so
you
should
not
be
able
to
expect
to
transmit
packets
faster
than
the
speed
of
light.
B
So
if
you
design
a
system
that
is
expects
faster
than
speed
of
light
performance,
you're
going
to
have
issues,
there
are
a
set
of
kind
of
load-bearing
architectural
decisions,
like
kind
of
a
minimum
set
of
criteria
that
define
the
the
the
physical
limitations
of
the
system
and
and
recognizing
and
understanding
what
those
are
is
important,
and
I
like
to
think
of
these
as
kind
of
like
decisions
that
are
made
or
refined
over
time
based
on
calculations
and
things
that
we
can
know
using
kind
of
a
scientific
theory.
B
On
the
other
hand,
we
have
these
ideas
of
safe
to
fail
architectures,
and
these,
for
me,
are
more
about
what
I
call
architecture
and
use,
and
what
I
mean
by
architecture
and
use
is
that
the
the
failure
to
to
to
meet
expectation
is
is
only
something
that
can
be
evaluated
in
use.
It
can't
be
evaluated
prior
to
prior
to
use.
It
can't
be
evaluated
before
the
system
is,
is
operational,
and
so
you
get
safe
to
fail
aspects
of
a
system
based
on
it
actually
being
operated.
B
And
so
what
we
see
here
is
kind
of
the
difference
between
a
form
of
planning
that
that
that
is
a
form
of
planning
about
what
we
might
call
governing
constraints
or
immutable
constraints
and
then
a
set
of
planning
about
how
to
constrain
the
system
so
that
we
can
see
whether
or
not
it's
meeting
our
expectations
and
react
appropriately
when
the
system
isn't
meeting
our
expectations
so
as
to
stabilize
it
by
intervening
in
an
appropriate
way
and
know
that
it's
been
stabilized
and
this
idea
of
humans
needing
to
interact
constantly
with
the
systems
that
they're
operating
in
order
to
keep
those
systems
appearing
as
if
they're
operational
is,
is
what
we
would
might
call
skillful
cl
skillful
coping.
B
And
so
what
we
can
say
is
in
any
sociotechnical
system.
What
we're?
What
we're
actually
observing
is
a
mechanistic
system,
a
technical
system
that
is
being
enabled
to
perform
according
to
expectations
by
humans,
intervening
in
order
to
nudge
the
system
towards
the
expected
outcomes.
And
that
is
a
continuous
process
and
the
moment
that
humans
would
maybe
stop
interacting
with
that
system.
That
system
would
essentially
collapse
and
therefore
you
know
we
can
see
things
like
this
all
the
time.
B
You
know
if
you
have
a
clock
anywhere
in
your
house,
if
you
don't
actually
intervene
and
reset
the
time
it
won't
work
well,
at
the
same
time,
you
could
look
at
things
like
open
source
projects.
The
moment
that
a
community
around
an
open
source
project
collapses,
the
technology
itself
starts
to
deteriorate
right
and
we
all
kind
of
know
this
based
on
our
experiences.
B
So
I
think
that's
that's
interesting.
So
why
are
we?
Why
are
we
kind
of
talking
about
all
this
noise?
What
we're
trying
to
talk
about
when
we
talk
about
this
when
we're
talking
about
transformations
in
particular?
Is
this
idea
of
how
do
we
increase
this
joint
activity?
B
This
ability
to
act
and
a
real,
quick
step
back?
One
of
the
reactions
sometimes
to
complexity
is
that
you
can't
do
anything
about
it,
that
you
can't
that
it
that
you
can't.
You
can't
manage
it
in
a
way.
B
And
yes,
I
I
think
it's
true
that
you
can't
manage
complexity
away,
but
also
complexity
is
not
an
excuse
to
not
attempt
to
manage
the
systems
that
you're
building,
no
matter
how
complex
to
make
sure
that
they
meet
the
expectations
that
you
have
of
them
yeah
and
so
joint
activity
and
interpretability
in
klein's.
B
So
what
we're
trying
to
do
is
talk
about
what
happens
during
a
transformation
in
which
we
increase
the
rate
of
change.
How
does
this
kind
of
interpretability
and
joint
activity
either
contribute
or
become
detrimental
to
the
transform
transformation
of
an
organization,
the
change
inside
of
an
organization?
So
let's,
let's
first
talk
about
kind
of
anticipatory
awareness
or
kind
of
like
what
I?
B
What
I'm
pointing
at
here,
which
is
like
a
different
kind
of
planning
about
the
world
where,
if
we
are
assuming
our
systems,
are
complex,
we
kind
of
need
to
think
differently
about
how
we
might
work
with
them
into
the
future
over
time,
and
so
I'm
going
the
wrong
direction.
B
So
the
first
thing
is
to
say,
like
a
lot
of
planning,
a
lot
of
traditional
planning
engages
experts
to
make
predictions
and
those
predictions
are
chained
together
in
a
way
where
early
predictions
have
a
lot
of
impact
on
late
predictions,
but
also
the
experts
are
valued
for
their
predictions
and
their
their
ability
to
produce
predictable
results,
and
therefore
they
become
preoccupied
with
making
sure
that
the
conclusions
that
they
made
earlier
come
true,
come
hell
or
high
water
later,
and
this
results
in
the
suppression
of
new
information
and
potentials
that
that
individuals
and
teams
can
interact
with
in
order
to
produce
better
results
than
could
be
envisioned
in
the
beginning.
B
So
that's
one
of
those
kind
of
problems
of
future
knowledge
pieces
where
the
future
knowledge
is
ignored,
where
it
could
be
used
to
create
better
results.
B
So
what
we're
trying
to
talk
about
when
we
talk
about
kind
of
these
kind
of
anticipatory
awareness
pieces
is
changing
the
organization's
focus
from
what
will
happen
and
also
changing
the
organization's
kind
of
this
is
like
the
alignment
argument.
What
should
happen
both
both
of
these
by
the
way
again
are
not
bad,
but
they're,
not
they're,
they're,
not
sufficient
for
a
trans
transformation
right.
So
it's
not
it's
not
defining
the
exact
cause.
B
It's
not
defining
the
kind
of
alignment
that
that
allows
for
these
kind
of
changes
that
we're
looking
for
it's
exploring
what
might
happen
with
as
an
organization
and
the
reason
we
want
to
explore.
B
What
might
happen
is
because,
if
we
want
to
predict
what
the
next
note
in
the
transformation
song
is
it's
better
to
know
what
all
the
next
potential
notes
might
be,
as
opposed
to
focusing
just
on
a
particular
note
and
saying
that
will
be
the
next
note
that
gets
played,
and
so
what
we're
doing
is
we're
moving
from
these
linear
causal
chains
that
have
to
do
with
more
that
kind
of
architecture
as
load-bearing
analysis
and
we're
moving
forward
from
kind
of
linear
normative
expectations.
B
In
other
words,
chains
of
expectations
that
are
based
on
on
opinions
where
the
chain
becomes
long
enough,
that,
if
one
of
those
opinions
is
incorrect,
we
we
get
problems
to
abductive,
not
adductive,
abductive,
parallel
plausibilities,
and
what
I
mean
by
this
is
that
the
organization
is
kind
of
constantly
re-examining,
what's
what's
plausible,
from
where
they
are
in
the
transformation,
as
opposed
to
from
some
pre-uh
conceived
chain
of
events.
B
And
so
the
result
of
this
is
not
a
form
of
planning
that
produces
a
lowering
of
cognition
in
the
organization.
So
I
don't
know
if
you
guys
have
ever
had
this
experience,
but
as
a
manager
I
used
to
have
this
experience
all
the
time.
If
you
give
someone
a
plan,
there's
there
seems
to
be
a
lower
likelihood
that
people
will
question
the
plan
because
they
assume
someone
has
thought
the
plan
through,
so
they
just
kind
of
adopt
the
plan
as
their
own.
B
On
the
other
hand,
if
you
use
this
kind
of
anticipatory
awareness
version
of
it,
what
you're
trying
to
do
is
actually
say
to
the
organization
as
we
move
forward.
It's
equally
important
at
every
kind
of
step
that
we're
scanning
for
the
possibilities
of
what
could
happen,
and
that
has
to
do
with
the
expectation
piece.
So
gary
klein,
who
wrote
about
anticipatory
awareness
with
with
david
snowden,
one
of
the
things
they
were
trying
to
point
at
was
this
idea
that
there's
two
ways
to
use
mental
models.
B
So
a
mental
model
just
really
quickly
would
be
like
a
way
of
thinking
about
what
might
happen,
but
but
not
just
a
kind
of
a
linear
way
like
a
set
of
expectations
and
their
interrelationships
to
each
other
yeah,
and
so
you
might
have
a
mental
model
of,
for
instance,
for
gary
klein's
studies.
You
might
have
a
as
a
as
a
firefighter.
You
might
have
a
mental
model
of
how
the
house
burns
and
what
he
says
is
that
a
mental
model
basically
kind
of
links.
B
Previous
experiences
with
what's
happening
right
in
front
of
you
with
action
so
like
what
should
I
do
based
on
what's
happening,
based
on
what
I've
seen
in
the
past?
So
that's
that's
and
then
there's
one
second
link
and
that
second
link
has
to
do
with
what
would
the
result
of
this
action?
What
should
I
expect
based
on
the
result
of
this
action
right,
and
so
you
know
one
of
the
ways
that
we
can
differentiate.
B
High,
performing
teams
from
low
performing
teams,
teams
that
are
likely
to
be
successful
in
transformation
and
less
likely
successful
transformation
is
that
low
performing
teams
use
the
mental
models
to
lower
their
cognition.
They
they
they
use
the
first
part
of
the
model.
My
previous
experience,
the
current
context
and
what
action
should
I
take,
and
then
they
take
the
action
and
then
they
kind
of
rinse
and
repeat
high
performing
teams.
Add
that
second
piece,
which
is
they
say
based
on
my
previous
experience.
B
I
need
to
slow
down
and
be
more
careful
and
try
to
figure
out
what's
happening
because
something's
not
working
according
to
my
expectations,
and
that
becomes
an
important
idea
so
that
anticipatory
awareness
is
about
increasing
the
awareness
of
what
we
might
anticipate
happening,
as
opposed
to
saying
this
is
what
will
happen
and
the
result
of
those
things
is
a
an
engagement
in
the
complexity
of
the
system.
The
human
social
aspects
of
the
system
become
engaged
in
the
relation,
the
temporal
relationship
between
what
I
expect
to
happen
and
what
did
happen
yeah.
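That two-link loop (context leads to action; action leads to an expected result, and a surprise triggers slowing down) can be sketched roughly like this, with hypothetical function names and a firefighting-flavored example rather than Klein's actual formalism:

```python
# Toy sketch of Klein-style mental-model use. Low-performing teams run
# only the first link (context -> action) and repeat. High-performing
# teams add the second link (action -> expected result) and slow down
# when the observed result violates the expectation.

def choose_action(context, experience):
    """First link: previous experience plus current context -> action."""
    return experience.get(context, "investigate")

def expected_result(action, outcomes):
    """Second link: what do I expect this action to produce?"""
    return outcomes.get(action)

def step(context, observed, experience, outcomes):
    action = choose_action(context, experience)
    expectation = expected_result(action, outcomes)
    if expectation is not None and observed != expectation:
        # Expectation violated: surface the surprise instead of repeating.
        return action, "slow down and re-examine the situation"
    return action, "proceed"

experience = {"smoke in kitchen": "ventilate"}
outcomes = {"ventilate": "smoke clears"}
```

Calling `step("smoke in kitchen", "smoke thickens", experience, outcomes)` takes the surprise path: the point is not the lookup tables but the second comparison, which is exactly the piece low-performing teams skip.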
B
So
I
need
my
my.
The
slides
are
not
doing
what
I
expect
them
to
so
we
can.
We
can
think
through
this
really
quickly
using
this
weird
model,
which
is
called
the
futurecon
model,
and
it
basically
says
that,
like
there's
or
organizations,
can
kind
of
think
through
the
future,
as
what's
probably
happening,
what's
preferable
to
happen,
what's
plausible
to
happen,
what
can
we
think
of
that
might
happen?
B
It
is
significant,
what's
possible
is
almost
always
greater
than
what
we
can
conceptualize
and
when
we
talk
about
common
ground
in
a
minute
common
ground
is,
is
the
shared
sense
of
plausibility.
The
shared
sense
of
where
we
might
arrive
in
the
future
is
what's
plausible
and
multiple
divergent
kind
of
mental
models
and
ways
of
thinking
about
the
world
would
point
out
possibilities
to
some
that
wouldn't
be
plausible
to
others.
B
So
we
get
this
kind
of
span
of
potential
contentions
and
ways
of
behaving
and
acting
that
seem
kind
of
difficult
to
mediate
or
or
or
engage
with.
But
what
we
want
to
you
know
kind
of
talk
about
when
we
talk
about
creating
interpretability
and
creating
common
ground
is
the
way
in
which
we
want
to
shift
from.
What's
probable,
what
we
ex?
B
What
we
expect
should
could
happen
if
we
don't
intervene
or
do
anything
with
the
system
and
what
is
preferable
so
what
we
want,
how
we
want
to
nudge
the
system
in
the
future,
so
that
when
we
talk
about
changing
the
system
to
a
more
preferable
state,
the
all
the
participants
in
the
system
can
understand
where
the
system
might
likely
arrive.
B
So
it
becomes
more
predictable
klein
in
his
sorry
woods
in
his
paper
about
about
these
ideas,
basically
outlines
the
idea
of
two
people
driving
in
cars
and
the
way
in
which,
if
you
are
following
someone,
there
is
a
set
of
expectations
that
are
created
about
the
the
driver
in
front
where
the
driver
in
front
needs
to
to
make
sure
it's
clear
that
they
are
predictable
in
their
actions,
so
that
the
driver
behind
can
actually
follow
them.
B
They
can't
do
unpredictable
things
like
run
red
lights
or
do
sudden
u-turns
things
like
this,
that
that
breaks
the
the
joint
action
in
a
way
that
makes
it
impossible
for
the
the
person
to
follow.
So
the
leader
follower
relationship
then
has
to
do
with
not
just
the
follower
following
somehow
subordinating
themselves
to
the
direction
of
the
leader,
but
actually
the
leader,
subordinating
their
decisions
to
enable
the
follower
to
follow
them.
So
there's
this
kind
of
interaction
that's
happening
there.
I
think
it's
kind
of
interesting
anyway.
B
So
then
we
have
this
idea
of
kind
of
common
ground
and-
and
we
want
to
explore
this-
so
we
talk
about
anticipatory
awareness,
it's
kind
of
like
increasing
the
scanning
and
maybe
the
diversity
of
scanning
inside
the
organization.
Then
we
have
kind
of
common
ground.
So
what
do
we
mean
by
common
ground?
So
the
first
thing
to
talk
about
when
we
talk
about
chronic
common
ground
is
to
point
back
to
those
kind
of
satisficing
comments.
We
talked
about
at
the
beginning
and
bounded
cognition,
so
we
can't
think
everything
about
anything.
B
We can only pay attention to a certain amount of information in our environment, and the result of this is that the mental models we create are basically about focusing our attention on particular parts of the environment, and on the relationship between those parts of the environment and our expectations about what's happening in it. This means we're intentionally putting blinders on ourselves to focus on particular parts of the environment in order to make sense of the world. So, when we talk a little bit about common ground and anticipatory awareness,
B
One of the ideas is that everybody, all the actors in your system, have these blinders on, but they could potentially be pointing their flashlights, in the visual metaphor that I just used, in slightly different directions, therefore increasing the scanning of the future and of the possibilities for the future. So you get a diversity of thinking about what might happen, and the anticipatory awareness increases. What we can say, then, is that we have a requisite variety: a minimum amount of variety of
B
mental models required in order to stay stable in a system that's in this kind of continuous-partial-failure state. If we homogenize, or over-homogenize, our mental models, so that we try to get everybody to think exactly the same way about what's happening, then we risk missing stimuli in the environment that could indicate that the expectations of the system aren't being met. So this requisite variety is a way of increasing the scanning. But on the other hand, we get this idea of requisite coherence.
B
The idea of requisite coherence is just to say: if everybody thinks completely differently about what's happening, and everybody has a completely different relationship between the stimuli, the action, and the expectation, then we have no interpredictability. And the result of a complete lack of interpredictability is that we can't trust each other, we can't follow each other, we can't do coordinated action, and therefore we get this compression of what kind of planning we can do.
B
Things become very real-time, and it becomes difficult to follow each other, to understand each other, and to do planning of any form. So we get this idea where we have common ground, which is a shared set of practices, vocabulary, performances, resources, etc. That shared set of things has these two pressures on it.
B
One is requisite variety, which is trying to minimize the common ground and increase the cognitive differences between people, and requisite coherence is trying to increase the common ground to the point at which, when coordination is required, the coordination is accessible via these shared practices, vocabularies, etc.
B
And so this requires a careful dance between advocacy and inquiry, where we want to be advocating for our particular mental models and the observations that we're making in our models, and we want to be inquiring and open to new models from other people, so that we can then converge on these ideas of proposition and experimentation: being able to challenge each other about whether or not the system is performing according to expectation, and therefore negotiating a movement into the future.
B
All right, I need to go faster; I've only got a couple left. So then we get this idea of adaptive capacity. What is adaptive capacity, and why do we want to talk about it? I will almost certainly get in trouble for this, but it's a simple metaphor: adaptive capacity is kind of like a bank account.
B
There's some amount of adaptivity that you have in your bank account, and at some point you want to change something: either change is forced on you, or you want to create change through some sort of transformational activity. If you spend all of your adaptive capacity, if all your adaptivity credits go to zero, then the organization kind of stops being able to change; it locks up.
B
One of the ways to think about it is that when your adaptive capacity is exceeded, when the organization's adaptive capacity is exceeded, people will become afraid. They won't know what to do, they won't know how to act, and as a result they'll stop acting. So you can think of this in relationship to that requisite variety, where increasing requisite variety,
B
increasing the number of mental models, increases the ability of the organization to come up with novel interventions in order to attempt adaptive behaviors. And if you narrow the requisite variety too much, you basically narrow the ability of the organization to adapt. This is particularly critical in relationship to this continuous-partial-failure idea, because adaptive capacity in particular isn't something that's put in reserve.
B
It's something that's constantly exercised, and it's constantly exercised by people trying to skillfully cope with the kind of micro-failures that are constantly happening in your system. That's not good or bad, and it cannot be fully eliminated from the system: all sociotechnical systems are always in continuous partial failure. So there's this question, then, about what we do about this adaptive capacity, and how we could think about it in the form of a transformation. And there are two ways
B
I like to think about increasing adaptive capacity. There are others, and there are certainly people who will comment on this, but there are two ways I'd like to propose right now to increase the available adaptive capacity inside of an organization. The first one is to recognize operational load and toil, by which I mean: there is a minimal amount of continuous partial failure in any system, having to do with change
B
that is endemic to the system that you're working in, but there are also a lot of other things that happen regularly
B
that could be removed, things you're deploying skillful coping to deal with right now. A lot of that could be described as toil, and a lot of the other pieces of it could be described as high operational load, or systems that have unnecessary redundancy (especially if you take a more contemporary view of how to create a resilient system, in which case redundancy is more of a liability than it used to be seen as), but also the idea that the system has been poorly designed for operation.
B
So you think about issues like poor observability, in which case it's hard to see when the system is performing poorly: the expectations of the system are not encoded in a way that becomes readily available in operation or in use. So that's one of the ways to increase adaptive capacity.
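As a minimal sketch of the point about expectations being "encoded" so they are readily available in operation, here is one way to make an expectation a first-class, machine-checkable object rather than something that lives only in people's heads. All names and numbers here are invented for illustration, not from the talk or any specific tool.

```python
# Hypothetical sketch: an expectation about the system encoded explicitly,
# so an operator or dashboard can see both the expectation and whether the
# running system currently meets it. Names and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class Expectation:
    name: str
    threshold_ms: float  # the agreed performance expectation

    def check(self, observed_ms: float) -> bool:
        """Return True when an observed latency meets the expectation."""
        return observed_ms <= self.threshold_ms

# The expectation is now visible in the system itself, not just in a wiki.
latency_expectation = Expectation(name="checkout latency", threshold_ms=3.0)

print(latency_expectation.check(2.4))  # within the expectation
print(latency_expectation.check(5.1))  # violates the expectation
```

The design point is only that the expectation is inspectable alongside the observation; a real system would attach such checks to telemetry rather than to hard-coded numbers.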
B
If you want to think of adaptive capacity as something that you would want to be able to deploy, either because of an emergency or emergent change, or because of the desire for transformational change, then eliminating these things doesn't increase the total amount of adaptive capacity in the organization, but it does increase the amount of available adaptive capacity that can be deployed elsewhere, towards those ideas.
B
The second one is just to say that actively understanding how to create, modify, and recreate common ground becomes a form of adaptive capacity itself, because common ground, as we described it, links previous experience, current context, and expectations.
B
It links those things, and so when environments change and those linkages change, then the common ground itself needs to be modified. So another way of increasing adaptive capacity inside of an organization is increasing the flexibility and the ability to modify common ground, in a way that people can constantly renew the common ground and make it more valuable really quickly.
B
Most of this talk has been based in a kind of linguistic or conceptual version of common ground, but I think it's important to point out that common ground is also encoded, or held, in material systems. In other words, the technical systems themselves hold forms of common ground, in the way that I described, for instance, low-observability systems and high-observability systems.
B
One of the things a high-observability system does is encode inside of itself the reproduction of a set of common ground, so that you can see what the expectations are. So these are not always just conceptual, mental-model pieces; sometimes they are shared common resources, and therefore the organization's ability to negotiate and adapt these shared common resources becomes a critical part of creating ongoing common ground. So, I have one last rant and then I will stop.
B
Why do we care about these two ideas, or these three ideas? Well, one of the things we're trying to move towards inside of contemporary systems is resilience, and resilience is the ability of a system to adjust prior to, during, or following disruptions or changes; or, frankly, resilience is the ability to undergo transformation. Organizations that have low resilience, who try to transform,
B
they lock up, they get brittle, and they don't transform well because of low levels of resilience in the system. And the thing we've been trying to link together, when we talk about socio-technical systems, is to say that resilience is enabled by human coping, by skillful coping, and by the availability of that resource, that capability, to the system. Technical systems can only ever be reliable within performance tolerances; they can't be resilient.
B
The humans, the human part of the system, are what creates this resilience. And so, when we think about how this plays out: events happen in the world, right? Market events, COVID, all sorts of things like this happen, and to the extent that those events are reasonably predictable...
B
As the event horizon gets closer and closer to us, the number of options goes down, primarily because options take time to exercise, and the cost of those options often goes up: shorter, more bursty interventions tend to cost more money. The result of that is that we can't actually modify the system to avoid negative reactions to the event, and we end up in the event. And so we move, then, from a kind of anticipatory awareness to the need for adaptive capacity inside the event. What determines the performance of the system?
B
The system's ability to interact with, or adapt to, change inside of an event has to do with the organization's adaptive capacity. Organizations that have high adaptive capacity can absorb larger, more disruptive events than organizations that have low adaptive capacity. So these are the relationships between the two, as opposed to the last one, which is stabilization: if we survive the event through adaptive capacity,
B
how do we do retrospectives and post-incident reviews in order to improve future performance? So, when we think about this, why do we care? Well, first of all, we know that the way in which organizations tend to work is that they tend to do this kind of disruptive behavior, where the dominant market players tend to overperform the market, because they get into feature races with each other, and by overperforming the market,
B
the conflict occurs when the dominant players' markets are significantly encroached on by the disruptive players, and therefore we get this friction in a marketplace. So what we can say is that we've got an incumbent that goes through these diffusion curves, right, the way crossing the chasm works, where technology just kind of distributes itself from early adopters to late adopters, etc. The incumbent is reaching the end of its life, which is where the disrupter comes in, because of this market encroachment. And so what we can see here is that the reason this happens is that, as the system accelerates up the diffusion curve, the organizations become, we'll say, fat and lazy.
B
But what we mean by that, in particular, is that they become increasingly insensitive to ecological pressures, because they're riding a wave up the system. And as they become increasingly less sensitive to ecological pressures, they tend to shed the diversity that they needed when they were trying to understand the environment, because they think that they can create efficiencies and maximize profits. The result of that is that, just as they need that critical diversity inside the organization, when the disruptive event happens, they have less diversity in their system.
B
And then you get this disruptor, who has a high level of diversity in their system because they are still trying to understand how the market works and how the ecology works. So you have a competition between a dominant player who's insensitive to market change and a new player who's highly sensitive to market change.
B
And this creates a potential moment in time where, if the dominant player can't recreate the diversity, common ground, and adaptive capacity required to interact with the market change, they get what we call confidence-induced failure, and they begin declining, and the decline is indicated by the adoption of the new vendor. And just really quickly, before I stop:
B
this plays out not once; this plays out, in Wardley's terms, at multiple stages in the life cycle of a technology. These things happen over and over again. These diffusion curves don't occur once in the adoption of a technology but multiple times, and therefore these moments of high complexity, caused by the interaction of a disruptor and a dominant player, happen multiple times in the adoption of a technology. And therefore the adaptive capacity that we're looking for is important to survive
B
things like COVID, and also to survive competitive markets. So the development of, and the focus on creating, adaptive capacity, common ground, and anticipatory awareness in sociotechnical systems is a way in which you increase the survivability of your organization through radical change, either by choice (aka transformation) or by force (aka transformation inflicted by something like COVID).
A
That was a super awesome talk, and there are so many questions. If you have time, we can go a little bit longer to have a conversation, and if any of the folks who are in the chat want to pop in and ask questions, please do so.
A
He wrote an essay, or raised an alarm, about society having a mental barrier at looking at the year 2000 as the limit of the future, and so he went about and created this foundation to build a 10,000-year clock, so that people would start thinking about things in much longer terms. And listening to you do all the clock stuff and everything, I'm reminded of all these conversations I had with Danny as we walked about South by Southwest and were stunned and amazed at all the diversity there.
A
It was a lot of fun, but I also think there are a lot of people who would really get this talk and the need for this anticipatory-thinking point of view. There were a number of things that resonated for me in what you were saying, and a lot of it felt like playing three-dimensional chess.
A
You know, in technology, when we're planning projects and trying to avoid feature creep, and all of the things that we do when we become entrenched in a market space, as leaders we're trying to figure out how to create the diversity that drives innovation into our projects. Sometimes, like Red Hat,
A
we've acquired companies like 3scale and CoreOS, and things like that, that have really helped us continue to innovate and drive that diversity into our culture and into our thinking about the technology.
A
But it's an interesting concept, trying to keep that common ground open. And I'm trying to think of all the words; you had so many words in there: the top level coming in and the bottom level coming in, the bounded cognition, the blinders that we wear.
A
There was just a lot about that. And as Jeff has mentioned in the chat, your last point is quite dire, given the motivations of companies to lean in towards market control for sustained revenue, because we all know we want that sustained revenue. And as Jeff is saying, diversity is pruned since it's inefficient to maintain and inconvenient to manage. I kind of disagree with that: it's not inconvenient to manage. You just have to learn, and management has to learn, behaviors that allow us to have common ground and move things, so that we can have the resiliency that we need and the diversity that we need to take these on.
A
So there's quite a lot going on here, and I'm just trying to see if Jeff wants to jump in here; I'll unmute you if you want to follow up on that at all, and anybody else who's chatting here. But boy, a stunning tour de force again, and a lot of things that we need to think about as a company, since socioeconomic resiliency is part of this. And Jeff is arguing in the chat that recent history does not demonstrate that management learns.
A
And I wonder if you can talk a little bit about the idea of acquiring those startups as the incumbent. You see Apple and Google and Red Hat and IBM; IBM acquires Red Hat. Is that actually helpful? How does that move things forward, from your point of view?
B
Yeah, I mean, without getting into too much trouble, I think the Red Hat and IBM story is an interesting one, in that IBM has been,
B
you know, very careful to honor Red Hat's culture, because they want to learn from Red Hat. So they're giving time and space for the development of a set of common ground between the two companies, and they're not doing the death-embrace version, which is something that many companies are known for, right? And I think it's interesting to look at something like the IBM acquisition of Red Hat as something where there was a recognition that they weren't just buying technology.
B
They were buying a way of working, a culture, a way of existing, and IBM was interested in having access to those insights for themselves, but also thought that they could offer those insights to their customers as a new way of working. And, you know, I think Red Hat has a great story around profitability, and the way that they have been able to have this culture and also produce above-average results quarterly,
B
for, you know, whatever it is, 22 quarters in a row, or something like that. So I think there's that, and then there's this careful question: what are you buying in a merger or acquisition? What is it that you're getting? The way that I usually talk about that is that there are two different kinds of acquisition mindsets.
B
One is that you own a book of business, or you own a portfolio, and one is that you are trying to create a system, a whole system. Organizations that tend to think of themselves as portfolio owners will often do acquisitions because they see an opportunity to eliminate the social aspects of the system in order to create savings, by normalizing the interactions and therefore reducing the cost and making the system behave better, but by eliminating the culture: the let's-shock-the-pool-with-chlorine version. And then on the other side you get a lot of organizations,
B
nowadays, like Google, and the amount of acqui-hiring that they do, where they're actually saying: I don't actually care anything about the technology you have. What I want is your well-functioning team to come in, because I have projects that I think are more valuable for your well-functioning team to work on than the project you're currently working on. So let me hire you, let me hire your entire team, because we value that social network you've built more than the technology you've built. And so I think there's this...
B
These days, organizations are getting it from social capital; they're getting it from relationships and the way in which humans are relating to each other. That is the primary driver of value in a lot of organizations, and it's not the same thing as saying "hire a bunch of experts", because those are individuals. I literally mean the way in which you create a set of interactions between experts: that is the most valuable thing for an organization to have access to right now, and that's why social capital is so important right now.
A
Yeah, I think that's one of the things, especially (and we can go a little bit longer here) in what I see in the open source communities that I work in, and live and breathe in, which are all technology-based, and all companies collaborating with each other.
A
It is those relationships, and the social capital that we have and that we give to the projects we work on and the ecosystems we work in. Understanding those relationships and nurturing them is probably one of the most valuable things that you can do as a commercial organization participating in that. And a lot of the education process on those...
A
The acquisitions, not the acqui-hires: moving those proprietary products into being open source projects, as we are wont to do at Red Hat. There are a couple of other things that came in in the chat. Tim is saying, to the point of using mental models in different ways: you mentioned the high-performing orgs; add in expectations of results, i.e., a mismatch between orgs and expectations for those results within a complex system. Can you talk a bit about the risks of over-instrumenting, or of attempting to measure those?
B
So I think, here, one of the critical things about most metrics, KPIs, and OKRs is that they tend to be vertically aligned.
B
They tend to go up and down, and the problem with that, I think, ends up being that, as you aggregate the metrics from multiple teams together, if they are not managing the same complex system, you are basically averaging information about teams that are doing completely different things. Therefore you're actually creating mud: you're actually not helping the organization see what's happening, because you're conflating the performance of multiple different systems together, in a way that the average doesn't indicate the average performance.
B
It indicates some sort of middle point that doesn't really exist, yeah.
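A minimal numeric illustration of that non-existent middle point, with invented numbers: two teams whose cycle times cluster at very different values, whose combined average describes neither team.

```python
# Two teams working on completely different systems (numbers are invented):
# averaging their metrics together produces a value no team actually has.

team_a = [2, 2, 3, 2, 3]        # days per change: a small web service
team_b = [20, 22, 19, 21, 23]   # days per change: a regulated batch system

combined = team_a + team_b
average = sum(combined) / len(combined)
print(average)  # 11.7, a "middle point" that describes neither team
```

Every individual observation is far from 11.7, which is exactly the "mud" described above: the aggregate hides rather than reveals.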
So there's that. Then there's a difference, I think, when you are in management or in the executive: there's a certain number of metrics that you want in place, and I call them hunting metrics. What I mean by a hunting metric is that it only tells you where to look for problems.
B
It doesn't tell you what the problem is. So it's basically like having a way for the system to raise its hand and say, there's something wrong over here: not attempting to diagnose what's wrong, but just pointing at what's wrong. And that requires doing things like having candlestick metrics, and things like this,
B
that show you the variation in performance across teams, as opposed to the average performance across teams, because variation in performance can indicate skewing and drifting away from things, making it harder for that common ground to exist. You want a certain amount of variation, but if you get too much variation in performance, it could be an indication that people are having problems staying together.
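A small sketch of why variation, not just the mean, is the "hunting" signal: two quarters with identical means but very different spreads. The data are invented for illustration.

```python
# Same mean, very different spread: only the variation reveals that teams
# in the second data set are drifting apart. Numbers are illustrative.

import statistics

quarter_1 = [10, 10, 10, 10]   # deploys per team: tightly clustered
quarter_2 = [1, 4, 16, 19]     # identical mean, but teams diverge widely

for label, data in [("q1", quarter_1), ("q2", quarter_2)]:
    print(label, statistics.mean(data), statistics.pstdev(data))
```

An average-only dashboard shows the two quarters as identical; a view of the spread (a candlestick-style chart, for instance) raises its hand for the second one.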
B
The second thing I'd say is the difference, then, between that type of measurement, which is the OKR and KPI negotiation up and down the hierarchy, versus something like SLOs and SLIs, which are horizontal negotiations about expectations between teams: metrics where we agree on the minimum performance criteria. That's the stuff I was talking about when I talked about load-bearing, right? Like: in order to perform well, our system has to respond within three milliseconds. Can we agree that that's what the expectation is?
B
Can we agree that there is a point where, if we are approaching three-millisecond response times, we will get together and try to prevent exceeding that measure? These are metrics that are used to increase the observability of the system, to increase the ability to make predictions and to correlate results to those predictions, and they're oriented towards the work surface, not towards the management system. So I think that's the difficulty.
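The three-millisecond agreement described above can be sketched in a few lines. The warning ratio and function names here are invented for illustration; they are not from the talk or from any particular SLO tooling.

```python
# Rough sketch of a horizontal SLO-style agreement: an objective (3 ms, per
# the example above) plus an earlier, agreed warning point where the teams
# get together before the objective is breached. Thresholds are illustrative.

SLO_MS = 3.0          # the agreed expectation between teams
WARNING_RATIO = 0.8   # agreed point to convene, before the breach

def classify(observed_ms: float) -> str:
    if observed_ms > SLO_MS:
        return "breached"
    if observed_ms >= WARNING_RATIO * SLO_MS:
        return "approaching"  # time to get together and intervene
    return "ok"

print(classify(1.0))  # ok
print(classify(2.6))  # approaching
print(classify(3.5))  # breached
```

The key design point is the "approaching" band: the agreement encodes not just the expectation but the moment at which coordinated action is triggered.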
B
The easiest version of the problem is story points, where you take something that's designed to help a team, or a set of teams, make sense of something, and then you try to generalize it across a broad set of teams, and then average it as a management system, and then you just get nonsense. Anybody who's had a manager try to say, the team down the hall is doing 20 story points a day,
B
why are you only doing five? And the teams respond: you just totally don't understand the problem. That type of metricing is exactly the confusion that I think happens in a lot of organizations. So I do think there are specific metrics that are appropriate for each level of the organization, and I'd be glad to come back and talk about that at some point; I think it's an interesting discussion.
A
Well, I think we do have to wrap up a little bit here, though I could probably talk all day today. I think, with the whole concept, what you've done today is give us a common ground and a vocabulary to talk about this, and I really appreciate that. It goes back to increasing the scanning; I think that was the phrase that you used. It's like the diversity of these conversations that we have on Fridays.
A
The ideas that you're presenting really help us understand the need for these systems to grow, and I think they give us some of the requisite coherence to have the conversations, and build up our ability to trust each other in these conversations, now that we have the common language. So I'm going to be watching this again later tonight, because I've got to edit it and get it into some format that I can share, because I definitely will be sharing this internally at Red Hat and everywhere else in the universe.
A
So, as always, Jabe, thank you so much for taking the time, and for the thoughtful conversation about this. This applies at so many levels of so many different kinds of organizations, whether they're technical, political, or social, and at so many levels: inside of our own organization, Red Hat and IBM, as well as in the open source communities that we all live and work in together and collaborate in. So again, thank you very much for taking the time. Thank you for having me.