From YouTube: SIG Interoperability Meeting - Mar 3, 2022
Description
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
A
This may, in fact, be quite a quiet meeting.
A
I've been incredibly busy with the move, and I don't have a speaker coming in, so we'll go over our regular agenda items. One of the things, while we wait for other people to come in, that I'd like to ask you about is cdCon, which is the 7th and 8th of June in Austin, Texas.
A
I think, given world events and the ongoing pandemic, it might very well be interesting for us to do a virtual track. This is not a track that would be shown at cdCon, but would sort of be run in parallel to it, and given that, this is becoming something that I'm increasingly convinced we should do.
A
A separate program of talks, and I don't think it would be as extensive as the in-person conference; it would just sort of be a series of talks. And I don't know if it'd be run at the exact same time; that makes it very hard for attendees at the actual conference. So maybe it would be run the following day.
A
Certainly, the keynotes will be live streamed, from my understanding. The other talks will be recorded through the Hopin platform, but I don't think they will be live. Okay.
A
You know, separate from the speakers and sessions that will be at the regular, at the in-person event: is it possible to record those sessions and play those for the virtual?
A
At this point, we're planning on recording them. That's really the LF events team, but they should be recorded. I don't think they'll be live streamed, except possibly the keynotes.
B
Would they be part of a virtual event, though?
A
Right, right, right. So the live talks will all be recorded, and usually there's a delay of a number of days before they go up on YouTube. So say the virtual track, we did it the day after the conference and ran that; eventually they would all end up on YouTube together under the cdCon heading, you know, the channel for the CDF.
B
Right, I think that makes sense, just so that the virtual people have the opportunity to see the same talks that were done in person as well.
A
That's going fantastically. We have 200 submissions, and in the reviews there are so many good talks, so that's actually been quite thrilling. It will be finalized in the next few weeks and announced; the schedule will be finalized and announced. So look for that mid-month. Yeah, that should be really good. Okay, I will share the notes for today, although I'm really glad that.
A
Okay, so today is a bit of a quiet meeting. There were no action review items, so that's good. cdCon updates: the CFP is closed, but we probably will be opening another for the virtual track. I'll have more information on that shortly, but it is something that, if, for whatever reason, you knew that you would not be able to attend cdCon in person and therefore did not submit a talk.
A
This is an opportunity for you to submit a talk to share with the CDF community, and that would be shared on a virtual track. So I think that's quite good. Oh, our best news of the day: I'm so, so happy that Melissa McKay has been nominated by Fatih to join as chair of the Interoperability SIG. Yes, very good news. So welcome, Melissa.
B
Thank you very much. You know, I don't know if a lot of you know me at all; I could give you a little intro, yes, quickly. I am based in Denver, and I'm currently a developer advocate with JFrog, and prior to that.
A
Welcome. We really appreciate your participation and joining us as chair, so this is a really good moment.
A
Good. And now we can discuss the PRs that we have for steps, stages, and quality gates.
C
I just added that topic because, you know, we have this new SIG Software Supply Chain, which was approved mid-February, and we will probably have the very first meeting of the SIG next Thursday, same time as this meeting. So we will be rotating the Interop and Supply Chain SIGs. And as mentioned in the new SIG's charter, or readme file, there are overlaps or relations to different things or projects, like interoperability, events, best practices, and also other projects, and so on.
C
I've been reading about this CNCF Security TAG software supply chain best practices, the OpenSSF Secure Software Factory, and so on, and, as one may expect, there are overlaps as well.
C
But looking at these PRs and the quality gates discussion from Supply Chain: would it make sense, because our discussions are mostly around typical steps and stages and quality gates, that perhaps we touch some of these things under these PRs from a software supply chain perspective? But yeah, how can we extend our discussions to cover these aspects as well?
C
Both, or perhaps even more. Again, that's what we did with Interop as well: we started with vocabulary, or the glossary, or that type of stuff. We are probably doing similar things everywhere. Like, when we bring in new FOSS, we run it through some scanning-type tools, whether FOSSA or Black Duck or whatever, and those are distinct stages in our FOSS consumption pipeline, or whatever; I don't know exactly what we call them.
C
Let me pass the mic there.
G
So I'm going to give you all the names of all of our tests, right, and I see other people have theirs too. In terms of naming, what we were doing, since we're building Tekton tasks, is trying to use the wording from the interoperability pipeline steps as prefixes on our Tekton tasks, so that we know exactly what they're doing. And then what we're going to do with that in the end is, you know, we want a system that says: okay, as these tests execute, we're going to save the output to an evidence repository, and the last step is, let's check the evidence repository and make sure all these things that we expect to happen for our product builds actually happened, and if the steps happened, then we're going to approve, right.
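The final evidence-repository check described here can be sketched roughly as follows. This is an illustrative sketch only, not the actual Tekton or IBM implementation; the step names and the shape of the evidence store are assumptions made up for the example.

```python
# Sketch: a last pipeline step that approves a product build only if
# every expected step left passing evidence in the evidence repository.
# Step names and evidence layout are illustrative assumptions.

REQUIRED_STEPS = {"build", "unit-test", "vulnerability-scan", "image-sign"}

def approve_build(evidence):
    """Approve only if every required step recorded passing evidence."""
    missing = REQUIRED_STEPS - evidence.keys()
    if missing:
        print(f"rejected: no evidence for {sorted(missing)}")
        return False
    failed = [s for s in REQUIRED_STEPS if evidence[s].get("status") != "passed"]
    if failed:
        print(f"rejected: failing evidence for {sorted(failed)}")
        return False
    return True

# Evidence collected while the tasks executed.
evidence_repo = {
    "build": {"status": "passed"},
    "unit-test": {"status": "passed"},
    "vulnerability-scan": {"status": "passed"},
    "image-sign": {"status": "passed"},
}
print(approve_build(evidence_repo))  # True: all expected evidence present
```

The point of the pattern is that the individual build pipelines stay loosely coupled; only this one gate needs to know the full list of required steps.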
G
Implement the same maturity; they're trying to meet the same maturity goals right there. I just threw it up there in the issue.
E
From a developer's perspective, in our world, it's badging: like, you get a badge if you are Sarbanes-Oxley compliant, and to get the passing grade you have to do, like, six things, or whatever the list of items is. So we might call that, like, a pipeline certification, something along those lines. I know Amazon had a similar thing, where it wasn't SOX related, but it was.
E
Yeah, apparently something like that for eBay too: some understanding of what it means to be, like, good. So it's trying to develop levels, like a maturity model, for these kinds of things.
E
Yeah, yeah, that was better than the one that we've come up with by a long shot, but the thing that we're doing is less about software supply chain and more, I think, like bronze: the bronze level is automated to staging, silver is automated to production, and gold is experimentation culture, and there's a handful of, like, technical things you have to do to hit those bars, but this is not.
G
Yeah, we have maturity levels as well. It's not quite the same idea, but, for example, I work in a managed services group, right. So if you even want some customers to touch your managed service, there are a few things you have to do: you have to sign your images and, you know, run known-vulnerability checks and a couple of other things; five things, I think. So that's like level one, and if you want to take money for this service, there are actually some other pipeline checks.
G
You
have
to
do
so
for
us,
it's
more
like
a
software
life
cycle,
maturity
thing
but
yeah.
I
think
they're
trying
to
build
pipelines
that
will
sort
of
grade
people
and
how
many
of
these
steps
they
complete.
G
Interestingly, what we're also finding is that often people actually want to break it up. They don't want one huge pipeline; they want at least two. They want, like, your build/test pipeline, and then they want your, like, certification pipeline, which is like security provenance. They actually want that separate, because then what you can do is give teams a lot of control over their build/test pipeline and their CI process, but say: thou shalt go through the security process. Like, make that more rigorous and standardized, right.
C
Yeah, and it's like these things are not expected to run in sequential order. I suppose they could. They could all be treated or considered like test cases. You know, like you have five different things running: linting, unit tests, like security or license checks, or whatever, and if one fails, then you can't pass, because something failed there. So it's not that they all wait for each other until you see the light at the end of the tunnel after a day.
G
Yes, in fact, I know IBM, when they implement this, what they've actually done is they made the source of truth, like, an internal repo. At IBM it's literally, like, an internal GitHub repo that's collecting all the evidence from all these disparate build pipelines, and at the end there's one step that says: tell me if you did somehow all the things, right. That's kind of what they're doing, rather than it literally being one pipeline.
E
That
we're
solving
on
that
the
this
like
wanting
control
over
your
pipeline,
but
also
mandating
steps,
is
this
mechanism
of
badging
like.
If
you
didn't
do
your
starbase
oxley
check,
I
mean
you
can
configure
your
pipeline
to
not
require
it.
You
can't
ship
that
to
prod,
because
you
didn't
get
the
magic
badge
that
says
that
two
people
reviewed
your
code
and
so
it's
a
trust
but
verify
sort
of
deal.
C
Like
yes,
tecon
chain
brings
some
of
those
capabilities,
but
how
like
how
gen
general
that
approaches
like
we
observe
our
you
know
cloud
native
applications
when
they
run
in
production,
but
where
are
we
observing
the
entire
pipeline
or
are
pipelines?
Are
they
observable
like
this?
Taking
this
type
of
approach
to
actual
pipeline
and
collecting
metadata
or
getting
whatever
is
important
and
again
looking
at
that
thing,
and
then
enforcing
policies
based
on
what
we
observe,
and
that
is
kind
of
interesting
approach.
Perhaps
I
know.
E
The
cool
thing
I've
seen
on
the
observability
front
is
like
using
trace
stores
in
kind
of
like
the
open,
telemetry
tracing
style,
where
you
get
to
see
like
a
like.
A
graph
of
like
this
step
took
this
long
and
it
did
these
three
sub
steps,
and
then
this
step
took
that
long
and
you
could
like
see
a
visual
representation
of
like
what
what
the
build
process
looked
like
and
what
your
release
process
looks
like
and
where
the
delays
are
and
things
like
that
pretty
cool.
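The trace-store idea can be sketched in plain Python. This is a simplified stand-in for illustration, not the real OpenTelemetry SDK; the span model and step names are assumptions made up for the example.

```python
import time
from dataclasses import dataclass, field

# Sketch: record pipeline steps as nested, timed spans so you can see
# which step took how long, in the spirit of OpenTelemetry tracing.

@dataclass
class Span:
    name: str
    start: float = 0.0
    end: float = 0.0
    children: list = field(default_factory=list)

    @property
    def duration(self):
        return self.end - self.start

def run_step(name, work, parent=None):
    """Time one pipeline step and attach its span to the parent span."""
    span = Span(name, start=time.monotonic())
    work(span)
    span.end = time.monotonic()
    if parent is not None:
        parent.children.append(span)
    return span

def render(span, indent=0):
    """Print the span tree: step name and duration, indented by depth."""
    print("  " * indent + f"{span.name}: {span.duration * 1000:.1f} ms")
    for child in span.children:
        render(child, indent + 1)

# A build with two sub-steps; the sleeps stand in for real work.
root = run_step("build", lambda s: (
    run_step("compile", lambda _: time.sleep(0.01), parent=s),
    run_step("package", lambda _: time.sleep(0.01), parent=s),
))
render(root)
```

Feeding spans like these into a trace store is what gives the visual graph of where the delays in the build and release process are.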
C
Yeah, and also, again, I am still thinking about these things. Like, let's say we are building a container image, rebuilding, let's say, the same versions of everything, and we are generating an SBOM for that artifact. This could be considered, I know it's not a metric, but it's some metadata, and then that observer or whatever could, you know, get that SBOM and compare that SBOM with the one from the previous build of the same thing?
C
And
if
there
is
any
difference
a
police
could
kick
in
and
say
you
are
supposed
to
be
building
same
thing,
but
what
we
have
now
is
different
than
what
we
had
during
previous
and
something
is
wrong
here,
like
you
just
catch
that
right
after
that
happens,
and
you
just
don't
let
that
thing
to
continue.
I
don't
know
if
it
just
could
be
called
observable
it,
but
in
a
sense
like
you
are
collecting
stuff.
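The SBOM-comparison policy described here can be sketched like this. The SBOM shape (a flat mapping of component name to version) is a deliberate simplification for the example; real SBOM formats such as SPDX or CycloneDX carry far more structure.

```python
# Sketch: compare the SBOM of a rebuild against the previous build of
# the same artifact, and fail the pipeline if the components drifted.
# The flat name -> version SBOM shape is an illustrative assumption.

def sbom_diff(previous, current):
    """Return components added, removed, or version-changed between builds."""
    return {
        "added": sorted(current.keys() - previous.keys()),
        "removed": sorted(previous.keys() - current.keys()),
        "changed": sorted(
            name for name in previous.keys() & current.keys()
            if previous[name] != current[name]
        ),
    }

def enforce_same_build(previous, current):
    """Policy: a rebuild of pinned versions must produce an identical SBOM."""
    diff = sbom_diff(previous, current)
    if any(diff.values()):
        print(f"policy violation, SBOM drifted: {diff}")
        return False
    return True

prev = {"openssl": "3.0.8", "zlib": "1.2.13"}
curr = {"openssl": "3.0.9", "zlib": "1.2.13"}
print(enforce_same_build(prev, curr))  # False: openssl version drifted
```

A policy engine watching the pipeline could run this check as soon as the new SBOM lands, stopping the run before the drifted artifact goes anywhere.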
C
Yeah, if the event carries such metadata, like an SBOM, I don't know, Matthias, I see Matias there. Or is it just saying that, oh, something happened here, maybe you should look at it, and then another step kicks in and, you know, checks what happened? Like, I don't know, having atomic steps rather than dumping everything on events. Yeah, sorry.
H
So
retracing
so
seed
events,
the
idea
there
is
that
you
basically
for
for
each
step
in
the
pipeline,
you
send
events
to
notify
what
is
happening.
H
So
I
guess,
by
listening
to
the
events,
if
you
want
to
have
some
kind
of
observer,
one
could
listen
to
events
and
see
that
okay
in
a
pipeline
you're
sending
all
these
events.
Okay,
then
you're
you're
doing
what
you
comply
to
do
and
if
we're
missing
events
or
we're
noticing
that
you're
not
sending
events
we're
expecting,
then
you
can
have
some
alarm
or
something
standing
on.
So
that
way
you
can
make
a
pipeline
reservable
by
looking
at
events
sent
in
the
pipeline.
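The observer described here, one that audits the events a pipeline emitted against the events it committed to emit, can be sketched as below. The event type names are made up for illustration; they are not the official CDEvents vocabulary.

```python
# Sketch: check whether a pipeline run emitted every event it committed
# to emit, and raise an alarm for the ones that never arrived.
# The event type names are illustrative, not the CDEvents spec vocabulary.

EXPECTED_EVENTS = [
    "pipeline.started",
    "build.finished",
    "test.finished",
    "pipeline.finished",
]

def audit_events(received):
    """Return the expected event types that were never received."""
    seen = set(received)
    return [event for event in EXPECTED_EVENTS if event not in seen]

received = ["pipeline.started", "build.finished", "pipeline.finished"]
missing = audit_events(received)
if missing:
    print(f"alarm: pipeline skipped expected events {missing}")
```

Here the run never sent `test.finished`, so the observer would flag that the tests may have been skipped, which is exactly the "missing events" alarm case described above.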
H
Also
linking
back
to
this
with
controlling
pythons
and
so
on
last
november,
we
had
a
presentation
from
siento
santiago
torres,
rs
approach
in
his
name,
but
in
toto,
which
is
kind
of
like
a
framework
to
control
the
pipelines
and
know
what
steps
you've
done
in
it.
So
maybe
that
is
something
in
this
discussion
also.
G
By the way, I did find, like, some very basic stuff; I threw that in there too: code reviews, signing containers and manifests, approved base images, known-vulnerability scanning, and no embargoed issues getting published too soon. So that was, yeah, that was our current proposal. Like, really: don't let anybody touch software that doesn't do these things.
G
Yeah, Red Hat's is a pretty fancy one. It's kind of hard to onboard with, but once you do it, well, it's integrated with quay.io, right, so that has, like, a manifest of what's in the image, and that has a way of notifying people when a known CVE shows up. And they have some fancy internal pipeline that says: oh, when the notification comes through, we will actually automatically pull your source, update to the latest image, rebuild, republish. So actually, in some cases, they're able to do that automatically.
G
In
the
in
the
interoperability
thing,
that's
what
I
meant
by
the
maintain
stage
or
a
maintained
pipeline,
I
think
it
was
a
pipeline
type,
was
something
that
will
do.
That
kind
of
thing
like
check
for
vulnerabilities,
update
tags,
rebuild
republish.
E
We have this maintain stage as an entirely separate pipeline that's called a patch pipeline, and its job is to rev your versions. It's managed, I think, entirely by another team, also, which is interesting; it's owned by the platform teams. And so they will ship updates to say, like: hey, our web framework version revved, so here's the pull request to make that fix. And then they'll do, like, automatic deploys and traffic mirroring to validate that things are the same, and then deploy that fix. So it's pretty interesting.
G
Yeah, actually, the pipeline stages, taking a step back: these do sometimes get separated into their own pipelines, right. Usually, like, build and test go together, but I have definitely seen separate deploy pipelines and, like you said, maintain. But I've also seen cases where it's all just one pipeline. I've seen, for the Node app case, like a basic Node microservice, you can totally throw these all into one pipeline.
C
Okay, I think, like, I agree with Justin: once those comments are addressed, perhaps we can continue iterating on this. Like, you know, that's what we have been doing. And I think it will be useful for the Events SIG as well as the Supply Chain SIG, so we can, you know, base our discussions on something merged.
E
I think I owe the group a definition of what eBay pipelines look like, written in terms of these definitions.
A
Yeah, I agree. I think iteration is a good approach, and I think it's good to get them up there. I know the Events SIG is interested in this vocabulary work, very much so. It's great to, I guess, merge it in, and then that gives them more of a basis to work from. Okay, yeah. Thank you. It's really.
A
No, there is nothing else on the agenda, unless there's something else that someone wants to bring up.
E
I'd be open to sharing the eBay doc around the different pipeline stages and the different maturity model things; that would be interesting to talk about, and this seems like a group of folks who can offer some useful feedback. As I mentioned, this is not, like, vetted; it's that sort of deal, right. Okay, so bronze: automated to staging. The general idea is we want a stable staging environment, which we've put a tremendous amount of effort into achieving, but in the broader eBay world, teams are still doing a lot.
E
People
are
doing
a
lot
of
like
I
want
to
test
my
code,
and
so
I'm
going
to
steal
staging
for
30
minutes,
so
I
can
see
if
it
works,
sort
of
stuff
which
is
not
nice
and
so
kind
of
the
criteria
to
hit
this
are
one
you
need
someone
on
your
team
who
is
a
velocity
champion,
and
what
this
means
is
so
like
velocity
is
the
internal
program
to
help
people
follow
kind
of
like
standard
best
practices
of
small
change,
sizes
and
automation
and
all
the
kind
of
devops-y
things
you
need
static,
validations
which
include
all
of
those
like.
E
Are
you
sargan's?
Oxley
compliant
and
eventually
like
are
you
doing
software
supply
chain
pieces?
We
have
an
internal
tool
that
tells
you
why
your
service
takes
forever
to
start
for
java
services,
and
so
you
have
to
use.
Basically,
this
is
a
placeholder,
for
you
have
to
use
the
fancy
tools
we've
made
for
you.
E
You
need
automated
tests.
Those
tests
need
to
appear
in
our
test
result
repository
so
that
we
have
broader
understandings
of
past
fail
rate
and
stability
issues,
and
things
like
that.
E
If your service fails all the time, then that's not great, so go fix it; but if you're not getting help from those other teams, that might be quite hard. That tremendous amount of effort I was talking about is this program called Staging Get Well, and it's a group of folks whose whole job it is to monitor the availability of staging, and, like, it's hooked up to pagers and things like that. So, like, if staging is broken, then you get the ping at 3 a.m., which is controversial, but works.
E
I think the industry refers to these as synthetic tests: small tests that run on a periodic basis to proactively monitor that the thing's working, and there's tooling internally to help with that. And then no intervention between PR and deploy; that's kind of where we are with that.
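A synthetic test in the sense described here is just a small probe run on a schedule (cron, a monitoring scheduler) that checks a live endpoint and pages someone when it fails. A minimal sketch, in which the URL, the health endpoint, and the alert wiring are all placeholder assumptions:

```python
import urllib.request

# Sketch: a tiny synthetic test that proactively checks an environment's
# health endpoint. The URL and the alert hookup are illustrative; a real
# setup would run this periodically and page through an alerting system.

def probe(url, timeout=5.0):
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Covers DNS failures, refused connections, and timeouts.
        return False

def page_if_down(url, alert):
    """Run one probe and fire the alert callback if the check fails."""
    if not probe(url):
        alert(f"staging health check failed for {url}")

# Example wiring: print stands in for the real pager integration.
page_if_down("https://staging.example.invalid/health", alert=print)
```

Run every few minutes against staging, a probe like this is what turns "staging is broken" into a page instead of a surprise during someone's test run.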
G
So this reminds me of what we're doing at Red Hat right now. They have this thing called the managed services scorecard, which is 24 steps, and they got their SREs together. It's mostly about SRE, but there's some quality stuff in there too, and observability, and they ended up saying, well, 15 of those are must-dos and five are should-dos. And we're actively trying to break it down into, like, maturity levels, like: there are some SRE things that, yeah, again, even before you let a customer touch your stuff.
G
You
should
make
sure
you
can
handle
your
compute
nodes
dying
or
something
like
that
right
because
it
happens,
but
the
they're
working
on
it
in
a
there's,
a
there's,
a
build
I'll,
get
the
name
of
it
they're,
putting
out
an
open
source
project
that
might
have
some
useful
stuff
in
there
to
look
at
operate.
First,
that's
what
it's
called.
Let
me
see
if
I
can
find
the
link
the
managed
services
scorecard
isn't
out
there
yet,
but
a
bunch
of
advice
on
how
to
operate.
Things
is
out
there
and
I
think
maturity
models.
E
And then the bottom one, we call it experimentation culture, but a lot of it is, like, idealism. So, like: everyone should be doing TDD; you should have tiny stories that ship within a day or two, using trunk-based development.
E
Whoever
wrote
this
document
thinks
that
you
should
have
your
pr
reviewed
within
the
hour,
which
I've
turned
off
comments
in
this
view,
but
there's
a
long
comment:
a
way
to
manage
tech,
debt,
doing
slo,
monitoring
and
things
like
that,
and
then
opinions
about
how
you
should
be
running
your
sprints.
E
That was rolled out to, I think, like, five percent of actively developed applications, of, like, the 16,000 total applications we have internally, and this year it's going to scale up to 50 percent of active applications. And so a big theme of this is: this is turning away from an initiative and more into indoctrination and culture work, kind of, as we're systematizing the work to get deploy-to-production solved.
E
There's an alpha program starting up around this notion of what's called the Flow Framework, which is tracking, from we're-supposed-to-do-this until the PR is merged, that piece of how we get the work done, which is kind of something we're piloting this year.
E
So, bronze: to share again, pick a person, that should be easy; hook up the deploy tooling of eBay, it should be fairly straightforward to hook into this.
G
Right, because I know that Red Hat's old certification pipeline is trying to build on IBM's certification pipeline, and the IBM one only supports Golang and, like, building on OpenShift or Kubernetes, and that thing takes people months to onboard because there's so much diversity, right. And the Red Hat thing supports like five programming languages, and it also takes months to onboard. So, yeah.
E
I
think
where
we're
heading
is
like
thou
shalt
use
containers,
and
then
things
need
to
be
discoverable
within
the
container
like
making
it
more
of
an
api
around
here's,
how
you
hook
into
our
tooling,
rather
than
use
this
thing
and
we'll
figure
it
out
for
you
yeah.
This
is
like
included
dependency,
that
one's
easy,
automated
tests
are.
E
You
know
how
people
should
work
anyway,
write
tests
it
should.
This
is
like
half
a
day
of
work.
E
We're still shopping this around. Basically, there are some teams who were like: bronze is way too hard, we'll never be there. And then there are some groups who are like: yeah, okay. And so there's still, like, a lot of discussion back and forth and, of course, trading more or less around what these things mean. In terms of how much of this has been developed in conjunction with teams: not a lot. This is more idealistic, like: listen, this is how you should be, in aggregate.
A
Nice. And are you using, say, DORA metrics to show, to give positive feedback?
E
A very regular DORA metrics review with every team; that's the core of what that velocity initiative has been. And then we're layering in Flow Framework metrics to figure out, like, the development piece of that, like: are we only working on features and never working on tech debt? That's probably a problem. Those sorts of questions.
A
Okay, great. Thank you all for being here; it was a really great discussion, super informative today. So thank you all for that. That's great. And again, thank you to Melissa for joining as our new chair; really excited, it's going to be great. So, good news all around. Great. I hope you all have a fantastic rest of your day, and we'll see you in two weeks. Bye. Thank you.