From YouTube: Cartographer Office Hours - Jan 31st, 2022
Description
00:00 Intro and shoutout
02:50 RFC 019 (RFC process) discussion
12:08 RFC 020 Update resources only after success/failure
52:51 RFC 018 Workload report artifact provenance
1:00:38 Introducing multiple paths RFC
The purpose of this meeting is to discuss architecture-changing ideas (in the form of RFCs) and provide in-depth support to the community of Cartographer contributors.
You can continue the conversation by adding comments to the RFCs PR: https://github.com/vmware-tanzu/cartographer/labels/rfc
Agenda: https://docs.google.com/document/d/1ImIh7qBrOLOvGMCzY6AURhE-a68IE9_EbCf0g5s18vc/edit?usp=sharing
A: There you go. Hello everyone, and welcome to another Cartographer office hours meeting. Thank you for joining. This is honestly one of the highlights of the week for me; I learn a ton from these sessions and I hope you also find them beneficial. Welcome, Scott Rosenberg.
A: There you go, okay, cool. Yeah, some logistics for the session. First, we try to have a volunteer for note-taking, so who'd like to take notes for today's session?
A: And yeah, please feel free to add yourselves to the attendance list. A reminder that the goal of this session is to answer or discuss questions from the community about the current design. It also has a lot to do with discussing improvement ideas for the project in the form of RFCs, so feel free to raise any topic or question that you may have, and we will also have time today for discussing the current RFCs.
A: I don't know if it will be enough; with four RFCs I'm fairly sure that it won't be, so we'll probably end up continuing the conversation during community meetings. Okay, without any further ado, let's move to the first topic. I added these in this specific order; if you feel it's not right, just let me know. I put the RFC process RFC first, because I've seen a stream of RFCs being opened, and I feel that we first need to agree, to have consensus, on the RFC process itself.
B: So making it a little bit more robust should make it a little bit easier to read and to check on the status of an RFC. It also proposes a process for voting: things go into a draft state, there's a period where discussion happens, and then a vote to move it into a final comment period, which by default will last seven days, and if nothing changes, if no substantial comments are made during that time...
B: Then, when we vote to move into the final comment period, we'll say we're moving into the final comment period with the assumption that it'll be approved if nothing changes, or declined if nothing changes. So if nothing changes, it will be approved or rejected based on what we voted. If there are substantial comments during that period, then we'll reopen it. And I think the vote is: a two-thirds supermajority is needed to approve an RFC.
B: The supermajority is based on those who vote. So if you choose to abstain, you're not counted towards the total that the two-thirds is taken from. If everyone abstains and only two people vote, and they both vote to approve, that is considered a two-thirds majority, because it's two-thirds of voting members. That's pretty much the gist; there are a lot of words.
C: You put the RFC template in the RFC itself and not in the... yeah.
A: Thank you, Marty, for writing this down. It looks like several team members are okay with this. If there's any objection or any additional comment, raise it; otherwise, we could move forward with this one.
C: Treat absence as abstention when people are away on holiday for a couple of weeks; that's fair. I would just call that out somewhere, if that's all.
E: Yeah, I'll definitely take a look. When are we aiming to approve it, as soon as possible? Sounds good.
E: Yeah, I still need to chat with him about it. I like the structure and the rules and things like that, how we approach consensus. I think this would be a really great process for a foundation project, but because it's a VMware internal project, I think some of the ways we deal with consensus may need to be a little bit different.
E: If that makes sense. No one should feel like their manager is controlling their vote, but we still have to account for how we balance community priorities.
C: I would ask that, if we're going to hold off on the consensus question, we adopt the rest and treat it as a majority consensus, or greater than 50%, or something that's acceptable to you and James, just so that we can move forward with the rest of its niceness.
G: It can even be useful on GitHub, where you can point a group of people at a particular PR as reviewers. So if we have these different groups of different kinds of maintainers, maybe that's useful: on any RFC we open, we just @-mention rfc-approvers or something like that. It could be useful for making our lives easier when asking for reviews.
A: Well, next one. I'm not sure if 18 goes first or 20, but here in the list RFC 20 is first. I'll let you share your screen and drive this.
F: Okay. I actually need to change the titles on these so they match the actual proposal. The base motivation is establishing provenance, enabling attestation in Cartographer. You should be able to say: hey, I've got this node in the supply chain graph, I've got this artifact, what were the artifacts that led to this? Or, to bring it down to the customers of Cartographer:
F: Hey, I've got this code that was running on my cluster at such and such a date, and I saw this really bad behavior, and now I need to be able to trace back and see what the commit was, what the image was. I've got to be able to investigate this.
F: In the RFC I talk about a scenario that's used throughout, called resource A. Resource A is a thin facsimile of Runnable or kpack, which both have similar behavior: they will tell you "here's my latest good output," but that latest good output may not represent the most recent inputs that were run. At the same time, resource A is able to report Ready: True, Ready: False, or Ready: Unknown, where Ready: Unknown means "I'm currently doing work," and Ready: False...
F: The RFC runs through some different solutions that could be used to give us attestation, and details why they won't work. I'm happy to describe those, or to leave that as an exercise for the reader. But ultimately, the conclusion of those thought experiments was: what we have to do is restrict reading the status of objects to when the object has completed reconciling and is in a good state, e.g.
F: when Ready is True, then we can read outputs, and we shouldn't read outputs otherwise. There's a lot of other stuff that I've dropped in here. The two points I would highlight most: we may be concerned about, hey, somebody does 50...
F: Well, actually, let me do this one first. There are some trade-offs. One is performance: if you drop in N commits all at the same time, committing faster than a given step in your supply chain can process them, the supply chain won't give you a new output until all of that has finished, and so that takes a while.
F: The other problem is: what if the last of those commits was unsuccessful? In that case, if it was 50 commits and the 50th commit is bad, you wouldn't have the output of the 49th commit. Instead, you would have the output of the zeroth commit, the commit before any of those started.
F: Right now, every time Cartographer reconciles, it just reads status off of the objects in the cluster. So when it goes to stamp out a config object, it just reads off of an image, and that image output right now is always considered good; we consider all outputs that have been given to us as good. Now we're going to introduce a situation where, a lot of the time, that image object wouldn't be in a good state, wouldn't be in a readable state, and so we need to make sure that we're caching those values. I don't think that's a huge issue, because we're talking about caching those values in the artifacts field of the workload as part of RFC 18 anyway.
F: So I think those are all of the big points. Hopefully I haven't gone through them too quickly. I'll give the floor back to the moderator.
E: For me, there are some larger things around the framing and use cases for the future that we should make sure we're really clear on in the RFC. I don't think this is intended to replace attestation, right? It's not the tool you use to prove that this image was really built from this commit and there's a signature somewhere. This is more around visibility, being able to say, from Cartographer's perspective,
E: these are how the inputs and outputs were connected. Situational awareness, as Rash put it in the chat. I think that's an important thing to capture, because a perspective you could have when you look at this, and this is something, Scott, we chatted about a little bit:
E: what you said you're worried about here is that it looks like it's proposing a stronger attestation that something was built from something else, and I think it'd be good to avoid that. But those larger questions aside, I think one thing that's missing on the technical side that I wanted to call out, what I'd describe as an issue with the synchronization process that's worth looking into,
E: is, even if you assume that the conditions are going to be as described, right: Ready: True is "yes, spec was processed," False is "spec was processed and something bad happened," and Unknown is "we're not sure." Especially in that Unknown case, right?
E: If something got stuck in Unknown, it could halt the entire thing from moving. So instead of looking at it like, hold spec until there's a resolution and then move on, it's more that you can achieve traceability if you control spec, you don't make changes to spec, and observedGeneration reflects that the status has been updated to match the spec's generation, together with a known condition.
E: That means that "spec was processed" is true, and you can make a link between inputs and outputs. But I wouldn't jump from there to "holding spec until that status is true" as the only outcome, because you could do something like: if you're in Unknown for a configurable amount of time, you forget making any kind of traceability claim or promoting anything forward, and wait for the next thing to happen,
E: so that things don't stall. I think there are other optimizations you could make in the process that would make sure both the happy paths and the unhappy paths are covered, if that makes sense. I wonder if the RFC could take the approach of saying: we know that when these conditions are true we can establish this traceability, and then do some research and figure out which third-party resources conform, which provide the necessary conditions to do it right. Then we should capture that traceability, and maybe some overview of the different edge cases that could happen there is worth talking about. But I think it says a little too strongly that we should hold spec until we see status, because I think that implies...
F: Makes sense. To pick on one piece that I heard, and we can go on to others: are you talking about the strategy? Because ultimately, what I had been intending to propose was "update resources after success" or "update resources after completion," and, as you said, oh, then we should consider some corner cases and see what we get, and I found that that was not sufficient.
F: It's laid out here, and most of these revolve around: what do we do with Unknown, and what can we infer when things are in a failed state? Ultimately, success is the only one that tells us that things are good based on the current state of the world, and it's very difficult to reason about when the object isn't telling me
F: which old input the good output it's giving me relates to. I don't know if you want to dive into this particular corner case, or are we saying that sometimes we'll get traceability, but it's okay, in these corner cases we'll be fine if we lose it sometimes?
D: I think the idea is to make it so that we can give up on an input, based on some timeout or some sort of configuration, even if it ultimately ends up that it might have been successful later on, but just be able to give up so that we can move on and just try our best to resolve the next thing in line.
F: This proposal restricts only Cartographer's reads; there's no limitation in this RFC on Cartographer updating, on providing new inputs. Every new input that comes in, throw that new spec on, update the object, and it's up to the object to have its own... you know, kpack and Runnable actually handle this differently.
F: Runnable will immediately stamp that new thing; it'll just consider all the things that are in flight and will tell you, here's the latest. Whereas kpack will do that not synchronously, but one at a time. So in terms of a timeout: in this proposal we don't need to talk about needing a timeout before we can update.
F: So there is a discussion of: could you hold the spec update, and what would be the value? But no, this does not intend to hold spec updates. We can always update.
F: If that's the case, I need to update the summary, but yeah, this is the...
F: Yeah, sorry. As I said, I had started out hoping that this strategy would be sufficient and correct, and it is incorrect; whereas I argue that this one is correct, and I'm hoping to get eyes on it.
E: If you say you're going to hold spec until you see the condition resolve, then you could slow down the process of promoting things forward. But if you don't do that, if you let spec update, then you introduce the possibility, if you have people committing frequently enough, of preventing anything from ever moving forward, because you keep updating spec, and then a new build triggers, and suddenly the previous build isn't promotable. And so I think it kind of has to be...
E: You need to be able to tune this, to optimize for the thing in the middle: still get traceability and still not slow things down too much. Although this is a very hard problem; we're trying to translate a declarative thing into an imperative thing.
F: Because I discuss cases where, for example, Cartographer can only read and update when the object has completed reconciling, in a good or a bad state; or Cartographer can only update the object when the object has completed reconciling, but Carto reads continuously. And I detail a case where either it would be left unable to do the tracing, or it would surmise incorrectly. Rash?
C: Yeah. What I see here is an implementation that will not only be somewhat complicated in code and complicate the code, as all new features tend to do, but something that will also complicate describing behavior and explaining,
C: documentation-wise as well, what's going on in the system. It may also simplify describing some things to people who are used to pipelines: you say, oh, we have this output in the workload that will tell you this came from that input, and that might simplify things for people trying to understand the system. But it's not very representative of eventually consistent systems, which makes me wonder: is traceability, at a high level,
C: as important as being declarative? Like Kubernetes, where, guess what, when you see this state, we can't necessarily tell you where it came from, which run of that previous step caused it to happen, just that it did happen. We've got an output, that's the most resolved one, and right now you've got this state, which is not great. And maybe, perhaps, if we show historical context at some point for runs, for inputs: here's an input, and here's the output it results in.
C: I also think that we could promote the better habit of talking about all of the dependent inputs, including implicit ones. In kpack it might be a build image or something as part of the output, so that an output is actually more complete. But we promote that as members of the industry, rather than trying to resolve something for our users right now.
E: If I could maybe restate a little bit what I think the goal of that would be, or a plan to accomplish what we're trying to accomplish without adding that complexity: we could do things like examine the image to see if it has a property on it that says what commit of source code it came from. Or, if you're using a Runnable, we do know what inputs led into a pipeline
E: and what outputs that pipeline produced; in that case we have strong traceability there. We don't have to do the translation, and we could use things outside of the spec and status values to achieve traceability, and maybe we could still reflect that in workload status if we wanted to. But it would look really different, sometimes it wouldn't be possible, if that makes sense, and it would be very resource-specific.
C: I think we could just do what Cartographer does today, because we do that and we do that well, and it works quite well. Instead of shifting paradigms, present what we can know, and even provide a way for template authors to say: this is how you can know the most you can know about this. Like there's an interface or a contract with a template, the way we have a way to transform an input into a full spec that we apply.
E: Or demanding that object hierarchy might not exist, right? We're trying to make this a generic interface, and there's nothing that says that kpack should create Builds; in the future it could not create Builds, and that shouldn't be something that necessarily breaks Cartographer. That really feels like it's dabbling into implementation detail. So I think, if you designed an interface here, it shouldn't be something that's built into Cartographer.
C: That's another situation; it's an offshoot of this topic, but it's sort of for the same reason. We don't know what that resource is going to be, but the template author does, and they may have to get implementation-specific, and that may be brittle, but they can say: oh, this thing failed because of this thing. Oh, when I create this object, there's this really common case where this happens; what I'll tell you is you need to go and check this bit of documentation on how to better configure these inputs.
E: Is this kind of like observedMatches or observedCompletion, just additional properties in the template that let you observe additional outputs that Cartographer can use? Like, observedMatches, instead of doing this observedGeneration thing, you could just say: if kpack were to provide the source code commit in its status, then we wouldn't need to do this; we could just absorb it.
C: Possibly, yeah. I don't know what shape it would be; it needs further investigation. I think there are multiple uses for being able to talk about the state of a resource that got stamped out, and to carry that upstream again, yeah.
C: Yeah, but very specifically not to carry information forward to the next resources. It's very important to me that we keep the shape and simple contracts; we have simple contracts right now. This RFC feels like the contracts start to get a little bit messier and harder to understand, whereas we could have optional contracts which can strengthen our ability to do traceability and debugging. I'd love to be able to implement something like that, and this is just spitballing at this point.
E: I think what's attractive about the current proposal, with some clarification of when we should expect it and things like that: what's attractive to me is that, as long as the external resource implements observedGeneration, and as long as its condition doesn't say "I'm ready" when observedGeneration is bumped and the spec doesn't match, which seems like a contract that a lot of resources,
E: I would hope, implement, but maybe don't, that's something to research. As long as that remains true, then we do have a generic way of establishing traceability, where we could ship a lot of value: we can ship an artifact graph on top of the build graph. So, does that mean we should do both? Does that mean it was worth pursuing this?
H: I was about to raise my hand. Yeah, I think this is a really good conversation to have. It's a very hard conversation to wrap your head around, because there's so much nuance, and it also depends on nuance of the resources that Cartographer is choreographing, which we don't control and for which Kubernetes has no contracts or uniformity. There are some conventions that some resources loosely follow, but do they follow them consistently?
H: Taking all that in, I want to circle back: words like provenance and attestation, at least in my mind, have a very high bar, a very specific meaning and a sense of assurance, and all the chaos under the hood of the resources that we are choreographing makes it very hard for Cartographer to have those assurances.
H: This is me still trying to work through in my mind what all this stuff means. I think this is a really good conversation to have about where it ends up falling: is this traceability? Is this debuggability? Is it observability? Versus: is this provenance? I think those are different concerns, and we can try to separate them out and maybe address them independently.
F: Yeah. One, I think that's totally fair. Obviously the job of Carto is: take some input, stick it in a black box, and move on what comes out of the black box. And certainly somebody could plug in a black box that's just a random number generator, and that's not attestation.
F: The attestation for Cartographer is simply: I gave the black box X, and from X it gave me Y. Whatever we call that, I'm happy to settle on any term: traceability, attestation, provenance, etc. And certainly I think you're correct that what we know ends at "I gave the black box X and I got Y."
F: And, you know, inherently, even in this RFC there are some assumptions: that you can observe that, at some point, the status of this object represents the spec of this object. The observedGeneration is a very Kubernetes-native way to let users know
F: what's going on, and the Ready condition with status equals True has a lot of buy-in as well. But ultimately it will be the responsibility of whoever is writing a template to assure that the object they are creating does give those guarantees, because without them this strategy would be meaningless.
F: If you bring a badly behaved box, you'll get garbage: garbage box, garbage out. But I do think this gives us... yeah. The other thing I heard: certainly in terms of deciding which strategy to use there's a lot of complication, and in terms of the underlying implementation
F: there may be some complication. But I think that, ultimately, the strategy recommended here is, I would argue, very straightforward, and I can explain it to someone very easily: we stamp out objects; when those objects have completed, are in a good state, and are at rest, not being reconciled anymore, we pass those values on, and that value will be
F: the value that's passed on until there's a new good output, when the object is again at rest. And that's about it: wait until things are good, and keep passing that good value on until the objects are again in a good, restful state.
C: I don't want to drag this on much longer, but I put a comment in on this one as well, so we can talk about it offline, async. But is it possible to not block outputs, and only add these sources when they're known? So the same implementation, but don't block the outputs. You'll sometimes have that there are no known sources, and so be it, because the system was running and you still had valuable stuff. Who's going to get upset that the situational awareness is imperfect, when we're still running, one, in a native way, and two, in a way such that we don't stop ourselves from ever using resources that we can't get this to work for?
F: Yeah, so really quickly to run through why we shouldn't do that: let's say that the rule is you can update the object when the object has completed reconciling, but Carto reads continuously.
F: The object would go into reconcile. It would first be reconciling that update because of the state of the world; all objects are merging in some special sauce that they know about the state of the world with the inputs that you've given them. It would first do that state-of-the-world pass on the old spec and report that out. Cartographer, knowing that it had given an update, would assume that that was the result of the spec it submitted, and it would pass that on.
F: With that understanding, that would be incorrect. And yeah, go ahead.
E: But that assumes that Cartographer would... like, if the external thing updated: if there's a new buildpack, and then a build starts right before another spec update happens, and then the spec update happens, the resource is still never going to reach condition Ready until both the buildpack is bumped and a new build has started and completed against the new spec. Then the condition says Ready, and then it's okay to promote. So there's not a problem in that case, right?
E: I'm not talking about SBOMs. I'm just saying, if you have a simple pipeline that has Flux and then kpack, or maybe Flux, then a Tekton test, then kpack, and then, you know, convention service, right?
E: In a simple case, if you're willing to promote things forward when you can't achieve traceability, because an update happened at just the wrong time, then you could end up with an image generated from the kpack resource that you can't tie back to a commit. No?
E: In a web interface you're building on top of this, you'd want to see it traced back. But you could just hold promotion of that thing if you don't have the traceability information, either by not promoting it forward, or by holding off on spec updates until it's promotable, until it's resolved, or some combination of those with a timeout. And I think that's what Waciuma is proposing.
D: But I don't think Rash is talking about that. He's not talking about one artifact within the context of something where this normally works; he's talking about a resource where these things just don't hold true at all, where it's not possible to use this proposal to actually evaluate whether or not something was good. Oh.
F: Yeah, I wonder if I would almost do that as something configurable. So, for example, the inner loop: I think this proposal is terrible for the inner loop, because you're looking for something that's just live-updating. Sorry, let me back up. Let's imagine you're building a platform, and in that platform you want to support people having live updates that are going through a supply chain as they code.
F: This is going to just starve their supply chains as they're coding; they'd have to take a break every once in a while so it could finally catch up to them. So making it configurable, where they might say "I don't care about tracing, I prefer live updating," makes sense to me.
D: I don't like the idea of just doing best effort for a particular resource, where you let it run and then say: oh, one of these images got certified, and then a good one got skipped because we couldn't certify it as it went by. So I think we need to pick a strategy per resource; as long as we can make that guarantee, I think it works.
F: One small question; I'm hoping this will come up, actually it's really a discussion here. There's a question of: workloads are inputs to templates, obviously, and workloads can be changed, and so we want to make sure that we know which workload, which version of the workload, a particular output came from. There are two strategies that we could use.
F: We could either, one, when the workload spec is changed, just wipe out the status, wipe out the artifacts that we were tracing, and start again. I argue that only works if we adopt the strategy recommended by RFC 20; you can read the discussion in here.
F: The other alternative is that, in the "from" field of the artifact tree, we include the workload spec generation. I'm really agnostic as to which strategy we use.
F: I think both are correct. So yeah, I don't think we have time to discuss... the floor's open, thoughts?
D: Kind of unrelated: I actually don't have an opinion on which one we pick; I'm totally agnostic to either option. I just want to mention also that I think we could probably move forward with 18 without 20, if we take artifacts to not mean that they came from a specific previous artifact, but we just draw the artifact graph of the latest of every artifact, and then take the "from" field to mean it came from an underlying resource.
D: Or even have the syntax be the exact same, and just take "from" to mean that it came from the resource identified in the path. Then we would still be able to build the graph, still be able to show all the different artifacts, but the things that we draw are just always the most recent from everything.
E: To solve traceability, why not just cut "from" and move it to RFC 20? I'm just trying, from my understanding: why have a "from" field that's wrong in RFC 18, instead of just adding a "from" field that's right in the next one? Is there some use for that that I'm not catching?
D: Anyway, I can just add a comment or something, because I think it could still be valuable, and it might unblock us on RFC 18.
C: It still needs work, but if I could just visualize it real quick for you now, it means that I can start getting some feedback on it, and that would be really helpful. Is that okay? Okay. So Waciuma introduced an RFC, I don't remember what version it was, but in his RFC it is possible to have multiple templates, even different ones, for a specific resource, and choose them based on field and label selection.
C: A classic example might be that you have a GitHub step at the end, and if there was a label or something that told you how you wanted to push your resources up for GitOps, that would choose a different template. In extension to that RFC, I would like to propose that one of the other selectors that we can use is input availability.
C: So in the case of the config writer here, if the source that was provided was actual source, and there are hidden matches here, this would look to see if there is a field for spec.source.url or spec.source.image. This one here chooses to run simply because it says "has test," but it could also just be based on the fact that... oh no, "has test: true" is the only way to select for test at the moment. But then the image builder could decide whether to use the source tester as its input, or the source provider, based on whether this input's available or not. And there are contentions that we could have, so we propose that this is a priority list of the options.
C: I've already done some empirical tests to make sure that all of the logic behind this flows and that it can be analyzed. I'm happy to do some more, but I thought I'd just introduce it so that folks see what it is that I was trying to achieve with that: the ability to run the same template or a different template for the same sort of resource step depending on inputs, not just fields, and it would be an AND-ing of inputs and fields for each one.
A: Okay! That's it. See you at the community meeting on Wednesday, and thank you for your time.