From YouTube: Cartographer Community Meeting - Jan 26th, 2022
Description
00:00 Intro
00:56 The TL;DR (What's new in the project/what the team is working on)
02:59 Tanzu TV live coding episode
03:36 OpenSSF Best Practices badge requirements status
05:54 RFC 009 discussion
34:23 RFC 018 discussion
A
There you go, welcome to the Cartographer community meeting. I hope you can find the agenda; otherwise I will share it with you all who have joined here live. Remember, the agenda is open for you to add discussion topics, and hopefully today there will be enough time for some of the conversations here. You're welcome to ask questions, and in general, there. So let me share my screen, and there you go.
A
Several improvements there, so I hope you can try it. We have some bug fixes and new features; please let us know your feedback. Well, that's interesting, thank you for making it possible. Okay, yeah, kind of an overview summary. I know that the project board is there, it's public, what the team is working on right now, but I just wanted to know if there was any comment on what's been happening in the project right now, what the team is working on.
B
A lot of what we're doing right now is all centered around RFCs, which I think we'll get into in plenty of detail today. So, yeah.
A
And we're doing it live, right? Cool, that's great. So what's happening here, moving RFCs, that's it, cool. From the previous meeting, the demo work: it's still in progress; we'll be having some sessions in preparation for the TGIK episode. So hopefully that will give you some context for the demo, and we're also working with an extended team to vote for the CNCF demo.
A
I don't know how to stop this. They will be using Cartographer to deploy an app live, so you're welcome to join, learn there live, and ask questions. It will be automatically recorded also. Cool, yeah. Well, you probably remember the CII best practices; it was the Core Infrastructure Initiative from the Linux Foundation.
A
Now it was moved to the Open Source Security Foundation. It's basically a baseline of best practices for all open source projects out there, as defined by the Linux Foundation. We have Cartographer here. We don't have a passing grade yet, but probably we already have it; the thing is that I don't know, right. So I just created a couple of issues to be able to track this, because there are two domains there, on code analysis and quality.
A
You could probably help me confirm whether we already are meeting the requirements; the link is there for each one of the requirements on code analysis, for example, and quality. So, yeah. If you see quality, for example, there is a question there about whether we are using, I don't know, testing: how do we do testing? So probably we are already meeting these requirements.
A
So if you please confirm in the issue, that will be helpful, so I can edit this and start moving this grade to the passing status, right. It's the same for code analysis. Cool. And in the issues I even pointed to a project that already has the passing grade, so you can see how they are responding to the questions there. Thank you for that. Cool, and now, continuing the RFC discussions from previous office hours.
A
I was trying to time-box the conversations, but I know that each one of these topics has a different, I don't know, a different weight. So I will just probably add friendly reminders when we are approaching the end of the scheduled time. So let's get started with RFC 9. Any additional points here regarding this RFC?
C
Yeah, we, the maintainers, had a conversation yesterday discussing RFC 9 and then discussing two different approaches to build on it. RFC 9 allows switching between taking one step and switching between two different templates.
C
And there's a previously existing RFC, RFC 5, that proposes snippets that can be used to turn one step into an arbitrary number of templates. The assembly could be one step, but using that snippet you can have three, five, however many templates. And we should also come up with a different way to build on.
C
To allow one step to pull information from different previous steps, which, in the same way, allows you to kind of have that accordion-like behavior, where you could have paths defined in their supply chain that aren't the same length.
C
As I said, both of those options add syntax to what's already in RFC 9. Rash also came up with some.
C
Great suggestions about the syntax for matching. So if you actually go into the Carto repo and hit RFC 9, then go to the text that has been updated based on our conversation yesterday. Yeah.
C
So, based on the conversation on Monday, the RFC had presented two options side by side: selection based on fields, and selection based on labels. The RFC now comes down firmly on one side, I'd say selection based on fields.
C
In the conversation yesterday, the syntax for doing that was brought a little more in line with what we already have, where every resource continues to have a templateRef, that templateRef continues to have a kind, and then there's some key.
C
That will be a list, so you could either specify a name here, or you could specify some key for a list. This borrows the syntax options from Rash's suggestion, and then a name is given for what's the name of the template that we are looking for, and then what's the criteria for selecting this. Rash suggested that we adopt the matchExpressions syntax that is used on pods or on jobs.
C
In Kubernetes, that gives us a key, which would be a path, and then there are four operators: Exists, DoesNotExist, In, and NotIn. And if you specify In or NotIn, then you additionally include an array of values that whatever is found on this field would need to match.
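To make the four operators concrete, here is a minimal sketch of what field-based selection with Kubernetes-style matchExpressions could look like on a supply chain resource. The field names here (options, selector, matchExpressions) follow the discussion, but this is illustrative, not the final RFC syntax:

```yaml
# Hypothetical sketch only: field names follow the discussion, not a final spec.
resources:
  - name: source-provider
    templateRef:
      kind: ClusterSourceTemplate
      options:                        # a list instead of a single name
        - name: git-template
          selector:
            matchExpressions:         # Kubernetes-style operators
              - key: spec.source.git  # a path into the workload
                operator: Exists
        - name: image-template
          selector:
            matchExpressions:
              - key: spec.source.image
                operator: Exists
```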
B
Yes: is this syntax backwards compatible with what we already have? I remember seeing, so, Rash's proposal that is linked in here makes a point of saying that the syntax is backwards compatible. I just want to confirm: is that the case here, I mean.
C
This does not define what to do if nothing matches, but there is a question about what we should do if too many things match. I have no objection to saying: oh, if nothing matches, then we should throw an error.
C
So I think that's definitely, I would be happy to see that. I did drop in a paragraph in here to say: I think determinism is desirable, and I'm still concerned that Cartographer may not have that luxury. It may be that we find use cases that are very important that would require checking the output of some step in the supply chain and, based on that output, deciding what to do next.
C
The open source tooling that we use, kpack, isn't aware of that and so doesn't handle that well, and my first pass at fixing that would be: well, run a Tekton pipeline, or, you know, do your Kaniko thing.
F
Called out, which is great; it's called out at the bottom of this discussion. We can put it in our comments, if we have comments. Yeah.
B
We have a document: is it worth comparing these two side by side and talking a bit about the different syntax and stuff like that? Well.
F
We've tried to make the syntax in line so that mine would just be an extension of this; so mine is next steps, being able to switch on inputs as well. The only syntax difference I see in here I will argue about, I think I can just do offline, which is that I don't think we should use the word matchExpressions for our extended version of matching on fields.
F
It should be matchFields, because I think we should still be able to use matchExpressions and matchLabels in this selector, and it should start with the word selector so that it looks exactly like selectors, but extended. But I'm going to argue that offline. I think that's a minor point, but that's the only syntax difference between this and mine.
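As a rough illustration of that suggestion, a selector could keep the standard Kubernetes shape and gain one extra key. This is a hypothetical sketch of the idea being argued, not settled syntax:

```yaml
# Hypothetical sketch of the suggestion: keep the standard selector shape
# (matchLabels / matchExpressions) and add matchFields for workload fields.
selector:
  matchLabels:                   # standard Kubernetes label selection
    apps.example.com/workload-type: web
  matchExpressions:              # standard label expressions
    - key: environment
      operator: In
      values: [dev, staging]
  matchFields:                   # the proposed extension: select on workload fields
    - key: spec.source.git
      operator: Exists
```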
B
So how do we want to reason about this, then? Because then, are we going to reason about multi-templates separately from the idea of?
C
Had proposed fallback templates, and those were explicitly ordered; of course, you define each option in an array, arrays are ordered, and so, if we consider them as ordered, that would allow tiebreakers: if two options are fulfillable, the earlier one in the list would be chosen. However, it may be least surprising if we consider this not ordered and just say, if two match, then we throw an error.
C
I have a slight preference for throwing an error, but I'm very open to the community voices that we might hear today, so let's make space for those.
C
And yeah, I heard Rash saying that he had some feedback on the syntax for matching fields. Previously the RFC used language that was close to what we use for observed conditions, where we specify a key and a value. One detriment there is that key and value don't capture this Exists operation, and so we would have to introduce something like a star or wildcard.
C
But we would have the advantage of our field selector here aligning with other selection that we do in Cartographer for different reasons, separately. As I said, this idea of matchExpressions comes from the write-up that Rash did, and comes from the Kubernetes spec.
C
So yeah, I heard we're going with: error if both match, error if neither matches. So I'm happy to write that in; a syntax is proposed here, and I've heard already some thoughts on feedback.
H
I have a question about just comparing these two big options: whether we kind of base things on field selectors, or there's the alternative at the bottom that's based on labels and suggests a webhook. The examples in the RFC are pretty simple.
H
It's like, you know, switching on whether there's a source image or source from a git repository. But I worry that if you have a more complex supply chain that has, like, selectors based on whether there's a Tekton pipeline next to the workload, there could be a lot of repetitive logic in supply chains, and that option two, with the webhook, kind of decouples that definition of the rules and makes it reusable across supply chains, right? It could kind of save us from some configuration sprawl. I'm not taking a strong opinion.
H
That we should do that, necessarily. I just wonder if, before we choose a path, it's worth creating maybe a more complex example that covers some of the things that we know we'd like to use the feature for, like the presence of a testing pipeline, right, before we proceed to one implementation.
H
That said, I also could see a path where we could kind of do both of these things, because if you have matchLabels, you can just say, yeah, you can put it in the supply chain embedded, or you could extract it into the inspector thing. So I think there's a path forward that says: yes, it's complex, but we're going to do it anyway, because it's simple for now, and then we'll also add this other feature later.
H
If we want, the one that does the inspection, so it's not an either-or. I just want to call out that there may be this thing that pushes us more towards extracting that logic in the future, and I wondered if it's worth kind of seeing what the syntax for more complex use cases might look like earlier in the process.
C
I would say yeah. One thing that I would point out: labels are part of the workload metadata, so I'd argue that matchExpressions covers the use case of matchLabels.
C
That's not included in the RFC, but if someone wanted to push that, I would not push back very hard. And I would also point out that matchExpressions is an array of conditions, so that does give us quite a bit of power. I think the one thing that you bring up, and I haven't been thinking in terms of this case, but I have been in terms of another one, is that question of what would be a good language to specify.
C
That sounds like a plausible thing that we might want to do, and I suspect, we've actually come down on the other side of errors here, and you say maybe that pushes us closer to labels. I suspect that, because we won't know if something is, so take that Tekton pipeline: if the Tekton pipeline already exists on the cluster when the workload is applied, then it's easy for a mutating webhook to say, let me search the cluster, let me see if this thing exists.
C
Let me apply a label. But if it goes in the other order, if the workload is applied first and then a pipeline, then you have to have a mutating webhook that knows that, every time, it needs to be listening for pipelines being added to the cluster. And sure, we can do that, but what if it's not a pipeline? What if it's some other arbitrary object? So my expectation is that that language, if we need it, will have to be here rather than in a webhook.
H
That makes sense. You could also have a reconciler that adds labels dynamically to the inspector after the thing is created, right, as long as the mutating webhook is willing to.
H
I think, if we feel like we're not trying to account for those use cases, right, like being able to match on the existence of another, like distinguishing a test pipeline versus no test pipeline, if we don't want to account for those use cases in this RFC and we want to defer that, I think that's okay. It's just, you know, if we are planning to add more complex expressions, knowing about that ahead of time might help us inform the design. But I don't think it's necessary if we're not planning to cover it here.
D
I heard Rash mention changing matchExpressions to matchFields. That seems like a change in the spec that we should document, whether we're accepting it or not.
C
That's, I think he said that he was going to put a comment in, and yeah, I don't have a.
C
I suppose one advantage to that is that if we want to drift from the syntax of matchExpressions, it's easier if we're not using the exact wording matchExpressions.
H
You know, on GitHub we have a formal record of that. And also we should figure out what the rules are around voting, but maybe we can figure that out after this RFC.
H
Sorry, I would like to see the syntax for matchExpressions, or whatever; it's still not totally clear to me. I'm looking at Rash's thing and looking at this thing, exactly what's supposed to pull over. I don't have a super strong opinion about it, but just making sure the RFC has exactly what we want to move forward with. And I am very supportive of moving as close to what the Kubernetes ecosystem does for these types of selectors.
H
Even if it's a little different than what we did in other resources, it means: let's fix it in the other places and move those forward too, right? I really like that. I feel like that's the direction we're heading with this: try to match what the ecosystem is doing. It seems like a good idea.
H
I had another question, actually. I see name is kind of repeated in some of these conditionals. So there's a name at the, you know, like its components or resources, and then name source-provider, and then in the options there are additional names. Are those, I think the outer one is the name of the resource and the inner one is the name of a reference to an instance of the ClusterSourceTemplate, correct? Is that right? Like, I think I can.
H
I can deduce that, but because there are, like, two name fields, an outer one and two in the conditional, it doesn't feel like, the outer one feels like they're all labels, not in the Kubernetes sense, but like they're all labeling something, as opposed to the inner ones, which are references to something else, and the upper one is like an internal label that's being used in the supply chain. I don't have an alternative to it; it's just, like, when I first read it.
H
I was like, oh, you're naming the inner ones. Oh no, no, those are the references for the other things. So I don't know if it means, like, the kind should travel with the options, especially if, like, we introduce kinds with compatible output, you know, duck types, in the future, maybe. But I understand the value of saying no, they have to be exactly the same kind, because that keeps it very strict, right, and makes sure that everything plugs together.
C
Okay, we had a conversation that touched on that with snippets: snippets essentially will need to be implicitly typed to output, for example, a source or an image. So yeah, at the moment I would say this is the most compatible, because, you know, here's a templateRef, a template.
C
Okay, I'm going to hand things back over to David, though I don't know if Marty's or mine is next.
C
Okay, give me a second.
C
Yeah, so this is talking about artifacts reporting. Essentially, we want to be able to establish provenance: hey, I've got some output at the end of my supply chain; I want to know which commit resulted in this, or which intermediate stage in the process got me there. In order to allow such reasoning to be done, in order to allow tooling that could report that, we need to establish provenance, a couple of things, so yeah.
C
This builds off of RFC 14 for the motivation and use case, so I'll just kind of point back to that. The most recent discussion that we'd had there came up with this shape of what artifacts would be: we have a list of artifacts, and those artifacts are of type either source, image, or config.
C
We would then expose those not as an id, not as a hash, but just as their base values, and we would report what was the object that emitted this artifact.
C
One change, I have been spiking on this and haven't updated it in my spike, because the spec started drawing in other necessary pre-components, but one change I would suggest, and this really comes from Josh in our previous discussion: we considered the id for this artifact to be really just the output itself.
C
Let's say you have a git repository, a Flux GitRepository, that outputs a url and revision. That artifact is distinct from the next step, the source tester, the Tekton pipeline that tests it: the url and revision are the same, but obviously the resource that they came off of is different, and we should be able to disambiguate between those as artifacts in the supply chain.
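A minimal sketch of how such a status might look, assuming illustrative field names (id, resource, from are placeholders for this discussion, not the RFC's final shape), with two artifacts that share the same url and revision but come from different resources:

```yaml
# Hypothetical sketch of the proposed artifacts status; field names illustrative.
status:
  artifacts:
    - id: a1                      # unique per emitting template, even if values repeat
      type: source
      source:
        url: https://github.com/example/app
        revision: abc123
      resource: source-provider   # the object that emitted this artifact
    - id: a2
      type: source                # same url/revision, but emitted by the tester
      source:
        url: https://github.com/example/app
        revision: abc123
      resource: source-tester
      from: [a1]                  # provenance link back to the previous artifact
```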
C
I mentioned that, and then, if all that is accepted, an implementation of this is fairly straightforward. And then the more complicated thing, that we'll need another RFC for, work that was proposed long ago, is this from field. Establishing, let's say you've got a kpack Image that is emitting an image.
C
You know, it's very easy to read off of this to say: oh yeah, here's that output. But associating that with what was the input of the previous step that led to this output requires some change in how Cartographer works, because at the moment we don't have enough to track that. But this RFC assumes that that work will be done so that we can fill out this from field and nail down tying this id to some.
C
You know, for example, from an image id, we tie it to some previous source id.
H
Quick question before you move on, is that okay? Oh, thank you, it froze, sorry. So I'm a little confused: the, you know, 'passed' is resources where this id was an input and an output, right? Like a source tester.
C
Fields: this should be part of that. I think this 'passed' needs to be part of that id.
H
Okay, maybe I asked this differently than I should have. So you have a pipeline, you have a supply chain, you have a commit that's, you know, going through these individual steps. 'Passed' is the last one where it was both an input and an output.
B
Like, it has nothing to do with the fact that it was an input and an output; it's always an output, and it is always strictly from the resource that produced it. And so you would get an artifact for the very first item in your graph, like your git repository, your very first source thing: it doesn't take an input, right, but it produces an artifact.
B
You would have a second artifact that has the same url and revision, right; it just points to a separate resource that produced it, because it's like saying that it came from here, and it was then again validated by this next step in the supply chain. And so every template that produces an output produces a unique artifact.
H
What's the value of separating? Is it just because it's easier to implement, to separate those things in this tree? To me, the value of the artifact graph is to know this artifact turned into this artifact turned into this artifact, and then also maybe, like, if an artifact is passing through validation steps, knowing that it was validated. As a user, for this to be maximally useful to me, it'd be like: I find the commit and I have all the information; I know what stage it passes validation for, and I know what it came from. What's the use of separating the same output of different steps into different artifacts and giving those different artifact ids?
B
I don't have an example of, like, something that runs three source templates and then builds an image and then runs another, and then uses that same artifact as, like, a source later on, but like.
H
You're saying it encodes the validation tree; it lets you understand the, yes, the order of validation, a complex order of validation, the nonlinear order of validations. Got it, yeah. And so we want to preserve that information, and then we want to calculate, like, if we need to draw a tree or provide information about an artifact, we need to calculate that later, based on the data. Is that right?
H
Okay, maybe 'artifacts' isn't the right term, because then it's not a list of artifacts; it's like a list of outputs, or like.
H
Now, if you have every step accounted for, can you follow back through the tree and then look for the equivalent of the revision of the things that led into the first step that generated it? Why is there, what's the issue around from, if we keep track of the whole tree?
C
I mean, you could look at the supply chain that you have at the moment and say: oh, I see this is an output of this step, and here's a bunch of outputs of the source provider, and here's a bunch of outputs from the image builder. But to associate those, you need this from field.
H
You know, have created the thing, and then, yeah, we could descope the from field from this kind of conversation about this RFC, and at least we could still draw an artifact graph that shows the relationships between digests and revisions and whatever. It's just going to be, you know, it might say that, like, your CI repo input into your image build step is also a commit that led to the creation of an image. Hold on, you think we don't need 'passed'?
H
From? I'm saying, if you have, yes, I think, but maybe something a little more specific than that. I think I'm saying: if you have this tree of every output, right, of every resource, capital R, I don't know how you want to say that; if you capture every output, even if it's the same, and you capture all the connections between them, then you do kind of know the inputs that led to an output, because you can find, when, hold on, that, yeah.
B
Is from this kpack Image, right: I got this image url from this kpack Image object, and the reference to that kpack Image object is important because we need to take that object's spec into consideration when we want to hash the artifact, because, you know, if we get two different artifacts from two different underlying kpack objects.
H
Graph, yep. And so for from, you're trying to capture the inputs. Is there a reason, I see id and then, oh, I see: the reason there's no type there is because the shas are all going to be unique across all types, and so, yeah, okay.
H
Yeah, yeah, got it. And in the case of from: is the reason that from is difficult that we are still trying to figure out the right observed generation strategy for mapping inputs to outputs? Yeah, yeah, cool, got it. Thank you, perfect.
B
This, yeah. Can we talk about what Stephen just mentioned, where this necessitates the fact that we are strict about matching the spec to an observed generation?
C
Mhm, oh yeah, there you go, all right. So we've got this issue that has been hanging around for a while about matching, yeah. So we take kpack.
C
I want to submit a kpack Image; I submit it. Its observedGeneration will almost immediately be one: I've submitted the object, and when I read the status it'll say, hey, I'm processing this thing, my condition is Unknown, my latestImage is not yet filled, but I'm still generation one. And then sometime later it will succeed; the status will change, it will have succeeded, it will still be generation one, latestImage will be filled, and the status will be happy.
C
Let's imagine that even before that happens, I submit a new definition of the image. Well, almost immediately kpack will say: well, I'm generation two, I'm totally reporting to you everything that I know about the spec that you've submitted; it's still in an Unknown state, latestImage still hasn't been filled. And at some point those fields get filled, and observedGeneration will still remain two, and at that point it's impossible to reason about which spec this image is representing.
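The ambiguity described above can be seen in a status like the following sketch (values illustrative): once observedGeneration has moved on, there is no way to tell which generation's spec produced the image that eventually lands in the status.

```yaml
# Illustrative kpack Image status mid-update: observedGeneration has moved to 2,
# but the build that eventually fills latestImage may have started under generation 1.
status:
  observedGeneration: 2
  conditions:
    - type: Ready
      status: Unknown       # a build is still in flight
  latestImage: ""           # once filled, which spec does it correspond to?
```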
C
And so, in order to handle that, the strategy that's been proposed is: let's not submit new specs to a resource until that resource has already finished processing, until it's already reached a succeeded state or a failed state. I have a branch where I've taken 28, because this represents a user-facing API change, and a branch where I've started the RFC. It's mostly ready. Sorry, yeah, go ahead.
H
Really quickly, before you move to the next thing: I think the logic there covers a case almost exactly for how you'd, you know, be able to hold off on spec updates to understand a mapping between the two. I would be a little more specific: it's more like, if you control the spec, then you can guarantee knowledge of the mapping, because it's not just that you hold off on spec updates.
H
I have a new spec update; I'm going to give up on creating any resolution about the previous thing, for whatever reason, right, it never ended up anywhere useful. And then you can decide to update the spec and just know that you're not supposed to, you know, pass the information along, if that makes sense. So you do have a little bit more control than just holding off on spec updates.
C
I would argue that accepting this RFC and saying that we need a from ties our hands and says that we can't update. So I think, if you disagree, then I'd like you to take that again, and then I'll start, because I think.
H
I think once I articulate this well, it will be obvious, or, you know, this won't seem as interesting a thing, if that makes sense. It's like: you could update the spec proactively, right, as long as you're willing, on the other end, to definitely not promote anything forward until you know that the spec update has, you know, propagated fully through, right.
H
You have the option of just saying: yep, if the resource is in a bad state, throw it all away, and then you could move faster, but you might lose good outputs, right? If you wanted to, you control both sides.
C
That is fair. I would say that, so yes, that is correct, but it presents a problem of the system getting choked by fast commits, correct? Yes.
C
If they come in faster than your image builds, you would never get an output, yeah.
C
No, and so, yeah, there's this, I still need to update this to RFC 20. This isn't even in, I haven't put it in draft state yet, but what we're proposing is that we use observedCompletion, which has a succeeded field. Sorry, let me back up: that we use what's on the deployment template currently.
C
We'd start from there. What those are, are observedCompletion and observedMatches; it's unclear to me that observedMatches would get leveraged at any point, so maybe we want to just keep it to observedCompletion. There are a couple of limitations, a couple of possible extensions that I list out here. I'm happy to talk about these now, or we can move on to things that we're.
H
All I would say, very brief feedback, is: I think the problem is that conditions aren't very well standardized, and so oftentimes you really need to change it. But if there's a default we can use, you know, like Ready false, Ready true, something like that, that works across kpack and some other things, right, that's common enough, it might be worth having a default value for that.
C
I think that is fine, as long as we don't come up against some resource that somebody would want to coordinate where Ready true didn't actually indicate success. If it could have Ready true and it wasn't successful, then we'd be in trouble. Yeah.
A
Oh, okay. Now, thank you all again for your time, yeah. It's a shame that we didn't have time to discuss the RFC process RFC, but hopefully we will be able to discuss that in the office hours session.