From YouTube: Cartographer community meeting - Nov. 3rd 2021
Description
Community meetings happen each Wednesday at 8:00 AM PT / 11:00 AM EDT
See the agenda here (https://bit.ly/2Z67z08), add any topic you may want to discuss and join us live!
00:00 Intro
01:16 The TL;DR
03:00 RFC 014 discussion - Change tracking
57:38 Deployment as an input - Issue 289 discussion
#devops
A
Okay, yeah, feel free to add yourselves to the list. Today we are hoping to have some…
A
Okay, well, again, welcome everyone, and let's jump straight to the TL;DR. I've been following the engagement stats for the recordings on YouTube, and now that we have a TL;DR at the beginning, it seems to be increasing the attention, or the engagement, of the users, so they're watching the recording. So I think it's useful now. So thank you, Joshua, for putting up the notes here.
B
Yeah, no problem, so I guess we'll jump right into it. So, what's new this week: we've got a big breaking change. We have now renamed Pipeline; Pipeline was kind of poorly named, so we've gone with Runnable (Runnable, "run" service, whatever you want to call it), so now the CRD pairings make a bit more sense.
B
So now we have a Runnable, which has a ClusterRunTemplate; so again, that's a big breaking change. Okay, and then the things we have coming up: I think the project board is the first item there, so everyone can take a look at our whole roadmap. But the big items are, in the supply chain, we're looking at using informers now, instead of just doing some short polling on the reconcile loop.
B
So that's our project board right there. So, yeah: for the supply chain, we're just using informers instead of short polling. And for delivery, we started working on promoting a deployment, which is, you know, the idea that you're going to be running some checks against the deployment after it has succeeded or failed. And then, after that, we're going to start looking at verifying a deployment, which is different from promoting in the sense that verifying a deployment means that you might actually be running tests or checks against an active deployment.
A
Awesome, thank you so much, Joshua. Okay, that's great! In terms of new RFCs, we have RFC 14, which was created, I believe, last week, and it's been very active. Well, I was expecting Paul Warren to join.
A
He was supposed to join, but, well, it's early for him. So meanwhile, given some of the active conversation here in this RFC, would someone else like to jump in and comment on the current status?
C
I mean, I pointed out that there's already an RFC 3 that addresses the same use case and notes that an alternative approach is the approach laid out here, and that Steven commented in RFC 3 that he would prefer to keep state on the workload.
C
I will say that one of my concerns is the workload: my impression is the workload is meant to be this dev-friendly artifact, and if we start storing an immense amount of state on that, I worry that it becomes a little…
D
I think some prior art here is, I think, Argo uses their, is it some resource the user interacts with? I forget, to hold a similar tree of, you know, executions, and just the state of workflow runs. Yeah, Sierra mentions it uses the workflow itself that the workflow run is based off of. So that's part of the influence: there's some prior art for a thing that looks kind of like this that we could base it off of.
D
I think it's also kind of, you know, the flip side: yes, we are putting a lot of data in that object, that a user might access a lot, but that data is something the user is going to care deeply about, because it describes where their artifacts are in the progression through the infrastructure created by their workloads. That's kind of my thinking there.
E
I guess my opinion would be that, so long as it's in the status, then it's, you know, a jq away from filtering what you need to know. I think most people are fairly okay with filtering their objects.
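The "a jq away" point can be sketched like this. The status shape and field names below are made up for illustration (they are not the actual Cartographer schema, which was still under discussion in the RFC); the point is just that a flat status is one filter away from the answer.

```python
import json

# Hypothetical workload status: a flat list of produced outputs.
# Resource names and fields here are illustrative only.
status = json.loads("""
{
  "outputs": [
    {"resource": "source-provider", "type": "source", "revision": "abc123"},
    {"resource": "image-builder",   "type": "image",  "image": "registry/app@sha256:feed"},
    {"resource": "config-writer",   "type": "config", "revision": "def456"}
  ]
}
""")

# Equivalent of `jq '.outputs[] | select(.type == "image")'`:
images = [o for o in status["outputs"] if o["type"] == "image"]
print(images[0]["image"])
```

The same filter works against `kubectl get ... -o json` output piped through jq; the structure, not the tool, is what makes it cheap.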
D
We could also make kubectl describe do a, you know, prettier version of that tree, and kind of the status of the current execution, which is a nice advantage of keeping it in the context of that resource: there is a way to see it in a nicer way.
B
So I guess maybe we could talk about the two different approaches that are proposed here. So if we actually look at what the status might look like in this RFC: the original proposal was very output-centric in the status, where, once you produce an output, then you can link back to inputs as they were produced.
C
Okay, I think of them as a tree. That has… excuse me for just a moment, I'm gonna find a quieter spot.
B
I think either could work, right. It's just, in the original RFC, the output for source omits the URL portion of it, and I think that might be important. And so we would just need to figure out a way of referencing previous outputs based on, like, a multi-field reference. I mean, it could be simple, right; we could just name them or something like that.
D
But yeah, I was wondering, like: oh, are you saying that there's no commit identifier on source, so you'd have to key off the URI, and maybe there's a risk that one of the types doesn't have a unique ID, so it'd be hard to key back, or something like that?
B
The thing was just, I figured that this approach might reduce a bit of duplication, depending on how the keying works and stuff. But I worry about…
D
If you have this tree, you kind of have to traverse the tree every time you want to find something. It probably isn't going to make a difference performance-wise, unless you had some absurdly long supply chain, but, you know, I think that is kind of an advantage of keeping all the artifacts at the top: they're all immediately referenceable.
B
Yeah, but the other advantage, I think, of a tree structure is it's just easier to look at if you're looking at the status, right. I mean, maybe we don't want people looking at the status, and maybe we are going to have a parser of sorts, but a flat object, an array of outputs, is not that easy to look at when trying to figure out what's going on.
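The flat-versus-tree trade-off being discussed can be sketched quickly. Assuming a hypothetical flat output list where each entry names the output it consumed (the `name`/`consumes` fields are invented for this sketch, not part of any real schema), the tree view is cheap to reconstruct for display:

```python
# Hypothetical flat output list; `consumes` names the upstream output,
# or is None for the first resource in the chain.
outputs = [
    {"name": "source", "consumes": None},
    {"name": "image", "consumes": "source"},
    {"name": "config", "consumes": "image"},
]

def render(outputs, parent=None, depth=0):
    """Re-assemble the flat list into an indented tree for human eyes."""
    lines = []
    for o in outputs:
        if o["consumes"] == parent:
            lines.append("  " * depth + o["name"])
            lines.extend(render(outputs, o["name"], depth + 1))
    return lines

print("\n".join(render(outputs)))
```

This is one way the two positions in the conversation can coexist: store flat (immediately referenceable), render as a tree (easy to look at).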
A
Yeah, here's Mr. Paul Warren. Paul, welcome; you're the original author of RFC 14. "Sorry, I had problems with my computer this morning." Welcome, and thank you for joining. "No problem, thank you for looking at the RFC."
A
Yeah, we're discussing some possible approaches to address this, and I just wanted to hear the original idea behind the RFC, and what your opinions are on the discussions here in the RFC.
G
Yeah, I had a very quick look at the comments last night, right before I went to bed, so I really haven't had much time to digest them yet, unfortunately. But I can spend some time on it today and comment on the actual RFC itself if you want. But as regards the last conversation, I mean, everything everyone is saying makes sense, yeah; I can see the pros and cons of all approaches.
B
But Paul, do you mind giving a quick background on your use case and what this solves for you? ("Absolutely, yeah.")
G
For sure. So, when you're a consumer of a supply chain: I mean, we have this technology, and it behaves in certain ways. You know, by design it kind of strings together a set of what would be independent objects that have backing controllers, that can all do work, and the supply chain kind of choreographs these together (hence why it was originally called Choreographer, or something like that). And that behavior is all very good and useful for automated workflows.
G
But for workflows that are more user-centric, I think it's going to be necessary to impose some structure on top of that. A case in point being our tooling: if you can imagine you're a developer sitting in an IDE, and you've made some changes to your code, and you then want to debug your code, either automatically or personally, if you're just manually doing this, you know, when you send your changes, you have to wait for the source code to become a running thing before you can actually connect to that instance and start debugging it.
G
So we need to wait, right; we need to wait around for that to happen. Like I said, the user could do it manually, but we could do it automatically. And, you know, looking at the prior art that was out there, looking at the fact that Kubernetes itself was playing around with a kubectl wait, it seemed that there was some precedent there to try and impose these more imperative workflows on top of the kind of eventual workflows of the system underneath.
G
So that was really the genesis of it, in a nutshell. There is a bit of a last-mile problem here, in my opinion: when you look at what we have to do, we really do have to wait for the eventual pod to be running, because when we try and connect the debugger, that's the thing we're actually connecting to.
G
Obviously we're not connecting to the kapp app or the Knative service or the ReplicaSet underneath that; we're actually connecting to the pod. So in some senses this doesn't go far enough, because there is a little bit of a last-mile problem there for us. But I thought it was worth investigating whether there was enough utility in this to kind of stand alone, separately from that, and we can…

We can solve the last-mile problem separately. But there is, from my perspective, there is that issue with it; but that's something that we have worked around, and something we could continue to kind of work around.
E
It does. I was just wondering if I could restate it as a developer story…
E
No, no, you're fine. I just… I think you're talking about… I just want to get above the implementation for a second. So, as a developer: I would like to know when the current state of my code is available to debug in a pod. ("Yep.") All right, and I think that's… yeah, okay. And, I mean, I hear you that you might think it's your last-mile problem, but it's certainly worth some…
G
Yeah, yeah, absolutely. And the other aspect of this, which, you know, I'm not sure about myself: I have nowhere near as much context on Cartographer itself. I spent a day or so reading the code, so I roughly know how it works down there.
G
But of course I don't have the context. But the other thing that strikes me is, I know there's this delivery system, where the thing you're sending to your supply chain may not end up on the same cluster; it may end up being a commit somewhere else, which gets picked up and sent to other places, and I don't quite know how we deal with that in this scenario. Clearly this is oriented around single-cluster use cases.
E
Would that be desirable for the IDE-oriented use cases?

G
No.
H
So, just to double-check: what would the workflow from a developer be? Is this…
G
Really, one is debug, one is live update. And so, you know, the user initiates those in their IDE, and they may have made code changes, they may not have made code changes, and the job for us is to figure out when we can tell them that live update is ready to use, and when we physically try and connect the debugger.
G
Knowing that we've instigated that, or our user has instigated the action, we then pluck the information off the workload. So we say: well, all right, we can see source code changed, and we can see the debug flag was enabled, and then effectively we sit in a little wait loop, waiting for conditions to become true on the eventual pod.
G
So we have to go discover the pod; there's a little bit of complexity there, but not much. So from the workload we discover the pod. We know what changed when the workload was applied, so we sit in this little wait loop and we wait for various conditions to become true, which are typically like: does the pod have the right source code digest, which we manually apply over the top of the supply chain?
G
Today we use the OCI source label on the image, which we shove the source digest into, so it's present on the eventual image; and then we use a developer convention to grab that digest and put it on the actual pod. So it's there.
G
So, generally speaking: we wait for the source code to be present on the running pod; we wait for the relevant flag to be enabled, so that could be debug or live update, depending on what they triggered; and we wait for the pod to be running. That's kind of what we're doing in the IDE, in case that's any help in reasoning about a more generic version of that on the platform. So…
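The wait loop described above (source digest present, relevant flag enabled, pod running) can be sketched as a generic polling helper. This is a stand-in, not the actual IDE tooling code: the condition names are taken from the description, and `poll` here is a fake that simulates three observations of a pod instead of querying a cluster.

```python
import time

def wait_for(conditions, poll, timeout=60.0, interval=1.0):
    """Poll until every named condition is true, or give up at the deadline.

    `conditions` is a list of names; `poll()` returns the current truth
    value of each name (a stand-in for inspecting the discovered pod).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = poll()
        if all(state.get(c) for c in conditions):
            return True
        time.sleep(interval)
    return False

# Simulated pod observations: everything is ready on the third poll.
polls = iter([
    {"source_digest_matches": False, "debug_enabled": False, "pod_running": False},
    {"source_digest_matches": True,  "debug_enabled": True,  "pod_running": False},
    {"source_digest_matches": True,  "debug_enabled": True,  "pod_running": True},
])

ready = wait_for(
    ["source_digest_matches", "debug_enabled", "pod_running"],
    poll=lambda: next(polls),
    interval=0.01,
)
print(ready)
```

In a real client the `poll` callback would be replaced by a watch or a `kubectl wait`-style check against the pod; the shape of the loop is the same.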
G
Yeah, and the other interesting thing about it is, we've actually got rid of it now, but I could see it easily coming back: from a user perspective, we've written a supply chain (or in fact it was this way until a few days ago, we wrote a supply chain) that can trigger in multiple places at the same time.
G
So, live update. It was true for debug, but it's no longer true for debug; it's probably a better example given that we're discussing debug. So we had written a supply chain where the debug flag that you toggle on or off was an input into both the kpack build and the developer conventions, the convention service. And so, when you trigger that debug flag to be true, the supply chain…
G
Well, I'm probably not using the right terms, but in the terms I understand: the supply chain will trigger in multiple places. It will trigger both a kpack build and the convention service at the same time, which actually ends up causing you to have two revisions of your Knative service roll through in reasonably quick succession. And again, you know, from an IDE perspective, we kind of have to reason about that, because they may not have changed source code.
G
So the only thing we're waiting for is the debug flag. So it's entirely, you know… so we still have to wait for the right image to appear on the eventual pod.
G
We can't just try attaching to the first pod that comes into existence in that new revision that came from the trigger of the convention service, because we know there's one behind that. Now, like I said, we've actually changed that behavior, because the Java buildpack deprecated their build-time flag for debugging, so we only actually need to trigger the supply chain in one place.
G
You know, you only want to wait for the new revision where debug was turned on. And sometimes they didn't do anything, so they literally stopped debug and started debug again, and in those instances you need to know that you can connect the debugger right away. So those are the kind of three scenarios, actually, we're waiting for, and if…
G
I can add that detail in here, because there are some nuances in that. That's the problem, I think. And then, taking that more generally…
B
Interesting, right; to me, because… oh, I was gonna try and generalize that. It's the idea that, you know, your workload config is changing, and to me it's like: we have a question of whether or not your outputs are still valid at all if your underlying config has changed, right? Because you could be changing a debug flag, but your workload config could also change to point to a different, you know, a different URL or something like that, right; and in that case maybe it's worth just invalidating all of our outputs.
E
This is why I was asking you earlier about whether the workload was the root of the tree, because if we don't look at it that way, then we have multiple roots for the tree. That makes having a tree representation even more pointless to me. That's why I'm always averse to representing that information tree-wise unless you can guarantee it's a rooted tree.
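The rooted-tree objection can be stated mechanically. Assuming a parent-pointer representation (invented here for illustration), a tree view is only well defined when exactly one node has no parent; an externally triggered resource, like a build kicked off by a base-image bump, breaks that guarantee:

```python
def single_root(nodes):
    """Return the unique root of a parent-pointer map, or None when the
    structure has zero or several roots (i.e. it is not a rooted tree)."""
    roots = [n for n, parent in nodes.items() if parent is None]
    return roots[0] if len(roots) == 1 else None

# Rooted: everything ultimately hangs off the workload.
rooted = {"workload": None, "source": "workload", "image": "source"}

# Unrooted: an externally triggered build has no parent in the chain.
unrooted = {"source": None, "image": "source", "base-image-bump": None}

print(single_root(rooted))    # workload
print(single_root(unrooted))  # None
```

A check like this is what "guarantee it's a rooted tree" would amount to before committing to a tree-shaped status.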
E
So that was… but I think maybe that's solved by finding a way to make sure it's always rooted in some sort of, like, something where we can categorically say this cause, followed by this cause, will eventually lead to this one, and it can retrace that tree; or to treat it as rootless, and to have another mechanism, which may be more complicated for consumers such as Paul, right, to find reasons to claim that this is the output of what you asked for.
G
Yeah, I think, from my perspective, yeah, I think the workload is the root of the tree, but whether that holds true for more generic use cases, I don't know. The other thing is that the supply chain can be triggered in the middle by external events, which have nothing to do with us, too. So I think the classic example here is kpack.
G
In response to having a new base run image, it may do work, right. That controller may… sorry, Stephen, I'm using the wrong terms, I apologize; but the kpack controller may do some work, yielding a new image, which would then trigger the supply chain at the convention service.
D
The workload owns these kinds of services that, you know, process artifacts, right. But then, if you look at it, the tree we're talking about right now is a tree of artifacts, right: artifacts that have references to what pieces of infrastructure they've, you know, been processed by, that have references to other artifacts that were generated from them, that we've created a mapping for, right. The workload applies across all of the resources at the same time; its parameters hit…
D
It can be referenced by any template, right, and, while those parameters can change, it's providing input to everything simultaneously. It's not really passing artifacts forward; it's passing configuration forward. When you provide the source to Flux, you're not providing, you know, the source code to Flux; you're giving it the location of a git repo, a branch, not a particular revision, that it's going to then generate artifacts from, right. So, because it's an artifact tree, you know, and the workload doesn't produce any artifacts…
D
It just provides configuration to all the templates. I don't really see it as rooting that tree of artifacts; I think it is an unrooted tree. Every step in the supply chain could be totally unrelated to every other step, and the proper way to represent that is, you know, everything separately, with no references between them, right. And so… I don't know.
D
What makes the workload an artifact? The workload to me is like a kpack Image, right; it's a, you know, resource on the cluster that has some referenceable configuration, but it's not… the workload doesn't get promoted, right. It's static. The artifacts are immutable, and they move between the things, right; the workload is static and mutable, the user can change it over time, right, and it doesn't pass any kind of immutable configuration forward through the steps. So to me, the workload is a pretty separate concept.
E
That makes sense; I do understand what you're saying. I just… I wasn't really arguing about the shape of the representation, more about Paul's initial problem, which is: he needs to be able to relate initial state, as far as he's concerned, or a change of state, to when that event is eventually reconciled into something he needs to be able to debug, right.
D
I don't think calling the workload an artifact helps with that goal, right, because then you're drawing it… it's the same thing as every top-level thing existing at the top level, versus every top-level thing pointing to the same thing, right, and then having to figure out when something kind of isn't, like…
E
Also, it's insufficient, because it points to something like main, which is not, you know, not what anyone who's going to consume this tree needs. Maybe Paul will be fine, because he'll use a specific image reference instead, and you could treat it as an artifact, but not everyone could.
D
So, like, you know, for instance: a particular revision of source code is getting uploaded, right, and you can find that revision in the graph, and you can follow that revision to, you know, something that maybe is a cluster template that doesn't have any more outputs, and then you should know where you end up. So, instead of looking at the tree holistically, like, "I have to figure out the beginning and end of this thing": it doesn't have a beginning and end.
D
You're saying, like: if you have an artifact in the tree, some of the input that, you know, occurred to create that artifact is all of the parameters in the workload. I think that's true, but I also think that there are other sources of information, besides the workload, that are inputs into a resource that lead to the creation of the artifact, and kpack is another good example of this. So it's going to build an image: the workload inputs are an input into that image.
D
The source code that passed the unit test is an input into that image, but also the builder configuration on the cluster, you know, also has a really big effect on the input to that image. So if we want to capture, like, even more detailed information for reproducibility, we should do that, and I think we can capture that in the artifact tree, which is, I think, what you're saying; but it's harder than just also sourcing the workload. There's information that each…
E
Yeah, so I was just wondering if we can make an effort to short-circuit development that might be in the wrong direction here, by actually testing hypotheses, or, if anyone actually thinks that's necessary at this point: is it better just to start collecting statuses of each artifact and see how we go, or should we actually try some spikes?
E
I think at least Paul's use case is solvable, yeah. I think, I mean, Paul, do you think the information is… it's just because Paul presented an RFC and then has talked about some other edge cases this morning that might still not be covered there, and I was just wondering if we need to go a little deeper before we embark.
H
Is it a genuine, I guess, use case that you feel should be solved with Cartographer? I guess that would be my first thing. Is there a general consensus, a nod of the head or a shake of the head? Yeah? That's a good start, that's good. So, yeah, okay; well, that makes sense. If there's something, maybe we could just write up what a spike could look like, some kind of criteria.
D
I think a spike can mean different things to different people; that's maybe a thing to point out. For some people, maybe a spike is something you throw out completely after you're done and start over again with, you know, some TDD process or something; other people might view a spike as the first version of an implementation, to come back and improve later, right. So I think that depends on…
E
Right, well, the latter one is… I actually presented both options, so I'm obviously treating "spike" as the one where you throw it out, or that you don't expect to be finished work; you might keep it if it proves to be good. But the alternative was to start on the RFC and just present that.
H
My view on the spike is to quickly prove out whether this is valid, and any concerns that there are; if it works, build on top of it and go, you know, go and use it. So maybe this is just terminology, but I think it's the fastest way, if there's something that we want but we're not too sure about.
I
Sure. So, yeah, the concern that I had was with regards to, like: what if we end up generating these artifact traces but then missing details that are important? I would imagine that, oh, we have also an image repository that's keeping track of that golang image, so that if there's a bump to that image, then in this artifact trace that we're generating we can see that. Like, say two weeks ago we were testing with golang 1.13, but there was a bump, now it's 1.14, and things broke, right; and you can tell exactly why that broke, with which version of Go, things like that.
I
But then, every time it comes back to kpack, I'm always like: oh, but the builder will get an update under the hood, and how do we deal with those cases? Is it more of a guidance, where we should just go for, like: well, it might be that you have resources in your supply chain that are going to get updates under the hood, and, yeah, we're not going to be able to see them; you have to use other ways of figuring that trace out? Or, yeah, like…
C
kpack is one that we think about a lot, because there's that builder object, but it's no different from the source, because the source is looking at some git repository that gets updated outside in the world. And I would argue, further, that any resource that is determinative, that doesn't rely on any outside source, could just be folded in; it almost isn't… I think it's more different from the other artifacts that we have than it is similar. I think the default should be a mental model that says we have…
C
We have, like, our logical supply chain. I think the workload is a piece of that; it is a provision of information that is very observable to Cartographer. And then each node, each resource, represents something that's going to take all that observable information and mix it with some unobservable state in the world. And that's the value that each of all those resources that we're choreographing provide.
E
I think this highlights, maybe, that the use case is quite specific for Paul. There is one unknown, of which, like you said, sources are just another example, right. So one unknown is: what's the actual source code, right; and Paul needs to observe that the source code he knows about is finally picked up, which we get, because it ends up as an artifact during the process. And then there are knowns, which is the state, the parameterization he put into the workload, right.
E
I think in Paul's case (correct me if I'm wrong), but I think in Paul's case it doesn't matter that kpack wanted to build it again for some reason, right, as long as there's a stable delivery that he can target; and there may be edge cases around that where it may be problematic. And I think what it comes down to, in a much broader story, not one where you're in a loop on a developer machine but, like, getting to staging and into production, is that you want to know what caused you to get where you are, so that you can either, one, reproduce it, or, two, at least just know why something changed: the cause for change, right.
E
If the resources don't tell us the cause for change, we need to talk to the people who own those resources about improving them, right, because we need to know that; I mean, users need to know that information. If they do tell us what their resource causes are…
E
They should be findable in the tree, in the sense that we should be able to say: this is the image that got produced, and it was produced by this object's run. And then it will be about whether we want to log that information, or we want to copy forward state from those underlying stamped objects.
D
To add to that: I feel like the tree we're generating right now is just representative of the movement of artifacts between the infrastructure that Cartographer is creating, and there are other tools, like… I don't think we're ever going to be able to capture every single, you know, possible input. Inputs can change and trigger a change, and things can change arbitrarily between builds, right; but both of those can happen even in Tekton.
D
You reference an image by tag, instead of feeding it from another job, and it could be a different image on the subsequent run, right. There are other tools that handle that, like Tekton Chains, that give you that kind of traceability, so you can know: oh, this particular run, you know, came from all this metadata. You know, kpack can do something similar and capture…
D
Actually, kpack captures all this on the image that gets generated: all the buildpack versions, all the runtimes, actually a complete SBOM of, you know, every runtime dependency, with a high guarantee, that you can use for vulnerability scanning, with CPEs and purl identifiers. Like, every input into it ends up captured on the image in that case, right; so you can actually key from that image identifier and know: oh yes, it was this exact set of dependencies that triggered it.
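The "key from the image identifier" idea can be sketched as a lookup from an immutable digest to its recorded build inputs. Everything below is a stand-in for illustration: the digests, versions, and field names are invented, and a real build service like kpack records this as labels and SBOM metadata on the image itself rather than in a dict.

```python
# Stand-in for per-image build metadata a build service might record;
# digests and versions here are made up for the example.
build_metadata = {
    "registry/app@sha256:aaa": {"go": "1.13", "buildpack": "paketo-go@1.2.0"},
    "registry/app@sha256:bbb": {"go": "1.14", "buildpack": "paketo-go@1.2.1"},
}

def explain(digest):
    """Key from the immutable image identifier back to its recorded inputs."""
    meta = build_metadata.get(digest)
    if meta is None:
        return "no recorded inputs for %s" % digest
    return ", ".join("%s=%s" % kv for kv in sorted(meta.items()))

print(explain("registry/app@sha256:bbb"))
```

Because the digest is immutable, the answer to "which toolchain bump broke this build" stays attached to the artifact, which is exactly the property being discussed.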
D
So I think, because different tools have different ways of providing that, you know, complete artifact traceability (Tekton through Chains, kpack through an enormous amount of metadata), we can focus this tree on providing a representation of what's moving between the infrastructure that we've created, right, instead of, you know, trying to provide full traceability.
D
Inputs for all artifacts, ever; which I think we can do, but maybe not this way. I also agree with the idea that we could add additional keys to the status. So if you wanted to capture arbitrary additional information that the resources expose in their status, right; like, if you wanted the buildpacks to be part of that tree, for whatever reason, or if you wanted a Tekton Chains output of some sort (I haven't looked too closely at the project) to be associated with that…
D
…you know, particular thing in the tree: there could be optional things that you can reference in the spec of the template that pull from the status, just like we do for outputs, that are like, "you know, also capture this information." But, you know, I think we could worry about that later, if we wanted to.
E
If you ask us to, so that you can pull something out of the ownership tree, like the pod that got created or something; that would be useful, and it would probably be very much a local-dev solution, not used often in the staging and production world. But those are my thoughts; the information should be captured by the tools, and I think we agree on that, right: by the resources we use, not by Cartographer.
C
I think I heard a lot of consensus that we should try out a flat… yeah: the use case that we have right now is met by a flat tree of this output information. And don't forget that RFC; we can start writing some stories too.
D
I think there's a little bit of a question about prioritization. So I know this is, like, a big feature that, you know, Paul, you need soon for what you're working on, with kind of developer workflows; and so there's maybe a question of some contribution, or, you know, kind of having more folks, other than those who are just on the kind of core team of maintainers in the project, who can help build this forward. Also, yeah.
G
That is true. I can't speak to resources, obviously, but yeah, there was some discussion about that earlier on; that is definitely the case. And also, we should probably include someone from the CLI in this too, because the way I envisaged we would consume this is via the CLI, effectively. So the CLI would be the client to this thing; it has a wait flag today, which only works in a very small set of use cases.
G
So I would hope that the rules we were talking about earlier, the kind of tree navigation we were talking about, would end up in the CLI in some shape or form, so we probably need to loop them in.
G
I can. Lisa, my engineering manager for the IDE, is off this week, but I can let her know about this conversation and the request, and we can go from there. I think that's good.
A
G
Yeah, no problem. Are there any actions you want me to take on the RFC itself? I know there are some comments on there now. Would you like me to do something with those comments, or should I just leave them open for the time being?
C
Yeah, I was saying I think really the only thing that's hanging out... no, I think there are two issues that I heard that are hanging out. One we discussed at length, which is: should it be a tree structure or should it be a flat list?
C
B
E
I would like to see, on the RFC before it's ratified, the actual user stories or use cases. I'd just like to capture that a little clearer at the top.
D
C
All right, I was just going to say that one thing I also heard in our conversation, that I'm still puzzling over, is what should be done.
D
It's kind of like the same changes that might happen behind the scenes in kpack, right? Yes, some parameterization of the services changed, but that doesn't mean that artifact A, a particular commit, didn't pass from Flux into kpack. I think the harder thing is: what happens when the supply chain changes? Do we keep references to jobs that no longer exist? Do we try to preserve some of it? Do we blow it all away?
D
C
Yeah, I get it. I guess really it's...
C
D
I think the benefit you have is that the artifacts are immutable. So if you change your branch to a different branch, it would be a different source of commits, but the commits flowing through the pipeline right now would still be valid commits that came from a git repo and are traceable. It doesn't really invalidate the context of the graph. So I guess we're okay.
E
Yeah, and as to the supply chain, I feel like the only thing that would nuke the status is if the supply chain that was selected was different. I'm concerned about multiple selected supply chains; we haven't talked about that. But if you select for a different supply chain, then the old one is not going to be updated anymore. Maybe we have a per-supply-chain status tree, but if you change the current one in any way, shape, or form, it doesn't change what was; it only changes...
E
What becomes. So quite often you can make changes to a supply chain that leave several objects in play, right? So I don't think we need to worry too much about that until we hit edge cases there. I'm just much more concerned about selecting for multiple supply chains; that would be... and I think we can solve that just by enumerating them.
D
Yeah, I think eventually it could be a list in the status of the trees, keyed against the supply chain. But before we wrap up, I just wanted to get back to the topic of whether we do the tree or the flat status. I think I'm a big fan of the flat status, but one thing I said earlier is about kubectl describe, you know, being able to visualize there. You can't get... there's an open issue about it.
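As a rough sketch of that idea (field names here are hypothetical, not taken from RFC 014 or the actual Workload CRD), a flat outputs list keyed by supply chain might read something like:

```yaml
# Hypothetical sketch only: resource and field names are illustrative,
# not the schema proposed in RFC 014.
status:
  supplyChains:
    - name: source-to-image      # keyed against the selected supply chain
      outputs:                   # flat list rather than a nested tree
        - resource: source-provider
          kind: GitRepository
          artifact: "commit from https://github.com/example/app"
        - resource: image-builder
          kind: Image
          artifact: "registry.example.com/app@sha256:..."
```

A flat list like this stays easy to render in kubectl describe output, while keying on the supply chain name leaves room for the multiple-selected-supply-chains concern raised earlier.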
D
E
D
A
E
Right, yeah. Very quickly, I would very much like to talk about the top item, as quickly as we can, to find out if there's a way that we can act on it, which is the two items needed, or a name, in this finalized issue.
E
So this is one issue I'd just like us to look at, which is the one that Stephen has posted about receiving a deployment instead of receiving the source. I'd just like to get ratification on that, so that we could implement rather than change, because we haven't written the code yet, or we've only written a tiny amount of it, so it'd be easier to change today than later. It doesn't have to be solved here, but I just would like us to decide whether we're going to do it or not.
D
I'd just like to respond to your comment. You've got to drop off in a minute, so I'll be very quick. To respond to your comment in that issue about the motivation for moving the deployment to the beginning of that resource: I think it makes it more explicit. At the end of that chain of deployments, you provide the deployment as a source value to something else explicitly, like you can see, "I'm taking from deployment and putting it into source."
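To illustrate the idea (a sketch with hypothetical template and resource names, not necessarily the exact Cartographer delivery schema), the hand-off could be spelled out in the delivery like so:

```yaml
# Hypothetical sketch only: names and structure are illustrative.
apiVersion: carto.run/v1alpha1
kind: ClusterDelivery
spec:
  resources:
    - name: deployer
      templateRef:
        kind: ClusterDeploymentTemplate
        name: app-deploy
    - name: post-deploy-checks
      templateRef:
        kind: ClusterSourceTemplate
        name: smoke-tests
      sources:
        - resource: deployer   # explicitly "taking from deployment..."
          name: source         # "...and putting it into source"
```

The point being discussed is that the consumer declares the deployment as its source input, so the wiring is visible in the spec rather than implicit.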
E
D
Isn't it a Tekton pipeline? If you're running unit tests in that example, isn't that a source that takes a source? Yes? No, I'm not... I...
E
I was saying typically, not all the time. Yes, yeah, that was all, and I don't think it's a big issue; I just wanted to raise it for thought. It's up to everyone whether they want to ratify it. The other one was also, as you mentioned in here, a cluster deployment validation template. That's a heck of a mouthful, but I kind of like "deployment validation" over the... what was it, what do we call it? A delivery template? Delivery...
E
B
I like the new changes too, so I'm in favor.
E
C
Just in terms of a little bit of UX for people writing them: rather than a cluster deployment template, we could do a cluster deploy template, and rather than "deployment validation," we could do "deploy check."
E
"Validation" seems like a better word to me. On the deploy versus deployment question: I always thought it was going to be "deploy," but it got changed to "deployment" at some point, I'm not sure why. I think we need to raise that and find out why that changed, because it was "deploy" originally, in the original RFC.
F
C
B
I think with the changes in general, though, with the source, I don't think anyone's debating that, right? Having...