From YouTube: Kubernetes SIG Release Bi-Weekly Meeting for 20220726
A: Hey everyone, and welcome to our SIG Release meeting. This meeting adheres to the CNCF Code of Conduct, so I would like to remind you to just be excellent to each other. As always, I pasted a link to the agenda in the chat, so please add yourself to the list of attendees, and let's jump over to the recurring topics.
B: Probably, again. I mean, we have a couple of topics on the agenda today regarding that, but I think one of the main ones is that we've been meeting over the last few weeks with Lori to get some roadmapping sessions done, and I think she'll present a little bit about that. The other one is that we are kicking off work on the SLSA level 3 compliance effort, so more on that in some of the upcoming topics.
D: I was doing the updates for Go 1.19, but then I took a vacation, and I saw this week that Dims took over. I feel that this might hit the same issue we got last time with Go 1.18, where we got some memory leaks and other things, and I'm not sure if we want to do 1.25 and move to Go 1.19 right away, because it's very close to the code freeze and all this stuff.
C: Sorry, I think James and Jordan are pushing to basically get Go 1.19 for 1.25. I think we had a quick conversation with them and they were okay with it. We basically shipped Go 1.19 after code freeze in the k/k repo; it's just about how the GA version will be tested, and that's going to give us a go or no-go on this. It's about the schedule: around the same time we cut the release, Go 1.19 is going to be GA. That's the goal.
D: And regarding the others, Go 1.17 and 1.18: I saw last week that those have some patches, but I didn't see any issue or anything on our side to update them. I'm going to work on that after this meeting, create the issues, and start working on the updates for those.
F: Say that again, James. So, do we have a plan for if there's a show-stopping bug in Go 1.19 that blocks the release, like what happened last time? Yeah.
D: I'm going to speak with Dims and see, because when we opened that, I made sure that we were not going to merge, but it looks like Dims and Liggitt are willing to do that. So I'm going to sync and see the plans; I don't have an answer right now.
E: I also know that, I mean, no one has communicated this to me directly, but just fishing in conversations around different Slack channels, I have seen people propose, well, specifically Liggitt and Dims propose, an additional RC, which of course we can do.
E: But in my opinion, all of that would only make sense if we end up delaying everything by a week or two. I don't want to call any shots there, but to your question, James: I think that an RC would be a good way to gauge the situation, and then, if things break really badly, well, then I don't know, but I don't think they will.
G: Yeah, thanks James for the question; I was going to ask the same thing for the current release. I saw some discussion around the Go version bumping, and I saw someone already proposed it, so I'm just curious whether it's possible to get the Go version reverted back to 1.18 and not postpone the release date, if that's an option.
G: Yeah, from the conversation it seems they are also open to reverting back to Go 1.18 if it's buggy, but we'll see. Thank you.
C: I think there are multiple factors we need to consider, because by flipping to 1.19 we identified a bug. I think James and Jordan identified a bug and raised the issue upstream with the Go team. I think the fix will land in the GA version of 1.19, and from there we can try to merge that into the master branch. If it's not possible, we can basically say: okay, we either go with Go 1.19, or we go with Go 1.18 for 1.25.
G: Yes, Cici here; I'm currently the release lead for 1.25, and thanks, Ruby, for adding that. We will have the mid-cycle retro scheduled tomorrow. The major milestone we currently have is next week's code freeze; I've sent out the reminder to the community, and I'll send out another reminder maybe later this week or early next week.
G: So the only issue we are not sure of is the Go version; besides that, everything is working fine as expected, and we have the alpha cut ready, thanks to Veronica. So we're looking forward to the code freeze.
F: There's a whole bunch of reasons for this, largely making it easier for contributors to continue being a part of the team, because right now you have to go through the shadow survey multiple times, which is sub-optimal in my opinion. After a bunch of discussions, in person at KubeCon North America and KubeCon Europe and online at various points, I ended up writing this proposal, which I then kind of left for like six weeks while I just kind of fell over.
F: But now I'm back, and it's here. So this is KEP 3344. I've updated it; there are some comments on there, some from Sasha as well, which I need to go through, but I think it's in a reasonable state. Basically I just wanted to highlight, (a), that it's there.
F: I'd like people to take a look at it if they're interested. There are a couple of things in particular about it which are of note. One is that, if we did this, it would be a straight-to-GA process enhancement; there's no real possibility for an alpha or a beta unless we wanted to get into the world of having two parallel release teams, which just sounds horrendous. So there's no real way to trial this; we're just going to do it. Although the back-out operation, if we don't like it, is just to go back to the old mechanism, the old release process; it's not a permanent change if we don't like it, for example. And the other question would be: is there enough time to do this for 1.26?
F: I don't know the answer to that question; I suspect that the only person here who might know would be Rey as the 1.25 Emeritus Adviser. So yeah, those are the two things I'd be really interested in people's opinions on: (a), I don't have a concern about the idea of doing this straight to GA, but does anyone else? And (b), does anyone have a concern about 1.26?
H: Yeah, so in the enhancements subproject meeting last week we actually talked about a new KEP template for process changes, so that processes for SIGs, things like this, might have to go straight to GA. So it has been discussed, but nothing is finalized.
H: If the KEP does go through, I also don't know if we have time to do this for 1.26. I think we do. Well, actually, no, because for 1.26 the shadow selection process starts in less than a month. Do we have enough time to communicate this thoroughly, and how much communication do we need? How do we gauge that? How do we determine how we communicate?
H: Can we communicate this enough to make this change for the community? In my head I'm saying no for 1.26, just because we're going to start the announcement for shadows in pretty much two or three weeks. Those are just my opinions.
F: Yeah, I mean, most of the overhead for communication would actually be to existing members; that's who it's really going to change for. In my original proposal it changed the shadow survey as well, but Sasha made the point that we don't need to do that and can just leave the shadow survey as it is. So the change would end up being that, if you are a brand-new shadow, you go through one process. I don't know if that changes your opinion at all, if that lowers the communication overhead.
H: I think it does, if we start pretty soon on this. And I think a big determining factor is also whether this is merged and when it is merged: do we start communicating then, or do we start communicating now, as an open PR?
F: That works for me, okay. I'll go through your changes, Sasha, and largely accept them; I'm pretty happy with them. Then I'll send an email to k-dev this afternoon, and we'll see where we stand, I guess. And Rey, I guess we'll keep in contact and see what the feeling is around it. If it's 1.27, it's not the end of the world; I mean, I'd like it to be 1.26, but you know, it is what it is.
A: Yeah, I mean, the question is whether we have to add documentation, and it would probably be better to have one more release for doing the change and also for setting up the roster, right? We have to get an alias in the k8s.io repository, add all the members, and things like that; get everything ready.
A: Now we can move over to Adolfo, who has two topics; the first one is the overview of the proposed file signing flow.
B: All right, so this is coming from one of the roadmapping sessions that we had yesterday regarding file signing.
B: There have been some plans and ideas to build this into krel, but after thinking about it a little bit, and about how we're going to be sending the attestations, which is the next topic to come, I think we need to do this in a different way. So this is how we are doing image signing today.
B: Inside of the release process, container images have, as of now, two signatures: one that happens during staging, inside of our process, and then the Kubernetes-wide organization signature during promotion. So if you look at the first diagram here: we build the images inside of staging, we sign them, we add a second signature during image promotion, and then we release those.
B: So the idea is to split file signing out of staging and have a new step that we can use to sign; I mean, the most pressing need now is the files, and eventually we also need to start moving the signing of the images, the first signature, out of staging.
B: So this is what it would look like: we would build and stage the images and files, and then add a second step in our Google Cloud Build run to sign, for now, the files, and eventually also move the images out. The idea is that we know what we built from staging, because we have the SBOM for those; that gives us the list of artifacts, and then we sign them and store the signatures and the files in the staging bucket.
B: And then we push the signatures to the staging registries, and then we carry on as we used to with image promotion and release. There's also this new box about file promotion, which I'll get into in a little bit. But for now, what I wanted to call out is this:
B: We had been thinking about building signing capabilities for files into krel, but if we are going to split signing out into a regular Google Cloud Build step, we might as well just use cosign to generate the signatures for those artifacts. I mean, unless I'm missing some reason that we need to build signing into krel, I think we can do it just by adding a step that uses cosign to sign, and this applies both to our files and to images.
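A post-staging signing step like this first needs the list of staged artifacts and their digests before each file can be handed to cosign. The sketch below is a rough illustration, not krel's actual code: the directory layout and function name are assumptions, and the cosign call is only shown as a comment, since it needs the cosign binary and signing credentials.

```python
import hashlib
import pathlib

def list_staged_artifacts(stage_dir: str) -> dict[str, str]:
    """Walk a hypothetical staged-artifacts directory and return a map
    of {relative path: sha256 hex digest} for every file found.

    In a real post-staging step, each path in this map would then be
    signed, e.g. `cosign sign-blob --output-signature <f>.sig <f>`
    (not invoked here; it requires keys and the cosign binary).
    """
    root = pathlib.Path(stage_dir)
    digests = {}
    for path in sorted(p for p in root.rglob("*") if p.is_file()):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        digests[str(path.relative_to(root))] = digest
    return digests
```

In the flow discussed above, the artifact list would come from the SBOM produced during staging; hashing the files directly is just the simplest stand-in.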
B: If we start signing the files after we stage them, this will not result in a benefit for any projects outside of Kubernetes itself, because the only files that we will be signing would be those that we build during the build process of Kubernetes.
B: So if you remember, and you've been in some of the release engineering discussions, we're also working on file promotion, and the same promoter that promotes images is also capable of promoting files.
B: There are very few projects that use it today, but I think we can still use our code to move file signing capabilities into kpromo when promoting files, and in that way we can extend the benefit of signing artifacts to the rest of the community. And so, yesterday, during the roadmapping session...
B: ...a lot of you were surprised. By the way, I tried to break down the problem, because we didn't have this kind of context before, but I hope that seeing it this way helps you visualize things better. So right now what we have to decide is, first, if we want to split it out, and second, if we want to use vanilla cosign to sign our artifacts, or, if there's a reason, we can also build signing capabilities into krel.
B: Yeah, maybe I wasn't clear about that. Adding a step to sign the files during our Google Cloud Build runs will, with very little work, get us the benefit of ticking off the box that we need to finish the KEP on signing artifacts. But the problem with that approach is that it only applies to the regular Kubernetes release binaries, so kubectl and all the files that you find in the release bucket.
B: If we want to extend this and help the whole community sign their files, we need to build that out and also ensure that people start using the file promoter more, and this is a different tool and a different process than what the Kubernetes artifacts use today.
B: But as we are making some changes, especially now considering things like the rework of the packages, and since we're going to be moving things around a little bit more, we can consider building file signing capabilities into the promoter.
A: So I have two points here. The first one is that I think this new workflow is way better than the one we proposed in one of the KEPs, and I think we have to update the enhancement to clarify everything, and also collect together all the issues which are now somehow linked to this KEP.
A: So I'm kind of tending towards stopping the implementation, updating the enhancement, and then doing the implementation in the next release cycle, because we don't even have one month left, right? We would also have to update documentation for end users, and I see a risk that we are not able to do that.
A: Yeah. And the second thing is that, for using cosign as a plain CLI tool, I wouldn't expect it to be that simple. I would say we'd probably end up having a bash script somewhere which collects the files and does something, and that was the reason why we got rid of anago, right? We wanted to get rid of any bash, which is hardly testable for us, for example.
B: All right, so these are the open issues on k/release; if anybody wants to provide commentary or criticism, whatever, please feel free to do it there. Now, the next one, about provenance. This was another topic where people were surprised, because there is not very widespread knowledge about what provenance is.
B: One of the first questions that came out yesterday was: what is a provenance attestation? So I uploaded one of our attestations to be able to show you. Well, I just noticed that I took an older one, but this is basically what it looks like. Provenance attestations have two parts: a subject and a predicate. If you think about it like a phrase, a provenance attestation is basically telling you "I did this", which is the predicate, "to that", which is the subject. And then, if you take a look, this is one of the attestations that we produce with a Kubernetes release.
B: Currently, if you see here, we have the subjects, which are basically all of the artifacts that go out inside of the bucket right now. And then, if you go all the way down, you'll see the predicate, and in the predicate we're trying to explain, to someone looking at these documents and maybe trying to reproduce or audit our releases, what we did to get those artifacts.
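The subject/predicate structure being described maps directly onto the shape of an in-toto statement. Here is a minimal hand-assembled sketch of a SLSA v0.2 provenance statement: the field names follow the published in-toto/SLSA schemas, but all concrete values (digest, builder id, entry point, arguments) are illustrative placeholders rather than data from a real release.

```python
import json

def make_provenance(subjects, builder_id, entry_point, args):
    """Assemble a minimal in-toto statement with a SLSA v0.2 predicate.

    subjects: list of (artifact name, sha256 hex digest) pairs -- what
    the statement is about. Everything under "predicate" describes how
    those artifacts were produced.
    """
    return {
        "_type": "https://in-toto.io/Statement/v0.1",
        "predicateType": "https://slsa.dev/provenance/v0.2",
        "subject": [
            {"name": name, "digest": {"sha256": sha}}
            for name, sha in subjects
        ],
        "predicate": {
            "builder": {"id": builder_id},
            "buildType": "https://cloudbuild.googleapis.com/CloudBuild@v1",
            "invocation": {
                "configSource": {"entryPoint": entry_point},
                "parameters": {"args": args},
            },
        },
    }

# Placeholder values only -- not a real Kubernetes attestation.
statement = make_provenance(
    subjects=[("bin/kubectl", "0" * 64)],
    builder_id="https://github.com/kubernetes/release",
    entry_point="gcb/stage/cloudbuild.yaml",
    args=["--type=official"],
)
print(json.dumps(statement, indent=2))
```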
B: So if you see here, this is a SLSA version 0.1 attestation, so the syntax is not the current one; we are producing 0.2, but I fumbled the version and downloaded the 0.1 one. It's basically the same idea, though. So captured inside of the attestation you have the builder that is running your build, which in this case is krel; we are pointing here to our release engineering repository. Then, the recipe.
B: So: what are we running to make this happen? This is the build, the thing that SLSA sometimes mentions, and this is our entry point, which is the cloud build file for staging, plus the arguments that we passed to launch the command and some metadata. You have the complete list of arguments in there, and the environment variables are captured in there too...
B: ...the ones we used to build the code, which, I'm just noticing, has a block here with a parsing problem. So it's kind of the same story here with attestations as with signing the images: we are currently building them inside of staging. We generate an attestation to describe our staging run, and then we also do the same thing during release.
B: The one I just showed is one of the staging attestations, so we produce it inside of the release process, and now one of the requirements to move to SLSA level 2 is that we need to sign those. And it's the same thing with the signatures as with the images: we need to sign them outside, because we don't want the processes inside of staging, particularly, to have access to the credentials.
B: For example, if I tell you "these are the inputs to my build", I could easily forge that from inside of the build process, to not only tell you a different commit, but maybe also forge the repo. Maybe I didn't pull that code from kubernetes/kubernetes; maybe I pulled it from somewhere else.
B: So that's why we need to ensure that the provenance gets built outside of staging and outside of release. Release is less of a problem, because it's code that doesn't execute arbitrary code from others; whereas, for example, in the builder, when we run make inside of the Kubernetes code, we are exposed to whatever is there, and not only that, but also the dependencies behind it, since we execute other code.
B: What we need to do is build and sign the attestations outside of those two steps. So what I was thinking of is a flow like this, which is a little bit more complicated than the previous one. The idea would be that before we run staging, after we clone the repository, we build a partial attestation which captures things like the entry point and the parameters we're going to be using to execute krel, and as much information as we can before the run.
B
Then
we
pass
it
out
to
krell,
which
would
execute
produce
some
outputs,
the
files
whatever
and
we
are
already
storing
those
in
the
s1.
So
the
idea
would
be
to
have
a
second,
a
post
staging
step
that
would
be
able
to
retrieve
that
at
the
station
which
we
stored
in
a
volume
that
is
not
available
to
grill
outside,
and
so
we
retrieve
that
at
the
station
in
a
bucket
or
a
volume
whatever,
and
then
we
use
the
previously
partial
attestation
the
subjects
that
would
be
defined
in
the
s-1.
B: We combine those to create a full attestation, and we sign the attestation outside of krel, and from there we store the attestation alongside the artifacts in the bucket. Then we do the same thing to describe the release step: we execute something before, store a partial attestation, run release, and then complete the attestation afterwards.
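The combine step in that flow boils down to taking the pre-run partial statement and filling in the subjects once the build outputs and their digests exist. A minimal sketch of the idea follows; the function name and the completeness metadata it fills in are assumptions about how such a step could look, not krel's actual implementation.

```python
import copy

def complete_attestation(partial: dict, subjects: list) -> dict:
    """Merge post-run subjects into a pre-run partial provenance statement.

    The partial statement is written before staging runs and records the
    builder, entry point, and parameters; the subjects (names and digests
    of what was actually produced) only exist after the run, so a
    post-staging step fills them in to form the statement that gets
    signed. The input is left untouched so reruns stay idempotent.
    """
    full = copy.deepcopy(partial)
    full["subject"] = subjects
    # Record (illustratively) that the invocation parameters were fully
    # captured up front, while the materials were not.
    full.setdefault("predicate", {})["metadata"] = {
        "completeness": {"parameters": True, "materials": False}
    }
    return full
```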
B: The idea is that I want to get going on a provenance builder that we can also use to generate attestations for other things, because if you think about attesting to things, you need to get the complete picture of everything that happened, from the source through every transformation until we got to the finished product. So we would be generating an attestation for stage and an attestation for release, but also potentially attesting to what the image promoter does.
B: So if I want to know when this image was promoted, and what we did to promote it, the best way to get that information would be to generate an attestation during the image promotion and sign it, so that I can go and have a written record of what happened.
B: No, it's not; I mean, right now I'm just proposing a change to the flow, to see if anyone has any objections to how we can do this. I think for next week I can probably get the builder ready and maybe do an execution run in my branch, so that we can actually verify that it works. The idea would be to produce another tool that we can share. Actually, what got me going was when we were looking into the CRI-O supply chain.
I: So we've updated the issue with a work plan; if you look at the description field, you'll see all of the milestones and tasks that we've identified so far. There is also an effort to build user personas. I would really like to get feedback from this group about those, because the goal is: are we capturing the customer, the user of the work that we are doing, accurately? And thanks, Jason, for helping with that.
I: We did a user persona exercise yesterday during the session on the signing release artifacts topic. If you click that link in the agenda, you'll go to the Miro board, where we have a lot of the action items from yesterday's session mapped out in a sequence.
I: We are addressing them at the same time; it's just a breakout between doc needs and code needs. That's why they're all under, like, milestone one.
I: And then some milestones only have doc needs, so, you know, for now. Maybe we'll see some code needs later, but yeah, they're numbered like: milestone one, with a code deliverable, a doc deliverable, and questions to resolve; consider that as one package. And then I think milestone two has a similar setup, with some variety.
A: All right, thank you, and with that we are mostly out of time. We have two topics left; I would like to move them over to the next meeting or follow up on Slack. One is regarding the k/k branch rename. I see Bob on the call; I think we have a Slack thread open, so can we follow up on the topic in that thread?