From YouTube: Kubernetes Release Engineering 20200302
Description
Kubernetes Release Engineering Weekly Meeting for 3/2/2020
A: Hello, hello everyone. Today is March 2nd, 2020, and this is an edition of the SIG Release, Release Engineering subproject meeting. This is a meeting that is recorded and available on the internet, so please be mindful of what you say and do, please be sure to adhere to the Kubernetes code of conduct, and overall just be awesome people. All right. So we've got a pretty light agenda today, and I figure we can use some of this time to go over the project board. So, first off.
A: Let's spin through them, all right. So there's a separate issue up, and kube-cross is one of the containers that we use to build Kubernetes; it lives in kubernetes/kubernetes. Basically, kube-cross is the base for the build image, and then we essentially do builds and releases within containers, within some of the CI jobs. So these containers are the base for those jobs. So there are a few things to do. What we wanted to do was, essentially, the Go update, which goes this way.
A: And because this is a patch release, this is a little easier. But we're getting ready to go into the Go 1.14.0 update, which will be a little stickier based on some of the comments I've gotten so far. But we have this cool file called build/dependencies.yaml, right? The first thing you do here is bump the version in that file, right, and from there you can run hack/verify-external-dependencies within kubernetes/kubernetes, and that will spit out a list of files that you have to update, based on...
A: We have a set of dependencies: a name, a version for the dependency, and then reference paths — so a location for a file as well as a regex match for that file. So the verify-external-dependencies script is basically going to look at this file and try to determine if the version is correct for each of those files, or each of those regex matches, right.
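The real check lives in the verify-external-dependencies script in kubernetes/kubernetes; as a rough illustration of the idea described above (the manifest layout, field names, and file contents here are assumptions made for the sketch, not the actual schema), the version/regex consistency check works roughly like this:

```python
import re

# Hypothetical manifest, modeled on the build/dependencies.yaml layout
# described above: each dependency has a name, a version, and a list of
# reference paths (a file location plus a regex to match in that file).
manifest = {
    "dependencies": [
        {
            "name": "golang",
            "version": "1.13.8",
            "refPaths": [
                {"path": "build-image/cross/VERSION", "match": r"\d+\.\d+\.\d+"},
            ],
        },
    ],
}

# Stand-in for reading the referenced files from disk.
files = {"build-image/cross/VERSION": "v1.13.8-1\n"}

def verify(manifest, files):
    """Return the refPaths whose matched text disagrees with the declared
    version -- these are the files a version bump still has to update."""
    stale = []
    for dep in manifest["dependencies"]:
        for ref in dep["refPaths"]:
            found = re.search(ref["match"], files[ref["path"]])
            if found is None or dep["version"] not in found.group(0):
                stale.append((dep["name"], ref["path"]))
    return stale

print(verify(manifest, files))  # → [] (everything consistent)
```

Bumping `version` to `1.14.0` without touching the VERSION file would make `verify` report that refPath as stale, which is the list of files-to-update behavior described above.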
A: Essentially you have to assign it to a Googler — so Ben and Javier here, who are able to actually stage an image within the staging k8s GCR — and then an internal Google pull request to promote the image, on their internal image promoter, from the staging k8s GCR to k8s.gcr.io, right. So the process, before this PR is complete: essentially what a Googler will do is clone the PR, build the image from that, push it to a staging repository — a Google-internal staging repository — and then do the image promotion, right.
A: We added a set of build image approvers and reviewers — Ben, Christoph, dims, myself, and Linus from Google — and you'll see there are some search-and-replace bits here, right, to use us.gcr.io/k8s-artifacts-prod/build-image/kube-cross. So that is the location for it: this prefix is essentially where all of our prod images go, and then this folder maps to the staging projects that we have configured within the k8s.io repo, right.
A: [It tags with] the current tag on the repo and whatever the kube-cross version is — so it cats that VERSION file that we bump, and then tags the image with a few different tags: the kube-cross version and then the Git tree state as well, right. Then the push only pushes the staging registry tag and the kube-cross version. We've got a cloudbuild.yaml in here.
A: So the next piece of this was to do a little manual stuff. Basically, the active version of kube-cross right now is v1.13.6; the previous active versions of kube-cross are v1.13.6-1 and then v1.12.12, right. So before we could turn the images on for k8s-infra, we needed to promote the old ones, right. So I basically manually pulled them and then pushed them to the new registry, and then here we're taking the digests of the pushed images and tagging them appropriately, right. So this spits out...
A: But we wanted to make sure that we're not also bumping the version in this PR, so we had to backfill the image versions, right. And from there we cherry-picked these back to 1.17 and 1.16, and the 1.15 one — you'll see that it's a combination of a few things: one, the ability to actually point at the new registry, and then a Go bump to 1.12.17.
A: So that's going to be the final bump that we do for the 1.15 branch before it goes out of support, and then all the usual stuff that I showed you all, right. So I have another PR up, which is to discuss actually moving the kube-cross image from kubernetes/kubernetes over to kubernetes/release, right. And the reason for this is that, while we're building on Kubernetes infrastructure now for that image, it still requires like an inter-PR dance.
A: So what we did essentially was remove the requirement for that person to be a Googler, but we did not get the human out of the process, right. So the idea here is — and this is generally what the process is today, which will turn into a longer document: you propose a PR with a Go bump, trying to infer the location of where the kube-cross image will be, and you tweak it until all the tests pass.
A: So basically we're looking for tests passing on everything but the pull-kubernetes-cross test and the pull-kubernetes-verify test. Now, the reason for this is that the cross test and the verify test use and expect the versions of kube-cross, so if you bump them in the PR and they're not available yet, those tests will fail, right. So then you build the image locally from the PR, right — and this is the Googler part, or rather the no-longer-Googler part.
A
You
build
the
image
locally,
you
push
it
to
the
Kate's
in
for
a
creative
PR
for
the
the
image
promotion
and
then
retest
the
open
PR
once
once
the
image
is
promoted.
So
what
I'm
thinking
of
instead
is
you
propose?
So
this
is
pretty
much
a
copy
and
paste
of
you
know
of
the
the
stuff.
That's
in
a
build,
build
image
cross
in
kubernetes
and
moved
over
to
images
build
cross
in
kubernetes
release.
So
the
idea
for
the
process
would
be.
You
propose
a
PR
to
take
a
release
to
update
the
going
version.
A
If
everything
looks
good,
you
merge
the
PR
which
triggers
a
post
submit,
and
then
we
would
have
Canary
versions
of
the
cross
and
verify
jobs
that
would
run
against
the
new
staging
image.
If
those
passed,
then
we
would
promote
that
image.
Then
we
can
propose
a
PR
for
kubernetes
kubernetes
that
does
the
goaline
club
this
time
the
image
already
exists,
so
whoever
is
opening
the
PR.
Don't
wanna
have
stone,
do
that
dance
with
between
them
and
the
people
who
have
access
to
actually
push
the
image
right
and
then
hopefully,
success
right.
A: Cool, cool. So I will continue pushing on that. The idea is that eventually we'll move the ownership from just this build-images group — just, kind of, like five people right now: Ben, Christoph, dims, myself, and Linus at Google — into the release engineering group, right. So ideally that would start with release managers and then, as people get this down, kind of — we'll not open the floodgates, but expand the group of people who can manage this.
A: The reason we wanted to start small is that this is one of the more important images that exists in kubernetes/kubernetes. A lot of the tests are dependent on it, and, as well, if you look at the k8s-cloud-builder image, which we maintain on the kubernetes/release side, that image is based on kube-cross, right. So the image that we use to build, stage, and release Kubernetes is based on kube-cross. So: very important image.
A: You can see there's also a test-infra PR open to enable the building and pushing. So it targets the images/build/cross directory and the staging build-image project, will inform on a few different dashboards — the release engineering informing and master-informing ones — and the image push job runs in the test-infra trusted cluster, right. So basically, this job will run on the master branch if this directory has changed, right.
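Prow's trigger-on-directory-change behavior boils down to a regex over the changed file paths (a `run_if_changed` pattern on the job). A tiny sketch of that triggering decision — the pattern and paths below are made up for illustration, not the actual job config:

```python
import re

# Hypothetical run_if_changed pattern: fire the postsubmit only when
# something under the kube-cross image directory changes.
RUN_IF_CHANGED = r"^images/build/cross/"

def job_should_run(changed_files, pattern=RUN_IF_CHANGED):
    """Mirror the run_if_changed idea: the job fires if any changed
    path in the merged commit matches the pattern."""
    rx = re.compile(pattern)
    return any(rx.search(path) for path in changed_files)

print(job_should_run(["images/build/cross/Makefile"]))  # → True
print(job_should_run(["docs/README.md"]))               # → False
```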
A: Alright, so the next steps here: there are some publishing-bot Go versions to bump. I did the version bumps for the k8s-cloud-builder image that I was referring to. So basically we're just making sure — I added a hair, a skosh more stuff to the agenda just for the sake of pointing at it. Basically, it has the gcr.io/k8s-artifacts-prod path location that the image was pointing at before, and now it's moved away from that location.
A: ...you know, Friday or something, and there's no expectation of the patch release team to be reviewing patches between that Friday and whatever the date is set to be for the actual release, whether it be that Tuesday or Wednesday. So I think we shifted it over to Monday for the cherry-pick deadline. So I think, yeah, the 9th is the cherry-pick deadline and then Thursday is the patch release.
B: CI is just soaking on the weekend, and that's it. But often that means that we're looking to release on a Tuesday, and Tuesday is also when there's other related work often going on within the release team. Because of that conflict, it makes sense to have a little bit of a spread — so having things due on Monday, having them soak for a couple of days, and releasing on a Thursday is kind of the only other option to fit things around the weekend. Yeah.
B: This is always there: when a release comes out, consumers are expected to do something within two days sometimes — especially if there's a CVE, for example, there's stuff to do. So if, Friday evening or after the close of business in their time zone, they're hearing there's a new release, then they're potentially pulled in to evaluate the release and what they should do. And if there's a CVE, then they're rolling it out on the weekend, and that's not nice. So doing it earlier in the week is nicer for consumers.
A: For sure, for sure. And as Tim mentioned, we want to make sure that we don't have these — I think initially we were releasing on several different days, and then we kind of bundled them all into one day, but that was in addition to the work that happens on the release team, right. So we were letting out, like, a pre-release plus a set of patch releases. So the pre-release for next week is on the tenth, that Tuesday, and then we're giving it a bit of a gap, I figure.
A: This is a good gap for if there is anything that goes wonky with the tools — because that happens occasionally — so we have a few days to clean anything up that we need to before we actually cut the patch releases. Cool, all right. Carlos, you want to go next?
F: Yeah, okay — I was not following the agenda, okay. Mainly doing the branch fast-forward, and I did some help, first with Sascha, to, like, fix some issues that we found in the branch fast-forward. It was, like, very small issues, just some details. And for now, I need to talk with you and the team and Sascha to give some access to Daniel for the next cut. We are talking about that, right?
C: I don't know why it's red, but — taking a look at that, working with the team, and just kind of unraveling the whole release process. And on the side I've been kind of working on my Go skills. I got a wonderful accountability buddy, Jim! He is an angel, and he's been helping me out with that and keeping me accountable. So, working through that so I can kind of step in and work a little bit more with y'all on some of these release tools and things of that nature.
A: Awesome, thanks for the update, Taylor. All right, let's see — any other release managers? I guess me; yeah, I guess I'm the last one here. Alright, so the kube-cross stuff that I walked you through before — I've been working on the Go bumps. The next one is 1.14.0, which is going to be trickier because of some changes that they've made upstream. I'm going to try to circle back to kubepkg and wrap up the RPM section.
A: Right now it only does debs, and that is not great. Kubepkg, for people on the call who are not aware, is a tool that we built to allow you to build our packages — debs and RPMs — of the Kubernetes control-plane components, right, as well as the client and server components. What else has been going on? Yeah — some of the test-infra work to clean up some of our jobs.
A: That's not entirely complete, although some of it's been fixed by the fact that the 1.14 branch went away. So we had some Bazel failures on the 1.14 branch that no longer exist, because that branch — those jobs — no longer exist. There was an issue opened around removing Bazel from everything, and some of the release managers have volunteered to help out with that. We don't exactly know what the shape of that is going to be.
A: Yeah, I guess — some of the infra stuff: the staging and release process, the gcbmgr work that I referred to last meeting. I think I have to do a few tweaks on the release side, but that is actually good to go. So soon I'm going to deprecate gcbmgr, where, essentially, if you run gcbmgr, it will tell you, "Hey, did you know krel gcbmgr existed?"
A
B
I
think
it's
an
area
that
we
should
maybe
even
think
about
opening
an
issue
to
track
longer
term,
how
we
enable
debugging.
So,
for
example,
with
the
the
an
ago
failures
that
we
had
kind
of
December
January
I
was
trying
to
go
through
and
find
like
okay.
So
this
is
weird
in
the
scenario,
but
where?
Where
did
we
do
it
last
cycle
and
a
UUID
is
kind
of
opaque,
so
I'm
like
I'm,
looking
for
which
UUID
matches
with
last
quarters
beta
beta,
1
beta
2?
B: I bet if I pulled up my branch right now, my branch would be dirty, because I've changed that 5 to a 45 or something like that — because I've been having to go back and try and hunt out UUIDs. And then I dump the log for a given UUID to a file, and I grep and search, and I build my own little table, and then —
B: — find the relevant things. And the web version of this doesn't make it easier either. But I think, as we get more and more automated, we're going to pay attention to these things less and less day to day, and then debugging is going to be a very emergent situation, because we're not going to be used to looking at the things or finding relevant stuff in them. So we're going to want something eventually in that space, yeah.
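The manual UUID hunt described above — dump each build's log to a file and grep it to map UUIDs back to releases — could be sketched like this (the log text, marker string, and UUIDs are invented for illustration; real GCB logs have no such standardized marker):

```python
import re

# Invented stand-ins for dumped build logs, keyed by build UUID.
logs = {
    "aaaa-1111": "... cutting Release version: v1.18.0-beta.1 ...",
    "bbbb-2222": "... cutting Release version: v1.18.0-beta.2 ...",
}

def build_table(logs, marker=r"Release version: (\S+)"):
    """Grep each log for the release it cut, producing the little
    UUID -> version table that is otherwise assembled by hand."""
    table = {}
    for uuid, text in logs.items():
        m = re.search(marker, text)
        table[uuid] = m.group(1) if m else None
    return table

print(build_table(logs))
# → {'aaaa-1111': 'v1.18.0-beta.1', 'bbbb-2222': 'v1.18.0-beta.2'}
```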
A: No, that sounds great. I mean, the idea that I had was — so, if you look at the go-mod-outdated output — that runs during the compile-release-tools process, and it gives you, like, a pretty table that we can use. That's not the super important part, though. I think the more important part will be to start leveraging the tags that we put on the releases, right. So the release is tagged by the user — the GCB user — and the release type, whether it's a stage or a release.
A: And the version. I want to start shoving more data into that — like the git SHA that's used; you know, whether it's targeted against a non-standard tools org or repo, or a branch; stuff like that, right. And then I think that the search functionality that we could have there can be based on searching by tags, right.
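Searching by tags, as suggested above, amounts to filtering build records on their tag metadata. A minimal sketch — the tag names (user, release type, version) follow the description above, but the record shape is an assumption for illustration:

```python
# Hypothetical build records, each carrying the tags described above
# (submitting user, release type, version).
builds = [
    {"id": "aaaa-1111",
     "tags": {"user": "alice", "type": "stage", "version": "v1.18.0-beta.1"}},
    {"id": "bbbb-2222",
     "tags": {"user": "bob", "type": "release", "version": "v1.18.0-beta.1"}},
]

def find_builds(builds, **wanted):
    """Return builds whose tags include every requested key/value --
    the tag-based first-order search path discussed here."""
    return [
        b for b in builds
        if all(b["tags"].get(k) == v for k, v in wanted.items())
    ]

print([b["id"] for b in find_builds(builds, type="release")])  # → ['bbbb-2222']
```

Adding more data to the tags (git SHA, tools org/repo, branch) would make each of those a searchable axis in the same way.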
B: Ultimately, the tags are gonna be our first-order pathways into it, I think. Eventually, having that spit out a UUID so you can dump the full log of that thing probably makes a lot of sense, but that's not the starting point. And the only reason it was a starting point before was that you would start a job, it would spit that out, and you could tail it if you —
A: Yeah, I know — it is definitely difficult to track. So I think, by hitting the tags and, you know, trying to decide if the tags that we use today are useful, right — because the ones that we use today are essentially just the ones that were there when we picked up these tools — so tagging something with whether it's a pre-release, and the release that it's intending to stage or release — yeah, we can. Yeah, we should open up an issue.
A: Alright, okay, we're gonna do the project board. So Tim and I were chatting about it. I think what we'll do is try to make that little release managers update part of the call more official and do it more frequently, so that we kind of get a feel for — I mean, like, we're a team, but you know, we're all remote and it's good to, you know — I guess we'd call that a stand-up.
A
Do
you
do
that
a
quick
stand
up
just
to
see
what
what
all
the
release
managers
are
up
to.
You
definitely
helpful
for
people
who
are
not
on
the
release
within
the
release.
Managers
group,
if
you
want
to
pair
on
certain
pieces
of
work,
are
just
learn
about
what
someone's
been
doing
you
can
you
can
do
that
so
sharing
the
screen,
and
here
we
go.
A: Pulling the GCE Windows 2019 containerd master tests off of release-informing. The updates to the PR will be moving it over to, I think, the GCE Windows — or something — directory, the Windows GCE yaml file, where they store the rest of those GCE Windows tests. That test has been pretty frequently flaking, so that's the reason we're pulling it off. I think it was a work in progress that is still a work in progress, so it should not be on informing right now.
A: Super lovely description. Essentially, what we're doing here: there was some analysis done by Tim Allclair on this PR for cleaning up the image building for the Debian images — so that's debian-base, hyperkube-base, and debian-iptables, the images that we manage. Again, they suffer from a similar issue, where only certain people can build and push those images, or promote those images, and those people are Googlers. So this PR was essentially created to start moving down the path of image building for those base images.
A
These
are
the
base
images,
that's
a
key
components:
API
server,
controller
manager.
What
have
you
are
built
on
top
of
queue
proxy
scheduler?
All
that
all
that
good
stuff
so
is
so
Hannes
was
doing
a
bit
of
analysis
on
these
images,
and
so
all
the
fun
details
are
captured
in
some
of
these
details,
blobs.
A: Back to the board. Okay, so: add support for arbitrary Go templating when generating markdown. The submitter for this PR added some updates today that I haven't had a chance to review just yet, but essentially this is some updates to the release notes tooling. Looks like Joe — or John — is a contributor for your fave CLI, and they are considering using our release notes tooling for their project as well, which is really, really cool.
A
Fix
Basel
version
markers
this
okay,
now
that
the
114
branch
is
gone,
I
can
come
back
to
this.
One
I
need
to
rebase
this.
Essentially,
what
we're
doing
is
moving
around
one
making
sure
that
Tessa
and
Franck
all
gets
hit
for
these
for
these
Basel
jobs
and
then
and
then
also
making
sure
that
they
fixing
some
of
these
version
workers
up.
A: It is my fault, essentially. So, the dependencies.yaml that I was referring to before — essentially, the idea here was to move dependencies.yaml into a build/external-dependencies (or some such) directory, where we could give multiple SIG reviewers and approvers — not just within SIG Release — access to update these dependencies.
A
So
before
we
can
do
this,
we
essentially
have
to
draft
a
policy
of
how
it's
all
very
cyclical
right
before
we
do
this,
we
have
to
draft
the
policy
of
how
someone
would
update
a
dependency
and
before
we
do,
we
draft
that
policy.
We
have
to
understand
how
to
do
the
updates
ourselves.
So
this
comes
back
to
doing
the
the
going
update
at
the
beginning
of
the
call.
So
I
want
to
understand
very
deeply.
A
All
the
little
steps
that
are
involved-
and
if
you
remember
the
you
remember
the
list
there
quite
you
know
it's
it's
it's
a
bit
involved
right,
so
I
want
to
make
sure
that
this
is
like
succinctly
documented,
at
least
for
the
go
case
and
then
use
that
as
the
basis
for
the
other
cases
right.
There
are
some
similarities
and
the
way
that
we
would
update
the
external
dependencies
to
the
way
that
we
would
have
say,
update,
go
so
once
that's
documented,
then
we
can.
A: Yeah, yeah. So, essentially, someone proposed adding the HPA tests to the master-blocking dashboard, but it was not discussed with SIG Release, so the idea was that this PR be held.
A: They put forth a proposal; I haven't heard anything back or seen any chatter on this, and it's been known since November. So we'll leave CI Signal to follow up on that, and if we're going to introduce it, we should probably introduce it in master-informing first. All right. We are four minutes to time, and I doubt we're going to get through this stuff, so we're gonna call it on the board.
A: Awesome, awesome, all right. So: more PR reviews and more deep dives, and we'll try to get the demos back on the schedule, where, you know, we'll pick a chunk of code and just stare at it for a while. I think that would be pretty cool for, like, the changelog stuff that Sascha and folks have been working on, and, you know, other portions of krel that we've been refactoring lately. Yeah — we can do that. We can for sure do that. All right!