From YouTube: Kubernetes Release Engineering 20191209
A
Hello, hello everyone. Today is December 9th. This is a Release Engineering subproject meeting for SIG Release. This is a meeting that will be recorded and available on the internet, so please be mindful of what you say and do, please be sure to adhere to the Kubernetes code of conduct, and in general just be awesome people. So we've got a few special guests on the call today, our friends from SIG Docs; we wanted to talk about...
B
Sure, I can at least fill in some of the details on some of the problems that happened with the docs for 1.17. Some time ago a PR was merged and everything was good; the person had a signed CLA. But between the time it was merged into the dev-1.17 branch and the PR being cut and merged into master...
B
They essentially switched jobs and no longer had access to that email or anything like that, and had signed the CLA with a new address, and they had no way of correcting the previous one. So the plan was to essentially rebase the branch, remove the commit, and add essentially the same changes as a new one.
B
But when doing this process there were a whole slew of, like, conflicts and other things like that, and we wound up having to cherry-pick all the, you know, enhancements docs for 1.17 into a new branch and essentially force-push that over the current dev branch. I don't know the docs workflow too well, but it looked like during the release process they will periodically merge master into the dev release branch, and just sort of keep it in sync with master.
B
This is very similar to the workflow that's done with krel ff, just keeping things in sync. The other possible alternative, instead of keeping the branch in sync that way, would be to essentially just periodically rebase off master instead. Again, I'd sort of leave it to what would be the best workflow for docs.
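The two sync strategies being weighed here can be sketched with plain git in a throwaway repo; branch names like dev-1.17 are illustrative, not the exact docs branches:

```shell
# Demo of the "keep a dev branch in sync with master" strategies in a
# throwaway repo. Branch names (dev-1.17) are illustrative.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com && git config user.name demo
main=$(git symbolic-ref --short HEAD)   # default branch name varies by git version

echo base > doc.md && git add doc.md && git commit -qm 'initial docs'
git checkout -qb dev-1.17               # long-lived docs dev branch
echo '1.17 docs' > new.md && git add new.md && git commit -qm 'dev-1.17 docs'
git checkout -q "$main"
echo fix >> doc.md && git commit -qam 'fix on master'

# Option 1 (krel ff style): merge master into the dev branch.
git checkout -q dev-1.17
git merge -q --no-edit "$main"          # dev-1.17 now has the master fix

# Option 2 would instead be `git rebase "$main"` plus a force-push,
# which rewrites the dev branch's history and invalidates open PRs
# that target the old commits.
grep fix doc.md
```

Option 1 preserves history at the cost of merge commits; option 2 keeps history linear but requires force-pushing a shared branch, which is the trade-off the speakers discuss next.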
C
Thank you for all of your help with resolving that; it became a much heavier situation than I think any of us realized going in, so I just wanted to say how valuable it was to have you there. As far as whether to take the krel ff approach or a rebase: we have some experience in a recent cycle with rebasing off of master during the release cycle, and it worked well, I'd say, at the base level, but if there were any open docs PRs...

A
So we've been using the branch fast-forward tool for at least as long as I've been in and around the release team. We recently started on a path to rewrite the shell tools in Go, and I'm going to link that issue into the notes. So essentially what we're doing is we've created a toolbox, a release managers' toolbox, called krel, or Kubernetes release toolbox; I couldn't have thought of something more imaginative.
A
So, yes, little by little we're working on migrating those tools from shell over to Go, and one of the first tools that we chose, since it's small enough and it's used fairly often, was branch fast-forward, so there is an open PR right now.
A
So branch fast-forward currently does a few things, right. It does the merge into the release branch in question, but it also runs the update OpenAPI spec step, which is a hack within kubernetes/kubernetes that essentially makes sure that the version that lands on the release branch within the OpenAPI spec is the version that that release branch is targeting. That's not necessarily something that branch fast-forward should be doing, so we're working on removing that, but it does do, like, actual important work.
A
We planned that for the next cycle. So that's one thing that remains to be done before we can actually turn on krel fast-forward, but I think that we should plan to do that for 1.18. I think the OpenAPI spec update is the last thing that needs to be fixed before that tool can be turned on, plus adding, I guess, more test coverage.

A
All right, so I'm going to assign the docs migration, er, the docs usage of krel ff issue to you, and we'll keep you all posted on the removal of the update OpenAPI spec piece. There's one more piece that we'd like to have in place, which is being able to test the fast-forward workflow via some repo that isn't necessarily a personal repo. So there's also a discussion about kubernetes feature forks.
A
Alright, so yeah, what we want is essentially a fork of kubernetes within the kubernetes org that we can use for integration tests. So I'll open an issue for that today and kind of tie all the threads together; I think most of them are already tied together, but I'll check on that.
A
Alright, questions from docs before we continue? Comments, concerns, all that good stuff?

A
So krel, alright, so that is the top-level command for krel. The fast-forward would look like this: essentially it would be krel fast-forward, the branch, and then the release branch, say release-1.x blah blah, and then some ref, if you want to use one. The default ref would be HEAD, which would be the head of the master branch; this is the master ref. If you wanted to choose an arbitrary commit, you could.
A
It's part of my usual workflow, so yeah, I know the things that are not checked in, or things that I don't want checked in; everything else is pretty much fine. So essentially we're using Cobra to create our CLI commands, right. So you can see the Cobra Use, the short and long usages of this thing, an example of how to use it, and then the actual command, which just runs the ff run function.
A
So before getting started, it's pulling in, it's checking for a branch, a master ref, and an org, and setting those in the fast-forward options; then the fast-forward options are passed into the run function. So if the branch is empty, then it's going to ask you to specify a release branch. It's setting the master ref, it's setting the remote.
A
I think it's checking if master is a remote... or no, it's actually taking origin, and taking origin, slash, and master and concatenating those, then opening the repo. It's checking if you want to run this in nomock mode, which is essentially how we set most of our tools up, which I should have remembered, which is why this won't do anything. This is fine, because I didn't set nomock, so it's essentially a dry run of that process.
A
We've also got... so it's going to check if this is a release branch, and you can see what that looks like here, basically saying it looks at this branch regex and sees if the branch that you supplied matches that regex. (My computer froze for a little bit, but the regex is somewhere, right.) So: either master, or release- and the rest of this; basically this turns into, you know, some set of 1.x.
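The branch check being described can be approximated in shell; the pattern below is a reconstruction from the description (master, or release- plus a version), not krel's exact regex:

```shell
# Reconstructed "is this master or a release-X.Y branch?" check.
# krel's real regex lives in its Go source; this is an approximation.
is_release_branch() {
  [[ "$1" =~ ^(master|release-([0-9]+\.[0-9]+))$ ]]
}

is_release_branch master       && echo "master: accepted"
is_release_branch release-1.17 && echo "release-1.17: accepted"
is_release_branch feature-foo  || echo "feature-foo: rejected"
```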
A
It's checking to see if the branch is available on the remote. So actually: one, if the branch matches the release regex, which we'll have to modify to support the k/website branches, and then two, if the remote branch actually exists. So then it checks it out. It checks to see if you want to clean up your local copy, because essentially what this is doing is cloning a copy of that repo in a separate temporary directory.
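Working out of a separate temporary clone, as described here, looks roughly like this in shell (a throwaway local repo stands in for the real remote):

```shell
# Sketch of the "clone into a temporary directory" step: the tool works
# in its own clone so it never disturbs your local checkout. A throwaway
# local repo stands in for kubernetes/kubernetes here.
set -e
src=$(mktemp -d)/src
git init -q "$src" && cd "$src"
git config user.email demo@example.com && git config user.name demo
echo hello > README.md && git add README.md && git commit -qm init

work=$(mktemp -d)                        # separate temporary directory
git clone -q "$src" "$work/kubernetes"   # the tool's private working copy
cd "$work/kubernetes"
git status --short                       # clean tree, independent of $src
```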
A
Okay, so I guess it cleaned up; all right, yeah, it cleaned up since there was... it failed. All right, so basically it looks for a merge base. The merge base is a common ancestor between the two branches: between the master branch and the release branch, or between the master ref that you choose and the release branch. It then uses that as the basis for the fast-forward, merges the changes into the release branch, and then gives you a nice little message that says: hey.
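The merge-base lookup described here maps directly onto `git merge-base`; a quick demonstration in a throwaway repo:

```shell
# The merge base is the common ancestor of master and the release
# branch, and it is the starting point for the fast-forward.
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git config user.email demo@example.com && git config user.name demo
main=$(git symbolic-ref --short HEAD)

echo a > f && git add f && git commit -qm c1   # shared history
fork=$(git rev-parse HEAD)
git branch release-1.17                        # release branch forks here
echo b >> f && git commit -qam c2              # master moves ahead

base=$(git merge-base "$main" release-1.17)    # find the common ancestor
echo "merge base: $base"
```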
A
And then, lastly, it'll say: are you ready to push that local branch fast-forward upstream? This will ask you three times, and it'll expect "yes" as an answer, and if it doesn't get that, it'll bail out. So there are a few kinds of backstops that prevent you from doing bad things.
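The triple-confirmation backstop can be sketched as a small shell function; this is an illustration of the behavior described, not krel's actual code:

```shell
# Illustrative "ask three times, expect yes" backstop before pushing.
confirm_push() {
  local i answer
  for i in 1 2 3; do
    read -r -p "Really push the fast-forward upstream? (yes/no) " answer
    if [ "$answer" != "yes" ]; then
      echo "Bailing out."
      return 1
    fi
  done
  echo "Pushing."
}

# Dry demonstration with piped answers instead of a terminal:
printf 'yes\nyes\nyes\n' | confirm_push
printf 'yes\nno\n'       | confirm_push || true
```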
A
Right, so what the current tool is doing is a merge with the ours merge strategy, which means it's going to take what's on the branch that you're choosing, the master branch, and use that as the source of truth. So it'll prevent conflicts by choosing master as the most accurate content. Would that be a problem?
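For reference, `git merge -s ours` keeps the tree of the branch the merge is run from and discards the other side entirely, which is how conflicts get prevented. Whether krel drives this from the master side exactly as sketched below is an assumption; the transcript only says master wins:

```shell
# Demonstrating the "ours" strategy: the merge result's content comes
# entirely from the current branch, so conflicting edits cannot surface.
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git config user.email demo@example.com && git config user.name demo
main=$(git symbolic-ref --short HEAD)

echo v1 > f && git add f && git commit -qm c1
git checkout -qb release-1.17
echo release-edit > f && git commit -qam 'release-side change'
git checkout -q "$main"
echo v2 > f && git commit -qam 'master-side change'   # conflicting edit

git merge -q -s ours --no-edit release-1.17           # master content wins
cat f                                                 # prints "v2"
```

This is exactly the concern SIG Docs raises next: whichever side is declared "ours" silently overrides real edits on the other side.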
D
Potentially. So, since we work predominantly out of master for upstream documentation of the current release, sometimes we'll have devs commit a PR to the dev branch, which is different than our release branch. You might have something in master that's the source of truth; you also might have something in that dev branch that's the source of truth, which wants to be changed relative to master.

A
Okay, so you prefer the dev branch to be treated as the source of truth.
But yeah, outside of that, I think the tool... I want to say the people who have been working on it (so myself, Sasha, and Dan) would say it's done, right, functionality-wise. So, yeah: adding, I guess, support for choosing a merge strategy, and adding support for arbitrary non-kubernetes/kubernetes repos, or non-kubernetes repos, would be pieces that you'd need to do, and then making sure that that stuff is properly covered via tests. So, sound good?
D
Yeah, it'd be nice if someone who's a little bit more well-versed in git could, you know, just work with me on that early on, tackling the, you know... it'd be nice if we could implement something that just says, you know, the merge strategy is, you know, you choose the kind of way forward there. I don't know how realistic that is.
A
So, Jim, if you don't mind adding the notes into that issue, just so we have an up-to-date status of what we need to do. You can do it? Awesome. Which issue? Sorry, it's linked in the doc; it's 956 in kubernetes/release. Yep, and I think you're already on that issue, so you'll get updates. Excellent, good, good.
F
Yeah, I tried to build them by flipping a flag in, wherever that is, the repo's Bazel config, so it built the proto files, but Bazel still complained anyway. So yeah, honestly, I have no idea how Bazel works or what it does; it's pretty much a black-box magic thingy for me. But I assume if you know some stuff about Bazel it's probably an easy fix, because it's not like it's a ridiculous thing we are doing there, yeah.
F
The first part, I just want to check if I'm right. So my assumption is: we figured out how we build images, we build them on GCB, and potentially in the future commit them via... or maybe we already do that, via the test-infra image builder thingy. But we have not figured out how we promote those images. I mean, probably we will use the image promoter, which is in the works by Linus, but we don't have any processes or tests or whatever set up. Is that correct?
A
Okay, right, so k8s.io, which is a repo that I only found out existed, like, at the beginning of the year, I guess. So there's talk there about maintaining the Kubernetes container images. Each SIG or project has their own... Cluster API Azure is an example, since I'm familiar with that one: they have their own image set, right. So, one, the owners are defined, usually as whoever's a maintainer or approver for that repo, and then the image set here, right.
A
All right, so the problem that we have is release. And this is, I think... I think a few of you are following this one, but if you're not, it is here, right. So there are a bunch of things to do before some of this works. The problem that we have right now is that the images that we create, we can easily push them to the k8s-staging-release-test project, which is our test staging... release test, really a staging bucket, right; very confusing.
A
So they kind of adhered to the initial convention that they had for the staging buckets and the staging projects, and what I was saying to the k8s-infra folks is that release isn't really a staging thing; our staging is kind of prod, or near-prod, for any release-related thing. So we need to have buckets and names and policies around those things that reflect that. So there's some work in progress to make it easier, and right now we have...
A
Right, so you've got this little script that people with admin access to the infra projects can run. Essentially it takes these projects, goes in, and looks at admins, writers, and viewers. These lists were recently configured, and the SIG chairs, the patch release team, branch managers, and release manager associates are on these lists.
A
So it goes into the project and makes sure the buckets are configured right. Again, we don't really have staging buckets; our staging buckets would be considered prod for a lot of people, right. So what we need to do is: one, figure out the names for these things (I'm planning on writing an issue up for that today, actually); then, who is going to maintain them; then, how we need to tweak these scripts to do that stuff; and then we also need to figure out...
A
We have to configure service accounts that can run GCB jobs, or that have access to push images from a new k8s-infra staging project into an old kubernetes-release project, right. So kubernetes-release-test is where we primarily do most of our work, but there's also kubernetes-release, which I don't believe I actually have access to, which is our prod project. That's also where our containers land: google-containers, that namespace within GCR.
A
Actually, there's an alias for it, that's k8s.gcr.io, right. So there's a plan somewhere to cut that stuff over, but there are a lot of things in play that make this tricky, right. So, going all the way back to what you said: yes, a promotion today would be me clicking the button to push an image from one registry to another, right, retagging it and then pushing it into our registry. And if it's something like... I don't have access to push prod images.
A
Okay, so yeah, that's still the roundabout way of saying: yes, it's a manual process right now, but there are pieces that are automated, and we just need to figure out how to get ourselves into that process. We also, I also, need to figure out if it's even possible for the image promoter, or, like, if the image promoter even has credentials, to promote from k8s-infra into a Google-infra location.
A
It is. So the plan, at least as soon as the release goes out: I know that you have updates to the k8s-cloud-builder image; you had two updates. One was to move to Python 3, and then the other one was to remove some of the pieces that we didn't need, you stripped down a bunch of the image, right. So I want to get those in, but only after we cut... should we wait? I was going to say after we cut 1.17, but should we wait until after the patch releases as well?
A
No, I am concerned about the image, though. Because, as you actually pointed out, I thought it was promoted to the correct place, but since we had previously parameterized the namespace for the image that's used in the GCB stage and release configs, I think when we did the post-submit push of the new version of the image, it only pushed to staging-release-test.
A
Yeah, for sure. So I actually spent way too long over the weekend looking for where you had written this down; I was like, I know Hannes wrote this somewhere, and I was searching through issues. So I'm going to get all this stuff into an issue. The corollary to some of this stuff is the debian-base, debian-iptables, and hyperkube images. There are two issues open for that, or rather, there were two issues open, for CVE remediation.
F
My first thing was: well, can we build them on GCB and then eventually use the image promoter again? So this PR is making it at least easier to build them on GCB, so that works today with this PR. The image promoter and all that business still needs to be figured out, but at least we have that part: we can build those images at any time. So yeah.
A
What I would request for this PR is: one, to use a git mv, so we capture the diff between the Dockerfile and the Dockerfile.build that's in debian-base, because right now it looks like it's an addition and a removal of a file instead of a rename; and two, to add a cloud build config to this one, so we can wire up a postsubmit on kubernetes/kubernetes to push to the staging-release-test bucket, right.
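The `git mv` request matters for review: git can then record the change as a rename instead of a delete plus an add. A quick demonstration, with file names mirroring the Dockerfile.build case discussed:

```shell
# Renaming with git mv lets the diff show a rename (R100) instead of a
# file deletion plus a new file, which keeps the actual diff reviewable.
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git config user.email demo@example.com && git config user.name demo

printf 'FROM debian:buster-slim\n' > Dockerfile.build
git add Dockerfile.build && git commit -qm 'add Dockerfile.build'

git mv Dockerfile.build Dockerfile
git commit -qm 'rename Dockerfile.build to Dockerfile'

git show --name-status --format= HEAD   # shows: R100 Dockerfile.build Dockerfile
```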
A
That way, once we finish... once we figure out, like, the last mile of this, which is promotion from staging-release-test into whatever new prod or old prod is, we can just turn it on and we'll already have images in there, tagged properly, right. So if you can add that to this PR as well, I'll review it when I get a chance. Yep, will do. Sweet, all right.
F
Whatever service we can find; I just happened to test with SendGrid, and that works well. I also kind of copy-and-pasted the thing I had running over here anyway, since a while back, for notifying... for announcing new patch releases, which uses SendGrid. I ported it to GCB and it just works, so we could use the same thing for the release notify script. Somewhere in some issue I have outlined what we would need for that.
A
Right, and send it that way. But I always forget what the... I always forget where the location is, so I think a print command would just be a nice convenience function for us to know exactly where the bucket location is, as well as to get the output for the email to just copy and paste. So yeah, if that sounds good, you can go ahead.
A
See, I know logrus is used pretty extensively across a bunch of repos; I just want to make sure that we're choosing a tool that other people know how to use, and that we choose one and we don't kind of, like, turn around on which ones we want and which ones we don't want. So let's make a decision. I think it's fine, I think it's fine; I just wanted to make sure that it's open for people to discuss before we move forward on it.
F
I don't know anything about logrus, and I don't know if it, like, automatically installs itself in the global scope; if that's the case, I would argue against it. I would like to have something I can fake out, yeah, fake out in tests, and something that does not clutter my global namespace. And when it comes to passing the logger around, I think that's more of an, if you will, architectural problem of the release notes tooling more than anything else, because if you had, I don't know, proper structs where...
A
Yeah, no, I'm easy, I'm super easy. So on the notes mentioning klog: I think klog feels heavy for this purpose. There's also, I'm trying to remember the name, but there is a klog-like thing that is not klog, but it is basically a rewrite of the standard logging package. Yes, so, Dan, that's exactly what I was going to get to: I don't care what we go with, as long as we're consistent.
A
So for krel, that's why I initially chose to just use log, because it's part of the standard library. Yeah, whoever has opinions... if you have opinions; let's say, Sasha, that's currently your issue. If you want to poke around some of the kubernetes repos and kind of get a pulse on what we're using across most of the repos, I think that would be a good exercise; then base your decision off of that. I'm happy to go with whatever works, as long as it's consistent, yeah.
G
Did we make a definitive decision on the testing last time? There's kind of a similar decision to this logging one. I thought, like, we were all kind of, like, Dave Cheney says cool things, and yeah, so as I've been working on stuff I was just kind of going to conform to what we've done so far, but I'm not sure that's exactly in line with, like, table-driven tests and that sort of thing.
J
We weren't super clear on that, but we're sort of pragmatic: like, go with whichever one makes the most sense in the case. Maybe a lot of the time it's table tests, but if that's not natural, don't do unnatural things to try to use the table test for a given scenario. And then we did also need some integration tests that are a little more end-to-end type of stuff.
F
What I heard last time: probably start with vanilla go test type things, table-driven or not, and if you want to use testify, basically for matching, because we already have that in kubernetes/release. If that is not good enough for whatever you are doing, I guess we want to have a discussion before we pull in a different matcher, but maybe not. So my point being: I would start with plain go test, maybe use testify for matching, yeah.
A
All right, so over the last few days I've been working on sprucing up some of our issues. All of the issues that were stale or rotten have been refreshed in some way. I've started to remove myself from issues that I think the rest of you can take, and we can, like, talk about it. So there should be a bunch of issues that are marked both area/release-eng and also marked as Help Wanted.
A
So if you want to take a look through the repo and see what you want to work on: this goes for patch release team, branch managers, and release manager associates, as well as anyone who is in the periphery of release engineering and interested in jumping in. Please take a look at the Help Wanted stuff, and feel free to ask any questions on the issues.
A
I think, Tim... I think that we should try to plan a... I was considering doing it for the release meeting next week, but that's going to be the retrospective. I think we should try to get a planning session, maybe a two-hour thing, in the calendar somewhere between now and the 20th; I think past that, everyone will have disappeared for holiday stuff.
A
That week still works, okay. And also for the retro: Lachie is going to be the 1.18 emeritus advisor, and he's going to create a retro action items issue. I have assigned him the 1.16 retro items as well and unassigned myself from those. The things that had updates, I've connected all the dots between issues and PRs; the ones that need issues or KEPs were marked as needs-issue or needs-KEP. So y'all can also take a look at that.