From YouTube: K8s Release Engineering Subproject 2019-08-05
A: We're going to start soonish. Probably going to give it maybe five minutes or so for people to pop in, since it's a new meeting, and this is a brand spanking new time zone for us, and a new day, and it's Monday, all of those things compounded, uh-huh. So the calendar invite would have given you a set of links for this meeting, including the...
A: Oh, this is auto recording. I'm wondering if I should turn on auto recording for the meetings.
A: All right, enough anticipation, I think we've got enough people. Hello, hello, everyone. This is the August fifth edition of the release engineering meeting for Kubernetes. This is also the first release engineering meeting, so welcome, and thank you, everyone, for showing up. Please be mindful that this is a meeting that is recorded and on the internet for posterity, so be mindful of what you say, be sure to adhere to the Kubernetes code of conduct, and just be generally excellent people, right? So I've posted the release...
A: I don't have a lot in particular that I want to cover, outside of saying that we got the meeting off the ground and being happy about that. I think over the last cycle or so we've made lots of headway in terms of just understanding what release engineering is for Kubernetes and trying to get an idea of what the improvements should be moving forward. So in 1.16 we're kind of on this path of moving from brainstorming into more formalized brainstorming, and then execution.
A
So
a
lot
of
that
is
understanding
the
understanding
of
the
things
that
we,
the
artifacts,
that
we
produce
for
kubernetes
the
way
that
we
host
them,
the
the
tooling
that
we
use
to
do
that
and
what
can
be
done
to
improve
all
three
right.
So
if
you
haven't
seen,
if
you
haven't
seen
the
release
engineering
brainstorm,
this
is
kind
of
the
work
product
of
a
few
chats
that
Tim
and
I
have
had
both
together
and
with
several
people
that
is
linked
in
the
chat
as
well.
A
So
take
an
opportunity
to
read
through
that
brainstorm.
What
will
be
happening
over
the
next
few
next
few
weeks
or
so
we'll
be
turning
that
into
caps.
I,
think
that
you
know,
there's
been
some
shuffle
in
terms
of
releases
as
well
as
sorting.
You
know,
sorting
some
administrivia
and
now
that
the
release
is
kicked
off
and
running
in
earnest.
We
can.
We
can
step
back
and
move
back
to
the
to
generating
the
caps
based
on
some
of
that
information.
A: All right, so you may or may not be aware, but Kubernetes has a set of org-wide project boards that you can view at any time. Release engineering has quite a few, well, SIG Release overall has quite a few: we've got a SIG Release board, a licensing board, the release team board, and the release engineering board, which we'll be focusing on in this meeting.
A: So what I'd like to do is make sure that we capture anything that has not been captured before. So there's a little search term here: is open, label area/release-eng, repo star, right? That's capturing everything that's labeled release engineering, which is our label for the subproject, and capturing all of the repos within the kubernetes org. This doesn't capture everything across all Kubernetes orgs, which we can probably fix later.
A: A question for you: we were kind of mulling this around in the last SIG Release meeting, I believe, and the question was, does it make sense for the publishing bot to be part of the release engineering subproject? It's kind of listed as a separate thing right now, and we're kind of thinking that maybe it made sense to be part of the release engineering bundle, yeah.
E: I think it makes sense; actually, I was going to ask you about whether it made sense as well. So, okay, there.
A: It is, so we'll fix that in OWNERS later. Yeah, thanks. All right, so let's move this one, and it looks like the review is in progress already, right? Yeah, okay. IPv6 dual stack: that is not really us, which means some area of code is auto-tagging with area/release-eng, so we should probably look at that.
A: Okay, that's licensing, and okay. All right, so we did an okay job of making sure cards are added for this thing. Let's also make sure, so in general, what I'd like to see for the To Do column and the In Progress column, or rather, anything that is marked critical-urgent, is that it's hopefully already in progress. If not in progress, then...
D: Yeah, it could mean it's... there is a reference, like, as a design concept idea. I don't think I'm going to implement it. If I were to implement something like this, it wouldn't happen until the other things we're talking about rewriting were rewritten, but maybe they get rewritten in a way that just embodies this, I don't know. Yeah, yeah, because I'm very...
A: So Tim had a PR open for Makefile wrapping. The idea here is that we want to essentially ensconce the commands that we use to cut a release, so the commands in anago or gcbmgr, which are two of our release tools, one being a wrapper for the other, to make it a little easier to, one, mock some of these flows, and two, place them in CI, so we could test staging or test releasing, and also just make it a little bit more intuitive for maybe newer release managers stepping into it.
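(As a rough sketch of the kind of wrapping being described: a script a make target could call so the flow is mockable in CI. The target name, flag, and mock behavior below are illustrative, not the contents of Tim's PR.)

```bash
#!/usr/bin/env bash
# Hypothetical body for a `make release-stage` target: wrap gcbmgr so the
# flow can be dry-run in CI instead of actually triggering Google Cloud Build.
set -euo pipefail

BRANCH="${1:-master}"            # release branch to stage, e.g. release-1.16
GCBMGR="${GCBMGR:-./gcbmgr}"     # path to the real tool; stub it out in CI
MOCK="${MOCK:-true}"             # default to a dry run

if [[ "${MOCK}" == "true" ]]; then
  echo "[mock] would run: ${GCBMGR} stage ${BRANCH}"
else
  "${GCBMGR}" stage "${BRANCH}"
fi
```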
A: So that issue is sig-release#621, and we're closing it for now, pending a rewrite of the branch manager and patch release team playbooks, or runbooks. You can see that we've created a new Done column, Done 1.16, so everything that gets swiped over to done, and this is for all of the SIG Release boards, should be moved into that Done column.
A: Essentially, what I want to do here, and this is kind of me just pleading to the CI gods for tests to pass: when we release Kubernetes, we also package a few other things as debs and RPMs, right? That's kubeadm, kubelet, kubectl, kubernetes-cni, and cri-tools.
A: So this... usually it involves some archaeology, understanding where these version numbers live, maybe doing the exact thing that this file would be doing, right? So this is utilized by verify dependencies, a utility at kubernetes/cmd/verifydependencies/verifydependencies.go, and that will check this file, right, and essentially make sure that the version numbers specified in these refPaths match the version number specified in dependencies.yaml. So you can see...
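(A minimal sketch of that kind of check, assuming a simplified entry; the version and file paths below are examples, not the real contents of build/dependencies.yaml.)

```bash
#!/usr/bin/env bash
# Sketch: one declared version plus refPaths that must agree with it.
# In the real dependencies.yaml this is expressed as YAML, roughly:
#   - name: "cni"
#     version: "0.8.1"
#     refPaths:
#       - path: build/workspace.bzl
#       - path: cluster/gce/gci/configure.sh
set -euo pipefail

declared_version="0.8.1"
ref_paths=("build/workspace.bzl" "cluster/gce/gci/configure.sh")

for path in "${ref_paths[@]}"; do
  if ! grep -q "${declared_version}" "${path}"; then
    echo "ERROR: ${path} does not match declared version ${declared_version}" >&2
    exit 1
  fi
done
echo "all refPaths agree with dependencies.yaml"
```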
A
Each
of
them
have
a
slightly
different
process
for
changing
things,
people
to
contact
we're
working
on,
essentially
a
policy
for
exactly
how
to
do
this,
who
you
should
contact
and
when
right
now,
this
is
kind
of
being
handled
by
the
the
contributors
who
are
already
reviewers
approvers
within
the
build
directory
and
and
some
of
the
release
engineering
personnel.
So
essentially
what
I'm
trying
to
do
here
is
figure
out
all
the
places
that
I
need
to
touch.
This
is
this.
A: This PR was actually started before that build/dependencies.yaml existed, but you can see that we're changing it in a Makefile, we're changing it in the kubeadm spec and the kubelet spec, in the SHAs and the version, in the Bazel workspace definition, and in more things here. So what we recognized is that initially it was pointing to a GCS bucket, kubernetes-release/network-plugins, which requires someone with access to push to that bucket.
A: I think at this point in time, at the time of recording, we're unsure who actually publishes the tarballs for the CNI plugins to kubernetes-release, which is our official release bucket, and because of that, you know, I kind of said, okay, well, it might make more sense to just use the GitHub download link, right? So this PR also switches over to the GitHub download link, because if we can't tell who actually publishes this, that's kind of a break in our system, because they're kind of outside of the release engineering flow, right?
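(For reference, the two hosting styles under discussion; the exact filenames and versions here are illustrative.)

```bash
# Old style: a CNI tarball someone has to mirror into the release GCS bucket.
#   https://storage.googleapis.com/kubernetes-release/network-plugins/cni-plugins-amd64-v0.7.5.tgz
# Style this PR moves to: fetch straight from the upstream GitHub release.
curl -fsSLO \
  https://github.com/containernetworking/plugins/releases/download/v0.8.1/cni-plugins-linux-amd64-v0.8.1.tgz
```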
A
This
is
you
know,
I
think.
Traditionally
it's
probably
been
like
hey,
let's
contact
this
person
make
sure
it's
uploaded
and
then
and
then
we're
able
to
update
and
that
you
know
these.
These
network
plugins
seem
to
have
not
been
updated
in
that
bucket
for
some
time,
so
that
kind
of
moves
us
over
to
using
github
download
URLs
I'm.
Also,
a
point
is
that
the
container
networking
is
not
officially
a
part
of
kubernetes,
so
I'm
not
sure
that
it
makes
sense
to
host
binaries
for
them.
A
That
was
true
prior,
but
it's
not
true
anymore,
so
I'm
trying
to
get
this
PR
to
work
is
the
short
version
of
all
of
this
and
I
think
we're
at
the
point
where
end-to-end
tests
are
failing
now,
so
actually
failing
for
for
good
reason,
so
still
looking
into
that,
so
I
haven't
assigned
anyone
to
that
one.
Just
yet
one.
A: Right, so the first part would be proving that it works on master, if we fix the end-to-end tests, or we find out that they're flakes in the end-to-end tests. If it works on master, I would cherry-pick this back to the other branches and see what happens, right? So this is kind of what we wanted to do to prove that it would work. I mean, I'd pick all the dependencies.yaml-related changes, right, exactly, but I'm not cherry-picking the dependencies.yaml changes until we settle on that.
A: One of the aching problems that we currently have in that repo is, first, that the main repo that we use for Kubernetes, at least on the Debian side, is named kubernetes-xenial, right? So just by name it implies that these are packages that only work for Xenial, which is not true: if you have Bionic, it should work; if you are updating to Buster, or starting to use Buster, it should work as well.
A
If
you've
got
a
older
non
end-of-life
version
of
Debian,
it
should
work
as
well
right.
So
so
we're
kind
of
locked
into
this.
We
get
a
few
issues
that
say
like
hey.
I
was
what
I
was
wondering
when
support
is
going
to
happen
for
this
right.
For
this
you
know,
release
of
the
distro
and
we
probably
can
just
use
the
packages
that
are
in
the
repo
today.
That's
where
we
publish
all
of
our
packages
on
the
RPM
side.
A
Implying
that
you
would
be
using
some
version
of
relish
distro,
seven
right,
so
whether
it
be
sent
to
us
or
L
or
Fedora,
and
so
on
and
so
forth.
So
that's
one
problem,
repo
naming
we'd
like
to
eventually
move
and
you'll
see.
If
you
check
out
the
brainstorm
that
we
want
to
move
to
something
that
captures
the
version
instead
of
kubernetes
right
and
then
also
the
channel
right,
so
whether
it's
a
stable,
unstable
testing
figure
out
how
exactly
we
want
to
name
that
before
doing
it.
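(For illustration, today's Debian repo line next to one possible version-plus-channel shape; the second line is purely hypothetical, since the naming has not been decided.)

```bash
# Today: one repo named after a distro codename, whatever distro you run.
#   deb https://apt.kubernetes.io/ kubernetes-xenial main
# One hypothetical direction: encode the Kubernetes minor version and channel.
#   deb https://apt.kubernetes.io/ kubernetes-1.16 stable
```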
A: So part of that is, like... we have an entire working group dedicated to that, multiple hands in the pot, and I think many of them are on the call currently. So, you know, part of starting this effort was that Google donated nine million dollars in credits to the CNCF to say, hey, build Kubernetes, like, build this out in the open for the community, and here are some credits to do it, right? So that working group is kind of tasked around understanding...
A
All
of
the
bits
that
go
into
building
like
the
infrastructure,
that's
required
to
maintain
kubernetes
the
project
and
standing
that
up
in
parallel
right.
So
we
have
access
to
tim
myself
in
caleb
we
have
access
to
new
new
projects
for
release,
staging
and
a
prod
project
and
access
to
GCS
buckets
within
those
GCP
projects.
So
I
think
it's
now
now
it's
a
matter
of
playing
around
understanding.
What
the
directory
structure
needs
to
be.
Writing
scripts
around
that
stuff
to
to
make
sure
that
we
can
stand
it
up,
throw
it
away.
A: At some point I think we switched from equals to greater-than-or-equal-to, which fixed some things, but I think, you know, some people may have noticed that within, I believe, the 1.14 release cycle, that was changed, or accidentally changed, and broke some stuff. And the reason it broke some stuff was because of the assumption... I mean, your package manager will do what you tell it to do, right?
A
So
if
you
publish
a
package
that
says,
I
only
want
I
only
want
this
version,
it
has
to
be
equal
to
the
surgeon.
R
has
to
be
greater
than
or
equal
to
this
version.
One
problem:
what's
setting
greater
than
or
equal
to
a
version
is
that
we
introduced
a
set
of
skew
massive
skew
across
different
release,
branches
and
combinations.
So
does
do
we
know
that
CR
itools
112
will
work
absolutely
with
with
you
know,
with
a
kubernetes
114
or
something
or
115
right
now.
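(To make the equals versus greater-than-or-equal difference concrete, here is roughly how the two constraint styles behave for a user; the package names are real, but the specific versions are only for illustration.)

```bash
# Strict pin in the package metadata:
#   Depends: kubernetes-cni (= 0.7.5)
# apt will only install kubelet alongside exactly kubernetes-cni 0.7.5.
#
# Relaxed constraint:
#   Depends: kubernetes-cni (>= 0.7.5)
# apt may pair an old kubelet with the newest kubernetes-cni in the repo
# (say 0.8.1), a combination nobody has necessarily tested. Users can still
# pin versions explicitly when installing:
sudo apt-get install -y kubelet=1.13.5-00 kubernetes-cni=0.7.5-00
```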
A: ...start publishing your packages [unclear]. So we've got a few, and these were recently moved, if anyone's looking at this and wondering how they got there. So there's the kubelet spec. First problem: the spec is named kubelet, and it is not the spec for building just the kubelet package; it is the spec for building all the packages, right, all the packages that I mentioned: kubeadm, kubelet, kubectl, kubernetes-cni, and cri-tools. So you can see it's telling you the locations of certain things and how to download them, but you can see that there are some versions locked in, right? So this is set at 1.13, and the reason that it's set at 1.13 is that 1.13 is our minimum supported Kubernetes right now. So once we cut the release for 1.16, 1.13 will go out of service, essentially, or out of support, so this will need to be changed again.
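(Illustrative only: the kind of hard-coded floor being described, not the literal spec contents.)

```bash
# The spec carries a minimum supported Kubernetes version along these lines:
#   Requires: kubelet >= 1.13.0
#   Requires: kubectl >= 1.13.0
# Once 1.16 ships and 1.13 drops out of support, those floors have to be
# bumped by hand, which is the "changed again" chore mentioned above.
grep -n '1.13' kubelet.spec   # quick way to spot the hard-coded versions
```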
A: But actually, before we... well, I'll continue yapping about the packages. So right now, where we are is that, because the packages are set with dependencies as greater-than-or-equal-to, that means all of the combinations that are essentially untested... we're introducing new skew, right? We're saying, okay, so basically we're saying a version of 1.9 should work with... I want you to update to the latest CNI, CNI 0.8.1, and we have no guarantee that that will work. Sure, 1.9 is out of support, but right now everything is in that one repo. So even something supported, like 1.13: do we know that 1.13 works with CNI 0.8.1, or are the CNI plugins that are published in 0.8...
A: So we made some additional changes to it: adding some owners in for release engineering reviewers and approvers, as well as moving it from build/dependencies to build/external-dependencies so that we could apply that OWNERS file, right, and bring a few more owners in instead of just the top-level build approvers; more dependency updates; and swapping the location for the script to check. So we're waiting on that to land, which is waiting on... I have to write the policy for how we want to do this moving forward.
A
So,
basically
a
if
you're
updating
go.
This
is
what
you
should
do
if
you're
updating
at
CD
and
so
on
and
so
forth
right,
so
we're
kind
of
using
the
the
CNI
version
bump
to
wrap
my
head
around
it
and
document,
some
of
it
and
I
think
finding
people
who
have
similar
experiences
bumping
like
steroid
tools
or
someone
who's
bumped,
go
before
or
at
CD,
and
making
sure
that
we're
capturing
all
of
those
experiences
in
the
policy
before
merging
that
in
do
you
want
to
add
some
color
on
the
packaging
thing.
D
It's
a
mess,
we've
been
doing
things
in
a
way
that
used
to
make
sense,
but
maybe
doesn't
depending
on
what
our
users
want.
We've
had
a
lot
of
users
complain
about
issues,
but
it's
also
hard
for
us
to
know
what
the
right
thing
is
across
multiple
sets
of
conflicting
user
demands
and
then
the
other
big
caveat
on
all
of
this
is
I'll,
say
it
officially,
for
the
recording
you'd
be
kind
of
crazy
to
use
our
repos
directly.
D
So,
of
course,
you
might
want
to
pull
it
into
a
CI
system
to
tentatively
test,
and
at
that
point,
if
you
saw
failures,
you'd
report
them
so
the
more
we
start
operationalizing.
Moving
these
things
forward,
the
more
risk
we
have
of
incurring
user
issues
as
we
break
them
either
we
break
them
in
production
or
we
break
them
and
test
either
way
we're
still.
We
we
have
reason
to
believe
that
we
will
cause
breaking
changes.
D
So
the
biggest
thing
that
we
can
do
here,
I
think
is
splitting
those
repositories
out
into
version
specific
clusters
of
packages
and
where
this
I
think
the
word
this
becomes
controversial
is
that
starts
looking
like
making
a
kubernetes
113
preferred
distribution
of
packages
and
a
career.
These
115
preferred
distribution
of
packages,
but
we
kind
of
need
that
for
our
own
CI
anyway,
and
it's
the
type
of
validation
and
proof
point,
our
users
seem
to
value,
especially
in
the
changelogs,
where
we
list
all
of
the
dependencies,
but
those
are
haven't
historically
been
well-managed.
A
So
so
coming
soon,
there
are
a
lot
of.
There
are
a
lot
of
balsa
jackal
on
that
one.
But
again,
if
you
have
not
had
a
chance
to
check
out
this
Pharrell
engineering
brainstorm,
we
actually
talked
about
some
of
this
stuff
here
and
yeah
super
valuable
to
take
a
look
at
because
everything
that
we're
mentioning
is
is
documented
here,
essentially,
okay
back
to
the
board,
actually
any
questions
based
on
it,
everything
that
we've
said.
A
All
right
all
right
here
we
go
so
okay
that
I
don't
need
to
do
anything
with.
So
this
is
adding
a
build
canary,
a
job
to
canary
the
release
tooling.
So
this
is
based
on.
So
this
is
a
test
in
fro
one
three,
three
four
zero,
and
this
is
related
to
a
bunch
of
changes
that
we
made
on
on
some
shell
scripts.
Adding
shell
checking
our
shell
checking
our
shell
scripts,
which
led
to
various
breakages
in
our
release,
blocking
our
release,
plucking
jobs,
kind
of
broke
kubernetes
for
a
little
bit.
A
This
happened
a
little
before
July
4th,
maybe
the
end
of
the
end
of
June.
So
a
few
things
to
do
here
and
one
of
the
problems
that
we
have
is
that
the
kubernetes
kubernetes
repo
depends
on
kubernetes
release,
two
to
run
some
of
its
CI
jobs
and,
and
the
reason
for
that
is
a
fun
script
called
primarily
a
script
called
called
my
computer
froze
called
push,
build
dot,
Sh
right,
so
push
fill.
Is
it's
in
the
name?
A
It's
responsible
for
pushing
builds
of
kubernetes
and
CIA
then
uses
these
builds
to
to
run
its
tests
against
right.
So
push
build.
Is
also
I
believe
Ben
elder
recently
opened
an
issue
about
moving
push
build
to
moving
push,
build,
secure,
burn
a
disturber
Nettie's
which
would
solve
the
problem
which
would
break
that
dependency,
but
there
are
also
different
libraries
that
we
source
yes,
bash.
Libraries,
different
bash
libraries
that
we
source
Kommandant
sh,
get
lived
on,
sh
release,
lib
sh,
all
of
which
would
need
to
either
move
along
or
be
more
easily
consumable.
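(Very roughly, what that cross-repo dependency looks like from a kubernetes/kubernetes CI job's point of view; the clone path and flag are illustrative.)

```bash
# A k/k CI job fetches kubernetes/release just to push its build artifacts
# somewhere the e2e jobs can pick them up.
git clone --depth=1 https://github.com/kubernetes/release /tmp/k8s-release
/tmp/k8s-release/push-build.sh --ci   # flag shown for illustration only
```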
A: So maybe we use bats; potential idea, that's kind of just tossing that out there.
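(If bats were used, a test against one of the sourced shell libraries could look something like this; the file path and function name are stand-ins, not a claim about the real library API.)

```bash
#!/usr/bin/env bats
# Minimal bats sketch: source a release library and assert on one function.

setup() {
  # "lib/releaselib.sh" and "release::set_build_version" are placeholders.
  source "lib/releaselib.sh"
}

@test "set_build_version fails when no branch is given" {
  run release::set_build_version ""
  [ "$status" -ne 0 ]
}
```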
A: Longer-term goals: we want to be able to refactor the libraries that exist in the kubernetes/release repository and then basically call them from whatever parts of the scripts currently exist, right? So Ben was kind of thinking that he would like to work on that, and I know someone else mentioned to me interest in working on some of this. He'd like to work on essentially decomposing one of the tools, a smaller tool that we use, maybe branch fast-forward or something, and rewriting the required pieces of the library, right? So then we have a rewritten library, or rewritten libraries, plus a smaller tool rewritten in a more testable language; I think, you know, Go is the plan right now. And only when that happens, like, if these libraries become Go packages, then that's something that we can...
A: So what we want moving forward is a very concerted effort on doing that refactor: again, finding smaller tools, being able to rewrite the libraries, because these, the common, gitlib, and releaselib libraries, are all libraries that we depend on for essentially most of these release tools. So, yeah, TBD, and then once those tools are moved over, or once those tools are in a place where they're importable, or kind of easier to shift around from repo to repo...
A: But I think that, given some of the recent traction that we've had, that is not impossible, and given we've got more of your lovely faces on the call, people showing up, people willing and interested in doing the work, we will move a lot faster than we have in the past. Hey.
B: So, can you see my screen? Yep. So what I was doing during the last two or three weeks was, like, digging very deeply inside anago and, like, trying to understand the whole flow, the dependencies, et cetera. So I was working... I created, like, already, flow charts for almost everything in anago, and I'm finishing right now, like, checking...
B
So,
when
I
finish,
that
and
I
need
a
week
or
so
I
will
show
like
the
whole
flow
chart
with
the
document.
Where
are
dependencies
and
what
are
like
a
logical
flow
of
these
scripts
and
I
think
that
it
can
be
helpful
first
to
write
proper
tests
in
the
future.
Second
check:
what
are
what
is
what
actually
these
particular
scripts
are
doing?
Logically,
and
this
is
just
said,
what
I
wanted
to
show
you
and
I
was
working
on
it
and
I,
don't
know
how?
A
Yeah,
this
is
really
awesome,
so
Tim
Tim
showed
me
this
I
think
last
week,
and
so
the
one
thing
I
mentioned
and
I
think
like
one
I,
think
it's
awesome,
I
think
it's
gonna
I
think
it's
gonna
move
the
needle
forward,
especially
like
we
when
we
start
exploring
these
scripts,
we
do
a
lot
of
like
staring
at
it
for
a
while
and
then
I
think
we
do
and
I
think
we
do.
We
start
doing
the
same
workflow
in
our
minds
like
okay.
Well,
how
does
this
map
to
this
I
open
this?
Finally,.
A: Yes, I think it's going to be super helpful. What I would say, so what I meant with regards to documentation, is stating your intent to work on something with the community, right? That way everyone is aware. So, to give an example for this one, and I mentioned this to Tim: I was essentially doing the same archaeology.
A
When
you
were
working
on
the
the
artifacts,
the
artifacts
documentation,
I
was
doing
the
same
work
right,
so
we
need
to
make
sure,
and
especially
because
we
have,
we
have
so
many
more
people
involved
in
this.
Now
we
have
people
on
on
working
group,
Cates
infra,
who
are
doing
things
that
are
that
are
related
to
release
engineering.
We
also
have
new
people
scaling
up
on
the
release
engineering
side.
A: If you are going to work on something, if you plan to work on something: one, make sure you discuss it, it's great if you can discuss it with the release chairs, all right, so myself, Tim, and Caleb; and make sure you document it, make sure you state your intent somewhere so that we can track it the same way we're tracking everything on this board. Knowing the work that people are doing, so we don't step on each other's toes and we don't duplicate work moving forward, is really, really important.
A: All good? Okay, all right. So I think that we've got more board to walk, but we've only got four minutes, and for anyone who is on the release team, you've got a doubleheader: you've got the release team meeting coming up. So I want to give a final opportunity for any introductions, anyone who hasn't said hi yet, and any open discussion items. Hey.
A: Cool, welcome, welcome. So, one more thing to mention before we go: this meeting is, well, you already know it's about the release, but for the people who are on any of the release manager groups, right, so the patch release team, the branch managers, the release manager associates, I really hope that you will be attending this meeting. We tried to...
A
We
recently
moved
all
of
our
meetings,
essentially
to
make
sure
that
it
was
convenient
time
for
at
least
amia
and
and
some
of
the
further
reaching
European
time
zones.
It's
still
tricky
I
think
we
could
find
a
tricky
an
easy
time
for
for
a
PJ,
which
is
unfortunate,
so
I
know
Nikita.
Who
is
on
thank
you
for
thank
you
for
biting
the
bullet
and
still
animating
that
it's
probably
like
10
11
year
time
the
we
I
think,
depending
on
depending
on
the
interest
level
of
interest
for
this
meeting.
A
If
we
need
to,
if
we
need
to
look
at
doing
something
for
free
APEC,
we
probably
can
in
future,
but
at
the
very
least
we
wanted
to
make
sure
that
we're
pulling
in
more
of
the
AMIA
folks
at
a
more
convenient
time
zone
for
free
all.
So
thank
you.
Everyone
who
participated
in
the
in
the
doodle.
Thank
you,
everyone
who
has
complained
to
us
for
cycles.
That's
that
the
meeting
needed
to
be
moved,
so
we
we've
done
it.
We
can
finally
close
a
ticket.
There
was
ticket
open
for
it.
A
We
can
close
that
now
uh-huh
and
thank
you,
everyone
who
showed
up
to
the
first
one,
hopefully
we'll,
hopefully
we'll
continue.
This
very,
very
interesting
discussion
about
release
engineering
I
will
see
you
all
next
time.