From YouTube: Kubernetes SIG Testing - 2021-02-09
A: Hi everybody, today is Tuesday, February 9th, and this is the Kubernetes SIG Testing bi-weekly meeting. I am your host, Aaron Crickenberger, aka spiffxp on all the places. We adhere to the Kubernetes code of conduct in this meeting, which basically means we be our very best selves and don't be jerks to each other. If you have a problem with the conduct of this meeting, please email conduct@kubernetes.io, or you are free to reach out to me privately.
A: This meeting is being publicly recorded and will be posted to YouTube later. So, let's see here. First up, I feel like I recognize most of the names here, but is anybody new to this meeting? Do you want to introduce yourselves, say a little bit about why you're here?
A: We're gonna have Ben and Ahmet talk about some KEPs that we are proposing for this release cycle, and then I wanted to talk a little bit about the work remaining to be done to close out the Kubernetes CI policy work, like all release-blocking jobs running on community infrastructure. And then, if we have time, I'm happy to talk about some of the lessons I learned while trying to rename the kubernetes org repo's default branch to main, and some of the work we could use your help with to make this transition easier for the other 150-something repos we have in the project. So with that, I'm gonna hand off to Ben, but I will give him co-host access so he can share his screen if he would like.
C: So the bulk of the KEP is here. This is also linked in the doc for this meeting, which may be a little bit easier. So the gist of this is: at some point in the past we introduced the Bazel build system to Kubernetes. The reason for this was we wanted more hermetic builds that are, like, isolated from external infrastructure, and reproducible.
C: We wanted builds with good caching; at the time Go didn't have any caching yet, and Kubernetes is a really large build, and we're doing it over and over and over in CI, so, you know, okay, this will be great. So a few of our SIG Testing members in particular put a lot of time into this. It got to a pretty good place, but it has some developer friction, and it never quite shipped a release.
C: We ran into some issues around cgo in particular, with the Go integration, and at this point I think the community has pretty well decided that we're not interested in pursuing this further; it's gone on for years and hasn't gotten traction as what we're switching to. So, rather than continue to maintain two build systems in parallel, I'm proposing to spin down the Bazel build system and continue using the make-based one that we actually ship our releases with. We've also run into some issues in the past with things like slightly different configuration in the build and test.
C: So, for example, at one point we were shipping releases that didn't actually have SELinux enabled properly, even though our tests were passing in CI, because they were using the Bazel build, which did have it on correctly. So that's pretty much the idea.
C: The KEP goes into some more detail about how we're going to migrate, but it is pretty much what you'd expect: we're going to start with the most critical jobs, like the Kubernetes blocking pre-submit testing, and migrate them one by one. And we've done some pre-work to check things; like, for example, the unit tests, which were one of the things that benefited the most from switching to caching, actually run pretty acceptably today, and we're doing some more work around that just in general, because the tests should be faster. And we have some future work we could potentially do, to do things like improve the make system to be a little bit less terrible, but for the most part it functions. It has shipped our releases for a long time, and I believe it's what most contributors are using.
C: Right, yes, and I should also have been clear up front: this is exclusively for the kubernetes core repo as well. We're not trying to, you know, declare how sub-projects should handle their builds. We might provide input or guidance, but you know, that's their decision. We're only trying to reduce the maintenance load on the main kubernetes repo, which has quite a lot of work going into its build systems. To do things like Go upgrades, they're surprisingly more involved than you might hope.
C: Potentially, depending on the friction with maintaining the ongoing release branches, we might cherry-pick this back, but in general I think we try to avoid sweeping changes to branches that have already had a release cut, and we're fairly hopeful that, logically, it should actually cherry-pick just fine. It will actually have fewer conflicts: you'll just need to generate any of the build changes on the previous branches, but you won't need to generate them on top of some cherry-picked-back change. So hopefully it will actually be one of the smoothest routes to just do this to master, or the primary development branch, and then age this out over time. We also don't do a lot of the build maintenance, like Go upgrades, very often in the release branches; we mostly freeze the tooling once we release, so we get most of the benefit from doing this in the ongoing development branch.
A: All right, thanks a bunch for your time, Ben, appreciate it.
D: Yeah, I'll start along the same way and drop a link to the KEP in the chat. A quick refresher for folks who don't know what kubetest2 is: last year we started this effort called kubetest2, which is basically an extension of kubetest, and the reason we did that is kubetest has sort of grown like this organic tool with a bunch of hacks here and there, and making changes to it is fairly brittle, and it means that if something goes wrong, all CI jobs end up breaking. And so we have this sort of good separation of concerns in kubetest2, where the deployers are separate binaries and the testers themselves are also separate binaries, which gives us a lot more control over how we can change specific jobs and keep the scope of the change fairly limited.
D: So this KEP basically talks about starting to move some of the jobs from kubetest to kubetest2. We've had a lot of progress on most of the common deployers that are being used in jobs today, the biggest one being the GCE cluster-up deployer, and we already have a bunch of periodic jobs that are canary jobs, trying to mimic what we do with existing kubetest jobs, and we've seen good progress there.
D: So we probably want to start migrating some of the existing kubernetes pre-submit-blocking and release-blocking jobs to kubetest2, and that way we get deprecation of most of the things that we've said we'll deprecate, like scenarios and bootstrap. So this effort sort of tracks all of these in one place.
D: We're specifically talking about pre-submit-blocking and release-blocking in this KEP because, from some numbers, there are more than 1700 end-to-end jobs running, and we definitely don't want to be switching them all over at once. So this will be sort of a job-by-job migration.
D: So we've chosen this subset of jobs that we can monitor closely, but we'll also be working on a guide that will let other folks sort of migrate their jobs on their own and monitor them; we'll basically document what exactly needs to change for a job to use kubetest2 instead of kubetest. Yeah, and the basic details are there in the KEP.
D: We'd just migrate job by job, and, similar to the Bazel KEP, we won't really be making any changes to existing release branches, because affecting their release signal is something that we don't want to do, and they already have jobs using kubetest running, so we'll just let them age out, sort of. Yeah. Any questions?
C: Have we thought a bit about how we might break up this effort? This seems like it's going to touch on a lot of people's CI, if we can ideally get everything migrated. This kind of feels like the scope of, say, the CI policy efforts that Aaron started earlier this, er, at the end of last year.
D: Yeah, we'll definitely need a bunch of folks collaborating on this, especially CI Signal and SIG Testing, also SIG Release. But mainly, just to start with, we will probably have duplicate jobs, so that the existing jobs will still be there to still have the old signal while we are trying to get new jobs in place, and once we do the switch-over, we'll still keep the old jobs as optional.
D: That way, if something is failing on, say, a PR, folks can still run the old job and get the same signal, and we'll have, like, tracking dashboards, testgrid dashboards. We already have a dashboard called presubmits non-blocking, which has a couple of canary jobs already there, and that seems like a good place to have these jobs that will shadow the existing kubetest jobs.
D: So if you remember, one of the efforts that we were trying to do is move away from bootstrap to use pod-utilities, which is sort of the new way of writing prow jobs, and one of the reasons we couldn't just do that is that kubetest, and the kubernetes e2e scenario specifically, depended on bootstrap a lot. And since kubetest2 doesn't have any dependency on bootstrap, we get that migration for free, at least for the subset of jobs that we are trying to migrate.
A: Scenarios, yeah. I've got another sort of thing related to that, the migration from bootstrap to pod-utils: I opened up sort of an umbrella issue to cover that. I feel like I also kind of want to subtly shift its wording there; like, pod-utils have been the way to write prow jobs for at least a year and a half. That's when we marked bootstrap.py as unsupported by test-infra on call, and it was marked as deprecated for a couple years prior, if I remember correctly. So I want to give a shout-out to Ricardo Katz and Dims, who sort of put together a guide on how to migrate from bootstrap to pod-utils, and Ricardo went through this exercise for the node e2e pre-submits.
A: It also wouldn't be that hard to write some, like, tests or something that help us survey which jobs don't use pod-utils right now, so we just sort of get kind of an upper bound of how many jobs there are to migrate, and we could start identifying which SIGs own which jobs, and this sort of stuff can then be leveraged to help us keep track of the jobs outside of the release-blocking and merge-blocking ones. A rough sketch of what such a survey could look like is below.
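A minimal sketch of that survey, assuming prow job configs laid out like kubernetes/test-infra's config/jobs tree, and assuming that `decorate: true` is what opts a job into pod-utils; this is an illustration, not the actual tooling:

```python
# Sketch: survey which prow jobs are not yet pod-utils based.
# Assumptions: configs live under config/jobs, and `decorate: true`
# is what marks a job as using pod-utils.
import pathlib
import yaml  # pip install pyyaml

def undecorated_jobs(config_dir="config/jobs"):
    found = []
    for path in pathlib.Path(config_dir).rglob("*.yaml"):
        doc = yaml.safe_load(path.read_text())
        if not isinstance(doc, dict):
            continue
        # Periodics are a top-level list; presubmits/postsubmits are
        # maps of repo name -> list of jobs.
        candidates = list(doc.get("periodics") or [])
        for key in ("presubmits", "postsubmits"):
            for jobs in (doc.get(key) or {}).values():
                candidates.extend(jobs)
        for job in candidates:
            if not job.get("decorate", False):
                found.append((str(path), job.get("name")))
    return found

if __name__ == "__main__":
    jobs = undecorated_jobs()
    print(f"{len(jobs)} jobs still not using pod-utils")
    for path, name in jobs:
        print(f"  {path}: {name}")
```

Grouping the results by config file path would get close to the which-SIGs-own-which-jobs breakdown mentioned above, since job configs in test-infra are organized by org and repo.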
A: Okay, I guess I have one last question, Arno, I think, as a tech lead for SIG Release: did you all talk about this in the SIG Release meeting this morning? I didn't have time to attend, but Sasha made it sound like y'all chatted about this a bit.
A: Sounds good. Yeah, unfortunately, like, I have a standing conflict with that time. I'll see if I can cut it short one of these days.
A: Okay, so I guess next up, I wanted to talk about kind of wrapping up the CI policy work for the release-blocking jobs that we started back in August. So I'm going to be doing a lot of talking here; I would appreciate it if somebody wants to take notes, in case I say stuff that sounds useful, or like issues we should create, or whatever. And I am going to do the boring thing of showing you walls of text while I talk, just so you all can see the same thing I'm speaking to. So in the meeting notes here, I've linked to the issue that says all release-blocking CI jobs need to run on community infrastructure. You can see...
A: Basically all of them are done, except for the build job, and then this periodic bazel build job, sort of in light of Ben's KEP to kind of remove the Bazel stuff.
C: Your screen is, like, frozen white; we're not receiving video. What you're...
A: All right, I will stop sharing and try again. But maybe Safari is just being annoying; I'll try one more time. Otherwise, I guess you'll just have to hear me talk through it. All right, is it sharing now?
A: I will have to test out my setup before the meeting. So I will instead link the issue in chat, but it's linked in the doc; so that's the umbrella issue. For the two remaining, you heard what I said. The comment at the very bottom there is the test we're using to keep track of which jobs have not been migrated over to community infrastructure; a sketch of what that kind of check looks like is below.
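As a sketch of that kind of tracking check: prow job configs carry a `cluster` field, and jobs on community infrastructure run on clusters like `k8s-infra-prow-build`, so a check can flag release-blocking jobs scheduled anywhere else. The dashboard annotation and cluster names are conventions from kubernetes/test-infra at the time; treat the details as assumptions:

```python
# Sketch: flag release-blocking jobs that still run outside
# community-owned clusters (assumed to be named k8s-infra-prow-*).
import pathlib
import yaml

def non_community_release_blocking(config_dir="config/jobs"):
    offenders = []
    for path in pathlib.Path(config_dir).rglob("*.yaml"):
        doc = yaml.safe_load(path.read_text())
        if not isinstance(doc, dict):
            continue
        for job in doc.get("periodics") or []:
            dashboards = (job.get("annotations") or {}).get(
                "testgrid-dashboards", "")
            on_blocking = "sig-release-master-blocking" in dashboards
            cluster = job.get("cluster", "default")
            if on_blocking and not cluster.startswith("k8s-infra-prow"):
                offenders.append(job.get("name"))
    return offenders
```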
A: So, is this something anybody wants to help out with?
A: Okay, I will ping the Release CI Signal channel. And Arno, I'd love to have your continued help in pushing this forward; that would be greatly appreciated.
A: So I think that's the work necessary to close out the CI policy stuff. Now, there's kind of broader work to make sure that we stop using kubernetes-release-dev, and all the images that kubeadm uses, which also live in a google.com-owned project called kubernetes-ci-images.
A: I want to make sure that eventually all jobs use community-owned and hosted artifacts, and then, sort of as a next step, I want to ensure that all sort of hard-coded mentions in the various repos around kubernetes stop referring to kubernetes-release-dev and start referring to k8s-release-dev. That last part, since it spans a bunch of projects, may ultimately need to be a KEP, which I don't think I'm going to have time to land prior to enhancements freeze today. But we will be able to close this out, and we can say: hooray, all kubernetes jobs involved in creating kubernetes run on community infrastructure and pull from community-hosted artifacts.
A: We added a flag to kubetest and, I believe, kubetest2 to describe, like, which bucket you want to be pulling your releases from, and so my suggestion was that we explicitly specify the k8s-release-dev bucket for a couple jobs, and then, when we are satisfied that that works for those jobs, we can flip the default of that flag, so that, if you don't explicitly list that flag, you will be pulling from k8s-release-dev. That should capture most of the jobs that use kubetest or kubetest2, leaving only the bootstrap jobs that pull from the old bucket, and, based on Ahmet's work, none of the kubetest... sorry, none of the old bootstrap jobs will be involved in release-blocking or merge-blocking.
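The flip-the-default pattern being described might look like the following sketch; the flag name `--extract-ci-bucket` is an assumption for illustration, not necessarily the real kubetest flag:

```python
# Sketch of the two-phase default flip described above.
# Phase 1: default stays on the old google.com bucket, and a couple of
# jobs opt in to k8s-release-dev explicitly to prove it works.
# Phase 2: flip the default, so jobs that never set the flag silently
# start pulling from the community bucket.
import argparse

def make_parser(default_flipped: bool) -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--extract-ci-bucket",  # hypothetical flag name
        default="k8s-release-dev" if default_flipped
        else "kubernetes-release-dev",
        help="GCS bucket to pull CI builds from",
    )
    return parser

# Phase 1: explicit opt-in on a couple of jobs.
args = make_parser(default_flipped=False).parse_args(
    ["--extract-ci-bucket=k8s-release-dev"])
assert args.extract_ci_bucket == "k8s-release-dev"

# Phase 2: jobs that pass nothing get the new bucket.
args = make_parser(default_flipped=True).parse_args([])
assert args.extract_ci_bucket == "k8s-release-dev"
```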
A: Thank you for helping out with the SIG Network stuff; it's really appreciated.
A: Yeah, so my proposal there would be to take the ci-kubernetes-build job that pushes to the google.com-owned kubernetes-release-dev, and move that to the release-informing dashboard, so that's no longer a release-blocking job, and then we can sort of define a deprecation window that is relevant, to continue having that job produce new CI builds at the old location.
A: We may eventually want to do something like sync the builds from the community location to the old Google location, but I feel like that is orthogonal to closing out the CI policy work.
A: So the next thing I'm going to talk about, a little bummed I can't share my screen, is the list of things that went well and could have gone better when renaming the default branch of the kubernetes/org repo.
A: This sort of overlaps the... so this is me speaking as a member of the GitHub admin subproject, as part of SIG ContribEx, but this is also very relevant to the interests of the naming working group.
A: Basically, GitHub now provides a button that you can click that will rename the default branch in your repository, and it looks really scary in Slack, because, basically, it looks like the person who clicks the button deletes the master branch and then immediately creates a new branch with your name; we're using main as our default branch name. And so I have appreciated the flashing alarm emojis when people noticed that happen. But then GitHub will automatically retarget all open PRs and make sure that they are targeted at the main branch, and when Prow sees a PR's base branch change, it re-triggers that PR's pre-submit jobs.
A: We don't currently have the capacity to handle a spike of 9,700 jobs all running at once, to say nothing of the fact that, like, some of those jobs spin up hundred-node clusters and whatnot. So I feel like, collectively, we need to figure out a way to have Prow not do that.
A: We currently have a max_concurrency setting that you can set on a per-job basis; so, for example, the 100-node cluster jobs, the scalability pre-submits: we can only have 12 of those running at a time.
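For reference, this is roughly what that cap looks like in a prow job config; `max_concurrency` is the real prow field, and the job name matches the scalability pre-submit being described, but the surrounding fields are a trimmed illustration:

```python
# Sketch: a per-job concurrency cap as it appears in prow job config.
import yaml

job_yaml = """
presubmits:
  kubernetes/kubernetes:
  - name: pull-kubernetes-e2e-gce-100-performance
    max_concurrency: 12   # at most 12 of these 100-node runs at once
    always_run: false
    decorate: true
"""

job = yaml.safe_load(job_yaml)["presubmits"]["kubernetes/kubernetes"][0]
print(job["name"], "is capped at", job["max_concurrency"])
```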
A: Maybe the next 12 would have their jobs run and pass, and then probably everything after that would hit a timeout, and so you'd see the little status context on the pull request look like, you know, "pod pending timeout" or something like that.
A: So presumably there is something that can be done to Prow to ensure that we don't re-trigger everything. That might be that we need to listen to a different version of the GitHub API than we are; I don't think that's quite it. Or maybe we could be smarter about whether we consider a PR's base branch to have actually changed, and whether we actually need to re-trigger.
A: So that's one thing. The other kind of annoying thing is all of our periodic jobs that, you know, run on the release-blocking dashboard and stuff like that: they have the branch name hard-coded in their configuration, which means you kind of have to synchronize when do I push the button to rename branches, and when do I update the job configurations to use the new branch names?
C: I think... I'm not sure that's actually going to work, because in a lot of repos, what you have is: you either have jobs that don't care, because they don't have branches, or, in something like kubernetes, you have jobs that are like "this is the pre-submit for only that branch," and then there's other jobs that run on the release branches that are different, so we match by regex that we want to be that one specifically. So I think the thing you can do instead is update it to target either of those names.
A: You can do that for pre-submits and post-submits; I have done that, and that works. So that gets to, like, my goal: you open up one PR, and then you can click the button, and then it doesn't matter, like, when you go back and clean up configs or whatever. So having pre-submits and post-submits both trigger on either master or main works; it's great. It also works for the milestone applier. A sketch of that kind of config update is below.
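A minimal sketch of that config update, assuming prow's `branches` field and a test-infra-style YAML layout; a one-off helper, not the actual tooling that was used:

```python
# Sketch: wherever a presubmit or postsubmit targets master, make it
# also trigger on main, so the rename button can be pressed at any time.
import pathlib
import yaml

def add_main_alongside_master(path: pathlib.Path) -> None:
    doc = yaml.safe_load(path.read_text())
    if not isinstance(doc, dict):
        return
    changed = False
    for key in ("presubmits", "postsubmits"):
        for jobs in (doc.get(key) or {}).values():
            for job in jobs:
                branches = job.get("branches")
                if branches and "master" in branches and "main" not in branches:
                    branches.append("main")
                    changed = True
    if changed:
        path.write_text(yaml.safe_dump(doc, sort_keys=False))

for p in pathlib.Path("config/jobs").rglob("*.yaml"):
    add_main_alongside_master(p)
```

Note that round-tripping through yaml.safe_dump drops comments and reformats the file, so in practice an edit like this wants to go through whatever config tooling the repo already has.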
A: It also works for, what's the other thing, branch protector, if you explicitly rename branches; the website repo is the only one that does that. It's the periodics, which dominate by job config volume; like, we define way more periodics than we do pre-submits or post-submits. So both pod-utils-based jobs and bootstrap-based jobs could benefit from modifying bootstrap and modifying pod-utils to not require an explicit designation of the branch.
A: In pod-utils' case, you have to specify a branch name right now. I'd love to be able to say: if the branch name isn't specified, use whatever the remote repo's default branch is. There's a really simple git command you can use to do this, if you wanted to do it at the pod level, or perhaps there's a way we could have Prow interact with GitHub's API when it's triggering a pull request, to do the same thing with bootstrap.
A: Bootstrap does actually allow you to specify: just check out the repo, and you don't specify the branch. But bootstrap currently defaults to master if you don't set that. So again, with bootstrap, it would be ideal if we could give it logic to, I don't know, either default to main, or default to whatever the remote repo's default branch is; a sketch of what that fallback could look like is below.
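The "really simple git command" could be `git ls-remote --symref <url> HEAD`, which reports the remote's default branch without cloning. A sketch of the proposed fallback logic (bootstrap itself is Python; the function name here is illustrative):

```python
# Sketch: resolve the remote's default branch instead of assuming master.
# `git ls-remote --symref <url> HEAD` prints a line like:
#   ref: refs/heads/main    HEAD
import subprocess

def remote_default_branch(repo_url: str) -> str:
    out = subprocess.check_output(
        ["git", "ls-remote", "--symref", repo_url, "HEAD"], text=True)
    for line in out.splitlines():
        if line.startswith("ref:"):
            return line.split()[1].removeprefix("refs/heads/")
    return "master"  # conservative fallback to today's behavior

print(remote_default_branch("https://github.com/kubernetes/kubernetes"))
```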
A: I've done this migration before on the kubernetes/org repo and the kubernetes k8s.io repo, which both had under 20 PRs, so the spike of all jobs being triggered simultaneously was not that bad. If you own a repo that sort of fits those criteria, and you want to work on this, talk to me, or, like, come talk to us in the github-management Slack channel, and we'll sort of sign you up to trial this for your repo; that's the approach for the moment, I feel like, until we solve these issues.
A: Okay, I have run out of things to talk about. If you all are willing to bear with me for a couple minutes, I just want to kind of ask a meta question.
A: I kind of feel like, over the past couple of months, these meetings have mostly been me talking to a mostly silent audience, which gives me concern that you all don't find this informative or interesting. I mean, you're showing up, which is definitely one signal, but I kind of want to understand how we could work to make this space something where you're more willing to ask questions or bring issues, because I feel like watching me talk for 30 minutes straight is pretty boring.
A: Thank you. Like, it's certainly not my intent to call people out. I recognize that not everybody's comfortable showing their face on screen; not everybody participates in meetings or communicates the same way. So maybe, like, showing up and talking a bunch is not your jam, which is cool. Which kind of leads me to the other meta question I had, that Ben probably cares more about: if all I'm doing is showing up and talking, and these are mostly informational blasts, could this meeting be an email?
C: Aaron, it could be an email. Actually, I have a little bit of a different idea. I've been in discussion with SIG ContribEx about this. They first started with doing asynchronous meetings through Slack, by having a thread for each topic. I'm not sure that particular approach panned out the best, but I'm interested in doing something similar and instead refocusing this meeting on sort of the, like, people-to-people part.
C: I think, because it is just, like, one or two of us talking, it doesn't do a whole lot for having a sense of community, and I think that's the actual useful, valuable part of hopping on a video call. For collaborating with our global community on topics like "should we remove this build system?", I think we should use asynchronous discussion, so I'm actually planning to file a proposal later about GitHub...
C: ...Discussions, which are now a thing. I'm wondering if we can make a, like, kubernetes/sig-testing repo, similar to kubernetes/sig-release, and host GitHub Discussions there for topics that otherwise would have been just one of us talking in this meeting. I'm kind of hoping that people will be more likely to comment on what is sort of like a GitHub issue or something, instead of, you know, in this very particular video meeting, and maybe we can try to repurpose the video meeting for a bit more of a, like, meet-and-greet, that sort of thing.
A: Yeah, I mean, not to keep talking, but to echo Ben's sentiment: I personally get a lot more... I am an introvert, so group stuff doesn't usually work well for me, but I do find I get a lot out of connecting with people as human beings, and so that's more the satisfaction I derive from this meeting: getting to see you all here every two weeks and catching up and stuff. Personally, I feel like, in the current pandemic reality in which we all live...
A: I feel like it is especially important to remember that there are humans on the other end of the screen. But when it comes to, like, making decisions and collaborating, I want to make sure that we are friendly to people who can't make this particular meeting in this particular time slot, and I can stop selfishly... it's a lot easier for me to ramble, but I think I agree with most of what Ben said, and I think the steering committee talked about this two meetings ago too.
A: So for those of you who are showing up here, I especially value your opinion on, like, what you like, why you're showing up, what you want this meeting to be, and how we can make it a better space for you.
G: From my side, I usually say nothing in this meeting; I just gather information, and I find the interactive discussion much more useful than just another email, because emails... I think we all have enough of those. So I wouldn't say there's no value in this: just hearing you raise the topics and getting some feedback, even though it's not a wordy discussion on every topic, I do think there's value in this for the community, especially if you are still in need of new information.
C: Thanks, yeah. So I'll also add two comments here: this was discussed with steering, and I believe, apparently, they're looking into recommending this sort of thing, for the same sort of, like, accessibility and, you know, time zone concerns and things.
C: I believe the approach that ContribEx is taking currently is to alternate a bit; so, like, you're still having video meetings, but de-emphasizing them a bit, away from things like decision-making, and more toward, like, circulating ideas and getting to check in with each other, that sort of thing.
A: Sure, and then, in that sense, I guess I do feel like that's kind of what this meeting has been, though maybe it's less circulating and more me just rambling about them, or you and Ahmet, which is super appreciated. But yeah, I feel like, early on, I tried to title this meeting "the SIG Testing office hours," to sort of encourage more of that atmosphere of, like, hey, let's chat, why not? Let's, you know... if you value sort of the spontaneity of in-person discussion, we're here for that, so I'm happy to make this that kind of space. But it felt to me like, when I sort of titled the meeting "office hours," people were like: oh, it's just office hours, I don't care about that, I'm not gonna show up.
C: Yeah, I think the problem is we've been trying to fill it with... like, sort of relying on "oh, we'll just get input from this meeting," and it's maybe less the meeting's problem, and more the, like, getting ideas circulated and commented on outside of the meeting, leading to us attempting to do that in the middle of the meeting. I think it's also harder for something like a KEP to usefully comment on it...
C: ...if you haven't seen it ahead of time, because they're just so wordy, with the KEP template, for example. I also have some concern that, if we, like, renamed it back to office hours, people probably just wouldn't show up; but on the other hand, if, like, no one's interested in that, then, like, maybe it isn't.
A: Yeah, I don't know. I kind of feel like, per the comment about making decisions in an async manner, I don't want to make a decision here, but I do think we'll take this into consideration and email out something for us to make a decision. But I like the idea too. I don't know; my gut kind of tells me: keep the meeting name, but send out a communication that's pretty explicit about, hey, we're sort of trying to retool or repurpose, and here's what's going on. Anyway.
A: Okay, sounds like we're good. Yeah, I said it before, but I super appreciate all of you. You've all done great stuff to help move this project forward, and I look forward to seeing you again in two weeks' time.