From YouTube: Kubernetes WG K8s Infra Bi-Weekly Meeting for 20200513
A
Hello everybody, today is Wednesday, May 13th. You are at the WG K8s Infra bi-weekly meeting. I am your host, Aaron Crickenberger.
A
So I would normally take the time to welcome any new members, but I feel like I see a bunch of familiar faces here today. Thank you all for showing up. I'll start today off the way we start every meeting, by taking a look at our —
A
SGTM — something I just wanted to do as a take-away.
A
So it looks like it's the end-to-end tests. This is all of the resource usage in the e2e projects themselves: this is where test clusters are getting stood up and are running, and the e2e tests that exercise them are happening elsewhere.
A
I don't know — I was going to say the reason you're seeing usage increase over time is because we've been running more tests over time, but we haven't been increasing tests.

C
Each and every day — I think you're partially paying, you have to pay for storage of logs as well, so it could be related to that. If we're not deleting it, then it would be constantly growing, but I guess there's going to be a bounded time period that we'd store it for. I don't quite know, actually; I need to dig in.
A
I unfortunately don't have the bandwidth to dive into this. Do we feel like I should cut tests back over to the other build cluster in the interim? Or — Tim, you are muted, unfortunately. Or you're talking, but we can't hear you, which is weird, even though Zoom doesn't think you're muted.
D
For some reason my hardware mute got pressed. Can you hear me better now?

A
Yes, I can. Yes, I think I pressed it when I was trying to make you go louder.

D
I don't think we need to — like, I don't think you need to cut back, sorry. The money is there to be spent and we're not really putting a dent in how much we have yet, so I'm not worried about it.
D
I'm looking at the breakdown by project for just logging, and they all look roughly equal, except there are lots and lots of them.

A
So maybe there is something about retention that we need to look into, or it could be that the way those clusters are stood up needs to disable logging to Stackdriver. I'm not entirely sure.
D
Can we file a bug and get somebody to, like — I wouldn't say it's urgent, but you know, eventually somebody should go and take a look at this and see: is the value of the logs commensurate with the cost of them, and if not, should we change logging, or purge logs, or what is the right action to take here?
A
Okay, so that kind of leads me to action item review. So, as you saw, I stood up build clusters. I chose a naming scheme, and I aimed to have the clusters hooked up to prow.k8s.io, plus some amount of migration of jobs. I did that in this pull request.
A
Everything that's in this pull request I have basically actuated and made real. I'm a little uncomfortable with —
A
Okay, that's okay. So I tried to summarize what's going on — I'm gonna mute you real quick, Tim, that guy distracts me. So I've created 40 projects because, as you saw, we were already using some of them, and I created some other projects to, like, manually pin jobs to for debugging.
A
— would be number one. So greenhouse is our Bazel cache. It's basically a high-CPU instance to maximize IOPS; a pod is scheduled to that instance, and then we give it like a three-terabyte SSD, so it's got all the IOPS to that SSD possible. Anything that uses Bazel and is configured to do so uses that as a cache. It provides the most benefit to any of our presubmit jobs that use Bazel.
A
This is not just for Kubernetes, but for other projects — the Google cloud provider, or cloud-provider-gcp, I think, uses it. So the prow build cluster that I set up here in kubernetes.io land, we decided to set up as a regional build cluster.
A
The
build
cluster
that
is
used
over
in
google.comland
is
a
zonal,
build
cluster,
and
so
we
assume
there's
just
one
node,
that's
configured
as
a
high
cpu
node,
and
we
pin
greenhouse
that
since
we're
using
a
regional,
build
cluster
here.
If
I,
I
can't
create
a
single
node,
I
can
create
a
single
node
per
zone,
and
so
I'm
torn
as
to
whether
to
just
deal
with
it
and
have
the
other
two
like
special
nodes.
A
— I just toss them into the build pool. Or I could try to, like, restrict the node pool locations for this cluster down to a single zone, so that I think we'd have a regional control plane but a zonal node pool. I'm curious to hear what your opinion on this is, Tim.
A
What's the goal of being regional? We had said the goal of being regional was to make sure that there was high availability for the control plane. What I have been given as feedback from the test-infra team is that they've never really had the control plane going down as an issue.
A
The only other issue that has been encountered occasionally is that us-central1 sometimes experiences stockouts, but because we kind of keep the build cluster at a relatively fixed size, we don't tend to bump into that. So I could also just tear this whole thing down and recreate it as a zonal cluster. I'm a little concerned about doing that and causing some downtime, but that's another option.
D
Can't we still configure — oh, never mind. If we're concerned about master — like, control plane — downtime across things like upgrades, then regional matters. If we're really concerned about a zonal outage, which, honestly, we probably should be — well, which role — this is the cache for all the builds. So if a zone went down for six hours, what would happen? All CI would stop for six hours, right?
F
No — but not if all of our other nodes were there and for some reason we couldn't reach them. The other — sorry, guys — we don't do a lot of adding and removing nodes, at least currently, except I guess during upgrades, so that hasn't been an issue in years so far. But I could see it in the future, depending on how we were managing it going forward.
A
The other thing to keep in mind is that traffic across zones is network egress, which costs like one cent per gigabyte, I think, so there'd be some spend there. I don't think it would be obscene spend, but —
A
It kind of doesn't, so I'm not sure, you know. Option D here would be: figure out some way to deploy a zone-local instance of greenhouse and have each zone's node pool talk to that instance of greenhouse. That's a little more complicated than I have time for; if somebody else wanted to take that on, that would be awesome. I feel like the easiest compromise of what we have now is to set up a node pool with a, you know —
A
I don't know — Ben, you had some concerns when we were talking about that yesterday. Did you want to — sorry. — And why is it 32 CPUs? Were we actually burning 32 cores? — It's because IOPS are directly related to CPU. — So, I thought IOPS was related to disk size.
A
But right here — this is the IOPS performance for anything up to 31 CPUs, and this is anything from 32 to 63 CPUs.
F
It runs in the build cluster, so the, like, access from the jobs that are building is just — it's just a service.
D
Green — I'm sorry, greenhouse. The other cores are available for scheduling, or no? — I think right now it's tainted, so they're not.
A
Yeah, for comparison's sake, the way the existing prow build cluster is sized, it's 168 CPUs' worth of instances, so 32 CPUs looks relatively tiny in the face of that. And even with that many instances we appear to be hitting some kind of resource contention, so it may be undersized.
A
For sure — maybe we could get some help figuring out —

F
— for sure how important this thing is. I think Aaron was just moving it because we wanted to lift and shift, like, everything that we're running. Yes — if we continue down the path of using Bazel for a lot of stuff, I think Eric wants us to just use GCP remote build.
A
So — I hear you loud and clear. I'm just operating right now with the goal of trying to get all of the release-blocking jobs running on this cluster, because they have a —
A
So putting greenhouse on here will benefit the kind jobs, and it will also benefit some of the bazel-build and bazel-test jobs, and generally Kubernetes presubmits, and it opens the door to presubmits. That's — yeah.
A
We kind of can't do almost any presubmit job without greenhouse. So if there are no objections, I'll just do it the slightly manual way I was suggesting: I stand up a node pool that is one 32-CPU instance per zone, and then I manually taint one of the nodes, so that one is set aside for greenhouse, but the other two nodes will just be part of the build pool that everything else uses.
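The manual taint-and-reserve setup being described could look roughly like the following. This is a sketch only — the node name, label key, and values are invented for illustration, not the actual cluster's:

```yaml
# Taint the one node reserved for greenhouse (node name is hypothetical):
#   kubectl taint nodes gke-prow-build-pool-abc123 dedicated=greenhouse:NoSchedule
#   kubectl label nodes gke-prow-build-pool-abc123 dedicated=greenhouse
#
# Then the greenhouse pod spec carries a matching toleration plus a
# nodeSelector, so it is the only workload that lands on that node:
spec:
  tolerations:
  - key: dedicated
    operator: Equal
    value: greenhouse
    effect: NoSchedule
  nodeSelector:
    dedicated: greenhouse
```

With this shape, untainted nodes in the same pool stay schedulable for ordinary build jobs, which matches the "other two nodes join the build pool" plan above.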
D
So you have it running as a —

A
We have it running as a service — like, it's just a regular single-pod Deployment, and then a Service bound to it.
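The single-pod-Deployment-plus-Service shape described here could be sketched like this. The image reference, port, and names are assumptions for illustration, not the real manifests:

```yaml
# Hypothetical sketch: greenhouse as a single-replica Deployment with a
# Service in front, so build jobs reach the cache by a stable DNS name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greenhouse
spec:
  replicas: 1
  selector:
    matchLabels:
      app: greenhouse
  template:
    metadata:
      labels:
        app: greenhouse
    spec:
      containers:
      - name: greenhouse
        image: gcr.io/example-project/greenhouse:latest  # hypothetical image ref
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: bazel-cache
spec:
  selector:
    app: greenhouse
  ports:
  - port: 8080
    targetPort: 8080
```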
D
Okay, let's try that, and then we can come in and argue about efficiency later — like, "oh, your CPU usage is really, really low, let's go tweak that."
A
Okay, thank you for going through that journey with me. The other, more pressing concern — oh, sorry, well, yeah. The other thing to do is to actually set up some sane ACLs for all of this.
A
I really want the end state of all this to be that community members have the ability to troubleshoot what's happening in this cluster, and then a smaller, more restricted group of trusted community members have the ability to write things into this cluster, or help mess with it in terms of troubleshooting.
A
So there are two clusters to keep in mind. One would be the build cluster.
D
Oh, right, got you, sorry — my brain was still on the old logging problem. Yes, yes. But what does "community members" mean here?
A
That's
a
really
good
question.
I
heard
that
I
feel
like.
I
need
folks
help
in
crafting
a
policy
just
before
the
immediate
moment.
I
want
to
make
sure
I'm
not
the
single
point
of
failure.
I
need
more
people
than
me
to
be
able
to
do
stuff
to.
A
And so the people who have org admin privileges, such as yourself, Tim — and I think Christoph and the Tims have the same privileges as me — but I'd like to kind of have, like, a read-only group of people who could see this, but not necessarily change things.
D
So I don't know off the top of my head what the granularity of permissions that Stackdriver offers is. But if the granularity of permissions is there, then we can go figure it out, and I don't have a problem granting read access to this pretty broadly. Unfortunately, there's just no way to say, like, everybody on the internet can go view this, unless we turn it into our own dashboard.
A
Yeah, I think so. For example, I know that SIG Node is working on improving their test posture — they're kind of taking a look at all of the node e2e test jobs, trying to figure out what they are willing to support and what else they think they need — and I'd like to make sure that those people get access to all of the troubleshooting that they need to do that task.
G
And I have a question: why not just create a dashboard for everybody? It should be straightforward to connect — you know, to put the dashboard, the few things we can see right now, into Grafana and open it to everybody.
G
I just don't know how to do it. So, right now there is a test Grafana dashboard which I was playing with to do the monitoring. It's currently on my domain — I just sent it — but Grafana is working already there, so it would be very straightforward to just connect it to Stackdriver.
A
That
that
might
be
cool,
I'm
operating
from
perspective
of
I
don't
have
a
lot
of
time
to
build
something
new.
If
somebody
else
wants
to
I'm
all
for
it,
I'm
not.
Definitely
I
can
can
do
it.
Okay,
I
I'm
viewing
it
as
like
when
I
was
trying
to
help
troubleshoot
what
was
going
on
when
all
the
node
e2e
tests
were
blocked.
I
had
to
be
able
to
go
to
a
project
page
and
be
able
to
view
all
of
the
recent
activity
on
that
project.
A
So I could see what was happening and what was getting blocked. I needed access to Stackdriver logs to see, you know, what things thought they were doing and why, and it was helpful to see the Stackdriver monitoring page, just to see what that was about. Yeah.
D
So
yeah,
so
we'll
just
need
to
figure
out
what
the
list
of
those
things
is.
The
activity
logs
are
interesting,
although
they're,
not
they
shouldn't
be
sensitive,
so
we
shouldn't
hesitate
to
grant
access
to
read
those
pretty
broadly.
I
think
I
I
yeah
the
audit
logs
just
seem
like
that.
One
oddball
case
okay,
audit
logs,
got
you
that
makes
sense.
G
We can just do some proof of concept. I will see what permissions we need to grant the Grafana, and let's see.
D
Oh, I just pasted my question. I was just poking around at the logging in the boskos-nnn clusters. The vast majority of the logs look like they're coming out of — I mean, the chart is just these log lines, right? What would happen if we spun up those clusters without logging turned on? Are the logs collected some other way?
A
True
logs
are
logs
for
all
of
the
kubernetes
components
are
written
to
disk
and
then
some
other
logs,
like
I
think,
d
message
and
docker
and
stuff
are
collected
as
well.
So
those
are
all
collected
after
the
fact
assuming
the
node
is
reachable
whether
the
test
has
passed
or
failed.
A
Okay, that was a long action-item review. Sorry. Next up: Bart, you wanted to talk about turning down a cluster.
G
The next thing is the DNS update automation, and unfortunately I will probably need Tim's eyes on my refactoring pull request, because this is what I had to do to make it possible to use the project. Okay — and I'd encourage a look at the pull request. So, is there —
D
Oh — 862. All right, I'm a little backlogged right now. I will try to shift to PRs before this week is out, and I'll push these ones to the front. — Perfect.
G
And the last of my action items is actually retiring node-perf-dash. Today or tomorrow is the last date of lazy consensus; I've sent the email to the SIG's Google group, and I think we are ready to retire it tomorrow.
D
Okay,
so
nobody
they
didn't
get
back
on
this.
G
G
G
A
Okay, two quick things before we get to the board. The first one's kind of a blocker for doing more stuff. As I was creating pools of projects to use for e2e testing, I hit the billing quota limit — I think, like, the number of projects that are allowed to be linked to a billing account. So I opened up an issue where I described the questions, went to the form — Tim said go ahead and fill it out, these things are usually auto-approved — and I unfortunately haven't heard anything back.
A
Somebody, I think, is asking for a staging project, and I can't do that for them right now. As a workaround, or to temporarily mitigate, I could try taking down the number of projects that I had set aside for e2e testing. It'll take me some time to do that, but that might give us some breathing room.
D
Let me see — I'm gonna pull it up right now and see if — I mean, I don't even know who to talk to internally, but let's try not to, like, use our internal connections.
F
Okay, this is —

A
That is a possibility. It's not something I have time to do, but we could have somebody revisit the entire staging project, and bucket, and service account, and so on and so forth, per build artifact. I think I kind of liked having a project per thing, specifically for the — the other reason you approached me, Claudia, where, like, if we want to start having — if a given —
A
Needs
a
specific
secret
to
build
their
thing.
It's
by
having
things
segmented
up
by
subproject
right
now,
only
they
can
have
access
to
that
secret.
That's
that's
really
great.
If
we
start
to
merge
everything
into
a
single
project,
google
cloud
build
will
always
run
as
the
same
service
account
for
a
given
project,
and
so
we
lose
the
ability
to
more.
A
Secrets
which
job
has
access
to
does
that
make
sense
yeah
I
see
yeah.
That
makes
sense,
but
that
is
another
mitigation
we
could
try
to
do,
but
I
I
feel
like
these
are
all
workarounds
around
the
fact
that,
like
really,
we
need
more
than
100
projects.
We
need
like
at
least
200
projects,
probably
more
like
300,
and
I
don't
think
projects
are
really
that
much
of
a
problem.
I
think
this
is
just
like
a
building
sanity
check.
Are
you?
Are
you
not
making
sure
we're
not
like
abusing
our
cloud
usage
or
something.
A
So, Tim, you said you're going to take a look at it. I also feel like I should have Ihor give a poke at this as a non-Googler. That might —
A
Now that we've moved to Google Cloud Build, we need some way to get these secrets over to that Cloud Build job, so I thought a good way to do this would be to use Google Cloud Secret Manager.
A
And
so
right
now
I
have
something
where
I've
turned
on
the
secret
manager
api
for
all
of
our
staging
projects,
but
then
I
go
through
and
actually
combine
things
so
that
the
staging
project
for
building
end
and
test
images
gets
access
to
secrets
that
are
in
the
staging
project
for
end-to-end
test
images.
A
This
could
be
a
really
convenient
pattern
going
forward
where
each
project
gets
to
just
dump
secrets
in
there
and
then
google
cloud
build
will
automatically
have
access
to
those
secrets.
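The cross-project grant being described boils down to an IAM binding on the secret (or the project holding it). A hypothetical policy fragment — every identifier below is invented for illustration:

```yaml
# Hypothetical IAM policy fragment on a secret in one staging project,
# granting another project's Cloud Build service account read access.
# Roughly equivalent to:
#   gcloud secrets add-iam-policy-binding SECRET_NAME \
#     --member=serviceAccount:123456789012@cloudbuild.gserviceaccount.com \
#     --role=roles/secretmanager.secretAccessor
bindings:
- role: roles/secretmanager.secretAccessor
  members:
  - serviceAccount:123456789012@cloudbuild.gserviceaccount.com
```

Binding the role on a single secret, rather than project-wide, is what the "tighter approach" mentioned next would look like.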
A
A tighter approach that I'm kind of favoring is: this Windows Docker certificate might need to be used by multiple jobs all over the place, so I'd rather constrain access to just that secret, and not opt for that secret-per-project pattern.
A
— possible? I tried creating a project — yeah, yesterday was the last time I checked; I haven't checked this morning. — So that would be really cool. — It's not just creating a project. — Did you use ensure-project? — The thing that fails is linking the project to the billing account.
A
I'll stop my share there.
A
So, walking from right to left: static IP management is something we didn't discuss last time because you weren't here, Tim. Bart, did you maybe want to give us context on this issue?
G
Yeah, so the issue is: what currently we are doing is, there is a shell script — written like two weeks ago by me — which is responsible for creating the static IP addresses, which we need then to —
A
I have no strong preference one way or the other, if you want to —
A
Now, okay, next up: quota increases, image promotion process. I haven't heard from Linus lately to understand where we are with the vanity domain flip. I think that's why this is open, right?
D
Yeah, so I can give a quick update on that. On the first flip we realized that, because of the length of time that this project has existed, it's grown roots down into a bunch of other stuff that we couldn't easily have predicted.
D
So
we
are
disentangling
those
things
because
of
all
the
virus
and
stuff
it's
difficult
to
get
the
bandwidth
that
we
want
for
this
thing
that
isn't
super
critical,
that
some
of
these
other
teams
to
help
us
with,
and
so
we're
just
fighting
for
a
priority
to
get
these
disentanglements
done
at
this
point,
you
know
we're
still
we're
basically
day
for
day
slip
like
we
have
a
few
days,
maybe
a
week's
worth
of
like
work
to
do
by
these
teams
and
every
day
they're
not
doing
it.
It
gets
pushed
out.
D
I'm trying to be sensitive to the fact that we're making these other teams do work for good goals, but, you know, for many of them this is one of those, like, unfunded mandates. So I'm trying to be patient, but this is a case of: we've been pinging them for two months. So —
G
Progress — I saw that we have that email address, but I didn't have time to check.
A
I think we just need to approve the PR, but the gist of —
B
— it, I think, is that everything should be finished there. Augustus did go with the suggestion by Tim, which is: instead of duplicating everything, either by symlinks or by copying — Linus suggested, or at least that was the result — we can just add another promotion target, which is the root, and then it will duplicate them.
B
It will take the first one as a source and then push them to both things — one with the prefix, one without the prefix — and I think it should be done, as far as what I've got from the PR.
A
Okay,
there's
a
part
of
me
that
kind
of
wants,
like
an
all
caps
explainer
somewhere,
that
describes
that,
like
this
is
kind
of
the
only
place
this
is
allowed
and
why
and
how
we
came
to
that
decision.
A
Yeah,
let's
see
in
the
backlog,
turning
down
something
based
on
notepark
dash,
so
we're
gonna
have
to
wait
on
that
as
we
work
to
migrate,
prof
dash,
okay,.
A
Tim,
just
since
you
were
asking
about,
I
really
wish
we
had
something
that
like
ran
the
audit
scripts
every
hour.
I
had
started
on
something
that
does
run
the
audit
script
every
hour.
But
what
is
missing
is
the
follow-up
of
like
creating
a
pull
request
with
the
results
of
the
audit
script.
A
We
use
a
tool
over
intestine
for
land
called
pr
creator,
which
will
automatically
create
a
pull
request
of
a
given
it'll
like
push
to
a
given
branch
or
force
push
to
that
given
branch,
and
it
will
create
or
update
an
existing
poll
request,
that
is
from
the
net
branch.
It's
how
we
have
a
pull
request
out
there
to
bump
prow
and
all
of
its
stuff
to
the
latest
versions.
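The hourly-audit-plus-PR flow described above could be sketched as a Prow periodic along these lines. The job name, image, and script path are assumptions, not the real config:

```yaml
# Hypothetical Prow periodic: run the audit script every hour, then hand the
# resulting diff to pr-creator to open or update a pull request from a
# dedicated branch. All names and paths below are invented for illustration.
periodics:
- name: ci-k8s-infra-audit
  interval: 1h
  decorate: true
  extra_refs:
  - org: kubernetes
    repo: k8s.io
    base_ref: master
  spec:
    containers:
    - image: gcr.io/example-project/audit-tools:latest  # hypothetical image
      command:
      - ./audit/audit-gcp.sh   # assumed script location
      # A wrapper step would then commit the changes and invoke pr-creator,
      # which force-pushes the branch and creates/updates the PR, as the
      # transcript describes for the Prow version-bump PRs.
```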
H
Yeah, that sounds good. Let me confirm with you on a day, but yeah — something maybe Tuesday or Wednesday might work better, I think, given next week and the end of this week.