From YouTube: wg-k8s-infra biweekly meeting 2020-07-08
C
That was probably right around the 22nd when we started it, and then I onboarded it just now; I'll explain that in a minute. So the lion's share of this is compute. The lion's share of the group-by-project is going to be in Prow. I mean, there's a tall stack of small projects, but the biggest one by far is k8s-infra-prow-build.
B
You're not seeing any costs due to flaky tests right now. This is part of why I've intentionally said the first goal is to migrate the jobs that show up on the release-blocking dashboards. Those are jobs that run periodically; they don't run any more frequently than that, there's no fluctuation based on retests or the volume of PRs or anything like that.
C
Oh excellent. Okay, that's great to know.
Then the question to follow up on is: is this number correct? Are we comfortable with the idea that we're spending seventy-eight hundred dollars a month on tests, just these recurring tests? Obviously, as a project, these are very, very valuable tests. I just have no idea what the dollar number is, or if it's worth any engineering effort to try to move the needle.
B
I don't know that off the top of my head. The engineer in me would love to advocate for less flaky tests and more efficiency, but the amount of time and resources that may require needs to be evaluated against the costs, and I don't have a number for what percentage of the total cost of testing this represents. I imagine it's under a quarter, maybe 10%. So there's interesting cost out there.
B
It's more from the perspective of: these are the jobs I know we care about. The next tier from here that I would go to would be all the jobs that block merge into kubernetes/kubernetes, and then I would start to raise the questions: okay, based on these costs, do we want to lower these costs, or do we want to migrate more jobs over, or do we think we need all those other jobs, stuff like that. But yeah, I think.
C
That's an interesting question, and the reason I think it's interesting to ask right now is that as soon as we flip GCR over, this will be noise, right? This is margin compared to what the current GCR costs look like, and obviously we'll have the same conversation around GCR: is it normal that we're actually experiencing this level of cost, or is it abnormal, is something going wrong?
C
All right, so maybe I can speak to the domain stuff for a moment. What happened a couple weeks ago is basically that the credits that were funded for the second year ran out, and my calendar was off by a week, so the account ran dry. We decided not to proceed with the vanity flip, just because I didn't want to run up the bill while we didn't have the credits in the account. Currently, we have a stopgap credit that was issued for a hundred K.
C
That's in now, and we're currently running through it; the rest will be in soon. The ticket is filed, we're just jumping through hoops. Then we can restart the vanity domain flip. I'm hoping the credits will clear this week, so we can start again on Monday if everybody's cool with that.
C
Chrome crashed on me, cool, "aw snap", and it went away, so, one of those things. I'll have to make a call: if we don't get the credits in by Friday, do we try again? Even if it could be on Wednesday, I don't want to run it into the weekend; I want to start it on a Monday. So hopefully we can get them in this week. Also, the projection of the numbers is actually sort of interesting.
C
As part of this effort, I just sort of played out what the growth has been and what it looks like. It's been pretty much linear growth month over month. At this rate, we're going to burn through two-million-ish dollars by this time next year, which is great: we have three million dollars in credits. That leaves us about a million dollars to play with for the rest of the year, and that would have to cover any growth that I'm missing; so if Prow continues to get bigger, I didn't account for that.
C
If we move new projects over, I didn't account for that. A million dollars is a lot of money, but it's not infinite money, so at some point we will have to discuss what engineering we're going to spend on cost optimizing and investigating and those sorts of things. I just think it was interesting: at the beginning, three million felt like literally infinite money, like we could do anything we wanted, and now we're like, oh, actually, maybe we need to budget. But that's a problem for tomorrow, I think.
B
My priorities have shifted, so it's basically been running the exact same load for a month now. I think the last set of jobs that I migrated over were the 100-node scalability jobs, and so that gives us an idea; that's what I mean: if you look at this cost, this is almost the steady-state cost for the release-blocking stuff. Not entirely, but, we don't have everything.
B
I feel like that's maybe a broader conversation to have with sig-release and sig-scalability in the mix, because I believe sig-scalability would say it's absolutely important that their 5,000-node jobs be used to evaluate whether the release goes out the door. Right now those jobs live on as informing, but they still get to escalate and block the release if there's something that looks wrong there. So, to me, I feel like the community is fully funded and in control of its own destiny.
C
I agree. I really would like to see it move; it seems like a symbolic thing, if nothing else. So maybe after we get the domain flip over and we actually are confident that the numbers are what we think they are, and not like 2x what we think they are, then we can start talking about what the costs of this actually look like, and is this the next priority target.
B
Okay. I will just say out loud, from my perspective: I have no real feel for whether those 5k-node jobs would be the larger cost, or if I looked at the cost of all of our presubmit jobs. Right, like, we do run up a hundred-node cluster for every pull request to sort of help scalability catch stuff early, and that's one of those things where maybe I would question the value of that.
E
Sure, I don't really have much of an update, because I think addressing the stuff around billing was the main issue; that's what made us revert the flip that we had attempted a couple weeks ago. So, yes, as he said, we still want to try it again on a Monday, as early as possible. What date? Possibly next Monday, but that's assuming we know for sure that the dust has settled around billing, I suppose by tomorrow or something like that.
F
So before I talk about the main issue, just for context, if people are not familiar with what groups.yaml is: it's a huge YAML file which contains configuration for our G Suite email groups. As we've added to them, the YAML file has become really huge. The idea is that we should split it into multiple per-sig groups.yaml files, so that it's easier to maintain. That's what I wanted to talk about. I have a PR to do this, and I think it's already approved.
F
That's point one. The second point would be: how do we then use OWNERS files if we're having multiple directories? Again, I had two approaches, and I'm leaning towards the second one. The first one, the current state of the PR, is to allow only root owners to approve, because we want to have control over adding new groups. We don't want, say, sig leads, or whoever is in the OWNERS files, to just approve additional groups without, I guess, the k8s-infra working group being okay with that.
F
So we point to the root OWNERS, as in the groups OWNERS, to approve, and the sig owners would just review PRs; that's option one. Option two, which I'm leaning towards, is that we also allow the sig owners to approve changes to existing groups, but we set it up such that root owners approve the addition of new groups. How we do this is that we will have an auto-generated file. I know, lots of auto-generated stuff.
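To make option two concrete, here is a minimal sketch of the kind of layout being described; the directory and file names are illustrative, not the actual PR's contents. Each sig directory carries its own groups.yaml and OWNERS, while an auto-generated root-owned file lists every group, so adding a new group always touches a file that only root owners can approve.

    # groups/OWNERS              root owners: gate the addition of new groups
    # groups/sig-foo/OWNERS      sig owners: gate changes to existing groups
    # groups/sig-foo/groups.yaml
    groups:
      - email-id: sig-foo-leads@kubernetes.io   # illustrative group
        name: sig-foo-leads
        owners:
          - lead@example.com
    # groups/all-groups.yaml     auto-generated index of group names; regenerating
    #                            it on every addition forces root approval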
C
Thank you for jumping on this. I've been thinking about this over the last day or two, and I wanted to ask the question of: what are we trying to do with these groups? What I mean is, historically we made a decision in the steering bootstrap that we didn't want an @kubernetes.io address to become a status symbol for people to have on their personal mailing list. So we said we're not giving people G Suite accounts, and we sort of also decided that we didn't want to make all of the sig mailing lists be G Suite groups. But that decision, I think, was less well-founded. Like, sig-fubar@googlegroups.com versus sig-fubar@kubernetes.io: does it really matter? It actually kind of makes sense to own it ourselves, because we can do things like automation. But we didn't want to switch everybody over; the hardest part was that we had so many groups already.
C
So the question I've been sort of chewing on all night was: does it make sense for sigs to have any voice in this groups.yaml structure, or are the groups really for infrastructure? We can still create a hierarchical structure, but the structure is not sig-related, it's purpose-related. Like, I would have no problem delegating to some OWNERS file all of the staging repositories, and some small group of people saying we're dealing with all the staging and image promotion stuff.
F
Because we discussed this, I'm going to bring it up in the steering committee meeting on Monday. So, contribex: like, I have been doing most of the work, but I want to give credit to Bob and Josh. They've been looking at how the sig leads mailing lists have been set up, and the thing that they've noticed is that a lot of them have stale owners, and because they weren't updated, we haven't been able to contact them and we don't have control over them anymore, and these leads groups have access to a bunch of calendars and things like that.
F
C
I have no real problem with that. Like I said, the main reason, I think, for not making all the sig mailing lists on there was just inertia: they were all already on Google Groups, and it's too difficult to migrate people over because we can't do it for them. I'm okay with giving sigs more authority here. I just want to be careful that it doesn't become a vanity thing or get sort of spammed with a million different groups.
F
I guess we talked about who'd approve new group requests, and the thing that we came up with was that steering doesn't really need to make those decisions, and it was delegated to contributor experience and this working group. I've been thinking about it, and I'm not really sure how that allocation would be split. I think contribex doesn't really need to comment on this, and this working group can decide whether to approve it or not.
C
Yeah, and I think uniform things are totally cool: creating a group for every sig lead for every sig, totally reasonable. What I'm questioning is whether there is any reason why, say, sig-network would want five new groups that have no mapping in other sigs. I can't think of a reason why that would be something we want to do if they're not aligned to working groups, since a working group is our formal structure, right? Or some projects. Some projects, yes.
B
We can encode these sorts of things in tests, so that PRs will fail if they don't adhere to whatever our well-defined policy is; we just actually have to decide what we want the policy to be and then write the tests to enforce that. The super low-fi approach is Nikita's option one, where we keep final approval with members of this group, who have the policy sort of vaguely floating around in their heads. And then option two seems sort of like a middle compromise, where we sort of have this vague policy floating in our heads as far as what group names are allowed to exist, and we get final sign-off on that; but then, once somebody has gone through us for that, membership changes and stuff don't need to go through us.
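A minimal sketch of the kind of enforcing test being described, assuming an invented file layout, schema, and naming policy rather than the repo's actual ones: it parses each per-sig groups.yaml and fails the PR when a group name falls outside the allowed pattern.

    // groups_policy_test.go: illustrative policy test, not the real repo's.
    package groups_test

    import (
        "io/ioutil"
        "path/filepath"
        "regexp"
        "testing"

        "gopkg.in/yaml.v2"
    )

    // groupsFile is an assumed schema: each per-sig file holds a list of groups.
    type groupsFile struct {
        Groups []struct {
            Name string `yaml:"name"`
        } `yaml:"groups"`
    }

    // allowedName is an illustrative policy: sig-*, wg-*, or committee-* only.
    var allowedName = regexp.MustCompile(`^(sig|wg|committee)-[a-z0-9-]+$`)

    func TestGroupNamesFollowPolicy(t *testing.T) {
        paths, err := filepath.Glob("groups/*/groups.yaml")
        if err != nil {
            t.Fatal(err)
        }
        for _, path := range paths {
            raw, err := ioutil.ReadFile(path)
            if err != nil {
                t.Fatal(err)
            }
            var gf groupsFile
            if err := yaml.Unmarshal(raw, &gf); err != nil {
                t.Fatalf("%s: %v", path, err)
            }
            for _, g := range gf.Groups {
                if !allowedName.MatchString(g.Name) {
                    t.Errorf("%s: group %q violates naming policy %s", path, g.Name, allowedName)
                }
            }
        }
    }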
B
In fact, that was the compromise that sounded like it would sort of unblock most people. From my perspective, I still have some slightly paranoid questions, like: do we want to allow arbitrary people to change arbitrary settings on their Google Groups, or is that something that we would want some kind of sign-off on?
C
I was thinking about that last part too, whether it makes sense. We have all of our options encoded in that YAML, and how many groups actually change those options other than trivially, right? Could we just take away those options, and be more comfortable with the fact that all the groups are the way they're supposed to be, enforced?
B
We have tests in place now that enforce most of that, but not universally, so we would need to look at the other options' stuff. There are also groups that exist sort of slightly outside the scope of sigs and working groups today; for example, the product security committee, and there are a couple different security lists that have special settings and special membership. So I'm amenable to either of the compromises Nikita proposes.
C
The alternative is that we use a different YAML schema, and we say: this is our schema, right? Here's the group, here's the name, here's the kind of group it is. Oh, this is a sig group? Okay, well, that means this blob of pattern and this blob of config. And this is a security group, and therefore it has this different blob of config. I mean, if we want to get all engineer-y with it.
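A sketch of what that kind-based schema could look like; every field and kind name here is invented for illustration:

    # Illustrative only: the group's kind implies its naming pattern and
    # settings blob instead of each group spelling out every option.
    groups:
      - name: sig-foo-leads
        kind: sig         # implies sig-* naming and the standard sig settings
      - name: security-discuss-private
        kind: security    # implies restricted visibility and curated membership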
B
Yeah, there's more churn on staging groups, and I think you've expressed that those are another thing I wouldn't mind seeing tests for. Sorry, this is sort of out of scope, but just to digress for a second: Tim, you've had opinions at one time on how the various YAML files that are part of setting up the staging process sometimes fall out of sync. You need a group in groups.yaml, you need to make sure you have the right repo and manifest set up, and the k8s.gcr.io subdirectories and stuff.
C
Yeah, they told me the process was improved. I chose at that time to let you drive it as the actual customer, because I don't want to go behind the curtain as much as possible. If we think that that's stalled, and we've filed another request but we're not getting any response: I don't know if we have any metadata, like a ticket ID that was assigned or something that I can reference; otherwise I think I can follow up again.
A
So the last topic on our agenda is the topic from Jude, and the mic is yours.
G
So just a really quick introduction in case people don't remember my last meeting: my name is Jude, I'm an intern at Google working with Linus on the container image promoter for the summer. Basically, what my work has been so far has just been creating a checks interface for adding more checks to the promoter. An example of a check that I have implemented is checking against pull requests that attempt to remove images from the promoter manifests. You can imagine the checks are just things along those lines.
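As a rough illustration of what such a pluggable checks interface can look like, here is a minimal Go sketch; the type and method names are invented, not the promoter's actual API:

    package checks

    import "fmt"

    // ImagePromotion describes one image change a pull request proposes.
    type ImagePromotion struct {
        Name    string
        Digest  string
        Removed bool // true if the PR removes this image from a manifest
    }

    // Check is the interface each promoter check would implement.
    type Check interface {
        Run(promotions []ImagePromotion) error
    }

    // NoImageRemoval mirrors the example check mentioned here: it rejects
    // pull requests that remove images from the promoter manifests.
    type NoImageRemoval struct{}

    func (NoImageRemoval) Run(promotions []ImagePromotion) error {
        for _, p := range promotions {
            if p.Removed {
                return fmt.Errorf("image %s@%s may not be removed from the manifests", p.Name, p.Digest)
            }
        }
        return nil
    }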
G
The question that kind of arises from that, and the reason I'm bringing it up to the community today, is: do we have any strong opinions on the strictness of that check? You know, if a vulnerability has a low or minimal severity, do we still allow the pull request to promote those images, or do we just reject any image that has any vulnerabilities whatsoever?
G
So currently the check would be running in... oh, I see what you mean. There's actually something we are considering doing as future work: surfacing these vulnerabilities before the promoter needs to check against them, just to make things clear for people. One idea we kind of discussed was maybe allowing people to define, within the manifest themselves, what level of strictness the images contained in that manifest will follow.
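A sketch of what a per-manifest strictness setting could look like; the vulnerability fields are invented for illustration, though the registries and images sections mirror the promoter's manifest shape:

    registries:
      - name: gcr.io/k8s-staging-foo      # illustrative staging registry
        src: true
      - name: us.gcr.io/k8s-artifacts-prod/foo
    vulnerability-policy:                 # hypothetical field
      max-severity: LOW                   # reject images with anything above LOW
    images:
      - name: foo
        dmap:
          "sha256:0000000000000000000000000000000000000000000000000000000000000000": ["v1.0.0"]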
C
I mean, it's unfortunate that we can't give them a direct signal right away. We could; I mean, this is just a problem we could throw money at and turn on vulnerability scanning on staging, but that's money we would have to spend, and I don't know how much. The alternative is to maybe just promote and flag right away. Is that going to... I'm just thinking out loud here, I'm sorry: is that going to result in a bunch of images that are not updated?
E
So I just want to interject, or just add another related thought, which is: there's an open issue around getting vulnerability scanning for the production registry itself, and that's like the other side of this coin. You're diving in from the perspective of adding some sort of visibility around new images getting promoted into production, but then there's the other side, which is: after they're promoted, do we continually scan images? And I think we should.
C
Is that sufficient? Like, if I'm trying to put myself in some project maintainer's shoes: I build my fubar image and I push it to my staging, or rather GCB builds it and pushes it to my staging, and I send a promotion PR. Somebody approves my promotion PR, more or less a rubber stamp because they trust me, and then a day later something flags it and says: oh, by the way, this has a vulnerability. I guess I'm going to be kind of grumpy.
B
That pull request time was sort of what I had in mind. I like the idea that, at least, when I'm trying to promote an image, the action I'm taking is opening a pull request to add it to the manifest, and if at that time I get a test failure that says "your image is vulnerable, you can't do this", then I can respond to it.
E
So that's what Jude is trying to implement first, and I think, for future work, we would want something that consumes the pub/sub queue or something, keeps live figures about which images are vulnerable, and exposes that. Then, I suppose, people that we trust, certain image admins or artifact admins, can police it, or at least tell people about vulnerable images. But before we get there, right.
C
Exactly. I'm not sure that offering knobs is going to give us better security or a better experience. Let me put it another way: I'd rather wait for people to complain, to say "you keep notifying me about these CVEs that don't matter to me, why do I have to keep doing all this work?" I'd rather deal with a complaint than design the system in the absence of a signal.
B
By default, we could say you're not allowed to do that. Without defining the policy today, I'm thinking of allowing us to encode the policy going forward, versus us having to make that decision today. If it's a ton of extra engineering effort, then don't do it, but it sounded like that was sort of part of the plan.
G
Okay, I guess it seems like everybody's okay with just setting up a default strictness for my check, and then I guess further discussion can occur on whether or not we have more granularity or a lot more choice in how strict you are. Another question I did have, though, was: for this check to work... I'm not sure, but I don't think vulnerability scanning is currently enabled on all the staging projects.
C
The reason that it's not turned on on every staging repo is simply that it's going to cost money, and so we turned it on on the main repo once, because basically you pay every time an image is scanned. So when people push something to staging and then never delete it out of staging, it's going to cost money every time that image gets scanned, in perpetuity, and we're already going to pay that on the main repo. So I'm a little bit anxious about it.
G
I'm just thinking live right now. Turning it on for staging, I mean, I know it's a lot more cost, but it seems like a cleaner alternative. Another option is that we set up a separate project which is meant for vulnerability scanning, and we just push images to that project for testing or for vulnerability checking, and then do some polling, I guess, to check when the initial scan is finished and get the results back.
B
That sounds like a good alternative. But despite Tim laying out really big scary numbers at the beginning, I'm still willing to live dangerously and see: if we turned it on for everything right now, what would that cost look like for our load today? That would help us understand if it's worth the engineering to use a separate project.
C
Can I throw one data point out? I just looked at the billing for container images scanned, and I think that's the correct line item: our year-to-date bill is less than four hundred dollars. So, you know what, we can just turn it on for staging and it's probably fine, and if it turns out to not be fine, then we'll deal with it. Booyah.
B
It's a pretty solid use case, where you are a member of the community and you want to run an app; we looked at the app, we like the app, now we want you to go and do it, and you keep kind of tripping into these things. It's a great friction log for anybody who wants to run stuff on our infrastructure, so anything we can do to help make your life easier, I think, will pay dividends for anybody who wants to run infrastructure on our platform.
C
Sure. The sad truth is that most of the things that are running today are being run by people who have larger privileges already, so it's hard to know where the bumps are, because we've got much bigger tires. It's great to have you come through and sort of bulldoze a clean path. Thank you, definitely, thank you.
B
Sorry, go ahead. I was going to say, if you're talking about having the Prow job alert, like, if the job run fails to push some DNS records or something and you want that to alert to the Slack channel: Prow has a Slack reporter that I think can be configured on a per-job and per-channel basis.
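For reference, per-job Slack reporting in a Prow job config looks roughly like this; the job name, image, and channel are illustrative:

    periodics:
    - name: ci-k8sio-dns-push              # illustrative job
      interval: 24h
      decorate: true
      spec:
        containers:
        - image: gcr.io/example/dns-tool   # illustrative image
          command: ["./push-dns.sh"]
      reporter_config:
        slack:
          channel: wg-k8s-infra
          job_states_to_report:
          - failure
          - error
          report_template: "Job {{.Spec.Job}} ended with state {{.Status.State}}: {{.Status.URL}}"

This routes failure and error notifications for that one job to the named channel via Prow's crier Slack reporter.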