From YouTube: Kubernetes SIG K8s Infra - 20230315
B: Hi everyone, this is Gates. Welcome to everyone on the call. Just a quick reminder that this meeting is under the code of conduct, so I invite everyone to be nice. This recording will be published on YouTube later. Before we start: do we have a note taker? Thank you, AP, again; I can't say it enough. Thank you to uep.
B: Okay, so we can do the billing report. We are in the middle of the month, so I will do a partial billing review. The one thing I want to talk about is the positive impact of the change introduced in the SIG Scalability jobs. I think the graph speaks for itself: we cut the cost by 75%.
D: Yes, but at that point we hadn't had a full week at the new scale yet; we should have slightly better data now.
B: Okay. Anyone have a question about the GCP part?
B: Okay, on the AWS side, let me put it this way: we have costs growing, because we now redirect traffic by default to S3, and we also have an EKS cluster running there. So costs are growing, yeah. Costs are growing, and I think that's a good thing.
E: Yeah, I wanted to update everyone here. Yesterday I took a deep dive with Hippie's help, looking at all the things that we have moved under the root organization that we created. We made sure that at least you and I have access to the list of accounts and things like that, and there was some discrepancy I was seeing on the AWS systems that I'm still trying to track down.
E: The Salesforce thing is on our systems and not the Linux Foundation one, so I'm trying to figure out a fix for that on our side. That's why I started looking into this: making sure that all the things we see are correct, and cross-checking all the emails and all the accounts that we created, moved over, and attached, and things like that. So that was a basic sanity check. Hippie, anything else?
F: Just making sure that you both have full AWS admin access, to all API calls and console things. Yeah, I looked at the Falco thing; I think the internal Salesforce also has my home address.
F: It's primarily deaf people that work there, so they have all the mail come in, they scan it all, and they put it in the mailboxes. Their only interactions are digital and via SMS text messages. So it's good; it's a great business to support. Nice.
B: Okay, anyone have another question about billing at this point?
B: Good, so we can do a quick issues review. I think the first item is that next Monday there's an announcement made that k8s.gcr.io will be redirected to the new registry, registry.k8s.io, starting Monday, March 20th.
E: I wanted to check on Justin's, you know, health status. How are you feeling, Justin?

I think our priority is to make sure that we don't roll it back, right. And so we are going to do things, but I think we're trying to pre-flight every possible objection.
And so right now we are hearing all the objections, but that is deliberate. And I want to give a big shout-out to Ben, if he's here, and Fede, and other people who might not be here (oh, there's Diego here), who are dealing with all these objections, finding all these objections, and digging up objections where we didn't even know there would be objections.
D: If it weren't for the Docker Hub announcement, the only thing I would be working on right now would be this. Unfortunately, we have two registry changes to sort out.
E: So other than tom-tomming this to everybody that we can think of, is there anything else we can do to help you all next week?
E: Right, so one thing: Ben, I think I've seen you paste different things in different places, tips like "hey, use dig for this" and so on. Can we collect them for next week, so we can give them to people and say, "hey, you are in New Zealand, okay, go run this stuff from where you are"? Can we do some sort of that, to help people self-diagnose things?
D: What Bob just said was my suggestion: we should probably create a template issue. Yeah, I would appreciate help with that as well. A lot of this is going to be fairly straightforward, like network debugging stuff, making sure that they can reach the registry and so on. The biggest thing I would add is if we can get them to use a tool like crane --verbose.
D: Then we can actually see each request and find it, whereas if we tell them to debug with something like the Docker logs, the default logging usually doesn't have enough info. Other than that, though, the rest should be pretty standard stuff. I can try to help review that, or I'll try to write it soon; I've been a little focused on the Google side and some hardening of this. Just last night we shipped enforcing code coverage for the registry code, a couple of things like that.
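A minimal sketch of that tip, assuming the go-containerregistry library (the image reference is illustrative, and this is roughly, not exactly, what the crane CLI's --verbose flag wires up): enable the debug logger so every HTTP request and response is printed while fetching a manifest.

```go
package main

import (
	"fmt"
	"os"

	"github.com/google/go-containerregistry/pkg/crane"
	"github.com/google/go-containerregistry/pkg/logs"
)

func main() {
	// Send the library's debug logger to stderr so each HTTP
	// request/response (redirects, manifest fetches) is printed.
	logs.Debug.SetOutput(os.Stderr)

	// Fetch a manifest; the image reference is illustrative.
	m, err := crane.Manifest("registry.k8s.io/pause:3.9")
	if err != nil {
		fmt.Fprintln(os.Stderr, "fetch failed:", err)
		os.Exit(1)
	}
	fmt.Println(string(m))
}
```

From the CLI, the equivalent is something like `crane manifest --verbose registry.k8s.io/pause:3.9`, which makes every redirect and blob request visible when someone reports a pull failure.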
D: We also have some last-minute fun; I see Adolfo is here. The image promoter is struggling with the registry quotas, which is something we had rolled out pretty recently, ahead of any big redirect, to try to get ahead of any users hitting it. We still have some more fallout to deal with there.
D: Yes, this is a GCP quota that applies to the registry API for read requests. We didn't set one for write requests, because we're the only ones with permission to write anyhow.
D: Yeah, so I triggered this by accident in a region while working on some other tooling. There's a quota allotted to your project for Artifact Registry reads. GCR has an out-of-the-box, non-configurable quota that it applies per user, per IP address, to make sure that no one user uses too much; Artifact Registry doesn't have one by default, and gives you control over the per-user quota.
D: Right. The capacity we have to work with is per region, and we can also have it increased if we have a lot of usage (we have had that done), but yeah. So if a tool is really excessively hammering the API in a region, you could wind up preventing anyone from reaching that region.
D: We have other mitigations: we could update the registry to route around that region if there were a persistent attack, or GCP support could help with blocking spam or whatever. But when it's us, authenticated, hitting the API and using too much, we could also have done that. Now we can't, because no one IP or user can rip through the quota. We started with a quota that's as analogous to GCR as we can get: GCR was 50,000 per 10 minutes.
D: We set 5,000 per one minute, because that's the granularity we have. We can adjust that, but we would actually like to see it go down. The GCR quota is fairly high, very generous, because it's not user-tunable.
D: We actually have had the tool rip through the AR quota before, when there wasn't a per-user quota. I think the main difference is just that GCR, since it's a longer time window, will give you more burst.
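For scale: 50,000 requests per 10 minutes and 5,000 requests per 1 minute both average out to roughly 83 requests per second. The practical difference is burst headroom: a single client can spend up to 50,000 requests at once inside GCR's 10-minute window, versus 5,000 inside the 1-minute window.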
D: Okay, maybe, but the promoter still runs over the course of hours, so it should be pretty similar. I think we can work more on mitigating the API usage. If we get really stuck, then we can increase it again, but that's something we want to be a little bit hesitant to do, because we're trying to set a baseline as we bring in all these new users and they start getting used to using this.
H
So
the
the
thing
is
we,
especially
when
we
set
up
the
new
mirrors.
We
work
really
hard
to
ensure
that
it
does
its
thing
in
parallel.
So
because
we
were
hitting
like
we
had
the
promoter
running
at
some
point
for
eight
hours
per
release
or
something
like
that,
and
we
had
to
have
shifts
of
people
looking
after
it
and
we
worked
to
paralyze
it
more
so
that
we
could
do
more
operations
at
the
same
time.
So
at
the
I
guess.
D: Sorry about that. Arnaud and I tried to ping everyone involved last week with a heads-up, and we have been monitoring the jobs; there wasn't any breakage. Actually, we broke ourselves (the sync to S3), but we improved the tool to use far fewer API calls, yeah.
D: I think I have some follow-ups on the thread about other mitigations we can make. But if those don't pan out in short order and we're still blocked, then the leads (I know Arnaud, at least) have access to increase the quota as another temporary measure.
D: Okay, yeah. I also shared some things I found from working on the similar tool that copies to S3, where there's some room to take heavy advantage of the list calls that GCR provides and avoid a lot of API calls.
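A minimal sketch of that idea with the go-containerregistry crane package (the repository and tag are illustrative, and this is not the promoter's actual code): one tag-list call per repository replaces a per-tag probe, so checking N tags costs one API request instead of N.

```go
package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	// One list call returns every tag in the repository, instead of
	// probing each candidate tag with its own request.
	tags, err := crane.ListTags("registry.k8s.io/pause")
	if err != nil {
		panic(err)
	}
	present := make(map[string]bool, len(tags))
	for _, t := range tags {
		present[t] = true
	}
	// Membership checks are now local; no further API calls are needed.
	fmt.Println("3.9 already mirrored:", present["3.9"])
}
```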
C: Okay, while we are at this topic, let me ask one question; I saw that I mentioned it in the thread summary in release management. Are we confident enough to, let's say, reduce the number of runs of the periodic that's running kpromo to once or twice a day? Because this has massively slowed down the image promotion process: that periodic can take more than an hour, and we made it in a way that only one of the periodic or the post-submit can run at the same time.
C: So when we are doing three or four releases, they can take hours, and I don't think this periodic is providing much value at this point; running it once or twice a day will probably be enough. I will propose that as a PR. I wanted to see if we have any objections to that, maybe.
B: I mean, we have the promotion running on merges in the k8s.io repo, so I think it's fine to basically get that periodic running twice a day. We just need to communicate to the community that we're going to do that, and some level of expectation needs to be set, like: you're not getting it right away if the promotion is failing, yeah.
C: So that's my understanding, basically: this periodic is mostly useful if the post-submit fails; then the periodic eventually promotes the images that failed to promote initially. But I think this happens very rarely, and regarding the Kubernetes releases, we are very closely monitoring that stuff.
C: Also, we have some checks in krel before continuing the release after the promotion. I think it's just about getting a message out to the community: please note, we will reduce the number of periodic runs; if you're promoting something, keep an eye on the job to make sure the post-submit is green. Eventually the periodic will get that thing promoted, but it is better to keep an eye on the post-submit and check that it's green.
D: Yeah, I commented about this in the thread, and I think what we should do is set a cron that runs when we're unlikely to be publishing releases, as a backstop. We can run it at, like, midnight Pacific or something like that; that's well off from the release schedule. The other mitigation we can do, as far as expectations: I think we should update the tool to have some backoff and retry, so that the post-submit doesn't fail.
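For reference, and the exact schedule here is only an assumption: "midnight Pacific" as a cron line would be roughly `0 8 * * *` in UTC (or `0 7 * * *` during daylight saving), so the precise hour would need pinning down when the job is configured.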
D: It just takes a little longer, and then, if it still fails, we have a team that has access to rerun it; or eventually, once a day or whatever, the backstop will do a full resync. I think avoiding doing full resyncs during hours where people are otherwise promoting things will help a lot and should be reasonable.
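A sketch of the backoff-and-retry being proposed; the function names, limits, and the plain HTTP 429 check are illustrative assumptions, not the actual kpromo change. The idea is to treat quota errors as retryable and back off exponentially with jitter, so a post-submit run slows down instead of failing.

```go
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// retryOnQuota retries op while it keeps answering HTTP 429 (quota
// exhausted), sleeping with exponential backoff plus jitter in between.
func retryOnQuota(op func() (*http.Response, error), maxAttempts int) error {
	delay := 5 * time.Second
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := op()
		if err != nil {
			return err // non-HTTP failure: surface it immediately
		}
		resp.Body.Close()
		if resp.StatusCode != http.StatusTooManyRequests {
			return nil // success, or a non-quota status handled elsewhere
		}
		// Full jitter keeps parallel promoter workers from retrying in lockstep.
		sleep := time.Duration(rand.Int63n(int64(delay))) + time.Second
		fmt.Printf("attempt %d hit the quota; retrying in %v\n", attempt, sleep)
		time.Sleep(sleep)
		if delay < 2*time.Minute {
			delay *= 2
		}
	}
	return fmt.Errorf("still rate limited after %d attempts", maxAttempts)
}

func main() {
	// Dummy operation that always reports 429, just to exercise the loop.
	op := func() (*http.Response, error) {
		return &http.Response{StatusCode: http.StatusTooManyRequests, Body: http.NoBody}, nil
	}
	fmt.Println(retryOnQuota(op, 3))
}
```

The jitter is deliberate: with the promoter parallelized, fixed delays would have all the workers retry at the same instant and hit the per-minute quota again.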
H: The other thing I was going to say is that I want to make the promoter a little bit smarter, in the sense that we can periodically check all of the images to ensure that they are properly mirrored, but at some point I would prefer to have it understand the last images that were published and only check those. We did that for the signatures, and I think it works reasonably well, so if we need to have it run more frequently, we could just check the last ones.
B: Can we have this conversation again in the SIG Release meeting? Because I feel like they'll have the right people to make the ultimate decision. So we should carry everything that's been said here into the SIG Release meeting and have a final decision there.
J: Not really; we're all on track. There's a number of PRs open that need to be merged, so I think we'll see some traction after that. Actually, I think we'll roll out 3.5 of the image promoter, along with all the PRs that I've opened, and we'll see where we are at the end of the week.
H: Yeah, I have one. Yep, so, well, just a comment: Muhammad, I'm trying to keep track of the ones that need, like, the thumbs-up from us, from SIG Release. So if I miss one, because I'm handling kind of a lot at the moment, just make sure to ping me.
J: Okay, I will. The main one is the PR that actually rolls out 3.5 for kpromo; I believe we're just waiting for the patch release to be done.
C: Actually, not really, because unit tests are failing constantly. There are four jobs that are failing for some reason that's not happening on the main Prow instance. It should be the right link, by the way, yeah.
C: As you can see, all the unit tests are red, and it's always different test cases that are failing. The very good thing is that the kind tests are working, so Docker and stuff like that is working. That is really good progress regarding the state of the cluster.
J: There's a couple of things we're also trying to clean up. One of them was External Secrets Operator; that one was a little bit of a fun one, although I think we're almost there. What else was there... I think Kyverno was the other piece, so that we can pull the images from ECR instead of going to Google Cloud, right.
B: Okay, yeah. So, Marco, feel free to change the instance type if the one you are picking right now is not good enough.
C: The key problem that we have (there is a comment on the issue for that cluster) is that those AMIs we are using might not support SSD-backed nodes. So we are using EBS storage, and that is maybe not good enough. I think I can work around that manually, but I need some time for that; Patrick and I will be looking at it tomorrow.
C
So
let's
hope
that
to
be
able
to
get
something
working
at
least
some
manual
work
around
until
we
don't
get
some
official
support
that
Ubuntu
any
eye
that
we
are
using.
E: Okay, so Terry is here; Terry is going to help us with some of this stuff. Steven (I don't see him here), Steven Zhang, is going to help as well, part-time, when they get a chance to do stuff.
E: So definitely we want to, you know, help increase things like the CI jobs for sure, move CI jobs onto the AWS infrastructure, because that's the next upcoming work, right. Terry has already done some PRs, and Steven I kind of paired off with something that Muhammad is doing. So I'm hoping to see more work and more people from AWS coming here, you know, folks like Chris Short and others.
E: You know, Justin Garrison and other developer advocates will come and go when they are able to, so I'm hoping we can keep them interested in coming back here for all the things that we need to do. You might have seen Justin's videos popping up; he's recording some Shorts on YouTube, and, you know, just RT things when you see them.
K: Nothing in particular, just hello, everyone, I guess. Hopefully, Marco, maybe I can get with you at some point and talk about where you might need some help in particular.
E: And Terry, your background working on, you know, some other open-source stuff can help us with the registry things here, right?
K: Right, yeah, I'm a maintainer on the ORAS project, and, you know, I've worked on OpenStack years ago as well.
L: Yeah, so this is just a friendly reminder, if people can get the chance. I mean, the next two weeks will probably be a little bit stressful, but if you can squeeze in the time, have a look over the requirements doc that dims created, which some of the folks have already filled with comments.
L: If any topic still comes up that is missing there, please take a look and add your comments, so that we can proceed on this topic in parallel as well. That's all, basically.
E: Yeah, Mario, one other thing. When we were brainstorming this issue internally in AWS, there were also a few teams that have been using some tools like Tekton and kind and other things put together; I think some of it was for running scalability jobs, some of it for ad hoc testing, testing Kubernetes from source. So I might ask them to give us a demo.
E: Talk us through what they have. It may or may not be useful right away, but it's certainly something they want to show us, and maybe even show to SIG Scalability as well. So yeah, I'll try to get their time and ask them to come to this meeting, and maybe give us a quick run-through for, like, 10 minutes or so.
J: Okay, so that issue is about running kubelet node e2e tests on AWS. It's also partly related to the issue Mario was talking about earlier. My understanding is that we have a lot of tests that just spin up a single server, do some stuff, kill the server, and report the results, right; those are good candidates for running on AWS.
J: Now, there's a few problems. I actually went to a SIG Node meeting last week to kind of discuss this, and there wasn't support from SIG Node leadership yet, but yeah, dims and Todd are working on the code to actually spin up the servers on AWS and make it work. Another thing I've been working on is trying to de-GCP-ify the installation scripts that we use, so we can run this anywhere, as long as it supports something like cloud-init.
E: Yeah, one update there also, you know: I think we managed to land a couple of PRs in the last few days, Muhammad, right? So, what we ended up doing was: there is a test...
E: e2e node has a mode where you can SSH into something and run all the tests, so that piece I think we were able to merge. The other thing we were able to merge was making it pluggable, so that the code for AWS doesn't need to live in kubernetes/kubernetes itself; it can live in another repository but still, from there, be able to run the same e2e node tests in a slightly different fashion. So that PR also got merged in.
E: I think the three of us, Todd, me, and Muhammad, will, you know, resync again to see how much of that we can do and how we can use it for some of the node e2e stuff as well. So that's something we're going to do as well.
D: Since this also has some sequencing to it: Muhammad and I have talked about this a bit, and I think one of the tricky objections is around what operating system gets tested. I think we have a very straightforward path on that one, to just focus on what we already test: we already test on Ubuntu images on GCE, and I'm sure Ubuntu is available on AWS, yeah. We can very closely match those, yeah.
D: We can expand from there, and I think we should be able to do the exact same thing, where we take Ubuntu, control the containerd version, and present the same environment. Correct.
E: And the reason for going down this path was, you know: typically we talk about using kind for conformance tests, right, but node conformance is important too. So having a way to do node conformance is the reason we went down this road.
E: And tomorrow Shyam from AWS, I mean, he's one of the chairs for SIG Scalability, is going to go with a couple of people there to figure out...
E: ...how we could be doing scalability testing as well. So they'll come to us and ask for resources, and we'll have to figure out how much it costs, what they can run, and whether it will even work. Maybe they'll need a way to test those things; we might have to give them a space to work, to try things out. So they'll come to us when they're ready.
B: Okay, Hippie, I think you're next.
F: It's probably easier for me to share my screen and also make it where everybody can kind of look on their own. I'm dropping a link into our chat, and I'm going to put it into our docs as well, in our notes. It is a Google sheet. Where's our notes? I lost my place... We've spent a couple of weeks tracking the change, and there's the link, and the pictures are there to make it easier.
F: These are not the same scale: that's $800,000 on the left and $80,000 on the right. They will eventually look similar, but today we just see the cumulative obviously going up and to the right. You can see the rate of change getting quite hockey-stick over there on AWS, and hopefully our cumulative year-to-date is starting to curve slightly flatter.
F: If you go down a little bit onto that next line, I guess it's row 80 or 81: this is our total spend target date, based on our year-to-date average, to try to get it down. We obviously aren't going to make it yet; we're just getting down to somewhere around 3.9. And the guy on the right there is...
F: ...when does the money run out, and that's the red; we want the red to disappear. We're in week 10. A lot of times when you see numbers one to ten or something like that, it's the week of the year. If you want to click on the actual Excel spreadsheet, or whatever this is called, scroll up a little bit... oh, we're in it, never mind. I'll show you where the numbers come from; just scroll up in the sheet to the top.
F: The data we're pulling from is billing data, so this is as accurate as we can get. If you go to the top left, row one, all the way back to the left: these numbers will make sense if we have billing data. You can see we don't have week 11 data in there yet; we just have week 10, starting on the 10th of March.
F: The GCP billing BigQuery (BQ) weekly cost report seems to have the most accurate data; please comment and tell me where we're wrong. We were looking at the GCP console cloud billing log, there on row 26, but those numbers just don't add up with what I'm seeing, so I'm trusting those light blue numbers on rows 9 through 20 more than the other side.
F: Sorry, incoming call. Separately there is the AWS data, and the blue links on rows 33, 26, and 9 get to where the data comes from. The number I think we should be talking about is either line 13, which is the average daily cost for this specific week (you can see it changing, going down quite dramatically), or the average cost year to date. That one moves more slowly, but it's a little more trustworthy.
E: This looks really nice, and, you know, hopefully we will be able to update this and keep it going, right?
F
Yeah
rion's
on
that
so
Rihanna,
just
a
second,
it
keeps
ringing.
He
does
every
week
on
our
Monday.
So
before
you
wake
up
on
the
most
everybody
else's
Monday,
this
will
be
updated
and
and
kept
up
to
speed
and
so
it'll
give
us
a
really
quicker.
How
do
we
do
last
week?
How's
it
going.
E: Okay, that's perfect. And I think Ben asked a question here. Ben, do you want to voice that yourself?
I: Oh, okay, thank you. Yes, this is really great. Just a question: I see in our most recent projection that day zero is 12/4, the 4th of December, and we spend on average eight thousand dollars a week. So does that mean we are only thirty thousand dollars short, like four weeks or whatever it is?
M: But it really started leveling out, so yes, it seems like we don't need all that much, and I'm really excited about what's going to happen next week, on the 20th. I'm actually going to monitor it by the day next week.
M: So we'll probably see another good drop in our Monday data, and then from the 20th onwards I'll track it daily and update it, to see. Also, if you scroll down to the bars at the bottom, the blue and red bars: that graph there is basically a visual display of a 365-day count, so the red is basically the number of days that will not have money.
M: So at the moment, yes, we seem to be in December. But, as was said, everybody's invited to have a look at the spreadsheet and comment if there's anything you think we've got wrong. Spreadsheets are not the best way to do big calculations, but I think we're pretty on it.
D: I had a different question, but now I have a more immediate comment. It looks like the projected spend through the end of the year is just based on multiplying the average daily spend for the week by the number of days in a year. So, to Justin's point, our actual realistic linear projection is going to be a bit higher, given that we were at a worse spend earlier in the year.
M: It is looking better, okay. Actually, to speak to that: we looked at the wrong number. There are two lines we look at, lines 15 and 16. 15 is considering: if we keep using the amount of money that we used this week, this is where we're going. But unfortunately we have to pay back the money that we burned too much of earlier in the year, so the year-to-date one is actually the accurate number to look at.
M: So if you look in row 15 at column K, it actually tells us we'll be running out on the 9th of... sorry, these are American dates; I used European dates in my head. Let me just check.
D: I would suggest an adjustment to that model as well; that appears to be based on the year-to-date daily average. I think, if we want a pretty tight estimate, we can use the amount of money spent prior to the current week and then project the rest of the year based on the current week, because we know the year-to-date up until that point is accurate, and we know the current spend. I think that will give us a little bit tighter an estimate.
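Spelled out, that suggestion is roughly: projected year-end total = actual spend to date + (current week's daily rate × days remaining in the year). The actuals anchor everything up to the current week, and only the remainder of the year is extrapolated from the current, post-change rate.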
D: Also, the estimate above that, which is based on the current daily rate: that's really nice to see for knowing, you know, if we started a new year, where would we be at, yeah.
M: The straight number... thanks for that, Ben, it makes sense. I will make a note to go re-tweak that. The straight number per day should be eight thousand two hundred dollars, and that will leave us at three million dollars, with a bit of change, at the end. If we can get every week of next year at 8,200 a day, that is the number that takes us to right underneath three million.
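Checking that arithmetic: $8,200 a day for 365 days comes to roughly $2.99M, which does land just underneath the $3M figure quoted.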
D: Yeah, that one makes complete sense. I think I would tweak the other number to be a little bit closer to where we're at. The original question I had, though, was that the Amazon spend is, at least to me, a bit higher than I would have expected at this point.
D: Do we know how much of that is coming from the registry stuff, given that we're about to send a whole lot more traffic there? It looks like we're spending almost half now between these two; am I reading this wrong? 45k versus 20, 25, 26...
M: Oh, that's a good point. Apparently we could be, right.
M: I will look at that as well; it doesn't sound right. Let me double-check, because it just doesn't sound right.
D: I mean, if it is right, then I think we need to be double-checking how much of that is, like, the S3 for the registry, and we might want to be preparing to do further cost optimizations before we send a whole lot more usage.
E: Yeah, so we will keep tweaking, for sure. Ben, there was something that Riaan said: not everything will be switched on on Monday, on the 20th. I think there's a phased rollout over the whole week. Justin, that's five days, right, or is it four days? How are you doing it?
E: It'll take the whole week, right? So if you want a full week's worth of data, it will have to be the week after next, to get a full week's worth of "hey, here is our total load coming from other places", you know, GCP going back to GCP, and the rest of them.
D: I believe we're still aiming for four days, but, you know, all of this is still in very active discussion with any remaining folks that have concerns, and that sort of thing, so there's some possibility that the exact rollout speed gets fine-tuned. Okay.
E: And we'll converge on our usual channel for any updates from you all.
D: Yeah. I mean, we don't have a lot of time left for any further conversation, so we'll know soon enough. I'm still expecting four days. Okay.
D: I really appreciate these. If someone could get back to the channel soon with what we think the Amazon numbers are (I don't have access), especially the S3 spend: I'm a little concerned, and I would like to check that.
B: We can check back quickly, but we're out of time, so I don't know.
B: Okay, we...