From YouTube: Kubernetes SIG K8s Infra - 20221009
B
Hi Brian, it looks like the infra meeting is happening in about 12 minutes. Oh okay, great, it was a time zone skew; the Pacific time was correct, but the UTC...
B
Good to meet you; my first time on this call. I'm at Equinix.
E
Hi all, hello.
E
So I'm taking host today last minute, because Arnaud is stuck in a cab and gonna be a bit late, but he may be able to join us later in this meeting.
E
Okay, well, hello everyone, welcome to SIG K8s Infra. This is the Wednesday, November 9th meeting. This meeting is under the Kubernetes code of conduct; essentially, be excellent to each other. Keep that in mind. This meeting will be recorded and posted to YouTube later. I'm Benjamin Elder; I'm not one of your leads, but I'm stepping in today at the last minute because our leads are unavailable to make it. Hopefully Arnaud will join us later.
E
So thanks everyone for showing up. Please put your attendance in the agenda doc if you can. And do we have anybody that could take notes?
E
And I'm gonna figure out screen sharing. The Data Studio report here is our standing first item.
E
And this is linked in the agenda doc, but we'll also post it in the chat. If you are a member of the sig-k8s-infra mailing list, you can see this report. The mailing list is open to join, but for auth reasons or whatever, we tend to have things shared with the list.
E
Over the last 28 days, so the past month or so, as you can see, it's quite high; we're exceeding our budget quite a bit still. And we'll be okay this year, because we have a $600,000 additional injection from Google recently. But this is of concern for the future.
E
Yeah, a couple of us have been in a lot of fun meetings, exciting, but I'm glad we should be good through the end of the year. And I don't think there's anything public yet, but just kind of as a preview, we're also discussing if we can get a little bit more, to make sure that we have a bit more overhead going through the rest of the year, so we don't have any surprises at the end of the year.
E
At the moment (I looked at this yesterday), I estimate we had about 59 days left at our current spend rate, and we had 53 days left in the year. So there's an ongoing escalation to see if we can put a little bit more on this year, at least so that we can be committed to making it through the end of the year without a problem and kind of reset going into next year.
H
Anything specific? I'll just page through that, and stop me if there's anything specific.
E
Sure, yeah. You know, we have a couple of these pages, like this Cloud Build breakdown, which are good to keep an eye on, but as you can see, they're a pretty small part of our spend in actuality. So I think this is probably the main page to look at. I think we have a few new folks here this time, so we'll cover this a little bit: k8s-artifacts-prod is the main place where we host content.
E
There are also places that are still billed inside of google.com instead of the kubernetes GCP account, which are billed directly to Google on a different budget. But for the things that are running out of the community, which is on this budget, directly controlled under, like, totally community-managed infrastructure instead of some sort of granted access or something: the biggest cost you'll see here is k8s-artifacts-prod, and the main thing that's doing is hosting the container images at k8s.gcr.io.
E
The Cloud Storage cost for that is primarily those container images. A very small amount of that cost is backing storage for artifacts.k8s.io, which is an endpoint that is an experimental, more cost-efficient binary hosting, and that has kOps and some old crictl binaries; so it's primarily kOps users. The main cost you'll see from that is item two here, networking, because that bucket is fronted by Google Cloud CDN, so the egress traffic is billed under GCP networking instead of GCP storage.
E
So you can see that that's also quite an expensive item that we're looking at migrating. prow-build is the main CI cluster running on the community side; that's where we schedule CI tasks as pods in a Kubernetes cluster. Artifact Registry is a newer item: we are migrating to Artifact Registry as part of backing registry.k8s.io, so we should see that spend go up and the Cloud Storage spend go down, which we're seeing here. And item number five is the 5k-node scale testing for SIG Scalability.
E
So as you can see, this is still not a terribly large percentage of the budget, but it's one of the few remaining notable items. Most things beyond that are much smaller but add up collectively; you'll see that the "others" in the pie graph is a relatively large part of the chart, and a pretty good chunk of that is sort of miscellaneous CI costs.
E
We have a large pool of GCP projects that we use to isolate testing; we rent them and then auto-clean-up the resources in them. So a good bit of that, as you can see, is a bunch of these projects on further pages that each individually have small charges but kind of collectively add up, from prow running all sorts of different, like, end-to-end testing.
E
Yes, there's about $11k of spend there. If we hover those, it'll tell us the percentages; you can see that just the artifact hosting is, like, easily the largest slice, that's that purple slice there. But these other things do add up a bit, and we do have some room to...
E
...you know, move them to other vendors as we get credits, in some cases. We mostly don't have a lot of room to just make them more cost-efficient; we already do things like automatically ensuring that any leaked end-to-end test resources are cleaned up by CI, or, you know, the community-run CI autoscales with workloads, so...
B
Does the Cloud Storage slice of the pie include... does it include egress charges for that, I guess? I'm not... yeah.
E
The networking charges that you see under number two are from some different storage, not the container images, where there's a Google Cloud CDN in front doing some caching globalization, and that shows up as a networking charge. The storage charge you would see from that would just be the storage operations and the actual, like, disk space, which is a very small fraction. We have a reasonably sized data set, but we serve it to the world all day, right, and in particular from, admittedly, one of the smaller clouds to the others.
C
I tried to look for some SKUs underneath k8s-artifacts-prod and Cloud Storage, to try to identify the egress charges versus the storage, and I wasn't able to find an easy way to get the report to do that. But it might be useful, if someone's got deeper SKU knowledge, to pull that out at some point. Yeah.
E
We have other user interfaces we can use to look at more of this, but for the, like, dynamic, really publicly available Data Studio report, I don't have that. But I can assure you that it's primarily the egress charge.
E
Similarly, we'll see that for item number four, Artifact Registry: that is us moving from Google Container Registry to Artifact Registry as we move from k8s.gcr.io to registry.k8s.io, where the primary backing store has been moved to regional Artifact Registry instances instead of multi-regional Google Container Registry. So that is also largely egress cost, but because there's not that much traffic on it yet, it is not quite as dominated by egress yet; some of that is actually the storage cost of storing the data set in all the regions.
E
The total spend should go down, because we don't serve traffic directly to Artifact Registry. We have an application in front, and from a past verbal agreement with Amazon, prior to the $3 million announcement, we are sending traffic from Amazon IP addresses to Amazon cloud storage for layers, which is the bulk of the egress. And the percentage of requests coming from AWS on the old infrastructure was greater than 50%. So we should expect a pretty substantial cost drop overall.
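(For illustration: a minimal sketch of the origin-aware redirect idea described above. The AWS IP-range feed is the real published endpoint, but the backend URLs are hypothetical placeholders, and the actual matching logic in registry.k8s.io differs.)

```python
# Minimal sketch of origin-aware redirecting for artifact traffic.
# AWS_RANGES_URL is the real published AWS IP-range feed; the backend
# URLs below are hypothetical placeholders, not the real buckets.
import ipaddress
import json
import urllib.request

AWS_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"
AWS_BACKEND = "https://example-registry-mirror.s3.amazonaws.com"     # hypothetical
DEFAULT_BACKEND = "https://storage.googleapis.com/example-registry"  # hypothetical

def load_aws_networks() -> list:
    """Fetch the published list of AWS IPv4 prefixes."""
    with urllib.request.urlopen(AWS_RANGES_URL) as resp:
        data = json.load(resp)
    return [ipaddress.ip_network(p["ip_prefix"]) for p in data["prefixes"]]

AWS_NETWORKS = load_aws_networks()

def pick_backend(client_ip: str) -> str:
    """Choose a redirect target: AWS-origin clients fetch layers from S3,
    so that egress stays inside AWS instead of billing GCP networking."""
    addr = ipaddress.ip_address(client_ip)
    if any(addr in net for net in AWS_NETWORKS):
        return AWS_BACKEND
    return DEFAULT_BACKEND

if __name__ == "__main__":
    print(pick_backend("52.95.110.1"))  # an AWS-owned address: S3 mirror
    print(pick_backend("8.8.8.8"))      # anywhere else: default backend
```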
E
So we're only serving redirects there. So actually, most of the cost of running that service, like half of it, is actually the logging at the moment, which we're going to turn down now that we're more confident in the, like, stability of it. We're paying about as much to store the logs from that infrastructure as we are to run it.
E
I don't think we have that readily available. When was the registry implemented? Well, the registry has been implemented over the course of this year, but that's not tied to when the traffic moves over.
E
So it's kind of a complicated question, like when will the traffic shift, and it's something the project is still working on. But most of what we would have started seeing would have been around August, when it started becoming more broadly, finally, being used as default in more places in the project, notably the Kubernetes 1.25 release.
E
Yeah, it's somewhere. We have some data on the, like, actual traffic numbers, but I don't think anybody has that handy right now.
E
So we have a couple more pages in the report we should probably take a quick look at. And also, if any of you joined the mailing list, you can access the report, and these pages are actually dynamic; you can filter on the services or change the dates to help see how these things change over time.
E
So this page can be a bit misleading, because traffic does just sort of have patterns over time, and it doesn't necessarily mean that this is definitely the, like, month-over-month trend. But it does give some idea that, as you would expect, Artifact Registry costs are way up, and our Cloud Storage costs, the part that was from GCR, are going down.
E
That doesn't mean our net cost has necessarily shifted much; they should be pretty similar. We expect Artifact Registry to be a bit cheaper at scale, because we also took the opportunity to move towards regionalizing ourselves, instead of paying for an October Cloud Storage pricing change that reflects the cost of replicating for multi-regional storage.
E
But we know when we're adding new content, and we already have a system that promotes from staging to production storage. So we are now copying to N locations ourselves, and then the redirect service is handling the regionalizing.
E
So we hope that the traffic that does still go back to Google Cloud will cost less as it moves between these. But it's a bit premature to say it.
E
And you can see that also, for example, k8s-infra-prow-build-trusted: that is a smaller CI cluster that's, like, restricted access as to what workloads are there, which handles some, like, critical automation for the project. So that going down probably isn't a particular thing that we've done; it's just trends in the real usage of the project fluctuating. Like, for example, the main build cluster has gone up 56.1% from the previous month.
E
That's typically just because of where we are at in the release cycle, with people heavily testing pull requests leading up to code freeze, as opposed to anything the project has, like, intentionally done to shift costs around or to increase usage.
E
So we have a lot of other Cloud Storage buckets used for things like logs in the CI, or intermediate storage locations before being promoted to the production serving; that's what essentially all of these are here. You can see they're generally pretty small charges. The ones that are the most active staging locations, like k8s-release where we stage binaries that are going to be released, can be a little bit larger, but not substantive in comparison to k8s-artifacts-prod, which is the container image serving.
E
Oh yes, there's also, like, total cost tracking at the bottom of the graph, which gives some idea of where it has gotten to date. So you can see, I believe we estimated we're on track to $1.8 million again for the container image hosting, and that's just on GCR, not including Artifact Registry. And you can see that over the course of the year the scale testing does cost more than just the part that runs the CI.
E
But that's not all of the CI, because it's just the place where we execute tasks. Most of those tasks are going to turn around and spin up an ephemeral cluster from source code, and those costs are spread over many projects, because we have the CI rent projects, and then we have a tool go through and delete everything in a project after the projects that are being rented cycle.
E
So it gets spread over many projects so that we can ensure we clean everything up and don't leak resources.
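(A minimal sketch of that rent-and-sweep pattern; the project names and the sweep here are hypothetical stand-ins for what Boskos and the janitor tooling do in the real CI.)

```python
# Minimal sketch of the rent-a-project / janitor pattern described above.
# Project names and delete_all_resources are hypothetical stand-ins for
# Boskos and the janitor tooling used by the real kubernetes CI.
import subprocess
from contextlib import contextmanager

FREE_PROJECTS = ["k8s-infra-e2e-001", "k8s-infra-e2e-002"]  # hypothetical pool

def delete_all_resources(project: str) -> None:
    # Hypothetical janitor sweep: the real tooling enumerates and deletes
    # clusters, instances, disks, networks, etc. the job may have leaked.
    print(f"janitor: sweeping {project}")

@contextmanager
def rented_project():
    project = FREE_PROJECTS.pop(0)     # rent a free project from the pool
    try:
        yield project
    finally:
        delete_all_resources(project)  # always sweep, even if the job failed
        FREE_PROJECTS.append(project)  # return it to the pool as clean

def run_e2e(job_cmd: list) -> int:
    with rented_project() as project:
        # The e2e job spins up its ephemeral cluster inside this project,
        # so its spend stays isolated and attributable.
        cmd = job_cmd + [f"--gcp-project={project}"]
        return subprocess.run(cmd).returncode

if __name__ == "__main__":
    run_e2e(["echo", "kubetest"])
```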
C
I'll be real quick about that. I got to spend some time with Matt Ray from Kubecost while at KubeCon, and they have a product that, when we spin it up deployed inside of our clusters, we're actually able to get the cost per job, if we want to start to sort the jobs that we want to bring over from a cost perspective. They've offered a license to help aggregate that, and I'm looking forward to trying to get a commitment around actually helping to implement that. But he got COVID while at KubeCon and has been out of pocket and is in, you know, recovery mode. So I'm looking forward to that, but I think, as far as giving us deeper visibility into the job-level stuff...
C
...this is the best thing I've seen so far. And that was because, I think, Caleb, Muhammad, and I all looked at: how do we get some more data on our cost spend?
E
Cool. Thank you.
E
So, should we implement here, or should we come back... oh, I see Arnaud has a hand up. I don't know, hey.
J
Hi, sorry for being late. I think there's value in using Kubecost specifically for EKS, because I think recently GKE, and mostly GCP, released the same feature, close to what Kubecost is doing; we just don't use it at the moment. So I think we can keep that option once we get the credit from AWS and we start moving jobs over to an EKS cluster, because, from the data sheet I saw on the website...
K
I have a question: are we currently crunching the information that we gather from the usage metering for the current GKE clusters?
E
It's worth pointing out that typically the larger cost in CI is the end-to-end tests, and for the things that aren't end-to-end tests, we hopefully know that as soon as we can get an EKS cluster up into CI, there shouldn't be any problem to move that spend over. And in general our CI spend has, like... we can look in...
E
...and see that it hasn't been out of control. But we know that distribution costs have spiraled, not just on the community budget but also on the internally billed things that we hope to migrate.
E
And for the CI component that still runs internally, I can tell you that by far the biggest cost is because the project hasn't invested in being able to autoscale that yet; it is artificially expensive. Though that's not on the community bill, so it's not an immediately pressing concern. But when we go to move it, we can do the same thing that we did for jobs we moved previously, where the CI enforces that you must set requests and limits on jobs running in the community clusters.
E
So as they're gradually migrated, we put them into a state where we will be able to autoscale the CI, and that will make a pretty large cost improvement. And if we didn't have the distribution costs, we could actually already move all of the remaining infrastructure from internal to GCP with plenty of overhead, without making that optimization.
J
Yeah, and because, basically, once we have the AWS credit, we can move all the unit tests to EKS and that's going to save a lot of money. And on EKS we can have Kubecost, and that basically helps us really identify what costs the most. Because we move, we might get better instances on AWS, so builds can be faster than currently. That's...
J
What I'm saying is Kubecost is a good option, because if we get EKS, we deploy Kubecost there and we can identify what costs the most.
E
Absolutely. Also, the jobs that will be most easily migrated to a non-GCP cluster will be the ones that do not use external resources heavily, you know, builds, unit tests, and so on. So Kubecost should give us the actual cost of those jobs, which will be nice, as opposed to the current cluster, where a lot of jobs will look cheap when they're actually quite expensive.
D
A note on using an admission webhook to enforce resource limits: I'm actually going through this at my day job right now, and there is a shocking number of charts and things in the world that don't give you the option to set things, or, you know, there's init containers that don't have limits. And so I am perhaps going to back away from trying to enforce it all within admission control and actually just modify the resources.
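(For illustration, the mutate-instead-of-reject approach described here might look roughly like this, using the standard admission.k8s.io/v1 AdmissionReview shape; the default requests and limits are placeholder values.)

```python
# Sketch of mutating (rather than rejecting) pods that omit resources,
# using the standard admission.k8s.io/v1 AdmissionReview shape.
# The default requests/limits below are placeholder values.
import base64
import json

DEFAULTS = {"requests": {"cpu": "100m", "memory": "128Mi"},
            "limits": {"cpu": "1", "memory": "1Gi"}}

def review(admission_review: dict) -> dict:
    request = admission_review["request"]
    pod = request["object"]
    patch = []
    # Cover init containers too; charts frequently forget them entirely.
    for kind in ("initContainers", "containers"):
        for i, container in enumerate(pod["spec"].get(kind, [])):
            if not container.get("resources"):
                patch.append({"op": "add",
                              "path": f"/spec/{kind}/{i}/resources",
                              "value": DEFAULTS})
    response = {"uid": request["uid"], "allowed": True}
    if patch:  # attach a base64-encoded JSONPatch only when needed
        response["patchType"] = "JSONPatch"
        response["patch"] = base64.b64encode(
            json.dumps(patch).encode()).decode()
    return {"apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": response}
```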
E
So we actually have a configuration layer for the CI ourselves, and we have a unit test, when you try to alter the CI configuration, that says: if you are scheduling to this cluster, you must set limits and requests yourself.
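(A minimal sketch of that kind of config-time check, over a simplified job shape; the real presubmit test lives in kubernetes/test-infra and is more involved.)

```python
# Minimal sketch of a config-time check like the one described above,
# over a simplified job dict; the real test in kubernetes/test-infra
# is more involved.
GUARDED_CLUSTERS = {"k8s-infra-prow-build"}  # community-paid clusters

def check_job(job: dict) -> list:
    """Return config errors for a job scheduled to a guarded cluster."""
    errors = []
    if job.get("cluster") not in GUARDED_CLUSTERS:
        return errors  # other clusters are not budget-enforced here
    for container in job.get("containers", []):
        resources = container.get("resources", {})
        for field in ("requests", "limits"):
            for resource in ("cpu", "memory"):
                if resource not in resources.get(field, {}):
                    errors.append(
                        f"{job['name']}: set resources.{field}.{resource} "
                        f"to schedule on {job['cluster']}")
    return errors

# This job fails validation until it declares full requests and limits.
bad_job = {"name": "pull-foo-unit", "cluster": "k8s-infra-prow-build",
           "containers": [{"resources": {"requests": {"cpu": "2"}}}]}
assert check_job(bad_job)  # non-empty list of errors
```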
E
Yeah, we have the enforcement in policy for us, but we have been using the pivot point of moving to the community cluster, where we're much more concerned about the spend, to kind of have a point where someone is paying attention to this job. So they have a reason: okay, you're modifying this job; in order to do this modification, you must figure out setting this. As opposed to getting people to set it for all of the rest of the jobs.
J
I think it's possible, because that's what kOps is doing right now: they have Boskos basically taking care of the AWS side using a pool of AWS accounts. So it's possible for some e2e tests, but basically most of the utilities for the k/k repo are opinionated about GCP. So that's a different conversation.
E
It's the wrong layer. We at least have AWS, like, sub-accounts in Boskos for doing end-to-end testing, and that's a whole other discussion, how we're gonna move that. For moving things like unit tests and builds currently in prow...
E
...you have to explicitly configure which cluster they're on, but in this case that should be relatively fine, because we can easily identify which jobs are readily moved and just leave the ones that aren't. And the ones that aren't, a lot of them are because of the end-to-end testing part, where we rely on barely maintained tooling in the kubernetes repo that can spin up a cluster from source, and honestly, sorting that out could be its...
F
We could, but could we instead change how that works? It's scheduled... schedule to a pool of clusters, right, and then do something like Boskos to check them out. I'm just trying to get to the position where we're not front-loading a bunch of stuff onto one environment, and...
E
It's possible; I don't actually think it's very necessary. Hopefully we'll have good visibility into where the cost is going, and we already know at the moment we have two vendors, and too much on one of them, and we know which things we can afford to move and which things we can't. And yeah, this is really more of a question for SIG Testing, but I have enough familiarity to say that, like, I think that would be a fairly complex endeavor for relatively limited win in the short term.
J
I'm on my Android, so I'm going to speak just to answer Eddie. I think we can keep that option for the project, like give the possibility to bootstrap e2e tests a little bit exclusively, because basically we say we are opinionated about e2e tests for the Kubernetes project. But for the subprojects, like Cluster API, I don't know, I saw different subprojects in SIG Scheduling, stuff like that; we can try to improve the definition of the e2e tests and be multi-cloud, but that's a bigger... that's a...
E
Yeah, I could go into a lot more depth on that; I'm not sure I want to derail too much. But I'll point out that we actually did have CI on multiple clouds in the past, and we don't really today, because the bill lapsed for a while and we had to remove kOps from kubernetes presubmit, and since then we haven't had another cloud actually keeping up with the absolute latest Kubernetes source code changes confidently enough to have gotten back into presubmit.
E
That was about a year-long endeavor of convincing people and patching anything that came up. The staffing for that is going to look a bit different from moving the infrastructure, but for things like builds and unit tests, it's going to be glaringly obvious to some of us that those things can be moved. And since we are at 120% of the GCP budget right now, we will want to move them to wherever is available, and it will be fine if that's a fixed config for the next year, probably.
K
Okay, all right, let's start from here. Okay, so let's see. This is an Amazon account; it's what we call a management account that hosts sub-accounts, and in here there's information about the billing and what it costs. I'm just using one from work that we use for testing and stuff.
K
As you can already see, you need to have access to the accounts to log in and take a look and see what things are costing, right. So what Amazon has done is they allow you to export this somewhere and run some big data analytics on it. So what I did was I exported this data to Athena, which is Amazon's copy of BigQuery, and what we can do is use third-party products to then connect to Amazon and present that data, right. So this is what I've done in Grafana.
K
So one of the reasons why I suggest we do something like this is that this is, like, a public application; you can go in here and log in and take a look, similar to the current Grafana instances that we have, which are very old and need to be upgraded at some point. And the Kubernetes pod that's running this only needs access to AWS; the other people don't. So, if I get a table up and running, one second.
K
So you can easily get figures of what this account has been spending, for example, along with the cost per service, and what things were costing us more. You can also run random queries yourself and create dashboards of things that you're after. So this here is the...
K
Computer is stuck, yeah. If I go here, you can kind of see the underlying query that's been run to get that data up and running. So you could go in there and work out the kind of information that you're after, then graph it in Grafana and share it with everybody else, like we do with Data Studio today.
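(For the curious: querying a Cost and Usage Report export in Athena from code looks roughly like this sketch; the database, table, and results-bucket names are hypothetical placeholders.)

```python
# Sketch of querying an AWS Cost & Usage Report export via Athena with boto3.
# Database, table, and output-bucket names here are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

QUERY = """
SELECT line_item_product_code AS service,
       ROUND(SUM(line_item_unblended_cost), 2) AS cost_usd
FROM cur_db.cur_table
WHERE year = '2022' AND month = '11'
GROUP BY line_item_product_code
ORDER BY cost_usd DESC
"""

def run_query(sql: str) -> list:
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "cur_db"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]
    while True:  # poll until the query finishes
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    return [[col.get("VarCharValue", "") for col in row["Data"]] for row in rows]

for service, cost in run_query(QUERY)[1:]:  # first row is the header
    print(f"{service}: ${cost}")
```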
K
Okay, that's fine, yeah. So Athena is great for pulling billing data. It runs every hour, so we've got the latest set of information Amazon pulls; there's all sorts of information you can pull. It's a complex data set. So that's the billing piece, right. The other piece is information about what's happening with infrastructure on AWS. So, sorry, if I go down here, S3 for example, you get some information about buckets, right. So I need to pick a region.
K
It's got this many objects and it's this big, and then you can also get information about how busy the bucket is, what's happening in there, who's writing, who's reading, on some of the metrics. So this is the kind of information we want to be able to see about the buckets that we already have today that are serving AWS customers.
E
That's really cool. My only question is: is there a managed service equivalent to this? Like, I love open source and everything, but also, as you noted, our current Grafana instances are actually out of date, and there is...
E
I'm not sure it's an option, but it's kind of nice that, for example, the Data Studio report doesn't really require active maintenance, since we've had a lot to run. But regardless, this seems quite awesome; I'm just curious if there's, yeah, something similar to Data Studio where we don't have to run it.
K
Yeah, there is managed Grafana from AWS. You can get us to run Grafana for you, and we can get it to show information from Amazon products or anything else that you can pull information from. Like, if you go here...
E
I think that answers that question; we can always consider using managed Grafana then, if that makes sense.
K
Yeah, we should probably use that opportunity to even replace the existing one that we've got; I think it was v6, with all sorts of Prometheus information that it is pulling from. I kind of did the same thing for Knative a while back, where the Grafana instance runs in our Kubernetes cluster, but it's more modern, much easier to deploy, and it's using managed Prometheus for the metrics.
K
But that's a SIG Testing story, not one for here. Yeah.
K
Yeah, so it's very straightforward to do. There's some configuration that Caleb has to do in AWS to make it work. Once he's done it, and let's say we go with managed Grafana, you'll have to spin that up and make it public, or tie it to a login for members of the organization to be able to log in, and just give read access to everything and go with that.
K
That's what I've done for the Knative instance: everybody in the org can read it, and the public just don't get access to it, because they don't need to.
E
Okay, Caleb, I just made you co-host for when we're ready to show yours. But does anyone have any further comments about Muhammad's demo?
D
I thought that was great. I think there's probably some value in trying to get both GCP and AWS into the same dashboard, to try and, like, just even be able to get the aggregate numbers on things. I think that would be useful, and I'm saying this as someone who has been meaning to do this at my day job forever and hasn't done it. But yeah, I think it would be an exciting thing to be able to go to one dashboard and just say: okay.
D
I mean, you could just export, dump all the data... I mean, it doesn't have to be that you have multiple data sources; you could just move everything into the same back end. I mean, that may be even more work in the short term, but in the long term, if you just pick one and say Grafana is going to be the front end, then we're going to use one data source, and we're just going to massage all of the cloud billing data from various providers into the same data source.
E
Yeah, I think that's an interesting follow-up topic in the fullness of time. For this meeting: Caleb, do you want to start your demo? You should have co-host now.
L
Apparently I need to tell the system that Zoom is allowed to see my windows, so I might need to rejoin; but maybe let's not do that now. Is it possible that... I'll drop the link, in any case; let me just bring that one up.
E
Since I had trouble with that earlier... Muhammad, do you wanna, or...
L
So this is something that I built a few months ago, but it definitely doesn't have the cool factor or the same data that was pulled in from what Muhammad just showed us. But what this would provide, as like an alternate solution, would be cost data aggregated across all of the sub-accounts in an AWS organization, which would then filter down to the Kubernetes ones, and cost data per service in all of those accounts.
L
It won't have anything related to S3 usage or anything, but what this will do is pull data from Cost Explorer, export it into a GCS bucket, and then create a BigQuery data set out of it that you can then pull into your Data Studio report.
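(A rough sketch of that Cost Explorer to GCS to BigQuery flow; the bucket, dataset, and table names are hypothetical placeholders.)

```python
# Sketch of the Cost Explorer -> GCS -> BigQuery pipeline described above.
# Bucket, dataset, and table names are hypothetical placeholders.
import json
import boto3
from google.cloud import bigquery, storage

def fetch_costs(start: str, end: str) -> list:
    ce = boto3.client("ce", region_name="us-east-1")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "LINKED_ACCOUNT"},
                 {"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    rows = []
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            account, service = group["Keys"]
            rows.append({
                "month": period["TimePeriod"]["Start"],
                "account": account,
                "service": service,
                "cost_usd": float(group["Metrics"]["UnblendedCost"]["Amount"]),
            })
    return rows

def export(rows: list) -> None:
    # Stage newline-delimited JSON in GCS, then load it into BigQuery,
    # where Data Studio can use the table as a data source.
    blob_uri = "gs://example-aws-billing/costs.json"
    storage.Client().bucket("example-aws-billing").blob("costs.json") \
        .upload_from_string("\n".join(json.dumps(r) for r in rows))
    bigquery.Client().load_table_from_uri(
        blob_uri, "example_project.billing.aws_costs",
        job_config=bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
            autodetect=True),
    ).result()

if __name__ == "__main__":
    export(fetch_costs("2022-10-01", "2022-11-01"))
```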
L
So this might be a slightly more integrated approach to how we do the cost report and usage analysis. But yeah, if you scroll down a bit you'll see the Go operations sections, if you're curious, to take a look at. Yeah, just presenting another thing that might be useful in the toolchain.
E
Okay, Ed, you want to talk about Equinix?
B
Sure. So, by way of introduction, I'm Ed Vielmetti; I work at Equinix, previously at Packet, which was acquired by Equinix.
B
We have had, for a long time, support of a number of CNCF projects through the Community Infrastructure Lab, as well as other projects, Linux Foundation projects and others, that are using the Equinix Metal infrastructure to support their efforts. And I'm interested in trying to probe and figure out, on my side, what sort of a budget we could apply to supporting this Kubernetes effort, but also, from a technical side, try to figure out, well, if you're going to do it, like, what stuff would you run and who would run it, etc.?
B
So I want to sort of lead by way of comparison with some other systems that people are running on our infrastructure.
B
There's a link in the document to a case study from the folks doing NTP Pool, which is a globally distributed time distribution network; Ask Bjørn Hansen is lead on that. Another pretty good sized one that I just looked at was providing some infrastructure for a number of operating systems that are doing CDN-type activities.
B
The division of labor is that we provide the machines and the data centers and the heating and the cooling and the power and the network, but the project itself is responsible for the entire software stack and the operation of it. So it's not the classic sort of managed service that a lot of other cloud things provide. The other piece, which I don't actually know quite as much about, but given the cost of just distribution and egress in this network...
B
...there's a product called Equinix Fabric, which is a system-to-system interconnect product, or system-to-cloud, which provides a cost-effective way of getting bandwidth from here to there. And that might be a component of some other solution that would directly tackle some of the bandwidth costs and egress charges that are currently, you know, absorbing, in one way or another, a lot of the budget for the system.
B
I have a lot more control over spend on the Equinix Metal side than I do on the Equinix Fabric side; like, I know how to get things approved for Metal, and this would be like project number one for Fabric. But I'm really interested in sorting out, like, what kind of infrastructure can the project absorb?
B
Does it help to give you a server, right, with bandwidth or not, right? It might, it might not. And, you know, where strategically in the architecture, as it currently stands or might stand in the future, could we step in with some amount of either metal or bit-moving to help things along? So, happy to take any questions, happy to go into more detail with, you know, any of the existing stuff that we've done, to give you a sense for what's possible. But I think there's something there.
E
So we only have a few minutes left. What's the best way for us to follow up with you?
J
Like, we should just have a follow-up meeting about how everybody's going to donate resources through CNCF to the project; anyway, that's the follow-up we should do, because I think the first thing we need to clarify is basically what we can get in terms of resources. Like, can I spend one million on metal instances a year? I mean, I take a random number...
J
I think that's, like, the first thing we need to discuss, and from that we can have a conversation about what we can use, because we already know how we can leverage Equinix. We currently have Equinix systems we use for basically trying to serve the registry, but there's, like, a fear of overspending and blowing up the existing budget. So I think the follow-up is trying to have a separate conversation about how we can define, obviously, the budget allotted to the project.
B
We can coordinate through the Slack channel and figure out a time that might be good for a subset of folks who would know and care. I guess, from a budget perspective, I'd like to sort of figure out what the network architecture and system architecture might be.
B
I know from other systems that we've put together that in these sorts of distribution networks, bandwidth costs overwhelm the metal costs. So that was part of my probing questions about, like, how many, you know, petabytes per second are you sending, or what have you, kind of.
C
Getting some information from the kernel.org people, who are probably the closest, being under the LF and also having similar infrastructure problems for distributing things globally, would be a good "who's been down this road before" conversation.
B
Yeah, Konstantin has been lead on that, and we can even perhaps loop him in on things directly, get a sense for how he's doing stuff.
E
Yeah, I also have some thoughts about how we can use this in some of the infrastructure we're building for content distribution, so happy to join any chats about that.
B
Okay, yeah, I'll look in Slack for some coordination, and we can get together a subgroup and, you know, do something.
E
Well, we've got about four minutes left now. Thanks everyone for coming today.
J
No, it's always at 2000, and 1 PM Pacific time; it has never really changed regardless of the time zone. That's why I know we never changed that.
B
Well, except that the meeting today started at 2100, not 2000. Okay.
E
Yeah, the Kubernetes project tends to have the meetings scheduled through a Google Calendar in a U.S. time zone, unfortunately, and it's something that probably ought to be revisited, but it is the current state of, I believe, all of the subprojects.
E
I think we actually have some, like, guidance in the docs for the community site to make sure that you link it so that these will get converted from, like, the Pacific time zone, because of this kind of thing. But that doesn't mean that's what we should be doing; it's just what we're doing currently in all the subprojects that I'm aware of. So, yeah.
E
Well, thank you all. Thanks for coming; enjoy the rest of your day. Hope to see some folks around again in the future.