From YouTube: Kubernetes SIG K8s Infra - 20220803
C
Yeah, that's already an issue for them, because they only have one or two projects that can support this scale, so they have to keep a pretty good eye on that. If a test takes longer than it should and lingers in cleanup, they notice, because then they can't run another test: they can't lease Boskos projects. Here we only have one or two projects that are cleared to create 5,000 VMs.
A
And that job runs once a day, and even if it's failing, it's going to clean up by itself, because I think there's a mechanism to ensure they are not stealing resources for this; we checked with them when we worked on that last year. So I think it's around 10k in terms of resource use, so it's possible they increased the number of nodes for this specific job.
A
Yeah, this one is a periodic, so I don't think there's a way to trigger it. Maybe they run it manually, because they have access to that project.
C
Well, one thing we should check: I think there might be manually triggerable jobs with restricted triggering or something that are pointed at that project, because I don't think there are more projects.
A
Even by enabling that, all the jobs run in the same namespace, but with no labels for some of them, so it's going to be difficult to break down 5k jobs per SIG. I think the first thing we need to do is basically try to label all the projects.
C
It would mostly be kubernetes, right? Like, the vast majority of our CI is running different configurations of e2e testing for kubernetes, and then within one of those jobs you have sub-tests that are labeled for different SIGs. So it's hard to actually look and say, oh, SIG Storage added a ton more test cases, and now all of the CI jobs are taking longer, so this costs more.
C
Realistically, even as we're trying to cut costs, I took a quick look, and my very rough estimate, just from what I could get my hands on quickly, is that Prow costs easily another million a year in what's still running inside of Google. I mean, just the main Prow projects are consuming close to that amount. Last time I looked, we have large, non-auto-scaled cluster usage there, and that's not easy to fix with the SIG K8s Infra stuff.
C
We've said: okay, you are not allowed to schedule a job to this cluster without at least setting some kind of resource request. But the old state of Prow is just a total free-for-all, and most of the jobs don't actually set resource requests. They're just thrown at a big cluster, and the cluster was scaled up to meet demand.
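For illustration, a minimal sketch of what auditing for missing resource requests could look like, using the Kubernetes Python client; the namespace name is an assumption, not the actual Prow build cluster config:

```python
# Hypothetical sketch: flag containers scheduled without CPU/memory requests.
# Assumes kubeconfig access; "test-pods" is an illustrative namespace name.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod("test-pods").items:
    for c in pod.spec.containers:
        requests = (c.resources.requests or {}) if c.resources else {}
        if "cpu" not in requests or "memory" not in requests:
            print(f"{pod.metadata.name}/{c.name}: missing resource requests")
```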
A
Yeah, I think once we finish addressing the issue with container image distribution, we might take a closer look at how we can try to extrapolate the resource consumption coming from the build cluster.
C
But what I'm saying is that just getting to a state where that could happen is a big lift, and it's really difficult for job authors, because in the current state there's no way for them to have visibility into this. I'd say even in SIG K8s Infra there's very poor visibility into "am I sizing my resource requests appropriately?", which is what helps us size the cluster automatically. People are either going to get it wrong and ask for too little, and the CI is going to run poorly, or they're going to ask for too much, and we're going to over-auto-scale for more than we actually need. And that's not a super easy problem to solve, because the CI makes one-off pods and doesn't have any sort of metrics for active resource consumption.
C
In the future, we might be able to do in-place pod scaling; that looks like a thing that might be landing in kubernetes. But short of something like that, I don't think anybody's put enough effort into how you can appropriately auto-scale a CI like this, and without that we're running huge fixed clusters. So there's a ton more resource usage, not on the community billing account.
C
It hasn't been historically, because pod auto-scaling involves recreating the pod entirely; the pod is immutable. I think we might have something coming in the pipeline that allows you to actually change resource requests without deleting or recreating a pod. I'm not sure if that's actually going to resize the running process in place or just let the API be mutable.
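A hypothetical sketch of what that might look like, assuming the in-place resize work (KEP-1287) lands and pod resources become mutable via a normal patch; the pod, container, and namespace names are illustrative:

```python
# Hypothetical sketch of in-place pod resize (KEP-1287). Assumes the feature
# gate is enabled and pod resources are mutable; all names are illustrative.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

patch = {
    "spec": {
        "containers": [
            {"name": "test", "resources": {"requests": {"cpu": "2", "memory": "4Gi"}}}
        ]
    }
}
# Under the proposed feature this would adjust the running pod without recreating it.
v1.patch_namespaced_pod(name="my-prowjob-pod", namespace="test-pods", body=patch)
```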
C
We have a huge amount of resource usage that is going to be difficult to reduce, you know, if we want to finish migrating things. And if we're trying to look at how much a SIG is using, a whole bunch of that is not actually even in this report.
A
Yeah, so I think if we want to be realistic about our approach, the first thing we need to ensure is combining labeling of the projects with GKE metrics. I think that's basically one step toward seeing CPU and memory consumption, so from that we can identify resource consumption and see what's happening.
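For illustration, a minimal sketch of the kind of per-label cost breakdown being described, assuming a BigQuery billing export is enabled; the dataset/table name and the "sig" label key are assumptions:

```python
# Hypothetical sketch: break down GCP cost by a "sig" project label using the
# BigQuery billing export. The table name below is an illustrative assumption.
from google.cloud import bigquery

client = bigquery.Client()
query = """
SELECT
  (SELECT l.value FROM UNNEST(project.labels) AS l WHERE l.key = 'sig') AS sig,
  SUM(cost) AS total_cost
FROM `k8s-infra-billing.billing_export.gcp_billing_export_v1`  -- assumed table
WHERE usage_start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY sig
ORDER BY total_cost DESC
"""
for row in client.query(query).result():
    print(row.sig, row.total_cost)
```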
C
Sure, but I think it's worth noting a different problem that we could start thinking about sooner: what do we expect a SIG to be using? What is reasonable? Are we just going to allow unbounded resource usage, or what does that look like? That might help us understand what we need to be able to keep an eye on, because right now I don't even know what a reasonable proposal would look like.
C
Yeah, but another thing I'm getting at here is that when we were running these things inside of Google, no one was aware of a cap on cost. There's a cap on available quota in a project, right: you are allowed to have X many VMs, and you have to request quota bumps. But day-to-day engineers touching this aren't super privy to the billing, and so when the project needs more resources because it wants to do more stuff, we just add more resources. At some point we're gonna...
C
I think we've expanded: right now we have all of the existing internal quota plus the external stuff we've added, and I don't think we've actually shut down much cost, other than maybe the scale tests on the internal resources. So, you know, there's another like 150 eight-core VMs for the build pool internally that are a total free-for-all. So I do think, down the line at some point, once we get our current cost under control, that's the next problem.
A
Okay, I think we should look into this in 1.26. I hope we're going to fix the cost problem for container images in 1.25; I think we should look into this in 1.26, okay.
B
My question is just: we got these three buckets out. Caleb and Jay sorted all the things with the bucket, he shared access, and currently he's auditing the buckets synced. Is there anything else that you need from us that we can help with? I saw you did some more things on that. Is there anything you need help with, or are blocked on?
A
Well, there's one thing I would like to see on the bucket right now: security mechanisms and something related to disaster recovery. Let me find that issue somewhere.
A
I mean, that's one of the things that's going to help build confidence in the production bucket right now: if a blob somehow disappears, we find out. SIG Release would be interested in getting alerts out of that.
A
What I'm saying is, basically, if somehow there is an action deleting a blob in the S3 bucket, we should get an alert. The how is not really important. There are mechanisms in S3 to get notified if an object is deleted, and I would like to see that.
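For illustration, a minimal sketch of the S3 delete-notification mechanism being referred to, routing object-deletion events to an SNS topic; the bucket name and topic ARN are assumptions:

```python
# Hypothetical sketch: S3 event notifications for object deletion, sent to an
# SNS topic. Bucket name and topic ARN are illustrative assumptions.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="example-artifacts-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:bucket-delete-alerts",
                "Events": ["s3:ObjectRemoved:*"],
            }
        ]
    },
)
```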
B
We had done something like this early on, within the initial creation of the Google Cloud auditing, and I think we stopped doing that at some point, maybe when I stepped away for a bit. But it was the configuration for everything, yeah.
B
I put a link in the document to a repo that I started working on about a week and a bit ago. Pretty much it's a GitHub Action that runs two shell scripts; one of them generates the list of all of the contents of the bucket using the s3api subcommand of the aws CLI, then commits it as JSON, creates a PR, and auto-merges it.
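A minimal sketch of the same inventory step in Python, for illustration: dump every object's metadata (key, size, ETag, last-modified) to JSON. The bucket name is an assumption:

```python
# Hypothetical sketch: list every object in the bucket with its metadata and
# write the inventory as JSON. Bucket name is illustrative.
import json
import boto3

s3 = boto3.client("s3")
objects = []
for page in s3.get_paginator("list_objects_v2").paginate(Bucket="example-artifacts-bucket"):
    for obj in page.get("Contents", []):
        objects.append(
            {
                "Key": obj["Key"],
                "Size": obj["Size"],
                "ETag": obj["ETag"],
                "LastModified": obj["LastModified"].isoformat(),
            }
        )

with open("bucket-inventory.json", "w") as f:
    json.dump(objects, f, indent=2)
```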
C
So the bucket itself, as I said, should be content-addressed blobs, and we're hopefully okay with that. But I would love to see auditing on the bucket for the permissions. I feel like it'd be really hard to review, once we have automatic sync going, whether all of these blobs are valid; at that point that effort is better spent on the tool that does the sync, making sure it's valid. But the permissions are the thing that we have in the audit folder.
B
If I may address your point: this runs as an unauthenticated, anonymous user every three hours, and there are no issues, so I think we won't have any issue with that. And for this specific command, I wanted one that dumps all of the metadata I could find, so it also includes the ETag, which is kind of a hash of the object, which is useful, and, of course, the hash.
B
The SHA-256 hash is in the file name for each blob, which is great, but yeah, that dumps it all. You can see when each object was last written to, which is also quite cool, yeah. Okay.
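Since the blobs are content-addressed, a minimal sketch of how validity could be spot-checked, for illustration: re-hash each object and compare against the SHA-256 in its key. The bucket name and key layout are assumptions:

```python
# Hypothetical sketch: verify content-addressed blobs by re-hashing each object
# and comparing to the SHA-256 digest in its key. Key layout is an assumption.
import hashlib
import boto3

BUCKET = "example-artifacts-bucket"  # illustrative name
s3 = boto3.client("s3")
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        expected = key.rsplit("/", 1)[-1].removeprefix("sha256:")
        body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        actual = hashlib.sha256(body).hexdigest()
        if actual != expected:
            print(f"MISMATCH: {key} hashed to {actual}")
```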
C
Well, thanks. This is also indirectly useful, because one of the things I was trying to verify with the HEAD-check cache, and that we hadn't tested at scale yet, is: are we gonna hit some kind of rate limit asking S3 whether an object exists? As best I can tell, there's basically no rate limit of that kind.
A
Okay, so I'm fine, I think I'm okay with that. We can basically do that and dump it somewhere in the GitHub repository, but that's basically not what I'm talking about here, because right now the script audits only the objects in the specific buckets, in all the ten buckets.
A
What I'm asking for is basically an audit of the entire account at a specific point. Basically, if a human accesses the operations bucket, we should get an alert saying: oh, someone accessed that. Because no one should be able to; we just want to sync to it.
B
I'm certain that's possible, but do bear in mind.
B
The threat is also rather low here for the objects in the bucket. In order to get access to that account, you need to be in the k8s.io account or the CNCF root account, which is already quite high privilege, so that would be the first thing. But as Ben just noted shortly before, the objects are, what did you say, I'm forgetting the word, it's early in the morning here, content-addressed, thank you. And so there's that, and if someone would just write any file anywhere, which would not be good, it wouldn't affect anything.
C
Don't try to serve from the messed-up bucket, yep, and then figure out how someone managed to write bad data that's making image downloads fail, or delete files, which shouldn't cause failures, since we do the HEAD check. But if you did something like overwrite the files with empty blobs or blobs full of garbage, the client downloading the image should notice that the hash doesn't match, because it requested by hash. Whereas if you wrote another file next to it...
C
I mean, I guess maybe we're hosting content then, but no one should be using the bucket directly, so it shouldn't be a big deal; we don't really care about the reputation of it or something.
A
Wait, wait, wait. I don't want to go into detail about this issue. I think you'll find what I mentioned in this issue.
B
To have notifications for any changes to anything in the account.
B
When an object is deleted, yep, I believe that's possible.
C
I don't actually think we should focus on doing this. If we have a sync tool that's keeping things as they need to be, then the only concern here is a change to the permissions of the account. A deletion itself is something we can handle; the real concern is: why was someone allowed to do that, what changed with the permissions on the account? So if we want to monitor the permissions on the account, that sounds cool.
C
Do that, but if we're just trying to keep track of whether something was deleted, I mean, we're already, hopefully, implementing something that maintains that the state is correct. I don't think we should build a separate process for keeping track of the state.
D
I have one question: if you're worried about objects being deleted from the bucket, why not just use S3 Object Lock?
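For reference, a minimal sketch of what configuring S3 Object Lock could look like; the bucket name and retention period are assumptions, and note that Object Lock generally has to be enabled when the bucket is created:

```python
# Hypothetical sketch: default Object Lock retention so objects can't be
# deleted for a fixed period. Bucket name and retention are illustrative.
import boto3

s3 = boto3.client("s3")
s3.put_object_lock_configuration(
    Bucket="example-artifacts-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)
```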
C
I know, but if I've escalated permissions somehow, such that I've been able to grant myself delete where we don't currently have delete, why can't I also grant myself the ability to delete the monitoring of the delete, or something? I think the thing we're worried about here is permissions changing; deletions just shouldn't be happening, because we shouldn't be handing out permission to delete.
A
Okay, I see what you're saying, right, because ultimately, if we have a script that synchronizes the bucket every hour, we may not even see that.
B
Yeah, what I hear you saying is that we're not concerned about the auditing that Caleb's got in place so far, as far as what's there, because we're syncing it so often, even if a delete happened to occur. But what we are more interested in is upgrading this particular script to audit the IAM roles and their permissions. And if we do that, then we'll feel a lot more confident, and we'll capture that in a ticket. I don't have a link to this ticket; the font's real small.
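For illustration, a minimal sketch of that IAM-audit upgrade: snapshot the roles and their attached policies so a job can diff against a committed baseline and alert on permission changes. The output format is an assumption:

```python
# Hypothetical sketch: snapshot IAM roles and attached policies as JSON,
# suitable for committing to a repo and diffing on each run.
import json
import boto3

iam = boto3.client("iam")
snapshot = {}
for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        name = role["RoleName"]
        attached = iam.list_attached_role_policies(RoleName=name)["AttachedPolicies"]
        snapshot[name] = sorted(p["PolicyArn"] for p in attached)

print(json.dumps(snapshot, indent=2))  # commit this and alert when it changes
```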
C
From back in the day. While we're at it, we can also consider that whatever we do for this might be something we do for GCP. We have the thing to do the audit, but I believe we manually audit and then diff right now; it would be cool if it could just notify us when changes occur, if we set up some automation to monitor it.
C
I don't think we should have that bar for either right now, but thinking ahead, I feel like that's the thing to alert on: why is IAM changing, why are permissions changing? And then for S3, it's just that we shouldn't be handing out permissions.
C
I also think this is something that we can follow up on a little bit more offline; we're through most of the meeting time.
A
Wait, let's go over the meeting, finish the issue, and come back if we have another subject about the S3 bucket, because it's kind of late for some of us. I want to be sure we can leave this meeting even before we finish; I will talk about that later, because we're already 50 minutes in, so yeah.
D
Yeah, so what I'm saying is, like, the top-line item. So if you go back to the docs, yeah, the GCR migration, that one there, perfect. All right, so we've got a plan; there are all the open PRs available. So the first thing we need to do is get AR deployed, which is the third tab that's open on your screen.
D
So right now we've got, I think, 15 regions that we're going to create AR in and sync the images to. I think that's all right for now.
C
Yeah, I think we should do that, because the thing is, right now I believe Artifact Registry doesn't have the pricing increase coming from GCS, but I think they kind of indicated that it's possible down the line, because fundamentally it's the same storage underneath. So GCR is getting hit immediately, because GCS is changing pricing and GCS is exposed, but Artifact Registry could come down the line, since we are going to be sitting in front doing some regionalizing-type stuff.
C
Anyhow, we could just use the individual regions and avoid the cost overhead of multi-regional. Similarly, we already sync between regions globally ourselves, so it shouldn't be a problem to handle that ourselves and avoid the fees associated.
A
Okay, I already approved the request, and I think SIG Release should be aware that we do that, because during the promotion process we're going to have... yeah. My thinking about this: I think we should not be doing this; we should go with multi-regional, because the cost increase for multi-regional is not that different. We are not that big.
C
Not really; we still have to have a list of things we promote to, and we still have to pick the backend to use in the OCI proxy. Either way, the thing that we had before that was really simple was a single alias, provided by GCR for us, for the whole internet globally.
C
If we want the option of redirecting k8s.gcr.io, we need to take on the regionalizing, and we already have to do something similar for AWS. And we have a very simple approach here: we need to split up the deployment so we can configure it per region, and then we can just set what the Artifact Registry region is.
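For illustration, a minimal sketch of the per-region backend selection being described: map the client's detected region to the nearest Artifact Registry host. The hostnames and region map are illustrative assumptions, not the actual proxy config:

```python
# Hypothetical sketch: pick an Artifact Registry backend per client region.
AR_BACKENDS = {
    "us": "us-docker.pkg.dev",
    "europe": "europe-docker.pkg.dev",
    "asia": "asia-docker.pkg.dev",
}

def pick_backend(client_region: str) -> str:
    """Return the Artifact Registry host closest to the client's region."""
    continent = client_region.split("-")[0]  # e.g. "europe-west4" -> "europe"
    return AR_BACKENDS.get(continent, AR_BACKENDS["us"])  # default fallback

print(pick_backend("europe-west4"))  # europe-docker.pkg.dev
```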
A
Okay, I already gave my approval on this, so feel free to talk to SIG Release about this; you've got the LGTM and we're good to go.
C
Either way, once we get that stuff in place, which kinds of artifact registries we use and where they are is an implementation detail we can tune over time, as long as this seems like a reasonable starting point.
C
Then see how it goes. Besides being able to deploy to them and making sure that the promotion tool works fine with them, the remaining complexity we have is just changing the Cloud Run deployment to have per-region Cloud Run config. Once we have that, which we have to do either way, it doesn't matter how granular we go, and it sounds like we can avoid needing to change things in the future if we just go ahead and go granular.
D
Already spoke to SIG Release, got approval for it; I just need to get to work on this. That looks good to me on that one. If you go back to the issue again... where's...
D
All right, I will. We've already spoken about it; I just need to get the approval on the PR. We had a nice offline chat with Adolfo and Jeremy about it.
A
Okay, I don't see the approval from there, yeah.
A
This week should be fine, because that change, I mean, I just want to be transparent with SIG Release about this. Yes, because we want to affirm that we are not doing only three regions, we're going to 15. I think we should be transparent about this and not throw anyone under the bus when things change. So, okay.
A
Why? Okay, let me... we need a promo... okay. Sorry, I use two keyboards, so I'm always lost.
D
It's also possible to use the generic auto-bumper for terraform; I just rejigged the code to read the image information from a YAML file.
D
That's it, really, that's it for this week. I'm going to go and work that out with SIG Release and get all the approvals on there. Thank you.
A
Recordings, yeah: this has been a conversation for two years with SIG ContribEx. Currently the tooling used for auto-publishing stuff is broken, so we can talk about this in detail. The only thing I know is that this has been an issue for two years; people have tried to do stuff, and there's no existing tooling to do that at the moment. So you'd need to build something from scratch, capable of talking to the Zoom API and the YouTube API at the same time. People have tried to do stuff; I've seen nothing about it currently.
B
Brianna, are you concerned about the community-scaling part, or this particular meeting? This particular meeting. So my question here is specifically about this meeting, because we like to go back to this meeting's recordings. So if we can have a plan to get this meeting's recordings up earlier, that would be great. Unfortunately...
A
Unfortunately, we tried tools and none of them worked.
C
Earlier this year, I think Paris reached out and helped them get access to the SIG Testing Zoom, and then they did something, and our meetings are showing up reliably now.
A
Yeah, I know, because I know something's happening between marketing and ContribEx. I don't know what tooling they're running to make that happen; I just know it's not fully automated. I just know that at some point someone triggers an action and the meeting gets uploaded to YouTube. So I need to talk to marketing.
B
Great, and if it's possible to grant Brianna or someone access, or if we need to have a different approach, you know, put some people in place who can do it.
C
We could also ask Paris; Paris is the one who worked with me on this.
C
I can maybe follow up with folks offline. I have a couple of points just to confirm whether or not the S3 work has anything else outstanding; I left a note in the doc.
A
I'll see what you posted there. Thanks, okay, see you folks.