From YouTube: Kubernetes SIG K8s Infra - 20220622
A
Hi everybody, today is Wednesday, June 22nd, and this is the meeting of SIG K8s Infra. We have a light crew here today, but we will still have some stuff to talk about. Okay, let's start as usual with the k8s-infra billing; I'm going to share my screen.
A
Okay, you can all see my screen, right? Okay: 287k.
D
It's trying to load; it's very slow. I'm trying to look on my side as well, to see if there's anything significant. I haven't had time to look at it before, but yes, it is steadily growing by the month.
A
Okay, I'll just leave it running; we can move to the next topic and come back to look at this. Okay, anybody new here? I don't see anybody new here; we all know each other. Okay: billing review, action items. I don't see any specific action items from the last call. Okay, let's go to Rihanna's topic: matching pulls to releases.
D
What I was thinking is: if we have a way to match the pulls to specific releases, we can do an analysis and see the volume of traffic per release, see what are the oldest releases that people are still pulling and what are the most popular releases. I'm not entirely sure, but I vaguely remember that Justin Santa Barbara had a way, or was working on a way, to match that. So I don't know if anybody in this group can give us some advice.
D
It's going to be a bit of a heavy lift to figure out how to get that done.
A
So my first question is: what shows up in the artifact logs? Are we talking about the new thing that we are deploying at registry.k8s.io, or are we talking about pulling from GCR?
D
Which is a valid point, of course. At the moment we're only pulling GCR, so we would probably have to pull the logs for the new registry as well. That actually went past me; I didn't think about that. So we need to get the logs for the new registry as well and start pulling those too.
F
Yeah, so the GCS behind GCR has auditing, and we can use the audit logs from GCS to see where the traffic is coming from. The thing I'd point out, though, is that I'm not actually sure it helps: let's say we find out how much we get per release and we build some kind of model; even then, that doesn't necessarily tell us how much we're going to save or how soon we need to do it. I think it's more like:
F
If
we're
not
saving
enough
money
with
whatever
else
we've
accomplished,
then
we
need
to
consider
doing
the
switch
and
it
doesn't
really
like.
We
might
be
better
to
spend
our
energy
on
getting
things
rolled
out
like
at
this
time.
We're
not
saving
money,
because
we
don't
have
the
buckets
set
up
yeah.
So
we
we're
saving
zero
we're
spending
more
money.
F
D
Okay, if there's no appetite for that at the moment, and we'd rather focus our efforts on getting the buckets sorted and getting the redirection sorted, we can revisit it later.

F
So I apologize, I wasn't sure if I was going to be able to make it today, so I didn't put anything on the agenda, but I have some topics if we have time.
A
Yeah, I don't think we have anything else, so we have time. Also, Jay is not here; I was going to get an update from him. I think he's out this week.
F
Got it. So we've had a bit of a running thread with the Artifact Registry team, talking about follow-ups from when we did some initial exploration with them. One of the things that we've discussed, that I wanted to discuss with this group, is that if we get to the point where they're asking us to move to Artifact Registry and just point k8s.gcr.io at the OCI proxy, without a regionalizing domain for us, there's a pretty straightforward approach we can take, which is:
F
We know we are running in this region; we let the Google Cloud load balancer pick the region, and then, based on the Cloud Run region we're in, we pick which regional backend to hand off to. So we can have Artifact Registries in individual regions and then just say: I am the Cloud Run instance in region foo, so I should point you to the registry in region bar, and always that one. Each regional Cloud Run instance can just be configured to point at one of the registries.
F
There's push from them that, instead of building new things on GCR, we should be moving to Artifact Registry. They are not doing new features in GCR. So they can maybe help us with a one-off to take that existing special domain and do something special with it, but they don't want to make more special domains if they can avoid it.
F
So basically I'm just floating it: how does that sound? If that's fine, then we can get back to them and say, okay, we don't mind losing the regionalizing-domain aspect; then we're just asking them: is it possible to point the existing domain at the proxy instead of directly at GCR? Yeah, I agree with that. Yes, okay.
F
So then I'll continue that conversation. They're still looking into it. I've been kind of buried; I think there's another update on that, but someone has been looking into what possible ways they could reroute that traffic. That was one of the sticking points. Separately:
F
We should be talking to SIG Release, because while we're moving these things around, that would be a good time for us to introduce Artifact Registry alongside GCR and start shifting the load there. Because we have a different cost issue, where the multi-regional GCS buckets are going to cost us a lot more in October, I believe.
A
Like what we are doing with AWS, right? Instead of GCS buckets fronted by GCR, we'll have Artifact Registry.
F
We can use the OCI proxy as the cutover mechanism to get traffic routed to Artifact Registry instead of GCR. So we can leave k8s.gcr.io in place until we're ready for them to point that traffic at the OCI proxy; until that point in time, everything stays the same. We just stand up some new stuff alongside it, and then I think there are just fewer hops in moving the traffic around.
F
If we can do that, that's another potential cost cut: whatever traffic does wind up on Artifact Registry should avoid the cost spike. We should be able to do regional Artifact Registries and route ourselves, instead of what we have right now, where we're going to be paying for multi-regional GCS, which is about to get more expensive.
F
The price change is a public thing that Arnaud has been talking about for a bit. There was an announcement a while back; I believe it hits in October.
F
I can find the link, but I'd have to read again exactly what the price change is.
A
So what does that mean for us and for the release team? I think they need to update the image promoter to promote to one more set of registries, you know, use a different API to publish the artifacts.

B
The storage classes: nearline, coldline, standard. I think it's standard that has the big implication on the storage pricing.
F
Yeah, and we didn't dig into this too much recently, because we've known about and been discussing this issue for a while. It's just that when I was talking to the Artifact Registry team again, they were pointing out that the move to Artifact Registry is also potentially a way to mitigate that.
B
A lot of the cost disappears when you do that. Even today, multi-regional Artifact Registry is the same price as regional, but I wouldn't rely on that, because Google might turn it around if it hasn't already.
A
Can I ask you something, Ben? Do they have some way of doing a bulk copy from what we have to Artifact Registry?
A
I'm worried about that because, you know, the promoter runs on an everyday basis, right, but the first-time bulk copy is what I'm worried about right now. And if we are able to run that bulk copy on a daily basis, you know, where it just promotes the diffs, then we can avoid doing the image promoter change immediately, right? If we're able to do one bulk copy of the entire data.
A
And then copy the differences every day or every week; there should be a way to do that.
F
Keeping GCR as, like, a front to Artifact Registry: I'm not sure that's what we want to be doing. I mean, we already had to do the same thing when we went from gcr.io/google-containers to this; we just start promoting to the new place for a bit.
A
Okay, fine, so then let's assume that, you know, there is something that we can use for the bulk copy.
A
Okay. Then the question is: what is the set of things that we need to stand up for Artifact Registry? Like, for every region, do we need to stand something up?
F
Probably what we want to do is something very similar to what we have today, but with Artifact Registry. Right now we have EU, US and Asia. We probably want to at least stand up some regional, non-multi-regional Artifact Registries that map to what those multi-regional instances covered.
F
So we probably want some US, EU, Asia regional Artifact Registries, and then maybe a manual backfill, but I think mainly we want to get them configured in the image promoter tool. So I think this is mostly organizational, slash picking some regions, slash talking to SIG Release. Once that's in place, I think we can pursue having the OCI proxy be configured with a region, really per instance.
F
We have some existing action items. The production instance right now, I realized, is not fully configured in Terraform or anywhere else; some of it is, but I think Arnaud ran a command or two, so I have a tracking issue to get that source-controlled.
F
We have automatic deployment to staging, but that's done slightly differently, so we should probably fix that before we're prepared to roll anything out. The other thing that would be super helpful, that probably just about anyone could look into, is that we don't have any real alerting around registry.k8s.io, even if we can't necessarily, you know, staff an SRE rotation or something like that.
F
We probably should get some more direct alerting on "it's not responding" or something, as opposed to us just noticing when Kubernetes CI breaks. I'm not sure what the best approach for that is; I don't think we have a lot of this in the Kubernetes project today. Prow has had some prober stuff in the past.
E
So do you think, Ben, that both of the tasks, to automate standing up Artifact Registry in whatever way we architect it, and the alerting, could be planned out together, at least? Oh sorry, I mean the...
F
I'm talking about the OCI proxy: making sure that the configuration that we deploy is fully source-controlled, so that anyone can PR it. Right now I think some aspects, like which image is running, are just whatever Arnaud ran in an interactive deploy command, as best I can tell.
F
We have, like, GitOps deployment to staging, but that one's a little bit different and hacky, because we also don't want the staging instance deployed to all the regions; we don't need all the regions to test, and it's just cost.
F
So we should sort that out before we're going to be able to move any of this stuff to production. The tag we have right now should be fine; we shouldn't need to change it until we're ready to add new functionality. But when we're ready to add new functionality, we're going to be blocked on it. We really don't want to break production, and right now you need one of a few people to manually deploy things; we should get that source-controlled and...
F
...automated. So I think that's a blocker before we can actually roll any of this out. And I think that, in parallel, we should also be looking at what we want for some kind of prober, alerting, something like that. I'm not super worried about Cloud Run managing to go down, or us totally breaking the app, because we have pretty good testing, but it feels kind of wrong that we don't have anything monitoring it directly.
F
Also: how do we get Artifact Registry going? Who do we need to get on board with this, and what regions should we have? Probably we should start with regions that closely match where we have GCR today. So that's another thing that I think pretty much anybody could look into and sketch up: what do you think we should do there, what regions should we use?
F
I guess I think we want to try to offer the same level of quality of service. So if we have, like, a west and an east copy in the US, or something like that, if we can identify that and match it closely, that's probably a good starting point. In the future we might want more regions or something.
A
So, Rihanna, I think this might be a good list of things for the team: going through the process of making sure that everything is deployed from source control, GitOps, then adding the Artifact Registry instances, updating the Terraform to add the Artifact Registry instances, then figuring out how we do the bulk copy, and then making the changes in the image promoter to support the Artifact Registries.
A
Okay. Step one, fresh off the, you know, fresh news from Ben after talking to the team: that we keep doing the redirect that we currently have.
F
We have issues for getting the OCI proxy source-controlled and for monitoring. We do not have an issue yet for this; there's an older existing issue talking about migrating to Artifact Registry, but we don't have one where we're looking at doing this approach.
A
That's the set of things that we need to line up before we go talk to the redirector team again. So let's get going, for sure.
F
In the sandbox today, by using the GCRs that actually back k8s.gcr.io, instead of redirecting k8s.gcr.io we could actually go ahead and start attempting to regionalize ourselves to the three GCRs.
E
So I'm just going to put very high-level notes in the meeting notes on all of that, because I think there's a lot in there, Ben, and I'll probably miss the detail.
A
So, Rob, my gut reaction here is: nothing is stopping us from creating the Artifact Registry instances and populating them, yeah.
A
A little bit later, you know, but let's get the basic stuff going: make sure the Terraform stuff is set up right, so that we have enough in there to create the Artifact Registries, and then we'll figure out the tools for the bulk copy and the image promoter deltas.
A
So when we have that ready, both for Artifact Registry and for the S3 buckets, that is the time when we'll have to figure out: okay, OCI proxy, what does it need to do, and...
F
So I'm going to go ahead and take on looking at the per-region redirect thing, just because I want to make sure that we have a path forward on that; not even necessarily fully implementing it, just writing up an issue and double-checking the tech, so we don't stumble over that later. But for these other things, at minimum: Rob, if you want to, poke me with any questions later, yeah.
E
Yeah, let's do that, and I think between Ben and myself, with help from Caleb and Rihanna, we should figure something out. Let's do that. It's not captured well in the notes, but it's kind of in here. But I think maybe then...
F
We have one smaller topic related to that; I just want to go ahead and get everyone on the same page. Muhammad filed an issue with cloud support about the audit logs, the ones that we've been using for GCS, versus Artifact Registry, and they got back to us.
F
Yes, there are audit logs, but they don't have what we want in them. I have mentioned this in the thread where we've been talking about what we can do with the Artifact Registry team, but I don't expect...
F
...a big feature; I'm not expecting that. So we may need to consider, for Kubernetes, if we need logs like that in the future: are we comfortable capturing those in the OCI proxy in some other format, and how do we want to do that? For the moment we're mostly just avoiding logging details in production, because we also have to sort out the privacy policy for that kind of thing. But it's something to have on the radar.
B
So what you can do is set it up to write a JSON log line with all the details that you want, sink it to BigQuery, and call it a day.
F
Awesome, well, it sounds like we have a great answer for that, then, when we get to needing it. I don't think this is a blocker right now, but I just want everyone to be aware that that's also something we're going to wind up with.
A
And the privacy policy: I remember digging into it, and Chris Aniszczyk told us to just use the LF privacy policy; you know, he said to just redirect to that.
F
Okay, well, we still have some time. Anyone else have any topics or questions?
B
Yeah, I have one. Do you remember the issue that I'm working on, testing images in presubmit? You said you weren't too comfortable running presubmit jobs on the trusted cluster, so I was wondering if you had a project lying around somewhere for builds only.
E
I only caught the tail end of that, but I used ko for the first time today to push an image. It was a Golang app, and ko just creates a container image from it, and lickety-split it was up in a repo. That was super easy to use.
F
The images only get built after merge, and the most straightforward way to change that is to just copy the postsubmit job over to presubmit. The problem with that is where we run and push those images: that's in the trusted cluster, and it pushes to the actual place that we pull images from.
A
Sorry, I lost connection.
F
So we've got some cloud builds that build images, but they only run in postsubmit right now, so we don't know if the images build reliably before merge. Muhammad's been trying to add presubmits to do this, but right now I think all of the image building happens in the trusted cluster, and I've been saying we should not put presubmits on the trusted cluster.
F
Yeah, but I don't have a super great answer for what's the path forward to run it outside of the trusted cluster and make sure that the builds work. Can we create a staging project and put the credentials into the untrusted cluster?
F
I think that's what we have to do here. I think we need a special staging project that's just used for presubmit. So, what is the objective of this presubmit job?
B
It does the Cloud Build image builds. The images in test-infra get built after the PR gets merged, so if you've got a bad image, you've got to rework it in another PR.
A
Can it run without actually uploading the image?
E
And Muhammad, what's the job name, or what's the job for?
F
It's for testing the image builds in test-infra, which Muhammad has already been working on. Oh, yes.
F
There's a cute trick where you take yq and just drop the push step, so you could run the builds without actually pushing. But we'd still need Cloud Build, to make sure the cloud build itself works. Okay.
B
Yeah, I've been poking around the k8s.io repo, so I think I know how to create one of those; I'm going to raise a PR for it. Okay, so I'm going to take a look.
B
Yeah, I think I've worked it out; it's workload identity, so it should be straightforward. Okay.
F
There are also multiple untrusted presubmit clusters; I'm not sure which one test-infra is using today, so that might be worth checking. We have one that was originally, because we weren't ready to move everything yet, just for Kubernetes release-blocking things and Kubernetes presubmits. I think we've relaxed that at this point, but I'm not sure if test-infra has moved yet; it might be on the old google.com main build cluster.
F
Check the cluster field in the prow jobs. There's also some way to manage test-infra secrets for, like, prow itself, but that's a little bit different.
F
Yeah, I don't think I have access to that anymore either. But the clusters have names in the prow config, and all the ones that aren't google.com say something like k8s-infra in the name. Okay.
A
Going once, going twice. The only other thing that I need to bring up to this group is that Arnaud and I are with the same employer right now. So, do we need to do anything about it? You know, how do we want to deal with that?
A
Okay, I just thought I'd bring it up explicitly, for sure. Cool. Okay, so Rob, when you finish filing all the issues, just ping us in the chat, and we'll go take a look and make sure that things are covered.
E
Yeah, I think I'll reach out to Ben, and to Rihanna and Caleb, and put together a plan. So what I'm kind of taking on is looking at planning out that deployment, basically, and how we capture that, how we codify it, etc. It sounds...
A
Let's do what we can in the public channel.