From YouTube: Infrastructure sync for Code Suggestions accelerated GA
A: I have the first couple of items. I added this kind of at the last minute, but I just wanted to take a quick inventory of what's in this project, to kind of see what parts of it are for code suggestions. I see that there's a bunch of buckets and GKE clusters, we've set up some VPCs, and there are a bunch of instances. Andres, is this something you can answer for us? Yeah.
B:

A: Makes sense, okay. VPC: I don't think there's anything to discuss here. I assume you're just using the AI assist network. I saw that you have it peered with another; there's some VPC peering between, I think, the assist network and something else, maybe the CI network or something, but that's fine. And then for instances...
A:

B: No, I have not seen this before, I'm not quite sure. I believe Alexander can answer this, maybe Stan as well. That might have been... So the VMs, I think these are for running models. Maybe this was used during the initial development for testing various models.
D:

B: What I can say for sure is that RR and S are for the suggested reviewer and the reviewer recommender; that's what it was called back in the day.
A:

B:

D: Are those machines currently running even when they're not taking jobs? I just wondered about the cost. I know those are some hefty machines, cost-wise.
B: I'm not sure, but just from a broad stance, I think cost-wise we're looking okay. We have a certain budget, and I think we're well within the projections at the moment.
B:

A: So I was just doing this kind of as an exercise, to say okay, if I was going to move code suggestions into a separate project and set it up the way we set things up, there are just a few things I would change, like the public IPs for the nodes and control plane access. So I think we covered all the questions. Maybe one more: was us-central1 picked for any particular reason?
C:

A: And why are the codegen training instances in asia-southeast1? Is this because of capacity issues, or are you also not sure?
B: I'm not sure, but it could be because our main model training person is in Sydney, in Australia, so it might be for closeness to him.
A: Okay, cool. So yeah, I'm not sure... We have a pretty compressed timeline for this, so I'm kind of seeing what's involved.
A: Maybe in the short term we just get monitoring and logging set up, and then we tackle that later. And I'm thinking that's probably the right approach, after digging into this a bit.
C:

A: That sounds good to me. I think, yeah, we could create two new projects and start getting them ready, and in the meantime you can just get monitoring and logging working for the current deployment. Yeah, that sounds okay to me.
A: That's about it, I see, but nothing for infrastructure. Yeah, so I think what we'll probably want to do is just create new projects, Terraform them, and start using Terraform to build out the infra, and eventually abandon this project. But I don't know; I think in a few weeks' time that's probably not going to happen. It's gotten pretty big, I mean, not that big, but...
A: So I guess then the question is, what do we need to do to get logging and monitoring working? You've already prepped a couple of MRs for the logging stuff. So what I think we'll do is use the existing Pub/Sub, I mean, we'll configure the Pub/Sub topics in the existing staging and production accounts.
A: And what we need is just a service account, and I guess we can use whatever service account you're using for these nodes. We can just give that service account access to the Pub/Sub topics, and then you'll need to configure your Fluentd to, you know, publish or put your log messages on those topics. Does that make sense? Yeah.
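The access grant described here could be sketched as follows; the project, topic, and service-account names are placeholders for illustration, not the real ones from this project:

```shell
# Hypothetical names; substitute the real project, topic, and node service account.
PROJECT="ai-assist-project"
TOPIC="ai-assist-logs"
NODE_SA="default-node-sa@${PROJECT}.iam.gserviceaccount.com"

# Write the IAM grant out as a reviewable script rather than running it directly;
# apply it with `sh grant-pubsub-publisher.sh` once the names are corrected.
cat > grant-pubsub-publisher.sh <<EOF
gcloud pubsub topics add-iam-policy-binding ${TOPIC} \\
  --project ${PROJECT} \\
  --member serviceAccount:${NODE_SA} \\
  --role roles/pubsub.publisher
EOF

cat grant-pubsub-publisher.sh
```

`roles/pubsub.publisher` is the standard role for publishing to a topic, which is all the node's service account should need here.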
B:

A: The service account... it'll be the service account for the node that's generating the log file.
A: Yeah, it's probably the default one, so I guess we can look at the VMs.
A: So as far as how... yeah, I think I can drive most of this. So...
A: So you'll have to configure Fluentd on your Kubernetes cluster. Can you do this part?
D: We could also, depending on how different the configs are between your existing Fluentd setup and Kubernetes, generate manifests from Tanka, where our existing deployments are, because that will roughly have the configs for Pub/Sub already built in.
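As a rough sketch, the Fluentd output section the nodes would need might look like the following, assuming the community `fluent-plugin-gcloud-pubsub-custom` output plugin; the tag pattern, project, and topic names are placeholders:

```shell
# Sketch of a Fluentd output config for Pub/Sub; names are placeholders, and the
# `gcloud_pubsub` output type comes from fluent-plugin-gcloud-pubsub-custom.
cat > fluentd-pubsub.conf <<'EOF'
# Route application logs to the existing Pub/Sub topic.
<match app.**>
  @type gcloud_pubsub
  project ai-assist-project
  topic ai-assist-logs
  # No key file: rely on the node service account's credentials.
  <buffer>
    flush_interval 10s
  </buffer>
</match>
EOF

cat fluentd-pubsub.conf
```

With no explicit key configured, the plugin falls back to the instance's application default credentials, which is why granting the node service account publisher access is sufficient.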
A
Get
there
yeah,
I
I
think
I
would
rather
do
that.
I
I,
don't
want
to
bring
this
project
into
our
pipelines
until
we
have
it
under
like
Source
control
like
the
whole
thing
so
yeah
we
can.
We
can
do
that
if
necessary,.
A: So have we set up the remote write endpoint for Thanos?
D: Yep, the remote write endpoint is actually built into Thanos; we're setting up the Ingress at the moment. Okay, so we're going to do an interim solution, which is basically just using basic auth. Yeah, but that'll get us where we need to be in the time frame we have. So...
D: Currently we're on track to have that sort of done by the end of this week, and then from the remote environment's perspective it's just a matter of configuring the remote write block in Prometheus, which means we no longer need all of the VPC peering or, you know, the Thanos buckets set up and all of those dependencies. Yeah.
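The remote write block being described could look roughly like this; the endpoint URL and credentials are placeholders (Thanos Receive exposes a `/api/v1/receive` endpoint, and Prometheus `remote_write` supports `basic_auth`):

```shell
# Sketch of the Prometheus remote_write config discussed above; the URL and
# credentials are placeholders, not the real Thanos ingress.
cat > prometheus-remote-write.yml <<'EOF'
remote_write:
  - url: https://thanos-receive.example.net/api/v1/receive
    basic_auth:
      username: ai-assist
      password_file: /etc/prometheus/remote-write-password
EOF

cat prometheus-remote-write.yml
```

Using `password_file` rather than an inline `password` keeps the basic-auth secret out of the config file itself.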
D:

A: That's good. Andres, what kind of metrics are we going to expect once we have the remote write endpoint set up and you're able to send us metrics? What do we...
B: ...have so far? Just basic stuff: HTTP request logs, gRPC request counts, response codes. We want to set up GPU usage as well; I don't think it's done yet. Nothing really exotic at this point.
C:

A: Once we have metrics, we're probably going to have to have some kind of label taxonomy for how we're going to identify this stuff. The first one is probably: we need an environment label; that's like the most basic one.
A
So
like
end
stage
and
type
I,
think
yeah.
Is
there
anything
else,
I'm
missing.
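One way to express that taxonomy is as Prometheus `external_labels`, which are attached to every series the instance sends upstream; the values below are placeholders reflecting the names floated in this discussion, not a settled convention:

```shell
# Sketch of the label taxonomy as Prometheus external_labels; the values are
# placeholders, pending the naming decision discussed here.
cat > prometheus-labels.yml <<'EOF'
global:
  external_labels:
    env: ai-assist
    stage: main
    type: code-suggestions
EOF

cat prometheus-labels.yml
```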
B:

A:

D: I would be inclined to create a separate environment. Okay, in the meantime it's just a matter of: is this likely to be promoted to production at some stage?
A
I
mean
it's
it's
part
of
it's
kind
of
part
of
production,
but
it's
a
separate
thing:
I
I,
I,
don't
know,
I
think
if
we
kind
of
follow
other
services
like
customers,
for
example,
is
something
that
sits
like
it's
his
own
thing.
So
maybe
maybe
we
should
give
it
a
separate
label
and
then
it's
a
matter
of
whether
we
call
this
like
AI,
assist
or
suggested
reviewer
messages,
reviewer
or
code
suggestions
or
what.
B
Just
a
question
regarding
this
because
we're
pushing
to
get
everything
related
to
post
suggestion
to
work,
but
we
also
need
to
do
the
same
thing
for
strategy
reviewer.
What
do
we
need
to
create
a
separate
environment
for
this,
or
is
there
if
there's
an
AI
assist
environment,
we
can
push
everything
there.
D: Yeah, I think it's easy enough to start. We can also relabel anything that goes into the bucket within 60 days, before it gets downsampled. So yeah, if we turn around and say this was a bad idea, we can actually relabel everything.
A: Okay, sounds good. And then of course the other task is to configure Prometheus on the AI assist side to send metrics to the remote write endpoint. Yeah. Andres, do you have a handle on that?
B: Once we have the remote... My understanding was that it's our Prometheus getting scraped, and it's not pushing anything, and...
D:

B:

D: Yeah, there was some confusion early on between Prometheus scraping targets, like exporters, and the concept of Prometheus actually sending metrics to long-term storage. So there are kind of two pieces to Prometheus: there's a pull model for the exporters, but then we push to Thanos for long-term storage.
B:

A: We're pretty much wrapping up at this point, Stan. Just to give you a quick recap: I wanted to take a quick inventory of everything that's in this project, to understand it a bit better. We did that. There were some questions about what things are used for, but I think I have a good handle on what's used for code suggestions. Then I wanted to see whether we can pull the code suggestions stuff out into separate projects.
A
That
sounds
like
it's
going
to
be
a
bit
of
an
undertaking
and
not
going
to
be
done
in
the
short
like
accelerated
time
frame
we
had
so
we
were
thinking
about
doing
that,
maybe
in
parallel,
why
we
also
get
logging
and
monitoring
working
for
the
current
project,
okay
to
get
login
and
monitoring
working.
It
doesn't
seem
like
it's
going
to
be
that
difficult.
We
just
need
to
give
the
service
account.
The
default
service
account
access
to
the
pub
sub
topics
and
production
staging
so
we'll
just
use
the
existing
production.
A: ...and staging accounts. That'll all be, I think, pretty straightforward. I don't know whether we want to do staging in the next...
A
Like
before,
like
there
was
discussion
about
creating
a
staging
environment
for
this
and
I,
don't
know
whether
that's
going
to
happen
in
the
next
three
weeks
or
not,
but
we've
talked
a
little
bit
about
label
taxonomy
and
like
because
we
need
to
define
environment
station
type
label
at
a
minimum,
so
we're
going
to
do
like
environment
AI,
assist
type
will
be
or
code
suggestions
or
whatever
and
stage
means.
That's.
A:

C: Yeah, we don't quite have a staging environment; it's something we can do in parallel now. That's great.
A:

C: We may need to get automation set up right now, because I think a lot of the setup is manual, like run this script and all that. So I don't know if we want to automate it. It sounds like it'd be nice to automate this stuff much more, but I'm not sure if it's a requirement right now.
A
Well,
we
need
to
decide
how
this
ties
into
what
the
scalability
team
is
working
on,
which
is
this
General
framework
for
running
experiments
and
like
maybe
we
wait
for
that
or
in
parallel
we
create
two
separate
projects,
one
for
production
and
one
for
staging,
bring
it
into
the
infrastructure
fold
like
put
it
into
our
terraform
repo
use
our
deployment
pipelines
for
Helm
and
tanka,
and
then
we
can
start
moving
stuff
there,
but
I
I,
don't
know
what
the
you
know.
A
Medium
term
goal
will
be
here,
because
it
depends
on
how
quickly
scalability
team
can
deliver
on
their
their
framework,
stuff
and
I.
Think
they're
just
deciding
what
to
do
so,
I'm,
not
sure
how
that's
coming
out
early
yeah,
okay,
that
sounds
sounds
good.
I
think.
Is
there
anything
else,
guys
that
I
missed
here
that
you
want
to
talk
about.
B:

A: Dropped a link to the repo, which you may already have seen. Oh yeah. So you can at least set it up for local development, though. Without access to the production Vault, I don't think you're gonna get the secrets you need for that. I would say just reach out to Andrew on Slack, and if that doesn't work out, we can go from there. Yeah.