From YouTube: Infrastructure sync for Code Suggestions accelerated GA
A
Okay, welcome to the infrastructure sync for Code Suggestions. I have the first item: I just wanted to touch base on the graceful degradation issue that I opened, and also how many users we think will be using this feature, whether we're going to be able to handle the capacity, and just generally about load testing. I don't know, Andres, whether you have any input here or any thoughts on this. But, you know, we can.
A
I think I saw we have... no, not this doc. This epic.
B
Otherwise, I think in broad terms, it's a good place.
A
Okay, all right, that sounds good. And I assume there are no immediate plans for graceful degradation? Or are there any kind of medium-term plans for this? I guess I'm most worried about this service falling over, although we already did the blog post, and I guess we can see how it's being used now and monitor it closely.
A
Okay, number two is also mine. I just wanted to check in really quickly on infra status. Chance is helping out with the metrics catalog updates; he already has an MR for that. I see that some of the labeling is wrong, so we need to fix it, but I think overall we're in pretty good shape to get this going, probably today. We will need to, well...
A
We need to check what metrics are being measured, and I'm also curious about application metrics and what we're going to have available to come up with some SLOs for the service. Do we have an idea yet? I haven't really followed up with my to-dos yet, so I don't know. I think I asked this already, yeah.
A
That sounds good. So we can just do error rates for an SLO for now; we'll probably just do 5xx status codes to start, something like that. And it would be nice if we could get some latency measurements as well, to measure an apdex score for it. Maybe that's something we can do, yeah.
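The starter SLO described above, an error ratio built from 5xx responses plus an apdex score once latency measurements exist, amounts to two small calculations. A minimal sketch, assuming plain request counts and an illustrative 500 ms threshold rather than the team's actual metrics catalog definitions:

```python
# Illustrative sketch of the two starter SLIs discussed above: an error
# ratio from 5xx status codes, and an apdex score from latencies.
# Threshold values are assumptions for illustration only.

def error_ratio(status_counts: dict[int, int]) -> float:
    """Fraction of requests that returned a 5xx status code."""
    total = sum(status_counts.values())
    errors = sum(n for code, n in status_counts.items() if 500 <= code < 600)
    return errors / total if total else 0.0

def apdex(latencies_ms: list[float], satisfied_ms: float = 500.0) -> float:
    """Standard apdex: requests under the satisfied threshold count fully,
    those up to four times the threshold count half, the rest count zero."""
    if not latencies_ms:
        return 1.0
    satisfied = sum(1 for t in latencies_ms if t <= satisfied_ms)
    tolerating = sum(1 for t in latencies_ms
                     if satisfied_ms < t <= 4 * satisfied_ms)
    return (satisfied + tolerating / 2) / len(latencies_ms)
```

Counting only 5xx codes is deliberately coarse as a starting point; a refined SLI would likely exclude client errors and add per-route latency targets.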
A
Like I said, I'll be taking a look at the draft MR for that today. And since I saw McKelly's here, I was curious what the process for deployments will be.
A
I actually... I don't think this was really directed at you, McKelly, more at everyone else, because I don't think you would know, because this has been a totally isolated project until now. So what will the deployments look like? We're just kind of doing things from your workstations for the time being, right? Yeah.
C
But I do think we should automate the deployments, you know, even if it's an Ansible script that runs the Kubernetes stuff.
A
Yeah, I guess my primary concern here is just keeping an audit trail of when new versions are deployed, so we can correlate them: if we get a page for this, or if we have to investigate a problem, we're going to want to know if there was a change that was deployed and what's in the change, you know, what's in the diff. I think this is maybe not so important right now, but it will be important soon. Like, how often do we expect there to be deploys of this service?
C
Right, but there's the models themselves that we're deploying, there's the model gateway, that's the Python server, and there's VS Code stuff, you know, client-side stuff happening. Those are the main components, I think. Maybe the Web IDE, but that's also GitHub, yeah.
E
They have their cadence on it, and then the Code Suggestions side would depend on the model and the model gateway as well. For the model, currently we're tracking almost a bi-weekly cadence of new versions pushed to production, and then changes on the model gateway land sort of as we go, yeah, so that will be a lot more frequent.
I want to say a lot in the next two to three months.
A
Do we have an issue open about trying to formalize deployments a bit more? Maybe you weren't on the call when I said this, but my main concern here is that if we do have a problem with this service, we're going to want to know: okay, was there something deployed recently? Was there a change made recently? And if that information is not easy to come by, it's going to be tricky for us to diagnose problems.
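The audit-trail concern above, knowing what was deployed shortly before a page, can be sketched as a minimal append-only deploy log that responders query by time window. The record fields, helper names, and URLs here are hypothetical illustrations, not an existing mechanism in this project:

```python
# Minimal sketch of a deploy audit trail: one record appended per
# deployment, plus a lookup for deploys that landed shortly before an
# incident. Field names and helpers are hypothetical.
from datetime import datetime, timedelta, timezone

def record_deploy(log, component, version, diff_url, deployed_at=None):
    """Append one deploy record; the timestamp defaults to now (UTC)."""
    log.append({
        "component": component,
        "version": version,
        "diff_url": diff_url,
        "deployed_at": (deployed_at or datetime.now(timezone.utc)).isoformat(),
    })

def deploys_before(log, incident_at, window=timedelta(hours=1)):
    """Deploy records that landed within `window` before the incident."""
    return [
        rec for rec in log
        if incident_at - window
        <= datetime.fromisoformat(rec["deployed_at"])
        <= incident_at
    ]
```

In practice this could be as simple as CI posting one such record per deploy; the point is only that "was anything deployed in the last hour, and what was the diff" becomes a single query.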
E
Yeah, we have it on the model side, but not all put together like we're talking about right now, because it's also different teams with different components. But we can work together on figuring out a more consolidated way of looking into this, because for the VS Code side, sorry, we basically go to the VS Code team, who helps us, and we track it through that. So yeah, I can take an action on it and help consolidate it. Yeah.
A
Okay, that's all we have on the agenda. Is there anything else anyone would like to talk about before we end the meeting?
E
Not... oh, yes. No, I think we already went async on it. I know Jerry had a question on the data that we're passing through, yeah.
A
I had a question about whether there will be any processing of red data, and I think we agreed the only thing that could potentially be red is the prompts, and that's because, you know, this is customer code, small snippets of customer code, probably. So I don't know if that warrants any concern, but the model itself doesn't use any proprietary code from customers, so I don't think that's so much of an issue.