From YouTube: English Google SEO office-hours from March 5, 2021
Description
This is a recording of the Google SEO office-hours hangout from March 5, 2021. These sessions are open to anything webmaster related like crawling, indexing, mobile sites, internationalization, duplicate content, Sitemaps, Search Console, pagination, multi-lingual/multi-regional sites, etc.
Find out more at https://developers.google.com/search/events/join-office-hours
Feel free to join us - we welcome webmasters of all levels!
A: All right, welcome everyone to today's Google Search Central SEO office hours. My name is John Mueller. I'm a Search Advocate on the Search Relations team here at Google, and part of what we do are these office-hours hangouts, where people can jump in, ask their questions around search, and we can try to find some answers.
B: I would like to ask the first question.
A: Okay, go for it.
B: My name is Rohan, I'm an IT director at Search Engine Journal, and one question I have about Core Web Vitals is: is there a possibility that Google can start revising that score in Search Console quicker? Like, I did updates on the website and found that Core Web Vitals were still shown as broken after like 20 days, because it's updating over long periods. And I have fixed that, but it will take another couple of weeks to have that show as fixed?
B: Is there a possibility that Google can review that and make those updates quicker, like not 28 days, but a couple of days?
A: Yeah, I don't know, I doubt it, because the data that we show in Search Console is based on the Chrome User Experience Report data, which is aggregated over those 28 days. So that's the primary reason for the delay there. It's not that Search Console is slow in processing that data or anything like that; it's just the way that the data is collected and aggregated.
A: It just takes time. So usually what I recommend, when people ask about this kind of thing where it's like "I want to know early when something breaks", is to make sure that you run your own tests on your side, kind of in parallel, for your important pages. There are a bunch of third-party tools that do that for you automatically. You can also use the PageSpeed Insights API directly and pick, I don't know, a small sample of your important pages and just test them every day, and that way you can tell fairly quickly if there are any regressions in any of the setups you made. Obviously, a lab test is not the same as the field data, so there is a bit of a difference there.
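To illustrate that kind of daily check, here is a minimal sketch that queries the PageSpeed Insights API for a handful of pages; the URL list, API key, and the metrics pulled out of the response are placeholders to adapt to your own site:

```python
import requests

# Hypothetical sample of important pages to re-test every day (e.g. from cron).
URLS = [
    "https://example.com/",
    "https://example.com/key-landing-page/",
]
API_KEY = "YOUR_API_KEY"  # optional for low volumes, but recommended
ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

for url in URLS:
    resp = requests.get(
        ENDPOINT,
        params={"url": url, "strategy": "mobile", "key": API_KEY},
        timeout=120,
    )
    resp.raise_for_status()
    audits = resp.json()["lighthouseResult"]["audits"]
    # Lab (Lighthouse) values: a quick regression signal, not the same as field data.
    lcp = audits["largest-contentful-paint"]["displayValue"]
    cls = audits["cumulative-layout-shift"]["displayValue"]
    print(f"{url}: LCP={lcp}, CLS={cls}")
```

Run daily, a sudden jump in these lab numbers is an early hint of a regression, while the field data in Search Console will still lag behind the 28-day aggregation.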
C: A quick follow-up there then, John. Does that mean that if I see that I have really poor Core Web Vitals and manage to fix most of those issues in one day, then the next day, or rather two days later, that would be part of these 28 days? Would it be a rolling update, or is it every 28 days that it happens?
A: So I don't know for sure. I think the Chrome User Experience Report site has that documented; I'm not sure if it's like a batched update that is just done every 28 days, or if it's more of a rolling thing. I know in Search Console we have kind of the over-time data, and it's not that it jumps every 28 days, so my guess is it's probably more rolling.
B: So, just to let everybody know what brought the Core Web Vitals score down, since something was updated in the Core Web Vitals calculations and it now seems better: we were hiding the navigation menu, which is sticky, when you scrolled down, and showing it when you scrolled up, and we used transitions on its position to animate that, to hide it, and it seems that position animations did spoil the Core Web Vitals. So even though it's a sticky menu and it's not causing the layout to shift, it's calculated as a layout shift, because you are animating the position of the element. So we did change that animation from position to opacity; we are just fading it out instead of moving it away from the screen with a position animation.
A: Yeah, I think for things where you feel that the calculations are being done in a way that doesn't make much sense, I would also get in touch with the Chrome team. I think especially Annie Sullivan is working on improving the Cumulative Layout Shift side of things, so just make sure that they see these kinds of examples, and if you run across something where you say, oh, it doesn't make any sense at all, then make sure that they know about it.
A: It's not so much that from the Google Search side we would try to dig in and figure out exactly why this is happening with that score; we kind of need to rely on the score from the Chrome side. And our goal with the Core Web Vitals and page experience in general is to update them over time. So I think we announced that we wanted to update them maybe once a year, and let people know ahead of time what is happening.
A: So I would expect that to be improved over time, but it's also important to get your feedback in and make sure that they're aware of these weird cases.
B: Thank you.
A: Sure. All right.
E: Hi, John.
A: Yes, you raised your hand, perfect, exactly.
E: Great, yeah. So, you know, our website consists of thousands of guides that we made to help people understand visa requirements based on their location, and the way we've designed these guides is that they're meant to be a one-stop destination. But that said, some of the content is overlapping, and there's a risk that Google might interpret these pages as doorway pages, even though that's not our intention. For example, we have a guide for a France visa in Los Angeles and then a France visa in Detroit, and it's personalized to the city, but a lot of the content is also similar. So, you know, what is your advice to make sure it's not interpreted as doorway pages or canonicalized by Google, given that we have so many of these?
A: I don't know, it's hard to say. Offhand, just hearing that you have city-level pages for visa advice feels a lot like you're headed into the direction of doorway pages. So that's something where I would be really cautious about expanding too much in that direction, because obviously you're going to have things like: well, all of the country information is exactly the same, and essentially the difference is the address, and maybe a telephone number and opening hours. And that's something where I suspect our systems, on the one hand, could be getting confused from the content side, and where, probably, if someone from the web spam team were to look at that, they would say this is potentially too much. Especially if you really have, on a city level, country-wide information, it feels like you could easily expand that to tens of thousands of cities and claim that it's all unique content, because it's for every city individually and "I have some kind of basic information about each city as well on those pages." But in the end, it's all about one country and the visas for the other country, right? So that's something where I would try to take a strong look at what you can do to concentrate those pages, so that you don't end up in a situation where you're just taking giant lists of cities and giant lists of countries, crossing them in a database, and generating pages.
E: Got it, and then just a follow-up question; we'll definitely look into it. Across all the pages that we have, Google has indexed only about 100 of them, and the primary reason, when I look at the crawl stats report, is that Google only crawls about 60 pages, and all the other pages are excluded as "discovered but not crawled", which means that they didn't want to overload our website. But we have a full success rate and really low response times. How would you suggest we go about getting the crawl budget up?
A: Yeah, so there are two aspects there with the crawl budget: on the one hand, kind of the technical limitations of the server, and on the other hand, the demand from Google with regards to what we think is important to index. And it sounds like the technical limitations on your server are less of an issue; if you're saying it's a fast server and it responds quickly, then primarily what you're probably seeing is that Google is running into a limit on the demand side. But it's not something that you can force, and even if you could force it for, like, a temporary situation where Google goes off and indexes tens of thousands of pages from your site, if it then still determines, well, actually this site is not as useful as I thought it was, then it will just drop those pages from the index over time. So, assuming this is still the visa site, that's something where I would strongly recommend starting off with a much smaller set of pages.
A: A much smaller set of content, so that it's something that Google can index, and Google can learn over time that actually this is really good content, and based on that it can expand and say, well, actually, okay, instead of just 100 pages I'll go and crawl a thousand pages, or go and crawl 10,000 pages and try to get those indexed as well. But kind of, especially when you're starting off with a site that has a lot of content, I would start small.
F: Yeah, hi John.
A: Hi.
F: So this is Ramandara. I have a question regarding sitemaps, which is similar to the previous discussion. So we have a site, and we have older sitemaps ranging from, say, 2009 till 2015, and we're facing some issues: some of the sitemaps do have certain issues, like the image URLs have migrated, and because of that we're getting errors in the older sitemaps. So I just wanted to check with you.
F: How important are the older sitemaps, since we feel that content is already archived in that case? And secondly, does it help the crawl budget to remove the older sitemaps?
A: Yeah, good question. So with regards to sitemaps, we use those to better understand which pages we need to crawl and update, and if you know that you have sitemap files that point to pages that don't exist anymore, or that you don't care about anymore, then essentially I would try to remove those pages. So if this is a sitemap file that was useful, I don't know, a couple of years ago, and your website has migrated in the meantime, you have a different URL structure, then telling us about those old URLs doesn't give us any useful information. So I would consider regenerating the sitemap files and really making sure that they contain the current set of URLs from your website that you care about. And that's less a matter of crawl budget, I think, but more just kind of general website hygiene, in that you're making sure that Google understands what you care about and is able to process that as easily as possible.
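As a small illustration of that regeneration step, a sketch like the following (the URL list and output file name are hypothetical) writes a sitemap containing only the current URLs you care about:

```python
from datetime import date
from xml.sax.saxutils import escape

# Hypothetical: the current URLs you actually care about, e.g. pulled from your
# CMS or database instead of an old export that still lists migrated pages.
current_urls = [
    "https://example.com/",
    "https://example.com/guides/visa-requirements/",
]

entries = "\n".join(
    f"  <url><loc>{escape(url)}</loc><lastmod>{date.today().isoformat()}</lastmod></url>"
    for url in current_urls
)

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>\n"
)

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(sitemap)
```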
E: I think you already answered my question.
G: Okay, so recently we implemented structured data for voice search, like speakable structured data, and we would like to understand what the impact will be. And is there any way that we can somehow understand what percentage of the traffic comes from voice search, or is there any trick that we can use to extract this information from Search Console?
A: I don't think you can see anything with regards to voice search there at the moment. So that's something people have asked about on and off, not for a long time now, but every now and then, but I don't think there's anything in Search Console that gives information about how the speakable markup is used or when that's shown. So it's almost something that you'd probably have to try out yourself and see: is it doing what you think it's doing, or is it not that useful?
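For context on the markup the question refers to: speakable is ordinary structured data on the page that points at the sections meant to be read aloud. A minimal sketch of the JSON-LD, with hypothetical selectors and URL, could look like this:

```python
import json

# Hypothetical selectors: the headline and summary elements you want eligible
# for being read aloud.
speakable_jsonld = {
    "@context": "https://schema.org",
    "@type": "WebPage",
    "name": "Example article",
    "url": "https://example.com/article",
    "speakable": {
        "@type": "SpeakableSpecification",
        "cssSelector": ["#headline", "#summary"],
    },
}

print('<script type="application/ld+json">')
print(json.dumps(speakable_jsonld, indent=2))
print("</script>")
```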
G: Okay, thank you. And I have another question.
A: Sure.
G: It's about how we can force Google to refresh resources, I mean the cached resources. Like, for example, if we change something in our JavaScript file, or, I don't know, CSS or something like this, how can we force Google to refresh this file?
A: It's a bit tricky there, because the caching that we do for rendering is very different from the caching that a browser usually does for users. So that's helpful, but it's not the perfect solution. The other thing that I've seen people do is to update the URLs when they make significant changes within the content. So I have seen some that use something like a content hash in the URL for the JavaScript files and for the CSS files, and in general, when we try to render an HTML page and we see a link to a new JavaScript file, which would be based on the content hash, for example, then we would know: oh, this is a new JavaScript file, we need to take a look at that. Whereas if you keep the same file names for JavaScript and CSS, then we might be tempted to just reuse our cache.
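Build tools usually handle this fingerprinting automatically, but as a minimal sketch of the idea (the file name and contents here are made up), the hashed name can be derived from the file's content so that any change produces a new URL:

```python
import hashlib
from pathlib import Path

def fingerprinted_name(name: str, content: bytes) -> str:
    """Return a file name that changes whenever the file's content changes."""
    digest = hashlib.sha256(content).hexdigest()[:8]  # short content hash
    p = Path(name)
    return f"{p.stem}.{digest}{p.suffix}"

# Hypothetical example: once the script's content changes, it gets a new URL,
# so the renderer fetches the new file instead of reusing a cached copy.
old = fingerprinted_name("app.js", b"console.log('v1');")
new = fingerprinted_name("app.js", b"console.log('v2');")
print(old, new)  # two different names, e.g. app.9bb17ae7.js app.3c2f51d0.js
```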
A: Sure. Esther, I think you're next.
I: Hi, we're currently conducting a large site migration after our company was acquired by another company, so we're basically lifting and shifting some pages over to our website amongst our existing content, and changing the domain. Are there any top five tips on what I should look out for whilst I'm doing this?
A: The other thing I would watch out for is all of the internal linking, so that you really make sure that all of the internal signals that you have are forwarded to whatever new URLs. Because what sometimes happens, or what I've sometimes seen with these kinds of restructurings, is that you redirect the URLs, you move them over, but you forget to set the rel canonical, you forget to set the links in the navigation or in the footer somewhere, and all of those other signals there. They wouldn't necessarily break the navigation, but they make it a lot harder for us to pick the new URLs as canonicals. So that's kind of the effect that you would see there: it's not so much that it would stop ranking, but it's more that, oh, we just keep the old URLs for much longer than we actually need to.
J: Hi John, thank you. So we're rethinking how we display our publication dates on site, on the front end, and I wanted to know what the official stance is on whether we need to have both the publish date and the update date, or last modified date, to comply with structured data. I know that most structured data requires that you have some physical element that's marked up, so I kind of want to see what you think is best there.
A: We have a help center page on dates; I would double-check that. I don't think it's necessary to specify any date in structured data. What matters more is what is on the page itself. So, as much as possible, like a lot of SEO signals, I think it's important to make sure that all of these things align: that if you're giving us a date in one place, it's actually the date that's visible on the page, that the time zone you specify there is correct, all of these kinds of small details as well, just so that it matches the other information that we find.
J: Okay, yeah. The reason we're rethinking it is because we're seeing that sometimes we aren't seeing the proper modification date show up in search, so we want to be able to have that work properly. So thank you. Yeah.
A: Yeah, our systems kind of work in a way that we pick up all of the dates that we can find for this piece of content, and then we try to figure out which ones match up the best, and based on that we try to pick the right date. And sometimes, if you have, I don't know, a news site, then you'll have related articles or things like that where you'll also have dates mentioned in the text, and if we don't find clear information in the structured data for the date, then we might pick one of these other random dates on the page as well.
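As an illustration of giving that clear date information, an Article or NewsArticle block can carry both dates with explicit time zone offsets; the values and URL below are hypothetical and should match the dates shown on the page:

```python
import json

# Hypothetical values; the same dates should be visible on the page itself,
# and the explicit +00:00 offset removes any time-zone ambiguity.
article_jsonld = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline",
    "datePublished": "2021-03-01T08:00:00+00:00",
    "dateModified": "2021-03-05T10:30:00+00:00",
    "mainEntityOfPage": "https://example.com/example-article",
}

print('<script type="application/ld+json">')
print(json.dumps(article_jsonld, indent=2))
print("</script>")
```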
A: Yeah, I don't know how we decide between the publication date and the last modification date. I'm not really sure how you'd be able to easily influence that. I could imagine that we try to see when significant changes happen on a page, but I don't think we have any explicit guidelines on that.
H: Hi John.
A: Hi.
H: Hi, this is Media Boss again. I am here with a very simple, basic question, as I've started learning SEO. This time my question is related to headings, heading structure. Basically, is it mandatory that we should structure our headings in ascending order, or can we leverage the structure as per our need?
A: It's not necessary to have kind of this theoretically correct order of the headings, and the number of, like, H1 headings that you have is also something that is kind of open. I think, for a large part, it makes sense to try to have a semantically correct heading structure, because there are things like screen readers that also rely on headings to navigate to the right part of the page.
H: Another question is related to H1 tags: what's the optimal or ideal number of H1 tags we can have on a page? Because I have read so many articles by industry experts saying there should be only one H1 tag from the perspective of SEO. So what do you recommend?
A: You can have as many as you want. So I think in many cases you have one primary heading, but if you have multiple, that's fine; with HTML5 it's also the case that, by design, you can have multiple H1 tags. So it's something where there's definitely not a hard limit with regards to the number of primary heading tags. I think, from kind of a focus point of view, that you have a clear focus on your page, it makes sense to have one primary topic, but that can be something that you highlight in other ways; it doesn't necessarily need to be the H1 tag. It could be something that you have, like, as a big title, or if this is your home page, then maybe you have other ways of structuring what you think is the primary topic of the page.
H: So can I say Google gives importance to the content on the basis of the structure of the headings?
A: To some extent. It's not only that, but it does help us to understand what you think is important for the page, and if you say, well, this is the primary heading for my page and what this page is about, and we see it matches what you have in your content, then that's always useful for us to know.
D: Am I on? So, can you give us your current take on the supported image types for the Core Web Vitals and page speed? Because you guys still seem to be recommending file types that you don't actually support, or that are not really the optimal image types to use. And I say that because when I look at our PageSpeed Insights scores, they seem to be fine: you know, we can save 0.15 seconds on JavaScript, 0.9 seconds on deferring offscreen images, but we can save 12 seconds by changing the file types to image types like JPEG XR or JPEG 2000 that you don't even support. So can you clarify that?
A: I haven't seen that, so it's hard for me to say.
D: I can share my screen.
A: Maybe not, no. I mean, just because, since we're talking about images and things, I don't want it to suddenly be like, oh, you're showing copyrighted images on your YouTube thing. So that's the primary thing I'm a little bit worried about.
D: Okay, okay, but in particular that time estimate, because we have no problem updating the images to a file type that you like. But why is that still the recommended file type, when you guys don't really support JPEG 2000?
A: Yeah, I don't know, but it might also be a sign that there are other image optimizations that you could do to decrease the file size. But JPEG 2000, that seems like forever ago. Okay, I'll take a look at that, interesting. Cool. Okay, let me run through some of the submitted questions, and then we'll have more time for questions from you all along the way as well. "I'm managing one of the biggest news sites in our country. I want to improve internal linking for news, and I'm thinking about including a latest-news section in the sidebar, so that the latest articles are linked not only from the home page but also from most pages on the site as well."
A: Usually, when it comes to news sites, we focus a lot of our crawling on the news sitemap that you have and on so-called hubs, which are usually the home page and the category pages, pages that we can refresh very quickly, where you link to the individual news articles, so that probably covers a reasonably sized news site. If you have other articles that you care about and that you want to be more visible in search, and that could be something that is older, could be something that is maybe a cause that your site cares about, maybe it's important information that you need to share for the long run, then I would definitely also include that in something like sidebar or footer links. So I think, overall, it's a reasonable strategy. I think especially for non-news websites it definitely makes sense: if it's an e-commerce site, for example, and you have new products or related products and you link to those in the sidebar, that's a fantastic way for us to get that information. For a news site, we can probably already find those pages in other ways, and it's more about giving a subtle hint of importance.
A: "What's the best practice for anchor text wording on internal links as well as external links, for example, using the website name, the blog post title, exact match, or LSI keywords?" So, first of all, we have no concept of LSI keywords, so that's something you can completely ignore.
A: I think it's interesting to look at LSI when you're thinking about information retrieval as a theoretical or computer-science topic, but as an SEO you probably don't need to worry about that. With regards to internal links, you're giving us a signal of context: basically you're saying, in this part of my website you'll find information about this topic, and that's what you would use as the anchor text for those internal links. Usually that's also the context you want to give to users, so the kind of internal links that you would use for users usually matches what you would use for SEO as well; luckily there's a nice overlap there. With regards to external links, if you're linking out to other people's websites, it's the same thing: supply some context as to why people should go and click on this link.
A: The second part of the question: "Is it true that the more links you have on a page, the more the page authority gets divided amongst those links? If so, is there a range to be within?" So we do use PageRank; we don't use page authority in our systems. We use PageRank as a way of kind of understanding the internal structure of a website a little bit better, and it does take into account the links on a page, and we kind of forward the signals across these pages.
A: But it's not the case that there's any optimal number of links that you need to have on a page to make that work well. Essentially, if you're creating a website that has a reasonable structure for users, where they can navigate around your website in a reasonable way, then that would work for search as well. And especially if you're getting started, that's something where I wouldn't necessarily focus on, like, the number of links on the page and use that as a metric to try to figure out how to keep your pages running. I think at some point, if you have a very large website and Google has trouble understanding the structure or has trouble crawling it properly, then the internal linking structure is certainly something to look into, but especially with smaller, newer sites, that's something you probably don't need to worry about too much.
K: John, can I ask a follow-up to the last question?
A: Sure.
K: Yeah, actually I just want to know: if I am linking to other pages on my website from my article, does Google try to understand the other article which I am linking to by looking at the article from which I'm linking? I mean, would Google say, this is like a 500-word article and from here you are linking to this page, so we may try to understand, like, from those 500 words, get some information about that page which I am linking to internally? Does Google do that?
A: A little bit, but not so much in the sense that random words on a page will impact how linked pages are handled. We do take that into account with regards to understanding the context of the pages that you have there. Usually the anchor text is the most important part there; with images, it's something where we do take into account kind of the content around the image.
A: Yeah, exactly, and you can kind of guess some of this: if the destination page is blocked by robots.txt, then there are still some situations where we would show it in search, even though it's blocked by robots.txt, and we would only be able to show it in search based on the links that we have to that page.
A: "My blog has been complete for two months. I posted content in January, and at the start it was indexed, but now it's not in Google anymore. What is the reason for the de-indexing of the post, and how can I get it back into Google?"
A: I don't know; it's really hard to say without any additional information there. The one thing I would recommend in a case like this is to go to the Search Central help forums, post your URL and some keywords that you are targeting there, and try to get some advice from the community. Sometimes there are technical things that are wrong on a site that you need to fix to make it so that we can index it properly again.
A: "Our log files show that Googlebot fetches a lot of JSON files, even more than normal pages. Might these kinds of requests exhaust our crawl budget? Does Google crawl API routes, or is this somehow internally prevented? If we disallow our API in robots.txt, how does that affect the XHR requests; would they still be able to deliver content?"
A: Super good question. Yes, all of these requests do count towards the crawl budget. So that's something where essentially every request we make to the server that goes through the Googlebot infrastructure is something that we would count, I mean, it is a request, and we would try to limit that based on the technical limitations that we feel apply to your server, and based on that we might say, well, we can get more or we can get less content here.
A: So in a case like this, you're probably more looking at the technical limitations when it comes to crawl budget, and less at the crawl-demand side of things. And it's something where, if you just look at the server logs and you see a lot of JSON files being requested, that's not necessarily a sign that something is broken, and not necessarily a sign that the crawling is limited due to these JSON requests; it's just, well, we happen to make a lot of these. It could be the case that we make a lot of these requests but can still crawl enough of the normal content, and it doesn't really affect the normal crawling and indexing of the site. So that's something I would try to figure out ahead of time. There's no simple way to see that; there's no information in Search Console directly that says, oh, you've exhausted the technical limitations that we think apply to your server.
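One rough way to get a feel for this yourself is to tally Googlebot requests in your own access logs; the sketch below assumes a combined-format log at a hypothetical path and a simple rule for what counts as an API/JSON request:

```python
import re
from collections import Counter

# Assumptions: a combined-format access log at this path, and that anything
# under /api/ or ending in .json counts as an API/JSON request.
LOG_PATH = "access.log"
LINE_RE = re.compile(r'"(?:GET|POST) (?P<path>\S+) HTTP/[^"]*" \S+ \S+ "[^"]*" "(?P<ua>[^"]*)"')

counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LINE_RE.search(line)
        if not match or "Googlebot" not in match.group("ua"):
            continue
        path = match.group("path").split("?")[0]
        kind = "json/api" if path.endswith(".json") or path.startswith("/api/") else "page/other"
        counts[kind] += 1

total = sum(counts.values()) or 1
for kind, n in counts.most_common():
    print(f"{kind}: {n} Googlebot requests ({n / total:.0%})")
```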
A: So that's something where you almost need a little bit of experience and a little bit of guessing, when looking at the graphs, looking at your server logs, and kind of testing individual URLs, to figure out: is it possible that Google is running into limitations with regards to what it can crawl, or is it possible that Google is crawling as much as it wants and it's okay with that? So that's kind of the first step there. With regards to blocking these JSON files: if you do find out that these requests are causing problems, one approach is to kind of speed things up so that the technical limitations are less of an issue, which could be to move some content onto a static CDN somewhere, so that those requests are easier to cache and go a little bit faster than the rest of the requests to your site. That might be an option, if you decide that you do need to block the crawling of these JSON files.
A: So that's kind of the downside of blocking things there. It's similar if other people on other websites are using your APIs and kind of using your APIs to generate content for their pages.
L: Yes, John, we have an event discovery platform, olympus.in, and we are facing some issues related to duplicates and canonicals. The issue is, Google is picking the wrong pages as canonical instead of choosing the original pages.
M: Okay, so let me add, I'm from the same team. Okay, so basically we have, you know, 20,000 cities, and each city has different events on it, basically listing the events. Since a few months ago, we are seeing that Google is picking up the wrong city instead of the right one: if you search for, like, events in Washington, it will show the result for San Francisco, and when we check the canonical, it has picked up the wrong canonical, and the whole page and the content, everything is different, but it's still picking it up.
A: There are two possible things that I've seen that could be playing a role here. The first one depends a little bit on the way that you have your site set up; I didn't check in this case, but if you're using JavaScript to generate some of the city-level content, and for whatever reason we can't process the JavaScript properly, then it's possible that we see more of a generic page for some cities, and then we could say, well, this generic page is the same as that other generic page, and then we treat them as duplicates and we pick one of them as the canonical. That's something you could probably recognize fairly quickly if you use view-source on the page and you see, well, it pulls in JavaScript, and it doesn't have in the HTML the actual content for the individual cities.
A: So what tends to happen on our side is we have multiple levels of trying to understand when there's duplicate content on a site, and one is when we look at the page's content directly and we kind of see, well, this page has this content, this page has different content, we should treat them as separate pages.
A: I have seen that happen with things like, I don't know, automobiles is another one where we saw it, where essentially our systems recognize that what you specify as a city name is something that is not so relevant for the actual URLs, and usually we learn that kind of pattern when a site provides a lot of the same content under alternate names. So with an event site, I don't know if this is the case for your website, it could happen that you take one city, and you take a city that is maybe, I don't know, one kilometer away, and the events pages that you show there are exactly the same, because the same events are relevant for both of those places; and you take a city maybe five kilometers away, and you show exactly the same events again. From our side, that could easily end up in a situation where we say, well, we checked ten event URLs, and this parameter that looks like a city name is actually irrelevant, because we checked ten of them and they showed the same content, and that's the kind of thing our systems can then conclude.
A: So what I would try to do in a case like this is to see if you have this kind of situation, where you have strong overlaps of content, and to try to limit that as much as possible. And that could be by using something like a rel canonical on the page and saying, well, this small city that is right outside the big city, I'll set the canonical to the bigger city, because it shows exactly the same content. So that, for every URL that we crawl on your website and index, we can either see, well, this URL and its content are unique and it's important for us to keep all of these URLs indexed, or we see clear information that this URL is supposed to be the same as this other one, you have maybe set up a redirect or you have a rel canonical set up there, and we can just focus on those main URLs and still understand that the city aspect there is critical for your individual pages. So that's kind of the approach I would take here.
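As a minimal sketch of that consolidation idea (the city mapping and URL pattern are hypothetical), nearby small cities whose listings duplicate a larger hub could point their canonical at the hub city's page:

```python
# Hypothetical mapping: small cities whose event listings duplicate a nearby hub city.
CANONICAL_CITY = {
    "small-town-near-sf": "san-francisco",
    "another-suburb": "san-francisco",
}

def canonical_tag(city_slug: str) -> str:
    """Return the rel=canonical tag for a city events page."""
    # Cities with genuinely unique listings point to themselves.
    target = CANONICAL_CITY.get(city_slug, city_slug)
    return f'<link rel="canonical" href="https://example.com/events/{target}/">'

print(canonical_tag("small-town-near-sf"))  # canonical points at the bigger city
print(canonical_tag("san-francisco"))       # canonical points at itself
```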
M: Right, I just dropped the URL in the chat. And one more thing that I have noticed: there is some pattern where the cities with shorter names, like, I mean, four or five characters in length, are affected the most, rather than the cities with longer names.
A: It sometimes picks up weird things there. So especially, like, you mentioned shorter city names and longer city names: I could imagine a case where you have abbreviations as well for the same city, and then we might learn, well, the shorter version is the same as this longer version; we can probably do that more easily with shorter URLs. But, like, I don't know, maybe, if you could post some examples in the chat, then I can pick those up as well and forward them on to the team. Or Rob has some tips; he's been working with event sites since forever.
A: Cool. Let me just see what else we have; so I have a bit more time, and we can hang around after the recording stops as well. Let's see, one question here: someone has a bunch of XML sitemap files, and two of them have a status of "couldn't fetch, sitemap could not be read"; the files are valid, so why don't they work?
A: It's hard to say. That's something where it would be useful to have specific example URLs, and one way you can give us that information is maybe in the help forum, so to post some of the sitemap URLs that are working and some that are not working. The folks in the help forums can escalate individual issues as well, so that Googlers can take a look and see if there's maybe something on our side that's blocking that.
A: I have occasionally seen these kinds of cases where you have a bunch of sitemap files and some of them work and some of them don't. Sometimes there are just technical quirks on the server side that are happening; sometimes there are weird things on our side that are happening. So it's sometimes useful to look at the specifics.
A: One thing I have seen is that sometimes just changing the URL of the sitemap file kind of resets Google's opinion of that URL and gives us a chance to take a fresh look at the new sitemap URL, so that might be kind of a tricky workaround. Cool. Okay, let me take a break here with the recording, and I'll stick around for more as well, so if any of you want to hang out a little bit longer or chat off the record, you're welcome to stay. If you're watching this on YouTube, thank you for watching, and if you'd like to join one of these events in the future, make sure to watch out for the event listing on our YouTube page or subscribe to the calendar that we have on the Search developer site. All right, with that, I wish you all a great weekend, and see you all, maybe, next time.