From YouTube: English Google SEO office-hours from August 6, 2021
Description
This is a recording of the Google SEO office-hours hangout from August 6, 2021. These sessions are open to anything search and website-owner related, like crawling, indexing, mobile sites, internationalization, duplicate content, Sitemaps, Search Console, pagination, multi-lingual/multi-regional sites, etc.
Find out more at https://goo.gle/seo-oh-en
Feel free to join us - we welcome folks of all levels!
A
All right, welcome everyone to today's Google Search Central SEO office-hours hangout. My name is John Mueller. I'm a Search Advocate at Google here in Switzerland, and part of what we do are these office-hours hangouts, where people can join in and ask their questions around search, and we'll try to find answers.
A
A bunch of questions were submitted on YouTube already, so we can go through some of those, but, like always, if you want to get started with a question, we can go and look at those first. It looks like some of you already have your hands raised, so we can go through that. Let's see, Praveen?
B
Hi John, hi. So my question is about expired domains. For the past few weeks, or even seven months, what I am seeing is that there are some sites that are ranking prominently in search.
B
When I look at their websites, they don't look like quality websites, and when I look at their history through the Web Archive, it came to my notice that those domains usually belonged to Indian government agencies, you know, government institutions. What the government institutions did is they created new websites, and since they are government, they don't care about migrations or 301 redirections and all that stuff. So they created their new websites and, you know, abandoned these old ones, and they expired.
B
Someone bought those domains and has now created a blog kind of thing. So initially they started writing about government jobs, and then they started writing about education. Now they are writing about health topics, like vaccines, and they are ranking prominently for queries like "the best vaccine to take" and all this stuff.
A
It also sounds like something where you're not sure if it's spam or not, and for those kinds of things I would also use the spam report form and let us know from there what you've been seeing. And if it's not spam, then the webspam team will recognize that as well. But I think that's always a useful approach too.
B
Yeah, yeah. I think what Google recommends for health topics is that they should be written by someone who is a specialist, you know, a health specialist or a doctor. But if you just go through these links (I just shared the links), there's no way you can figure out what this information is based on, or on what basis they are concluding that this is the best, or this is what you should do for vaccinations.
A
It's hard to say, because there are always so many factors that are involved with regards to showing up in search. But...
A
All right, Evan?

D
Hi, thank you for doing this. There's a thing we noticed back in June. You know, we have a website.
D
It's been around for nine years, with a huge copyrighted database of 200,000 profiles that we've manually written over nine years with a team of 40 writers. There have always been websites that copied our entire database; sometimes they literally copy our terms-of-use page and our team page, they're that blatant about it. They copy our website, and we've noticed that they're showing up in search, sometimes above us. Sometimes they might copy, say, 250 of the 700 words we have on a page and then add additional machine text that runs pages longer; other times they'll just copy it directly. Again, it's never been an issue, because we've ranked ahead of them, but lately they've been ranking ahead of us with our copyrighted content. And I think the most concerning thing we noticed is that when we do a quote search for two of our sentences in Google, the five or six copycat sites show up and then our site doesn't show up, and it says, "In order to show you the most relevant results..."
D
"...we have omitted some entries." So it's almost as if it's reversed, and now it thinks we're the copycat, even though we're the original source. So we've gone down the DMCA path, but it's hard to do that at a full domain level, and we accidentally de-indexed our site two months ago when we launched a different language.
D
So we're wondering if maybe, during that one day we were de-indexed, once we were re-indexed, Google thought that these copycat sites were the original source. So we're running in circles. We got two of the sites removed via a DMCA request to their web host, but then they just moved to another host. So, you know, we are the source, it's a copyrighted database, but other sites are ranking ahead of us with our direct content, so we're kind of running in circles.
A
So, if a site has a lot of copies of your content, then it's kind of a lot of work to get that done, but that's the ideal approach, because that's, I don't know, the official path for you to say, well, this is my content and it's not their content, and I can prove it to you, kind of thing. With regards to being out of the index for a brief time:
A
I don't think that should play a role there. These kinds of things happen on a technical level every now and then, and sometimes legitimate sites do something wrong and go offline for a day or so, and that fixes itself automatically again. It's not something that would be held against the site.
D
How do you handle it when their new domains keep popping up? Once you get one blocked, they move it to another domain; it's kind of like a never-ending game. So how do you ultimately let Google know that you're the initial source? Some of these other sites might have a zero authority, like random domains, and we have an eight or ten authority and have been around for nine years. So how do we, I don't know, fix the error?
A
I don't think there is one trick that you can do to say, well, this is always our content. So that's one thing to keep in mind. The other thing I usually try to recommend is to focus on the sites that are actually a problem for your site, in terms of, like...
D
The last question on that: how about the omitted results, when we search two of our sentences in quotes and Google shows eight copycats, but then ours is actually the one omitted? That was the most glaring concern, because Google's almost saying this is a supplemental site we're not going to show, while the copycats that are just blatantly copying are being shown. So how do you, you know... Now, I...
A
I think that's more of a technical thing with regards to how you search. Depending on the type of query that you do, it's possible to trigger that, but it doesn't mean that Google thinks your site is a copy. There are a few of those things around search where, if you search in a very specific way, then it comes out looking like, well, Google's showing this, and it probably means that...
D
Okay, and then for the DMCA, since it's so hard to do it URL by URL when you're talking about 200,000 unique pieces of content: I know there's an email you can submit to Google, like a more formal request on the entire URL database, because it's hard to do it one by one, obviously. So how would you suggest scaling that? If there are, say, five sites that are a problem, at 200,000 URLs per site, that's a lot of clerical work, you know? I don't know...
A
My understanding was that if we recognize that sites submit a lot of DMCA complaints, then we'll reach out and see if there are more scalable ways to do that. But I don't know what the actual process is there.
D
Okay. Thank you very much, I appreciate it.

A
Sure, thank you. All right, Elster?
E
Hi John, how are you? Hi, thanks again for doing this. I've got one question, which is in the comments, and I apologize if this has already been answered in a previous hangout. I own a website called Healthy Principles, and it's on a .co.uk ccTLD. Should we be writing in British English or American English? We've sort of switched from British English to American English, because about 60 to 70 percent of the traffic is from the US. So what's your advice?
F
Hi John, I just wanted to ask about how you feel offline branding impacts SEO organic rankings. To put it in a bit of context: we're working in a highly competitive transactional market, and we are, let's say, getting close to the top results and stuff like this, but we're clearly at a disadvantage in terms of brand awareness.
A
I don't think it affects SEO directly, but it is something where, if people search for your brand because they recognize your brand, then essentially you have no competition, right? Because your site is by far going to be the most relevant one. If people are searching for your company, and you are that company, then it's almost like what we call a navigational query, where people want to go to a specific page and they're just entering something into Google to get to that page.
A
So that's something where I think building up some kind of brand awareness amongst your users, and having a brand that is easily findable, can help in the long run, in that all of this branded traffic that you get is traffic that's not going to fluctuate as our algorithms change. Because if people are searching for your site, they will always want to find your site, and if our algorithm shows something else, then that's almost like a problem in our algorithms.
A
Whereas if people are searching for the generic kind of product that you have, whether we show your site or another one, the search quality engineers on our side can argue about that, and there is no correct answer.
A
But if you build up that brand awareness and have a certain level of branded traffic to your site, then at least that traffic is something that will be fairly stable. And I wouldn't see this as something where, if you have branded traffic, then suddenly you rank better for non-branded queries; it's more that they're different sources of traffic.
F
Okay. And how do you think Google deals with non-branded traffic, where there are brands that are basically expected by the user to be on top? I mean, from our position, if I understand it from a purely theoretical perspective: if the top five positions are kind of expected to be some brands, how can we get into those places? Because I guess we will always be at a disadvantage, at least in the propensity to click on our search result.
A
I can think of one situation where I've seen something similar happen, but that's something where essentially we, or our systems, recognized the brand as almost being a synonym for the non-branded term, which is something that, from our point of view, I would call a bug on our side. But it is sometimes something where people just purely associate a specific brand with a specific kind of item, and that's something where it's less a matter of...
A
Well,
you
have
to
do
some
seo
tricks
to
get
around
that,
but
you
kind
of
really
have
to
build
up
the
awareness
that
actually
there
are
multiple
brands
that
are
active
in
this
space
and
like
people
should
be
searching
for
for
other
stuff
as
well.
A
Okay, I don't know if that made a lot of sense, but yeah. I think, for the most part, I wouldn't see it as a given that we always show brands for non-branded queries, and it's definitely not the case that there would be a fixed set of brands that we would always show.
F
Okay, no, but I think just in general: if the algorithms are trying to give the best result for a user's query, and it happens to be that the users want to see those brands, then I guess in the long term the system teaches itself that some of those brands need to be in the top positions.
F
Okay, cool, thanks a lot.

A
Sure. Rupesh?
G
Hi, hi John. My question is regarding the updated date in the snippets, like where it shows "seven days ago" or "two days ago". How does Google decide that it was updated two days ago? We submit it through the article-modified meta tag and through the schema also, but still, if we updated the page yesterday, it is still showing as updated two days ago, whereas in the case of our competitors it shows "updated yesterday".
A
We use multiple factors to try to figure out which dates are relevant for a page. So it's not just the metadata on the page; we also try to figure out whether there has actually been a significant change on the page. We have a help center article on this topic, I think, so I would check that out.
A
So
if
you
just
change
the
date
on
a
page,
then
that's
something
where
like
we,
we
need
to
really
recognize
that
you've
made
significant
changes
there,
so
that
we
can
show
that
to
users
and
say
well
actually
something
significant
happened
on
this
page
two
days
ago
or
one
day
ago,
or
whatever
and
kind
of
all
of
that
needs
to
come
together,
and
it's
it's
very
possible
that
for
some
pages
we
we
don't
pick
that
up.
A
So I would not say it's a given that, even if you do everything right, we will use the date that you give us. Sometimes there are situations where our algorithms just pick something else, regardless.
G
Regarding the visible dates: if there are multiple dates on the page (like, we have updated our main article, but there are listings of other news articles below the article, where we show related news, and those also have dates), could Google, by any chance, pick one of those dates instead?
A
If it's just a random date on a page, then we will probably ignore it, unless the actual date is also positioned like a random date on the page. If you can't present it in a way that makes it clear to us that this is really the date you want to use, then we might have to guess, and that's where the metadata comes into play a little bit. Also, things like time zones sometimes play a role: if you say, well, it changed today on this date, but you're in a time zone that is very far off from where Google is crawling...
A
...then perhaps Google might say, well, you said this date, but we think it's a day later, and that's more because of the different time zones involved. So for the metadata on the page, also make sure that the time zones are correct. And if you have dates and times on a page, mentioning the time zone there in the text is also a good idea.
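As an illustration of the kind of date metadata being discussed, here is a minimal sketch of Article structured data where both dates carry an explicit UTC offset; the headline, dates, and offset are placeholder assumptions, not values from the discussion.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example article (placeholder)",
  "datePublished": "2021-08-01T09:00:00+05:30",
  "dateModified": "2021-08-05T16:30:00+05:30"
}
</script>
```

Spelling out the offset (here +05:30) avoids the ambiguity John mentions when the publisher's time zone is far from where Googlebot crawls.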
H
We verified it through URL-prefix properties as well, and when we compare the data, the impressions and clicks, between those two different property setups for one specific domain, we're getting quite a big delta in the impressions and clicks, somewhere between 40 and 80 percent.
I
So we're wondering if you have any suggestions or theories on why that might be the case. This isn't true of all setups like that; we have some domains that don't show that discrepancy, but this particular one does.
A
Yeah, sometimes that's pretty tricky. It's hard to say offhand exactly what is happening there. What usually plays a role is, on the one hand, we have queries that we filter out, which might be more visible in some domains and less visible in others; and on the other hand, we keep a limited amount of data per day, essentially, for each of these properties.
A
So if you have a domain-level property and it has a lot of different kinds of data for every day, then it's very possible that we will trim that and keep the top set. Usually what you would see is that the overall sum is still correct, but if you look at the tables, then you might see differences there. So my guess is that one of those things is happening there.
I
Okay, yeah. We sort of went through the documentation for the performance reports, and it mentions that some long-tail data loss might occur if you are looking at the grouping by query or page. You mentioned some of the queries that you filter out; I understand some of them are anonymized for privacy, among other reasons. Could you...?
A
It's hard to say offhand without looking at the site, but usually, with regards to long-tail data: if you're looking at a really large site, or a site that has a lot of different kinds of data for every day, then it's more a limit on the number of specific data points that we keep per day.
A
So if you have a lot of different search impression types, or a lot of different queries that lead to a lot of unique pages, then all of that, multiplied out, essentially means that we have a lot of data entries per day, and we have to cap those for performance and stability reasons. That's probably what you're seeing there, and we call it long-tail data because it tends to be data that has either low impressions or low clicks.
I
Okay. When you say data entries, what's the best way to think about a large site: the total volume of impressions, the total URLs that get clicks? Is there a specific metric that correlates really well with "large"?
A
Is
it
not
really
not
really
it's
because
of
the
way
that
the
the
data
is
collected
like
if
there
are
a
lot
of
unique
urls
or
a
lot
of
unique
queries
that
lead
to
the
site?
Then
all
of
that
kind
of
multiplies
out
and
or
if
you
have
a
lot
of
unique
kind
of
search,
search,
features
that
are
being
triggered.
I
So that comes from both the queries and the pages. Can we get around that by filtering for specific directories or queries; will that pull more data in? Or could we use something like the API to pull that data, would that be better?
A
I would definitely try it with the API. Especially with a larger site, with the API it's easier to get the full set of data and then put together some dashboard on your side to pull out the details that you want. But, especially when you're looking at multiple properties within a really large domain, I would also try to focus on the data for the individual properties.
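For readers who want to try the API route suggested here, this is a minimal sketch of paging through the Search Console Search Analytics API in Python; the property URL, date range, and dimensions are placeholder assumptions, and you need your own OAuth credentials.

```python
# A minimal sketch of pulling Search Analytics rows via the Search Console API.
# Assumes google-api-python-client is installed and `credentials` is an
# authorized OAuth2 credentials object for a verified property.
from googleapiclient.discovery import build

def fetch_all_rows(credentials, site_url="https://www.example.com/"):
    service = build("searchconsole", "v1", credentials=credentials)
    rows, start_row = [], 0
    while True:
        # Each response is capped at 25,000 rows, so page with startRow.
        response = service.searchanalytics().query(
            siteUrl=site_url,
            body={
                "startDate": "2021-07-01",
                "endDate": "2021-07-31",
                "dimensions": ["query", "page"],
                "rowLimit": 25000,
                "startRow": start_row,
            },
        ).execute()
        batch = response.get("rows", [])
        rows.extend(batch)
        if len(batch) < 25000:  # Last page reached.
            return rows
        start_row += 25000
```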
I
Okay. And I guess one more question here: what about a subdomain? If it had some canonicalization issues, would that affect how that data is aggregated?
A
So, if you have something like www and non-www, and it's fluctuating between those different versions, then on a domain level that would all still be within the same domain property, but on a lower, subdomain level those would be different subdomains, so they would be tracked individually.
A
If you have the www and non-www versions verified separately in Search Console, then you will see the data sometimes in one property and sometimes in the other, because of the way the canonical changes. Usually, on a larger site, the canonicals don't change that much, so it's less something that you would see, but it can happen, especially if it's an older site that was maybe not created in the cleanest way possible, where the internal links are sometimes with http, sometimes with https, and sometimes with www or without.
A
Then it sometimes happens that we kind of move URLs around. But that's fairly rare for modern sites, I'd say.
I
Okay, and just one last question here: we've talked about large sites quite a bit. Is there some sort of threshold, some point where we could say, you know, 100 million impressions or something like that?
A
Not really. Like I mentioned, it's more a matter of the unique entries that you'd have, and it can be the case that a site gets a lot of impressions and a lot of clicks, but it's all focused on a smaller set of URLs, something like, I don't know, the top ten thousand or top hundred thousand URLs. In cases like that, despite the site having a ton of impressions, it doesn't have a ton of unique data entries per day.
A
So that makes it a bit tricky to find a threshold where we'd say, well, this is the point from which you'd need to start looking, and that's kind of why we frame it as "long-tail data" in the help center: there is no absolute number we can apply.
A
It looks like a bunch of you still have your hands raised, but I need to go through some of the submitted questions as well. We'll definitely have more time for live questions along the way, but I just want to make sure that people who submitted questions don't get their questions lost completely.
A
Let's see. "I read online that people said they've seen an uplift after the June core update, and they've also seen one for July as well. For us it wasn't the same. What could have reverted in July? Could it be related to speed?"
A
As far as I know, these were essentially separate and unique updates that we did. We call them both core updates because they affect the core of our ranking systems, but that doesn't mean they affect the same core parts of the ranking systems. So, from that point of view, it's not the case that if you see a change during one of these core updates, you will always see a change during the other one as well.
A
Things can definitely be related to speed, because the whole page experience update is something that, I think, started rolling out in June, so in July you might still see some changes there. But the core update itself isn't something that I think is related to the page experience update; that's kind of a separate thing.
A
"Does Google look at the number of customers and/or reviews a website has to rank it higher in the search results? Why are some pages getting indexed after more than two weeks? Is it because the crawlers don't deem these pages strong enough to index them faster than two weeks?"
A
As far as I know, we don't use the number of customers or reviews for ranking when it comes to web search. Sometimes we do pull that information out and might show it as a rich result in the search results. It might be that for the Google My Business side of things that's taken into account more; I don't have much insight there. But with regards to normal web search, we don't take that into account. With regards to getting indexed after more than two weeks:
A
It's really hard to say. Usually that's a mix of technical issues, where we can't crawl all of the content that quickly, and quality issues, where our systems are not that interested in crawling the content so quickly. Figuring out where in between your site sits is sometimes a bit tricky, but sometimes it's also easier to figure out, especially if you can determine that, from a technical point of view, your site is actually pretty fast.
A
"What are some things that a webmaster can do to trigger a site-wide re-evaluation from a quality point of view, like when you change domains? When does Google say, okay, let's try to collect new signals and see whether the site is better from a quality point of view?"

I don't think there is anything technical that you can do to trigger a re-evaluation, and usually that's also not necessary, because essentially our systems re-evaluate all the time. They look at the content that we found for a site, and over time...
A
...as we see that change, we will take it into account. So it's not something where you have to do something manually to trigger it.
A
The one time where we do have to reconsider how the site works is if a site does a serious restructuring, where it changes a lot of the URLs and all of the internal links change, where maybe you move from one CMS to another and everything changes and looks different. Then, from a quality point of view or from a technical point of view, we can't just keep the old understanding of the site and its pages, because everything is different now, so we have to rethink all of that. But that's also not something that is triggered by anything specific.
A
Rather, it's just, well, lots of things have changed on the site, and even to incrementally keep up, we have to do a lot of incremental changes to re-evaluate that.

"Our website's posts are being indexed after one week. The site is old and mobile friendly. When I submit URLs in the inspection tool, it shows they're crawled by Googlebot smartphone, but in the 'About' section the indexing crawler is shown as Googlebot desktop. Does this mean the smartphone bot is crawling the URL and the desktop bot is indexing it?"
A
"Is this the reason behind the indexing delay?" This would not be a reason for any kind of indexing issues.
A
It's
I
don't
know
off
hand
why
you
might
see
kind
of
this
kind
of
mix
of
desktop
and
mobile
crawlers
there.
It
might
just
be
that
the
search
console
is
essentially
saying
well
by
default
with
the
inspection
tool,
we'll
just
use
the
mobile
google
bot
in
in
practice.
We
we
always
crawl
with
both
of
these
crawlers.
A
It's just a matter of how much we crawl and index with each of these crawlers. Usually you'll see something like 80 percent coming from the mobile Googlebot and maybe 20 percent from the desktop Googlebot, and that kind of ratio is essentially normal; it doesn't mean that things are slower than anything else.
A
"Are there any consistent reports of AMP tanking a website's performance? How consistent is the page experience report in regards to its interpretation of AMP pages?"
A
For the most part, using AMP on a website is a great way to get that speed boost without doing a significant amount of work. So I wouldn't see it as something that would be causing problems with the website, unless the AMP integration is somehow broken in a weird way; then, I think, the validation... we've talked about that.

"I've been using Search Console for my employer's website and was wondering: what's the best way to track a blog's value on an e-commerce site?"
A
"It's drawing the majority of the web traffic, with most of those being first-time visitors. However, blog visitors notoriously don't always buy on an initial visit to a website, but do return later, and are truly the first step in the conversion path. Employers want to see concrete data." I don't know... To me, this sounds like something that would be best looked at with analytics, and there are different approaches that you can take there.
A
I think the overall topic area is attribution modeling, where you try to attribute the individual steps that are being taken on your website, to maybe figure out where this is coming from, or which parts of a website are contributing to a conversion, to a sale, those kinds of things. But that's something where I think Search Console can't really help that much, and from a search point of view we don't have that much insight into what happens within your website, or who comes and goes within your website.
A
So you'd almost need to look at that maybe with an analytics expert, maybe with the analytics help forum, and try to get some input there, but that usually is something that people can figure out.

"How many affiliate/sponsored links are safe or good to have on a single page? Is there a perfect ratio of links to article length to maintain?" There is no limit. From our side, it's not that we're saying that affiliate links are bad or problematic.
A
The ratio of links to article length is also totally irrelevant. Essentially, what we need to find is a reason to show your site in search to users who are looking for something, and that reason is usually not the affiliate link, but the actual content that you provide on those pages. So, from that point of view, trying to optimize the affiliate links, or trying to hide the affiliate links, or whatever you're trying to do there...
A
...is, I think, almost wasted effort, because that's not what we care about. We care about the content, and why we would show your pages in the first place. And if the content of your page is essentially just a copy of a description from a bigger retailer's site, then there's no reason for us to show your site.
A
The ccTLD question, I think we talked about. "There's a site that ranks in top positions for multiple high-search-volume city-plus-service queries without providing unique content. They just duplicate their pages and score high, even when there are other websites that seem to provide better content. They don't have any high-quality backlinks or internal links on these city pages. Is there any reason they score high on Google?" I don't know; it's hard to say just based on your description there.
A
It's
it's
also
something
where
we
take
into
account
a
lot
of
different
signals
when
it
comes
to
search,
and
it
can
be
the
case
that
a
site
does
some
things
really
terribly
and
does
other
things
really
well
and
in
the
search
results
we
might
still
show
that
site
fairly.
Visibly.
A
You
do
kind
of
something
stupid
along
the
way,
and
just
because
you
do
some
things
incorrectly
or
sub-optimally
shouldn't
mean
from
our
side
that
we
shouldn't
show
your
site
at
all.
So
this
is
something
that
that
we
see
all
the
time,
especially
with
smaller
businesses,
in
that
they
they
listen
to
something
or
some
blog
online.
That
says,
oh,
you
need
to
put
your
keywords
and
hidden
text
on
your
pages
and
search
engines
will
fall
for
that
and
they
go
off
and
do
that
and
essentially
that's
against
our
webmaster
guidelines.
A
So I think it's always tricky when you look at other people's sites and say, well, they're using hidden text, they're using this other technique that is against the guidelines, therefore they shouldn't be visible or ranking above me. That's almost way too simplistic.
A
"We understood this to mean that all off-site authority is passed after a year, even once the redirect is deleted. What would happen if the old page that was previously redirected returns a 200 again? Essentially, would the authority come back? Would both pages benefit from an off-site link?"
A
No,
so,
essentially,
what
what
happens
with
regards
to
to
redirects
is?
We
will
forward
those
signals
as
as
best
that
we
can
and
what
what
happens
with
regards
to
external
links
going
to
those
pages
will
forward
those
external
links
as
well,
and
if
that
redirect
is
gone
at
some
point,
then
essentially
we
will
have
kind
of
two
pages
and
they
will
rank
individually.
A
So it's not the case that you can just redirect and suddenly have twice as much value on your website; rather, we take whatever value goes into the website...
A
...we follow the redirects, and we point it at the final URLs there. If you remove the redirects, then that chain is broken and the value remains where it was. So taking a site, adding a redirect, then dropping those redirects and hoping that suddenly you have twice as much value on your site: that's not how it works. From that point of view, I don't think you would see any value in doing that.
A
The main reason for this is less to prevent abuse and more that people change their minds all the time. It can happen that you move a URL from one location to another, and then at some point later on you say, well, actually, I want to have the other URL indexed as well, or I want to revert that change and go back to the previous URL, and we need to be able to go back and understand a site again, even after it had redirects for a while.
A
Okay, it looks like we're coming kind of close to time, so maybe I'll switch back to live questions from you all. I also have a bit more time afterwards if any of you want to ask even more questions, so we can go through some of that too.
A
Let me just take one quick question here that I also saw: a site that I looked into briefly, the coupon site that is mentioned there, with regards to indexing and being shown in search. One of the things I noticed there was that this is a really low-quality site.
A
So to the person who asked this question: I would recommend definitely not focusing on technical things, but really focusing on the overall quality first, and making sure that the quality of the site itself is actually a lot better before you worry about things like how quickly Google is crawling and indexing pages.
A
Okay, let's see. Mano, I think you're next up.
J
Hi John, I've got a couple of questions. Let me start with the first one. So basically, Search Console reported some static files as affecting performance, and we minified those files and also added a disallow rule in robots.txt.
J
I'm not sure if that's the correct thing to do. Once we'd done that, what actually happened was that mobile usability was affected, and pages were flagged as not mobile friendly. So I just wanted to know what the best practice is and how we should go about this.
A
What kind of files were there? Like CSS...?
A
Okay, yeah. So I think, with the robots.txt disallow, you're essentially preventing us from looking at those files, and if those files are necessary for the rendering of a mobile page, then we won't be able to render those mobile pages and recognize that they're mobile friendly.
A
So that's probably why you're seeing that shift with regards to mobile friendliness in the report there.
J
Right. So should we even add the disallow rule for those, or should we just keep them open?
A
For the most part, we recommend making JavaScript and CSS files crawlable so that we can render the pages. We won't index them individually; it's not that we'll say, oh, this JavaScript file is suddenly a text file, we should show it in search, because that doesn't make sense. But we need to be able to access them so that we can render the pages and see what they look like.
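As a rough illustration of this recommendation, a robots.txt along these lines keeps a private section disallowed while explicitly leaving the rendering assets crawlable; the /private/ and /assets/ paths are hypothetical placeholders.

```
User-agent: *
Disallow: /private/
# Keep the CSS and JavaScript needed for rendering crawlable.
Allow: /assets/*.css
Allow: /assets/*.js
```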
A
We don't support noindex in robots.txt, so that's the main difference there. Essentially, you're talking about the noindex robots meta tag on a page versus robots.txt.
A
The robots.txt file prevents us from crawling a URL, but it doesn't prevent us from indexing the URL alone, which is kind of a weird situation. Essentially, there are pages that are blocked by robots.txt but are still useful for users. We don't know what is on the page, but we might be able to recognize that it's actually still useful for users. So it can happen that we index a page purely by the URL, based on the anchor text of links pointing to that URL.
A
We can still show that in search: we essentially don't know what is on the page, but we see that lots of people are recommending it for this one particular topic, so we might go along with that. With regards to the robots meta tag noindex: if we can crawl the page and we see that noindex meta tag, then we won't index that page, and it won't be shown in search. So, in practice, for most normal pages on a website:
A
If a page is blocked by robots.txt, then we won't show it for normal searches, because we usually have more relevant content from your website to show. If you search specifically for that page, then we will still show it. If you do a site: query, then we might still say, well, actually, for this particular URL you're looking for, we have this page. So those are kind of the differences there.
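To make the contrast concrete, here is a minimal sketch of the on-page mechanism described above; the tag itself is standard, and the comment restates the caveat from the answer.

```html
<!-- To be effective, this tag must be CRAWLABLE: if robots.txt blocks the
     page, Googlebot never sees the noindex and may still index the bare URL
     from anchor text alone ("Indexed, though blocked by robots.txt"). -->
<meta name="robots" content="noindex">
```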
J
Sure. And what does it mean when Search Console says "Indexed, though blocked by robots.txt"? I think that links to the same response you gave before, yeah?
A
Yeah, exactly. That's where we essentially index the page without knowing what the content is, and it's not necessarily a bad thing; it's just for your information. If you wanted to have the page indexed and ranked based on its content, then, with that setup, that wouldn't be possible.
J
Right. And if you want to change a link on the website, that is, basically rename the URL, how can you do that without affecting your performance in Search Console?
A
Essentially, you would just set up redirects: you would redirect from the old URL to the new one. That's something our systems follow, and it should, for the most part, just be fine.
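For illustration, a permanent redirect for a renamed URL might look like this in an nginx config; the paths are hypothetical placeholders, and equivalent rules exist for other servers.

```nginx
# Permanently redirect the renamed URL so signals are forwarded.
location = /old-page {
    return 301 /new-page;
}
```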
J
Sure. Thanks, John.

A
Sure. All right, Michael.
K
So if a site uploaded a number of images over the years without being aware of the importance of file sizes, and it slowed down the site, and then someone like myself pointed this issue out to them: could they go back, optimize the same image, upload it again as a smaller file, and use the same original alt attribute? Or do they have to change the alt title?
A
You don't have to change it. What happens in practice is that, with images, we tend not to recrawl them that often, so it might be a while before we actually recognize that the image file has changed. Even if it's just a matter of recompressing or tweaking the settings, that might take a while, and we would tend to pick it up based on the crawling of the web page, the landing page.
A
If we recrawl that page often, we'll see, oh, this image is still embedded here, and every now and then we'll double-check the image file to see how it is set up. If the image file changes, then it changes, and we can reprocess it. If the alt text changes on the page, we can pick that up a little bit faster, because the HTML pages tend to be recrawled more often.
A
If you need to have an image changed quicker in search (maybe there's something in the image that you don't want to have shown in the search results), then I would recommend using a different URL for that image, so that when we crawl the HTML page, we clearly see: oh, this old image is no longer on this page, but there's a new image here; we should check out this new image URL.
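A minimal sketch of that URL swap; the filenames and alt text are hypothetical placeholders.

```html
<!-- Before: <img src="/images/team-photo.jpg" alt="Our team"> -->
<!-- After: the re-optimized file gets a new URL, so the change is obvious
     on the next HTML crawl; the alt text can stay the same. -->
<img src="/images/team-photo-v2.jpg" alt="Our team">
```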
A
And with regards to speed, it's not something where we need to have the optimized image indexed. For the Core Web Vitals, we use the real-user metrics, so we see what people are actually seeing on the page, and that's independent of what we have indexed for that page.
H
Hello, hello everyone, hello John, good morning. Hi, hi. So practically, I'm working on a website, on the website structure. It is a company in the finance sector, specifically mortgages, and I have two questions. So, building the new structure, I have two options: one is to build...
H
That is my choice. However, yes, I have these two options at the moment. So the two questions are: what would be the main impact of using pages instead of categorization, versus building a new category in the CMS?
A
Okay. So I think, first of all, I would not worry about the pages-versus-posts-and-categories question. From our point of view, we see all of these as HTML pages, and how they're classified within your server and within your CMS is essentially totally up to you. It's not something that our algorithms would care about.
A
So, from that point of view, I would choose whichever setup is easier for you to maintain, easier for you to make fast and usable, all of the almost non-SEO factors, and base it on that. Sometimes using WordPress makes sense, sometimes using another CMS makes sense, sometimes a mix makes sense, but that's essentially purely on your side.
A
The one place where I would keep an eye out is if you're migrating from one setup to a different setup. That's something where you essentially have to think about all of the usual site-migration things: making sure that you know what the old URLs and the new ones are, and making sure you have all the redirects in place.
A
Fixing the internal linking, all of that plays into it as well, as does everything that is on the page. Especially if you're moving from one CMS to another, you'll have things like headings that are suddenly different, or images that are embedded in different ways, and suddenly you have, I don't know, maybe captions below the images.
L
Hi John. I was in the last webmasters hangout also, and asked you about our Dutch home page, which isn't in the index anymore. I don't know if you remember; I also posted the question in the YouTube comments, and yeah, we couldn't really find a reason why this specific page isn't indexed anymore. I also checked the troubleshooting...
L
Reports
on
supportgoogle.com-
and
I
just
couldn't
find
a
reason
why
this
one
page
isn't
in
the
index
anymore.
So
maybe
afterwards,
you
and
your
teammate
could
have
a
look
and
take
a
look
into
this
and.
L
Okay. I also posted this in the community forum, so I'll just post the link to that in the chat. Okay, cool. Thank you very much.

A
Sure.
A
Okay, let me take a break here with regards to the recording. It's been great having all of your questions here, and I hope, if you're watching this on YouTube, you find this recording useful. Maybe we'll see you in one of the future hangouts as well. All right then, let me just pause.