From YouTube: English Google SEO office-hours from February 5, 2021
Description
This is a recording of the Google SEO office-hours hangout from February 5, 2021. These sessions are open to anything webmaster-related, like crawling, indexing, mobile sites, internationalization, duplicate content, Sitemaps, Search Console, pagination, multilingual/multi-regional sites, etc.
Find out more at https://developers.google.com/search/events/join-office-hours
Feel free to join us - we welcome webmasters of all levels!
A
All right, welcome everyone to today's Google SEO office hours. My name is John Mueller. I'm a Search Advocate at Google here in Switzerland, and part of what we do are these office-hours hangouts, where people can jump in and ask their questions around their website and web search, and we can try to find some answers.
A
B
Hey John, I have a question regarding e-commerce. There are a lot of e-commerce sites that have refined pages, right? Those are pages that show a certain number of products based on predefined logic. In my case, my client has pages like that.
B
For the client these are very good pages for capturing long-tail keywords, because they are refined: say, for example, computers that at the same time have AMD chips, things like that. When people are searching for a long-tail keyword like "computer AMD processor", those pages are a very good place to capture them. However, here comes my question about those pages.
B
They are designed so that they automatically assign products, right, and you can't put all the products in the first rendering of those kinds of pages. So every time a new product is added to the inventory, or some product is removed from the inventory, the page content of the first rendering of products will change. I'll send a sample here in the comment section.
B
For example, this is not my client, because I'm not allowed to show my client's website, but it's a similar situation. When you click into that link, you can see that there are a lot of products on the page, but when you scroll down to the bottom there's a "load more" button. So people only see the products that come before the "load more", right? And whenever we add inventory or remove something, the first rendering of products will change.
B
So Google will constantly be seeing different content on that page. Will that confuse Google? How do we solve this problem?
A
That's essentially fine. That's totally normal, I think, with e-commerce; with a very busy site you have those kinds of shifts all the time. With news websites you have something similar: there are new articles all the time, and when you look at the home page of a news site there are always different articles linked there, and from our point of view that's fine. The important part, I think, especially with e-commerce, is that we're able to find the individual product pages themselves somewhere along the line.
A
We need to have kind of persistent links to those products. That could be on that page; it could be on page two or page three or page four of that listing, something like that. So that's the important part there. I wouldn't worry that the pages change from load to load, because what will happen, from a search point of view, is that we will recognize:
A
There's specific content for this topic on this page, and we'll try to bring queries to the page that match the general topic. And if, say, computer model 1 or computer model 2 is shown there, and they're essentially equivalent because they're in the same category of products, then that doesn't really change much for us.
B
So what I heard is that, as long as the logic for assigning products is consistent, and the products showing up in the first rendering match, for example, the title tag on the page, then that is fine. But if the logic is not consistent and it surfaces entirely other products, then there will be a problem.
A
Yeah. So, for example, if you have a clothing store and you have a category that's just "blue", and the category blue has everything from socks to jackets and everything in between, then it's really hard for us to say this is a landing page for this type of product. We will constantly be confused by a page like that. Whereas if it's a landing page about, I don't know, blue jackets, for example, category jackets and color blue, then it doesn't really matter which jackets you show there.
A
B
So as long as the newly added products are still blue jackets, even if different products appear in the first rendering and it changes continuously, that's okay?
A
B
A
C
Hey John, thank you. My question is regarding image search, and why one image might be shown in preference over another. Specifically, it's on a product page that uses an image slider to display pictures of the product, and considering that pretty much everything is nearly identical: alt text, file name, nearby text, dimensions, weight, things of that nature.
C
Why might a seemingly random image from within the slider sequence, maybe the third or fourth thumbnail, be shown in preference over a featured image, usually the first image that you would see on a product page?
A
I don't know; it's hard to say. I don't think so. We have various things that go into image search. On the one hand, there are the aspects that you mentioned, like the titles of the page, the image file name, the captions, alt text, things like that. But we do also have some logic that tries to understand: is this a high-quality image or not?
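The on-page signals mentioned there (page title, image file name, captions, alt text) can be sketched in markup. This is a constructed illustration with made-up file names, not taken from any site discussed here:

```html
<!-- Hypothetical product image: the file name, alt text, and nearby
     caption all describe the same product, which are the kinds of
     on-page signals mentioned above for image search. -->
<figure>
  <img src="/images/blue-winter-jacket-front.jpg"
       alt="Blue winter jacket, front view"
       width="800" height="1000">
  <figcaption>Blue winter jacket, front view</figcaption>
</figure>
```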
A
And it's possible, I don't know those images, that our systems are either getting confused by the contents of the image, or that they clearly see that one image is significantly higher quality than the other, and for us then, maybe we would show it a little bit more visibly like that.
A
But it's something where I think there are always a number of different factors that play into that, and even for multiple images on the same page, which are kind of in the same category of things, it's possible that we show them in one order once and in a different order another time.
C
Okay, so are you able to comment on this: I imagine that Cloud Vision has something to do with that, trying to match similarities with machine learning to the entities. Am I on the right track here?
A
I don't know how far we would use something like that. I do think, at least as far as I understand, we've talked about doing that in the past, specifically for image search, but it's something where, just purely based on the contents of the image alone, it's sometimes really hard to determine what the relevance should be for a specific query.
A
So, for example, you might have, I don't know, a picture of a beach, and we could recognize: oh, it's a beach, there's water here, things like that. But if someone is searching for a hotel, is a picture of the beach the relevant thing to show, or is that beach, I don't know, a couple of miles away from the hotel? It's really hard to judge just based on the contents of the image alone.
A
So I imagine, if or when we do use machine learning to understand the contents of the image, it's something auxiliary to the other factors that we have. So it's not that it would completely override everything else. Got you. Thank you.
D
Sure, John, just one follow-up on this: does Google have any plan for machine-learning auto-detection of what is happening in, or what is there in, a picture? I'm asking because different devices also have this kind of feature.
A
What is happening within the image: I don't know. Kind of like with the previous question, it's certainly possible to some extent to pull out some additional information from an image, which could be objects in the image or what is happening in the image. But I don't know if that would override any of the other factors that we have there.
A
A
If we have multiple images that we think are kind of equivalent, and we can clearly tell somehow that this one is more relevant because it has, I don't know, the objects or the actions that someone is searching for, then maybe we would use that. But I honestly don't know what we've announced in that area, or what we're actually using for search there, because the thing to keep in mind is that there are a lot of different elements that are theoretically possible.
A
They might be done in consumer devices. There are lots of things that are patented, that are out there, that are theoretically possible, but just because it's possible in some instances doesn't mean that it makes sense for search.
A
And we see that a lot with patents when it comes to search, where someone will patent a really cool algorithm or setup that could have an implication for search. But just because it's patented by Google, and maybe even by someone who works on search, doesn't mean that we actually use it in search.
A
Okay, let me run through some of the submitted questions, and if you have questions along the way, feel free to jump in; we'll almost certainly have time towards the end for more questions from all of you. All right, the first question is about Google Discover: "One of the sites I'm running is about anime fan
A
arts, cosplay, and fan fiction. It was performing fairly well in Discover, but from one day to the next the traffic dropped to zero, without any significant change on the site. In Google Search it's growing, before and after that. What kind of problems could bring about that situation?"
A
A
Well, it makes sense to show this more in Discover, and then suddenly you get a lot of traffic from Discover, and then our algorithms might at some point say: well, it doesn't make sense to show it that much in Discover anymore, and the traffic goes away. And especially with Discover, it's something which is not tied to a specific query. So it's really hard to say what you should be expecting, because you don't know how many people are interested in this topic, or where we would potentially be able to show that.
A
A
Additionally, for Discover we have a Help Center article that goes into pretty deep detail about what kinds of things we watch out for, and, in particular, what kinds of things we don't want to show in Discover.
A
A
"What are the levels of site-quality demotions? Is there a first level where everything site-wide looks fine, no demotion; a second level where you demote some pages that are not relevant; or a third level where site-wide it's not good at all?" So, from my understanding, we don't have these different levels of site-wide demotion, where we would say we need to demote everything on the website, or we don't need to demote anything on the website.
A
I think, depending on the website, you might see aspects like this, or might feel like there are aspects like this, but for the most part we do try to look at it as granularly as possible. In some cases we can't look at it as granularly as possible, so we'll look at different chunks of a website.
A
So that's something where, from our side, it's not so much that we have different categories, where a site is in this category or in that category; it's just that there's almost a fluid transition across the web. And also, when it comes to things where our algorithms might say, oh, we don't really know how to trust this: for the most part it's not a matter of trust is there or trust is not there.
A
So that's something where there's a lot of room.
A
Let's see, I have a question about omitted results: "We published two large .coms, horoscope and astrology, each with its own URL and content teams. After ranking on the first page for astrology queries for multiple years, in February last year only one of the sites began to show up in normal search results at a time: whichever site has the highest ranking for a given query will show up, with the other site being classified as an omitted result. There's no duplicate content across, or links between, the sites. I'm curious why this is happening."
A
A
So, from that point of view, it might also just be something that is not related to what you're suspecting, in that our algorithms think it's the same site and we should only show one of these.
A
At the same time, I have seen situations where, if there are a large number of sites involved, a large number of domains, our algorithms might say: well, all of these domains are essentially the same content, and we should just pick one of these to show, rather than all of them. But usually, if there are two websites and they're unique in their own ways, then that's something where we would try to show them individually.
A
So I think, from a practical point of view, what I would do here is go to the Webmaster Help Forums and post the details of what you're seeing: maybe some screenshots, specific URLs and queries where you're seeing this happening. The folks there can take a look at that and maybe guide you. I don't know if there's something specific that you could be doing differently there; maybe they can point you at that, or maybe they can point you in the direction of saying, well:
A
It is how it is; there's nothing unnatural happening there. But also, the folks active in the help forums have the ability to escalate things to Google teams. So if they think this is really weird, and maybe something weird is happening on Google's side, then they can escalate that to someone at Google.
A
Let's see: "Does Google Search consider each URL of a website individually? For example, does a low score on a domain's home page have any effect on the other pages, which have a high score?"
A
A
So there's kind of a connection between all of these pages, and if one page is really bad and we think that's the most important page for your website, then obviously that will have an effect on the other pages within your website, because they're all in the context of that one main page, for example. Whereas if one page on your website is something that we would consider not so good, and it's at some random part of your website, then that's not going to be the central point that everything revolves around.
A
A
A
We have those two versions, and the data there, so we'll show that in Search Console like that. Whereas if you set up your website in a way that you're consistently always working with the AMP version, so that maybe all mobile users go to the AMP version of your website, then that's something where we can clearly say: well, this is the primary version; we'll focus all of our signals on that version.
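The two versions being discussed here are a canonical page and its AMP counterpart. The standard way to pair them in markup, shown as an illustrative sketch with placeholder URLs, is:

```html
<!-- On the canonical (non-AMP) page, point to the AMP version: -->
<link rel="amphtml" href="https://example.com/page/amp/">

<!-- On the AMP page, point back to the canonical version: -->
<link rel="canonical" href="https://example.com/page/">
```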
A
The next question there is: "Since AMP is enabled, will Google mobile search consider only the AMP version, which passes the Core Web Vitals test, when ranking the website, or will the original link also be considered?" So, I mean, on the one hand there is the aspect of: is it valid AMP or not? If it's not valid AMP, then we wouldn't show it.
A
So that's one aspect that comes into play there. But I think, in the theoretical situation where we have data for the non-AMP version and data for the AMP version, and we would show the AMP version in the search results, then we would use the data for the version that we show in search as the basis for the ranking.
A
So in a case like that, where we clearly have data for both of these versions, and we would pick one of those versions to show, then we would use the data for that version.
A
It's similar, I think, with international websites, where you have different URLs for individual countries. If we have one version that we would show in the search results, and we have data for that version, then we'll use the data for that version, even if we have other data for the other language or country versions.
A
Now, the only case I know of where we would fold things together is with regard to the AMP cache, because theoretically the AMP cache is also located in yet another place, another set of URLs. But with the AMP cache we know how we should fold that data back to the AMP version and track it there.
A
"Does Google weigh an exact-match title tag more in comparison to a title tag focused more on users? Let's say the phrase I want to rank for is audi 83, and one version of the title tag is this exact match; the other version is this car for sale, 152 great models."
A
A
A
A
So my recommendation there would be to test this out and just try it, not so much in terms of SEO, like which one will rank better, but thinking about which one of these would work better in the search results. That's something you could try out on one page, or on multiple pages that are set up in a similar way, and then, based on that, you can determine: well, this one attracts more clicks from users.
A
A
I imagine you could get into really long arguments with the people who work on information retrieval about which one is better or not. So, yeah, I don't think there's one clear answer. "Comments below the blog post: are comments still a ranking factor? I'm migrating to another CMS and would like to get rid of all comments, about one to three not-really-relevant comments below many blog posts. Can I delete them safely without losing any ranking?"
A
I think it's ultimately up to you. From our point of view, we do see comments as a part of the content. In many cases we also recognize that this is actually a comment section, so we need to treat it slightly differently. But ultimately, if people are finding your pages based on the comments there, then if you delete those comments, obviously we wouldn't be able to find your pages based on that.
A
So that's something where, depending on the type of comments that you have there and the amount of comments that you have, it can be the case that they provide significant value to your pages, and they can be a source of additional information about your pages. But it's not always the case.
A
So that's something where I think you kind of need to look at the contents of your pages overall, and the queries that are leading to your pages, and think about which of these queries might go away if the comments were not on those pages anymore. Based on that, you can try to figure out what you need to do there.
A
It's certainly not the case that we completely ignore all of the comments on a site, so just blindly going off and deleting all of your comments in the hope that nothing will change: I don't think that will happen. "When using interstitials as product pages, does Google index the content on those interstitials, or does it only index the content on the static pages?" So I wasn't quite sure how you use interstitials as product pages.
A
A
Then we wouldn't have that product information to actually index. Whereas if you load that page and it takes a second and then it pops up the full content, the full product information, then essentially, by loading the page, we have that information and we can use it to index and to rank those pages.
A
So that's kind of one thing to watch out for. I think what threw me off with this question initially is also the word "interstitials", in the sense that usually interstitials are something that sit between the content that you're looking for and what is actually loaded in a browser. So if you go to a page and, instead of the product page, it shows a big interstitial showing something else.
A
That's the usual setup for interstitials, and from our point of view, those kinds of interstitials, if they're intrusive interstitials, in the sense that they get in the way of the user actually interacting with the page, would be something we would consider a negative ranking factor.
A
So if it's really the case that, when you go to your pages, it just takes a bit and then your product page pops up, then I wouldn't call those interstitials; maybe use some other word for that. Because if you ask around in the help forums or elsewhere and say "my interstitials, I want to rank for my interstitials", then probably a lot of people will be confused. "It seems like Google Images crawls some SVG files as SVG and then somehow renders them into PNG when serving them in the search results."
A
A
I wasn't aware of how exactly this is happening, so I'm not 100% sure what exactly you're referring to. My understanding is that when it comes to images, especially the vector formats like SVG, which don't always have a well-defined size, what we do internally is convert that into a normal pixel image, so that we can treat it the same way as we treat other kinds of images.
A
That applies to all of the normal processing internally, and also specifically to the thumbnail images that we show, so that we can scale it down using the normal pixel scaling functions and get it to the right size, and get it into an equal resolution to the other thumbnails that we show. So probably that is what's happening there, and it's not something that you can easily change, because we have our systems set up to deal with pixel-based images, and that's what we would do there. With regard to the next step from there:
A
the expanding of the image when you click on it in the image search results, I don't know how that would be handled with regard to SVGs, or whether we do some kind of pixel-based bigger preview or SVG-based bigger previews. I don't quite know how we would handle that there.
A
I think those are kind of separate topics. It's not that you would do SEO to improve the user journey; rather, you have your user journeys that you use to analyze your products and find the best approaches that you can take, and then, based on that, you would also do some SEO to improve things in the search results. So one is about improving things for the user, and the other is about improving things for search engines.
A
"What is the best way to treat syndicated content on my site if the content is already on other sites too? Do I have to noindex my page, or do I canonicalize to the original source? Do I nofollow all the internal links of that page?" Yeah, good question. I don't think we have exact guidelines on syndicated content. Generally, we do recommend using something like a rel canonical to the original source. I know that's not always possible in all cases. So sometimes what can happen...
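A minimal sketch of that recommendation: a cross-domain rel canonical on the syndicated copy pointing at the original article, with placeholder URLs. The noindex alternative mentioned in the question is included as a second option for when a canonical isn't possible:

```html
<!-- In the <head> of the syndicated copy, pointing at the original: -->
<link rel="canonical" href="https://original-publisher.example/article-slug/">

<!-- Alternative when a canonical isn't possible: keep the syndicated
     copy out of the index entirely with a robots meta tag. -->
<meta name="robots" content="noindex">
```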
A
A
So that's something to keep in mind if you're syndicating content. If you're hosting syndicated content on your website, then something similar applies, in the sense that most of the time we would try to show the original source, and just because you have a syndicated version of that content on your site as well doesn't mean we will also show your website in the search results.
A
So usually what I recommend there is to make sure that you have significant, unique, and compelling content of your own on your website. If you're using syndicated content to fill out additional facets of information for your users, then that's perfectly fine. I wouldn't expect to rank for those additional facets or filler content that you have there.
A
It can happen, but it's not something I would count on. Instead, for the SEO side of things, the ranking side of things, I would really make sure that you have significant unique content of your own, so that when our systems look at your website, they don't just see all of this content that everyone else has, but rather see a lot of additional value that you provide that is not on the other sites as well. "When we want to rank for a specific topic on Google,
A
is it good practice to also cover related topics? For example, if we sell laptops and want to rank for that, is it useful to create posts like laptop reviews, or introducing the best new laptops, those kinds of things? And if it's useful, does it have to be done in any special way?"
A
So I think this is always useful, because what you're essentially doing is, on the one hand, for search engines, building out your reputation of knowledge on that specific topic area; and for users as well, it provides a little bit more context on why they should trust you, if they see that you have all of this knowledge on this general topic area, and you show that and present it regularly.
A
Then it makes it a lot easier for them to trust you on something very specific that you're also providing on your website. So that's something where I think that always makes sense, and for search engines as well: if we can recognize that this website is really good for this broader topic area, then, if someone is searching for that broader topic area, we can try to show that website as well.
A
A
"How long should we wait for a Search Console manual action response? It's been months. I avoided resubmitting, because that's not nice, but do these ever get lost? If we don't get any replies, what should we be doing as the next step?" Depending on the type of manual action, it can take quite a bit of time. In particular, I think the link-based manual actions are things that can take quite a bit of time to be reviewed properly, and it can happen in some cases that it takes a few months.
A
So usually what happens is, if you resubmit the reconsideration request, then we will drop the second reconsideration request, because we think it's a duplicate. The team internally will still be able to look at it, and if you have additional information there, that's perfectly fine.
A
If it's essentially just a copy and paste of the same thing, then I don't think that changes anything. It's also not the case that you would have a negative effect from resubmitting a reconsideration request. So in particular, if you're not sure that you actually sent the last one, and you're like: oh, someone on my team sent it, and now I'm not sure if they actually sent it or not, then resubmitting it is perfectly fine. It's not that there will be an additional penalty for resubmitting the reconsideration request.
A
It's just that, when the team sees that one is still pending, they'll focus on that pending one rather than the additional ones.
A
If you don't see any response with regard to manual actions, specifically around the link manual actions, I would recommend maybe also checking in with the help forums, or checking in with other people who have worked on link-based manual actions. Because when it takes so long to be reprocessed like this, it's something where you really want to make sure that you have everything covered really well. So if you're seeing it take a long time, and you're thinking, I don't know if I needed to do more or needed to do something different, then going to the help
A
forums is a really good way to get additional feedback from people. It's very likely that you'll go to the help forums and they'll be like: oh, you should have submitted these 500 other things. It's not the case that you have to act on whatever feedback comes back from the help forum; rather, it's additional input to take in, and you can review that and say:
A
Okay, I will take into account maybe a part of this feedback, and maybe skip another part of this feedback, because the folks in the help forum are very experienced with tons of topics, but they don't have the absolute answers. I don't think anyone really has that. So I think it's great to get all of this feedback, but you still have to judge it and weigh it out yourself, as with anything on the internet.
A
"Does PageSpeed Insights use Googlebot? I wonder, because when I'm looking at the rendered screenshots in PageSpeed Insights, based on our site's behavior, it looks like those weren't rendered by Googlebot." You're probably right. In particular, PageSpeed Insights is something which is based on the Chrome setup.
A
So that's something where, as far as I know, the server-based system that does the PageSpeed Insights screenshots and calculations and metrics, all of that is just purely based on Chrome. Googlebot also uses Chrome to render pages, but there are some unique aspects of Googlebot that don't apply to PageSpeed Insights, for example robots.txt.
A
So when Google renders a page, it has to comply with the robots.txt of all of the embedded content there. If you have maybe a CSS file or a JavaScript file blocked by robots.txt, we wouldn't be able to process that from Googlebot's point of view, but PageSpeed Insights would still be able to retrieve that and show that.
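For example, a robots.txt like the following (a constructed illustration with made-up paths) would stop Googlebot from fetching the CSS and JavaScript while rendering, whereas PageSpeed Insights, which doesn't apply robots.txt, would still load them:

```text
# robots.txt at https://example.com/robots.txt
# Hypothetical asset paths; blocking them prevents Googlebot from
# rendering the page with its styles and scripts.
User-agent: Googlebot
Disallow: /assets/css/
Disallow: /assets/js/
```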
A
So that's probably where you're seeing those differences. I think the difference is getting more and more blurred, because Googlebot does use Chrome as well, so it's very similar, but you can certainly find situations where there are differences, and you can certainly construct situations where there are differences; like I mentioned, robots.txt is a really simple way to see those differences. With regard to the way that we calculate speed for Search, though, we're kind of, I guess, looking forward at the Core Web Vitals at the moment.
A
I don't really know offhand how we do that, but with regard to Core Web Vitals, we use what users actually see. So it's not the case that Googlebot renders a page very quickly and then it gets a good score, or Chrome in PageSpeed Insights renders a page very quickly.
A
Therefore it gets a good score. Rather, we look at what users actually saw, and I think that's really important, because that's a measure of what the real-world performance is. All of these tools that render a page in more of a lab environment, like Googlebot when it renders a page, or PageSpeed Insights when it renders a page, produce something that is almost more of a prediction than an actual measurement.
A
A
I'm sorry, I was going to ask a follow-up. I'm sorry, you continue. Yeah, and it's something that these tools also try to build in, in the sense that they will say: well, I run in a data center, but I will act like I have a 3G phone with a slow connection, and they'll try to emulate that. But it's still very different from an actual user.
E
Go ahead. Yeah, I'm sorry about that. So, in terms of assessing it: you know, I was reading a site that seemed like it had a lot of ads, so I decided, okay, let me see how this is scored on PageSpeed Insights. And it rendered a score, with the circle, of 21, which was in the red, not very good, but below.
below.
E
That was a visual, and below that numerical and visual representation was a sentence that read that, based on field data, the page passed the assessment, in green. And then below that there were these measurement bars for Cumulative Layout Shift, you know, First Input Delay, et cetera, and those were all mostly in the green. So where's the disconnect, and what should one be paying attention to: that first visual circle, or the fact that it says it passed the Core Web Vitals assessment?
A
I'd need to keep in mind how the PageSpeed Insights page looked. I think it has...
A
Easier, so I think what happens in PageSpeed Insights is we take the various metrics there and we try to calculate one single number out of that, and sometimes that's useful to work on, or to give you a rough overview of what the overall score would be, but it all depends on how strongly you weigh the individual factors.
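The single number John describes, one score rolled up from several weighted metrics, can be sketched like this. The metric names, per-metric scores, and weights below are illustrative assumptions, not the actual weighting PageSpeed Insights or Lighthouse uses:

```python
# Sketch of rolling several lab metrics up into one overall score.
# All numbers here are hypothetical; the real PSI/Lighthouse weights
# differ and change between versions.

def overall_score(metric_scores, weights):
    """Weighted average of per-metric scores (each on a 0-100 scale)."""
    total_weight = sum(weights.values())
    return sum(metric_scores[name] * weights[name] for name in weights) / total_weight

# Hypothetical per-metric scores for one page:
scores = {"fcp": 90, "lcp": 40, "tbt": 10, "cls": 95, "speed_index": 70}
# Hypothetical weights; putting more weight on TBT drags the overall score down:
weights = {"fcp": 0.10, "lcp": 0.25, "tbt": 0.30, "cls": 0.15, "speed_index": 0.20}

print(overall_score(scores, weights))  # roughly 50, despite several green metrics
```

Shifting the weights changes the headline number even though the underlying measurements stay the same, which is one reason the single circle and the individual field-data bars can disagree.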
A
So I think the overall score is a really good way to get a rough estimate, and the actual field data is a really good way to see what people actually see, and usually what I recommend is using those as a basis to determine, like...
F
Sure. With regards to core web vitals, the field data is going to be the one to pay attention to, correct, in terms of ranking signals? Or is it going to be... Yes? Yes, it's the...
G
While we are in this core web vitals topic, I have a small question in this regard. When this becomes a ranking signal, CLS and all the other friends, is it going to be page level or domain level?
A
Good question. So essentially what happens with the field data is we don't have data points for every page, so for the most part we need to have kind of groupings of individual pages, and depending on the amount of data that we have, that can be a grouping of the whole website, kind of the domain. Or, I think, in the chrome user experience report, they use the origin, which would be the subdomain and the protocol there.
A
So that would be kind of the overarching grouping, and if we have more data for individual parts of a website, then we'll try to use that. And I believe that's something you also see in search console, where we'll show one url and say there are so many other pages that are associated with it, and that's kind of the grouping that we would use there.
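Since the Chrome UX Report groups field data by origin (protocol plus hostname, including the subdomain), reducing a set of URLs to that grouping key is straightforward. A minimal stdlib sketch; the example URLs are made up:

```python
from urllib.parse import urlsplit

def crux_origin(url):
    """Reduce a URL to its origin (scheme + host), the kind of grouping
    the Chrome UX Report uses. Path and query string are dropped."""
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}"

# Hypothetical URLs: the first two share one origin; the blog subdomain is a separate one.
print(crux_origin("https://www.example.com/products/widget?id=1"))  # https://www.example.com
print(crux_origin("https://www.example.com/about"))                 # https://www.example.com
print(crux_origin("https://blog.example.com/post/123"))             # https://blog.example.com
```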
G
Just why I asked this: we have this set of pages that are slow. They exist for a different purpose than our other pages on the site, and we have a noindex on them, but they are very slow, and that's why we don't want them to be accounted for.
A
Yeah, I don't think... or, I don't know for sure how we would do things with the noindex there, but it's not something you can easily determine ahead of time, like: will we see this as one website, or will we see it as different groupings there?
A
Sometimes with the chrome user experience report data you can see: does google have data points for those noindex pages? Does google have data points for the other pages there? And then you can kind of figure out, okay, it can recognize that there are separate kinds of pages and can treat them individually, and if that's the case, then I don't see a problem with that.
A
If it's a smaller website where we just don't have a lot of signals for the website, then those noindex pages could be playing a role there as well. So my understanding, and I'm not 100% sure, but my understanding is that in the chrome user experience report data we do include all kinds of pages that users access. So there's no specific check of "will this page be indexed like this or not" that happens there, because indexability is sometimes quite complex with regards to canonicals and all of that, so it's not trivial to determine on the chrome side if this page will be indexed or not. It might be the case that if a page has a clear noindex, then even in chrome we would be able to recognize that, but I'm not 100% sure if we actually do that.
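Whether a page carries a clear noindex can at least be checked from its HTML alone, separate from the harder canonical questions John mentions. A minimal sketch using only the standard library; note that real indexability also depends on things this ignores, such as the X-Robots-Tag HTTP header:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect the directives from <meta name="robots" content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attr = dict(attrs)
        if tag == "meta" and attr.get("name", "").lower() == "robots":
            self.directives += [d.strip().lower() for d in attr.get("content", "").split(",")]

def has_noindex(html):
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" in parser.directives

# Hypothetical pages:
print(has_noindex('<head><meta name="robots" content="noindex, follow"></head>'))  # True
print(has_noindex('<head><title>indexable page</title></head>'))                   # False
```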
A
Yeah, I would also check the chrome user experience report data. I think you can download that into bigquery, and you can play with that a little bit and figure out how that looks for other sites, for similar sites that kind of fall into the same category as the site that you're working on.
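The same field data is also exposed through the Chrome UX Report API and public BigQuery tables. As a sketch, here is how you might pull the 75th-percentile values out of a record shaped like the API's queryRecord response; the sample record below is hand-made illustrative data, not real measurements, and the exact response shape should be checked against the current API documentation:

```python
# Hand-made record mimicking the general shape of a Chrome UX Report
# API (queryRecord) response; the numbers are invented for illustration.
sample_record = {
    "record": {
        "key": {"origin": "https://www.example.com"},
        "metrics": {
            "largest_contentful_paint": {"percentiles": {"p75": 2300}},
            "first_input_delay": {"percentiles": {"p75": 12}},
            # CLS tends to be reported as a string in the real API:
            "cumulative_layout_shift": {"percentiles": {"p75": "0.08"}},
        },
    }
}

def p75_metrics(record):
    """Extract each metric's 75th-percentile value as a float."""
    metrics = record["record"]["metrics"]
    return {name: float(data["percentiles"]["p75"]) for name, data in metrics.items()}

print(p75_metrics(sample_record))
```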
H
Hi. I suddenly see, well, it started all in the middle of january: I suddenly saw in search console that there are a lot of old urls popping up, especially in the 404 subcategory under excluded, and in the url inspection tool. These old urls are, for example, old http versions of urls, and even old domains, because the websites were moved to a new domain like three years ago.
H
So my question is: why is that? Should I be worried, and if yes, how can I fix it?
A
Okay, I think if they're just shown as 404s, I would completely ignore that. What happens in our systems is that pages which are 404 are essentially still tracked on our side, and from time to time we will double-check to see that they still return a 404. That can happen even when a site has changed significantly and hasn't had these pages for years now; still, from time to time, our systems say, well, we will double-check those old urls and see if they still return 404, and that's not a sign that anything is wrong with those pages.
H
Yes. For example, I used the url inspection tool on a url that's still present, and then in the url inspection tool you see where google knows this page from, and there it says, for example, that it noted it from the sitemap, and then there are like four urls listed below that. And that list contains, for example, an old http version; it contains the same file name, but from the old url of the website, which are all urls that don't exist anymore.
A
That's completely normal, yeah. That's something where I'm not 100% sure which data we show there in search console, but we have a concept of kind of the first-seen location of the link to a specific page, and we might have seen that url from that page at some point way in the past, and if that page doesn't exist anymore, it's still, like, this is where we first saw it.
H
Okay, so basically: just make sure that if the original page doesn't exist anymore, it returns a proper 404; if it's redirected, then make sure it's a proper redirect; and in other cases, just ignore it.
A
Exactly, yeah. So usually, if you have an older website, then over the years you will collect more and more of these 404 pages, and even though our systems only rarely check a 404 page, the amount of urls that could be returning 404 just grows. So if you look at...
H
What... yeah, yeah, sorry! This website is like 10 years old, so yeah. And a last question on that, because the websites I'm talking about were also moved to new domains: we used the address change tool, and so basically we just make sure that the old domain still redirects to the new website. This would be the perfectly good setup, and we shouldn't worry about anything
A
Further... yeah, that sounds great. The one place where people also get confused with that, which is kind of similar, I guess, with old urls, is that when we recognize that pages have moved, we still have some association with the old location.
I
Sure, so I have a related question. For example, if you set the proper 301 redirects, I was just trying to understand the relation of backlinks with those. You know, if google has the history of the old links, is it possible that it passes some sort of pagerank to the new urls that we set 301 redirects to? For example, we have a site, you know, with backlinks, and we decided to change the url, with the proper 301s, and, you know, the backlinks are still there.
A
Yes. So essentially what happens there is: we will have kind of the old url on your website that has some signals from the links that go to the old url, and we have the new url on your website, and with the redirect you're basically telling us these are equivalent, and you probably prefer the new url to be shown.
A
So that's something to keep in mind: if you want to move everything to a new url, then make sure that everything is aligned with the new url. So the redirect, the sitemap files, the internal linking as much as possible, also the external linking, so that everything just fits together with that new url that you want.
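The checklist from this exchange (pages that are gone should return 404, moved pages should return a proper redirect to the new URL) can be scripted. A sketch using only the standard library; `check` needs network access and the URLs you would pass it are your own, so only the offline status classification is shown running here:

```python
import urllib.request
import urllib.error

def classify(status, location=None, expected_target=None):
    """Apply the advice from the discussion: gone pages should return
    404/410, moved pages should 301/308 to the intended new URL."""
    if status in (404, 410):
        return "gone"  # fine to ignore in Search Console
    if status in (301, 308):
        if expected_target is not None and location != expected_target:
            return "wrong-target-redirect"
        return "redirect-ok"
    return "investigate"

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    # Returning None makes the opener surface the 3xx instead of following it.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def check(url, expected_target=None):
    """Fetch `url` without following redirects and classify the result."""
    opener = urllib.request.build_opener(_NoRedirect())
    try:
        resp = opener.open(urllib.request.Request(url, method="HEAD"))
        return classify(resp.status)
    except urllib.error.HTTPError as err:
        return classify(err.code, err.headers.get("Location"), expected_target)

# The classification on its own, with made-up statuses and targets:
print(classify(404))                                                            # gone
print(classify(301, "https://new.example.com/", "https://new.example.com/"))    # redirect-ok
print(classify(301, "https://new.example.com/", "https://other.example.com/"))  # wrong-target-redirect
```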
I
Yeah, thanks for this question. I have another question. You know, most of the tools, the SEO checkup tools, they pop up with a warning which says "low text to html ratio", which means the code is, you know, more than the original text. Would it be something we need to worry about, or is it okay for google to pick the right
A
Text? We don't have a notion of text-to-html ratio for search. So that's something where, I think, a lot of these tools are able to calculate this, and they think, oh, it's worthwhile showing. But it's not an SEO ranking factor kind of thing. There are two places where it could play a role; on the one hand, with regards to speed.
A
We have limits with regards to what the maximum page size is that we would download for an html page, and I think that's on the order of, I don't know, hundreds of megabytes, something like that. So if you have an html page that has hundreds of megabytes of html and very little text in it, then yes, that could be playing a role. But that's something that I suspect is extremely rare, and if you have that problem, then that's like a bigger problem than just, like...
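The ratio these checkup tools report is simple to compute yourself: bytes of visible text over total bytes of HTML. A stdlib sketch with a made-up page; as John says, the number itself carries no ranking meaning and only matters in the extreme case of enormous markup with almost no text:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect text content, skipping <script> and <style> bodies."""
    def __init__(self):
        super().__init__()
        self.skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if self.skip_depth == 0:
            self.chunks.append(data)

def text_to_html_ratio(html):
    """Visible text length divided by total markup length."""
    parser = TextExtractor()
    parser.feed(html)
    text = "".join(parser.chunks).strip()
    return len(text) / len(html) if html else 0.0

# Made-up page: mostly markup and inline CSS, very little text.
page = "<html><head><style>body{margin:0}</style></head><body><p>Hello world</p></body></html>"
print(round(text_to_html_ratio(page), 2))  # 0.13
```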
A
Got that, thank you. Sure. Let me just pause the recording here. You're welcome to stick around a little bit longer if you like, but it's always good to kind of keep the recording limited to avoid it becoming super long.
A
Thank you all for joining, thanks for all of the questions that were submitted and that were asked along the way. I'll set up the next office hours probably later today, which will also be next friday, but in the evening, european time, more for the american folks, so michael doesn't have to get up in the middle of the night. I don't know how he does it, but thank you. Cool, all right. Let me just pause here, and we can continue after that.