From YouTube: English Google SEO office-hours from February 19, 2021
Description
This is a recording of the Google SEO office-hours hangout from February 19, 2021. These sessions are open to anything webmaster-related, like crawling, indexing, mobile sites, internationalization, duplicate content, sitemaps, Search Console, pagination, multilingual/multi-regional sites, etc.
Find out more at https://developers.google.com/search/events/join-office-hours
Feel free to join us - we welcome webmasters of all levels!
A
All right, welcome everyone to today's Google Search Central SEO office hours. My name is John Mueller. I'm a Search Advocate on the Search Relations team at Google, and part of what we do are these office-hour hangouts, where people can join in and ask their questions around Google Search, and hopefully we can find some answers along the way. Lots of stuff was submitted already on YouTube, so we can go through some of that, but, as always, if any of you want to get started with the first question, you're welcome to jump in.
B
Hey John, yeah, can I start? (Yeah, sure, sure.) Okay, so you might have answered this question in one of the previous meetings, but somehow I missed it. This is related to Core Web Vitals. I just wanted to ask whether this web vitals thing is going to impact a website on the domain level or the page level. For example, there is a website with 20 pages with a good vitals score, and then there are another 20 pages with an average-to-poor score. So can these poor pages impact rankings for the good pages also?
A
Good question. The Chrome User Experience Report data is available on different levels. So that's something where, depending on the website and the amount of data that we have available to work with, we might be able to be more fine-grained, or we might just have one score that we need to use across the whole site. You can see some of that in Search Console directly, where you see how many pages are grouped together in this one data point that you have.
B
Okay, so does this domain include subdomains also, or is a subdomain a separate thing?
A
As far as I recall, the CrUX data is kept on an origin level. For Chrome, the origin is the host name, which would be on a subdomain and protocol level. So if you have different subdomains, those would be treated separately.
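For reference, the origin-level data described here can be queried directly through the Chrome UX Report API; a minimal sketch of such a request, assuming a placeholder API key and origin:

    # Origin-level query: one record covering the whole host name.
    curl -X POST \
      'https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=YOUR_API_KEY' \
      -H 'Content-Type: application/json' \
      -d '{"origin": "https://example.com"}'
    # Using {"url": "https://example.com/some-page"} instead returns
    # page-level data, where enough traffic exists for a separate record.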
C
Yeah, this might be a silly question, and maybe one that you've covered beforehand, but I work with a company that has a very large footprint on the internet. We have over 100 million pages that we're trying to have indexed, and obviously, you know, that's a struggle. In Google Search Console we have submitted a sitemap index with over a thousand different sitemaps contained therein. Currently Google Search Console is saying that we have submitted and indexed three million pages.
C
My question is: we have some issues with Google crawling our site and returning server errors, like 500 errors and some soft 404s. Would that contribute to Google stopping crawling or not recognizing our sitemaps, because it's getting stalled up somewhere? I guess I'm just a little bit confused as to where to go from here, as far as next steps.
A
Now, I'm not 100% sure, but I suspect that in Search Console maybe you're just seeing a part of the picture, because of the way that the sitemap files within the sitemap index files are treated there. So that's something where maybe you're just seeing one of those sitemap files in Search Console, but actually, if you added the sitemap files individually, you could probably see data for the other ones. That's more of a reporting thing rather than an indexing thing.
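For context, a sitemap index file of the kind described simply lists the child sitemap files; a minimal sketch with placeholder URLs:

    <?xml version="1.0" encoding="UTF-8"?>
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- Each entry points to one child sitemap; submitting these
           individually in Search Console surfaces per-file data. -->
      <sitemap>
        <loc>https://example.com/sitemap-000.xml</loc>
      </sitemap>
      <sitemap>
        <loc>https://example.com/sitemap-001.xml</loc>
      </sitemap>
    </sitemapindex>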
A
So it's more that you just don't see the details there. With regards to crawling in general, with a really large website, and it sounds like that's the case there, it is kind of helpful for us to be able to crawl as much as possible, and that kind of flows into the general topic of crawl budget.
A
So there was a blog post, I think now maybe three years back, from Gary about crawl budget, going into the different aspects that we take into account there. On the one hand we have the crawl demand from our systems, where we think, oh, this is a fantastic site, we need to crawl as much as possible from it. And with a really large website, sometimes we have mixed signals there. Sometimes it's like...
A
Oh, maybe we're crawling too hard and causing these problems, and we want to leave capacity for users too, so we'll back off a little bit. And it's similar with server response time. The rendering side, kind of the speed aspect that most people focus on, is usually less critical when it comes to crawling, but the response time, like how quickly does a server respond to requests, that's something that, on a technical level, slows us down from crawling. (Got it, okay.)
A
All right, Michael, I think you have your hand raised too. Yeah.
D
So I often notice that an established outlet will republish a wire service's articles, such as Reuters, and then leapfrog the original wire service in search, whether it's Reuters or Associated Press. All those services aren't spammy; they're very trustworthy. What factors would contribute to that happening?
A
So that's something where I don't think there's anything technical or anything specific that is happening there, where it's like, well, if it gets republished here, then we just take that one. But any time you have content that is syndicated, it can happen that our systems don't recognize that we should be showing this version instead of the other version.
A
I think to some extent we probably try to recognize press releases and understand that these are pieces of content that are just republished in lots of places, and to try to act accordingly for that. But otherwise it's just content. It's kind of like if I write a blog post or a news article; it's essentially a piece of content for us right now.
E
The nature of press releases is that you do want them to be somewhere else. I mean, the whole point of releasing a press release is that it goes out into the world where people actually read it. No one goes to Reuters to read the news. If there's a new iPhone released, for example, which is always the most popular example, you go to all of the bloggers and tech reviewers and Apple itself to read that.
A
I know from, I don't know, some book about Google a long time back that in the early days of Google News they definitely tried to recognize a situation where people were writing about the same topic or writing with the same content, and trying to understand, like, does that make this topic or this article more important than other topics? But within web search, it's really mostly a matter of: these are different HTML pages, essentially, and we find content there and we try to index it.
F
John, quick question, just jumping in, first to say hello, long time no see. While you're talking syndication: how might geolocation come into that? If people are syndicating, let's say, an English article across different locales, Australia, the US, Great Britain, could there be an impact there with geolocation also on that?
G
Yes, hello. So we have a bit of a conundrum here. I just posted a question in the comments, but I don't think you have responded yet; I joined seven minutes late, sorry. So we have a big site with a global footprint and a lot of different English-variant language sites, which are located in different subdirectories on our domain, and we are using self-referencing canonicals and the correct hreflang tags for each of the countries. So we are specifying that the Australian site, for example, is using Australian English.
G
The fact is that the sites are virtually copies of each other. We are using one English-language master site and then allowing each country to localize the content as they see fit, but in most cases they don't do that, because we have, for example, product pages which are identical in all countries.
G
So we can see in Google Search Console that there are a lot of exceptions due to Google choosing another canonical than the one that we have specified. So initially we thought...
G
...this would be a huge problem, because we would not have any visibility, any local visibility. But I have done some tests where I tried to use an Australian IP to google on google.com.au, and the same for the American site, and I still get the US site for pages which shouldn't be indexed. So the pages are marked as indexed according to Search Console, but it is still the US URL in Google Search, which is a bit strange.
G
So my question is basically twofold. One: is there any way we can get around this, so we could have our pages indexed without Google Search Console complaining too much, even though the sites are technically duplicates? And if we can't do that: how is Google evaluating the ranking? Because some domains will have more links. For example, the number of external links to the American site is much bigger than to the Philippine site, and so the American site would have much more links to it.
G
My theory is that the Philippine version would have a lower Google rank, even though it's the one that's the template for our rank on the search results page, even though it displays the American page. Yeah.
A
I think this is a super complicated topic, and we make it extra complicated, I think, in Search Console, unfortunately, so it makes it, I don't know, super confusing. So basically, what is happening is, we're recognizing that... well, I don't know your site, so I don't know 100% for sure if this is actually the case, but usually this is what happens.
A
We recognize that the content is the same, so you have the same English-language content, maybe for, I don't know, Australia and New Zealand or something like that, and our systems will then understand that these are duplicates, put them into one cluster, and then we will pick one canonical URL to use for indexing. For that, we do pick up the hreflang annotations on these pages, so we understand the connections between the URLs, but we still index them as one canonical URL. And in the search results, when you search...
A
If we understand the hreflang links properly, then we will show the appropriate local URL in the search results, but we will base it on the canonical indexed version. So in the snippet you might see signs that, oh, this is that canonical version, but the URL should be the right one, and the ranking should be based on whatever is relevant for that local URL.
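For reference, the hreflang annotations discussed here are link elements in each page's head (or equivalent entries in a sitemap); a minimal sketch with hypothetical URLs for the country versions mentioned:

    <!-- Every variant lists all alternates, including itself. -->
    <link rel="alternate" hreflang="en-US" href="https://example.com/us/page" />
    <link rel="alternate" hreflang="en-PH" href="https://example.com/ph/page" />
    <link rel="alternate" hreflang="en-AU" href="https://example.com/au/page" />
    <!-- Fallback for locales not listed above. -->
    <link rel="alternate" hreflang="x-default" href="https://example.com/page" />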
A
So it's not the case that the links get lost. Like, if we had the Philippine version as canonical and all the links go to the US version, then when we put them into a cluster for canonicalization, essentially whichever version is shown gets those links. So it's not that those get lost, but the tricky part is that in Search Console we report on the canonical URLs for the most part.
A
So in both the index coverage report and in the performance report, we will simplify things for you and show the canonical URL. Take, for example, the situation where we have the Philippine and the US site, and we picked the Philippine version as canonical, for whatever reason. In the US you would probably still see the US URL, if we understand the hreflang properly, but in Search Console we would report that as being the Philippine URL.
A
So the performance report will say, like, oh, the Philippine URL is very popular, and you look at the details in Search Console and it says it's popular in all of these countries, where it's kind of the same content that's being shown, just with the local country versions. And similarly, in the index coverage report we'll focus on the canonical URL. So if you have, say, a separate subdirectory for the US, then it would look like...
A
...maybe the US version is not getting indexed at all, but the Philippine version is getting indexed, and that can be very confusing. But essentially, from a ranking point of view, that should be working out. So if we understand the hreflang, we'll show the right version and it'll rank accordingly in search. It's just that everything with Search Console, with the tracking there, is super complicated.
A
We see this a lot more in Europe with German-language content, where the German-language countries are right next to each other, and in Switzerland we have different currencies and can't just import things from Europe, all of those weird extra situations, and still we kind of have that canonical issue there as well. So it's not just specific to English-language sites.
A
I don't see this changing in Search Console in the near future, so that's something where you have to keep in mind and understand that, for your content, one version is being chosen as canonical, and the reports that you see in Search Console you kind of have to pick apart and try to understand on your own.
A
To avoid that, the only thing you can really do is make sure that these pages are not recognized as duplicates, which would mean making sure that they're really significantly localized. If it's an e-commerce site and you have the same product across all of these different country versions, that's probably not that feasible.
G
Okay, thank you very much. I think our worst worry was that our ranking was negatively impacted. I did find that one of my UK colleagues sent me a couple of URLs for changing, and I think he googled them, because he did not send the UK versions, he sent me the Philippine versions. That's why I used that as an example. Is there any way to help Google even more, beyond the hreflang tag, to understand that these are really localized pages?
A
Not really, not really. I think, also with hreflang, especially if you have a lot of different country versions, it is something where sometimes we pick it up properly and sometimes we struggle with it. So that's something where anyone who's, I don't know, worked on bigger international sites will see that lots of hreflang gets picked up and used, and then for some parts Google just gets confused and shows the wrong version.
A
Yeah, I mean, one thing that I usually recommend is to assume that users might be going to the wrong version of your site, and to try to catch them on your site as well. So show something like a banner on top that's like, hey, it looks like you're from the UK, do you want the UK version? And make it so that they can get there fairly quickly.
G
Okay, yeah, thank you. We've thought of that, but we wanted to know, first and foremost, whether our ranking was negatively impacted before we did that type of action. Okay, thank you very much.
A
Sure. Let me run through some of the submitted questions, and I'll get back to everyone who's raising hands as well later on, don't worry. Let's see. I'm working on the biggest mom-advice site in our country, which was demoted with a core update. They have great brand search and a community of loyal, returning users. So my approach is going through the content that is deemed the most relevant in Google for queries from Search Console, and making sure that when users come, the content is of the highest quality,
A
...solves the need behind the query, and is trustworthy. Is that the right approach? I think in general that's a great approach, especially when you have a larger website and you can't do everything at the same time. Focusing on the parts that are seen as the most relevant for your website, that are kind of the most important gateways to your content, I think that always makes sense. So that seems like a good approach.
A
There are various people within the SEO community who have focused on the whole topic of E-A-T: expertise, authority, trustworthiness. They've written lots of really interesting blog posts and case studies, so I would also go through a lot of that, just to make sure that you're not missing anything obvious that might be relevant for your site as well. Core Web Vitals and subdomains: I think we talked about that briefly.
A
The answer is: it depends, unfortunately. It's something where sometimes we can pick up content very quickly after it was created, crawl it very quickly, and index it very quickly.
A
Sometimes all of that takes a lot longer, and Discover in particular is yet another level on top of that, because for Discover we want to make sure that we're actually recommending something that is really appropriate for users, because users are not searching for something specifically. So we have to be almost extra careful with regards to the content that we show in Discover, so there in particular it can happen that it takes a little bit longer for it to start showing up in Discover.
A
What would be the difference between the new passage indexing, or passage ranking, algorithm and the current featured snippets? From my understanding, the UI part of this is completely separate from the ranking aspect there. When it comes to passage ranking, it's more a matter of us going into the content and recognizing that this page is relevant for something that is kind of hidden away at the bottom of the page, and then how we display it in the search...
A
...results is a completely different question. So it's not that the featured snippet, or a longer snippet, or anything like that, would be tied to the way that we do ranking for those pages. What's the best practice for guest posting, and is it recommended by Google? Oh my gosh. Okay, so from our point of view, it's fine to go off and promote your content on other people's sites.
A
So if you're creating content and you're publishing it on other sites to draw awareness to your website, to what you're working on, that's perfectly fine. It's just that the links within these guest posts should be nofollow, because essentially you're placing those links on other people's sites.
A
So, from our point of view, you're putting these links on other people's sites, so those links should not be passing any PageRank, and that's essentially the main aspect there. Going off and creating something to draw awareness to what you're working on, that's perfectly fine; just make sure that the links don't pass any PageRank.
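For reference, keeping a guest-post link from passing PageRank is done with the rel attribute on the link itself; a minimal sketch with a placeholder URL (rel="sponsored" is the related form for paid placements):

    <!-- A guest-post link that should not pass PageRank. -->
    <a href="https://example.com/" rel="nofollow">my site</a>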
A
No, not necessarily. So PBNs are private blog networks, which is a technique that, I don't know, some people, usually spammers, use to try to create a network of sites that looks natural, that all tend to link to something that you care about, with the hope that when Google runs across this it thinks, oh, so many people are recommending this page, we should show it higher in the search results. And obviously, if you're creating these sites and you're just putting links there...
A
...we would see that as web spam, or at least as links that are unnatural, and try to discount those. So from our point of view, links from subdomains are perfectly fine. It's not that we would see subdomains as a private blog network; sometimes sites use subdomains, and www is a subdomain, technically.
A
PBN backlinks are essentially something completely separate, where if we understand that this is a part of a private blog network, then we will just discount that by default, and it doesn't matter if it's on a subdomain or on a main domain or anything like that. So those are kind of completely separate things. Just to make it clear: I would not recommend using private blog networks to promote your website, because that's essentially against our webmaster guidelines.
A
What matters the most: the number of unique referring backlink domains, or the total number of backlinks? I don't think we differentiate like that in our systems. So, from my point of view, I would tend not to focus on the total number of links to your site, or the total number of domains linking to your website, because we look at links in a very different way. We try to understand what is relevant for a website and how...
A
...how much we should weigh these individual links, and the total number doesn't matter at all. You could go off and create millions of links across millions of websites if you wanted to, and we could just ignore them all. Or there could be one really good link from one website out there that is, for us, a really important sign that we should treat this website as something that is relevant, because it has that one link, I don't know, maybe from a big news site's home page.
A
For example. So the total number essentially is completely irrelevant. Do you ever consider content published across various websites, sourced from the same press release, as duplicate content across those sites? We talked a bit about that in the beginning. Technically, if you're duplicating content, then that is duplicated content. We don't penalize duplicate content in our algorithms; it's just that we would try to pick one version and show that appropriately. The other things that we talked about with syndicated content in the beginning apply here too...
A
...if you want to watch that on the recording. My home page outranks my internal pages; how can I fix it? It's not necessarily something that you need to fix. This is something that just happens. Sometimes internal pages rank higher than the home page, and people complain about that, but essentially, for individual queries...
A
...we try to recognize what is the most relevant result from within your website, or across the web, and we try to show that appropriately in the search results. Sometimes that's your home page, sometimes that's a product page, it might be a category page, it might be a blog post on your site about a specific product. From our point of view, that's not something that we would consider wrong.
A
The thing that you can do a little bit, though, within your website, is to help us understand which parts of your website you consider to be the most important, and the best way to do that is with internal linking. The clearer you can make it to us that this is something that is really important within your website, by showing it to users more frequently, having visible links to that content within your website, the clearer we can understand that this is probably something that you care about and that you want us to treat with a little bit more weight.
A
We have a website, and we found many spammy sites pointing to it in the links section. Is it worth disavowing those domains? We don't have any manual actions in Search Console, and we think those bad sites are dragging down our traffic. So, in general, in cases like this, you don't need to disavow those links.
A
Our systems are really good at understanding which links are relevant and which links we can completely ignore, and there are lots of levels in between. But essentially, random spammy links pointing to a website is super common. It's something that happens to pretty much every website out there, and our systems have, I don't know, tens of years of experience with dealing with that and ignoring a lot of that.
A
So I would not worry about this. If you're seriously worried that these might not be getting picked up properly by Google's systems, and you really, really want to make sure that they don't cause any problems, then go ahead and drop them into the disavow file. The disavow file is something we see more as a technical thing. It's something we look at and say, oh, you don't want these links, then fine, we will take them out. It's not something that we would count against your website. It's not something where we would say...
A
...oh, this person must have been a spammer, because they're trying to disavow all of these links. It's really just a clean signal for us that you don't want those links counted, and so we won't count those. So if you're really, really worried about these links and you're losing sleep, wondering whether it's affecting your rankings, then just drop those domains in the disavow file and move on; then you can be really sure that our systems are ignoring them.
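For reference, the disavow file mentioned here is a plain text file uploaded through Search Console's disavow links tool; a minimal sketch with placeholder domains:

    # Lines starting with # are comments.
    # Ignore all links from an entire domain:
    domain:spammy-example.com
    domain:another-spammy-example.net
    # Or ignore links from a single page:
    https://spammy-example.org/some-page.html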
A
During the December update we had a ranking drop on our game site. This is very surprising, as in my opinion we have one of the best and most complete experiences for this type of site. While other sites offer mainly questions for the game, we do that as well as showing votes for questions, and we even have country-specific data.
A
I don't know, like, the last part: was it a mistake? I tend to assume that these kinds of changes are not mistakes, but they can happen.
A
So if you're really super sure that you've checked everything, and you think you've gone through everything with regards to your website, then what I would recommend doing first is posting in the webmaster help forum and getting some input from other people there. The folks in the forums can also escalate that to Googlers as well.
A
So if there's something really, really weird happening, then it's theoretically possible that there's a mistake on our side. I think it's super, super rare. It's extremely rare that I run across something where, when I send it off to the engineering team, they're like, oh, something weird is happening that we have to fix. But it's theoretically possible.
A
Now, it might be that there are some crawling issues if you have a really large website, with regards to speed, but it sounds like your site has been around for a longer time, so probably we have indexed the appropriate content for your site already. So that's less of an issue. My assumption would be more in the direction of the content and the way you're providing that content for users.
A
I took a look at a handful of pages from your website, and I could see how our algorithms might be a little bit confused with regards to how we should be showing this site in search, in particular because on a lot of these pages there isn't a lot of content there.
A
So that's something where maybe it's worthwhile to rethink a little bit how you want to be found in search, and how you can put your best foot forward when it comes to being found in search. One thing might be to think about whether you actually need all of these individual pages indexed, or whether you should have something more about the content overall as an indexed thing. So maybe focus less on individual pages, which would just have one or two questions on them, and focus more on...
A
Sometimes we can pick up images through structured data and we can show those in the search results, but essentially the algorithms that we have for showing the thumbnails within the individual search result entries are, I don't know, broader algorithms, in terms of us recognizing that there are multiple images on a page and making the assumption that maybe these multiple images on a page are particularly useful or relevant for users, to understand why your pages are relevant for the user in the search results.
A
So that's kind of the direction I would take there. I think there are two or three types of structured data that are slightly different in that regard, in that you can provide a carousel of images or a carousel of pieces of content.
A
I think something like the how-to structured data falls into that category, and maybe something around recipes, not quite sure, but all of that should be documented in our developer's guide, and you can look at that to see whether there's something specific that you could be doing.
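For context, the carousel-style structured data mentioned here is typically expressed as a schema.org ItemList in JSON-LD; a minimal sketch with placeholder URLs:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "ItemList",
      "itemListElement": [
        { "@type": "ListItem", "position": 1,
          "url": "https://example.com/recipes/first" },
        { "@type": "ListItem", "position": 2,
          "url": "https://example.com/recipes/second" }
      ]
    }
    </script>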
A
My recommendation would be to look in the search results at what your site is focusing on, and then, based on that, try to work backwards: is there a specific type of structured data that these people are using, or is this just Google's algorithms trying to be helpful and just showing more images? Cool. Okay, I think maybe we'll just switch back a little bit to questions from you all, since there seem to be more and more hands being raised. Not quite sure where we were. Rob, were you next?
E
I just want to ask about 404 errors in Search Console, or 400 errors in general. Are they still a thing? Because I only see 500s and no 400s, despite having lots of 404s. Are they still reported, or have they moved to some other section?
A
Good question, I don't know offhand. Yeah, we used to have the crawl errors report, but...
A
I think, yeah, I don't know. These changes in Search Console, it's sometimes hard to keep up. That's why we have Daniel on the team working directly with Search Console, and I'm like, all right.
E
First, I have a question about these pages that don't have actual content on them, and where it doesn't make sense to directly put content on them, like online tools, online calculators, an online translator; they don't actually have content. How do we optimize those pages for Google?
A
Yeah, I mean, from our point of view, we don't have a kind of system that uses online tools and tries to see what they're useful for. So if you have a calculator on your site, our systems will go and just see, oh, the numbers zero through nine and some symbols are on this website; we don't necessarily know that it's a calculator. So anything that you can do to make it easier for us to understand what this page is for, that helps us.
A
If you have a blog where you can write about these tools, that's obviously also useful, because then we understand the context of those tools a little bit better. So essentially, anything that you can do to bring more textual information to those tools helps us, in particular if these are tools where you have some functionality that we just don't see, because Googlebot doesn't interact with the tools. So if you have something like a map, and you can click on different locations and see opening hours for different...
A
...I don't know, businesses that you're working with, Googlebot is not going to click on the map and see what happens; rather, we need to be able to find that information in some textual format in another way. So having more general information on the page, in a case like that, would help, like, these are the opening hours of our different branches, or even listing that information directly on the page itself. That also helps us.
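For context, opening hours are one example of information that can be stated in a textual, machine-readable form rather than only behind an interactive map; a minimal schema.org LocalBusiness sketch in JSON-LD, with placeholder values:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "LocalBusiness",
      "name": "Example Branch",
      "address": "123 Example Street, Example City",
      "openingHours": ["Mo-Fr 09:00-17:00", "Sa 10:00-14:00"]
    }
    </script>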
I
Thank you. And the second question is: we have a big news website, and in the December update we lost like 50 percent of the traffic, and mostly the Discover part; it completely, suddenly went away, and we couldn't find out the issue. Can you please explain more about that?
A
Yeah, so with the core updates it's not so much that we're flagging problems within a website and saying we think this website is bad or has a problem, but more that we're trying to understand the relevance of these pages a little bit better, and the relevance is something that is tied to so many different factors. That's essentially what we try to pick up with the core updates, and the information from the core updates is also used in other parts of search, in particular in Discover as well. Because of that, it can also happen that maybe a change is kind of subtle in search in general but has a stronger effect in Discover, just because we're a little bit more critical on the Discover side.
A
With regards to improving the situation there, we have a blog post about core updates that has a little bit more background information on the kinds of things we look for when we try to understand relevance. I would definitely go through that. It also has a little bit of information on how things work with regards to recovering after making changes within your website, because from our point of view, it's not the case that a website is doing something wrong that it has to fix, when it comes to core updates.
A
But obviously you want to have similar traffic as before, so improve your website overall. That will help us to better understand that the relevance is kind of better than before for your website. So, unfortunately, there's no one simple thing to focus on there; there are lots of small things that kind of add up, and a lot of that is listed in the blog post as well.
H
Actually, I have a question about one of our clients we are working for. I apologize for my English, it is very weak, so please don't mind, yeah. So the issue is that a few of the competitors have raised some kind of reviews on other websites. So the problem that we are facing right now is that our ORM is not maintained as it was before.
H
So what we did is, we asked the websites that were showing up the reviews in Google, but ultimately the result is that they are asking for two thousand dollars to remove those reviews. So I'm not sure what exactly we should do, yeah.
A
I don't know what the best approach is there. I mean, that's something we sometimes hear about as well. I think the most straightforward approach is to find some kind of a solution there with regards to having that content removed. If that's something where you think, for legal reasons, they're publishing things that are not correct, then that might be an approach that you could take there.
A
So for things like that, I would also recommend maybe posting in the webmaster help forum, so that other people can take a look and give you some tips there. I don't think there's one simple answer that you can use in these kinds of cases, and it's similar with other situations where maybe there's a legitimate negative review out there; you can't just force people to remove negative reviews. But in the webmaster help forum you'll probably get some input on some options that are open to you. Maybe you'll also get some...
A
...I don't know, more straightforward feedback along the lines of, well, maybe you should just be providing a better service, or whatever. But I think this kind of direct and, I don't know, more or less honest feedback is sometimes very useful to understand, well, what is the general perception externally with regards to this type of issue, and is it something that I can resolve on my own, or is it something I need help with from other people, or do I need to rethink what I'm doing with regards to how I'm active online?
A
That might be something that also comes into play there. And, as I mentioned in the other questions before, the folks active in the help forums can also escalate these issues to Googlers. So if they see something that comes up again and again in the forums, from the same sites or doing similar schemes, then they do pass that on to the team here, and we do try to take a look at that. It's sometimes tricky with regards to what we can do in cases like that, in particular...
A
...if the website is not spammy, if it's a normal website that we wouldn't remove for web spam, then there's no web spam policy reason for us to remove it, and there might not be other policy reasons for us to remove it either. Sometimes it's really quite complicated, but yeah, I think taking this step and getting more input from other people might be a good first step.
H
Thank you so much, John. I have one more question, if I can. (Please, sure, yeah.) Yeah, so basically one of our pages was not indexing; I don't know the exact reason what it was. So what we did is, we started Google Ads for that page, and besides this we started social sharing; they started to share it on Facebook and other platforms. Now, I'm not sure what was the exact reason, but first that page was on the third page of Google results, and now it is in the top 10. So I'm a bit confused: what was the exact reason? Was it the traffic that matters, or was it... yeah?
A
Yeah, so both of those things, ads and social sharing, are not things that we take into account for search. So my guess is it's either an indirect side effect of that, or it's just that our systems decided at some point that, oh, we should also index this page. But both the ads and the social sharing side are things that we don't take into account. Traffic in general...
A
...also not. It's something where various SEOs externally have made tests with regards to traffic, to see if traffic can get a page indexed, and that's not the case. So from my point of view, probably it's just that it was almost indexed, and at some point our systems said, okay, we will actually index this page and show it in search. Yeah. All right, sure, Lena.
J
Yeah, I'm happy that we got some time. So we have a news website that, in addition to articles, publishes a bunch of URLs with short updates, so not much content. I'm wondering if there's a best practice for news websites to avoid thin content, and if you would recommend to just noindex all of these URLs that are just the short stories.
J
Yeah, we would, but I was just afraid it will be viewed as thin content, because it's just a paragraph, nothing else, and...
A
I vaguely recall some errors that we had in Search Console way in the beginning, where we would show, this news article is too long or too short; maybe that's something that plays a role there with regards to news content. But if that is the case, you can also use the Googlebot-News meta tag to specifically block it from Google News, so that you can keep it shown in search and just block it for Google News. (Okay, thank you.) Sure.
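For reference, the meta tag referred to here targets Google News' crawler specifically, leaving web search unaffected; a minimal sketch:

    <!-- Keep this page out of Google News only: -->
    <meta name="Googlebot-News" content="noindex" />
    <!-- By contrast, this would block indexing everywhere: -->
    <!-- <meta name="robots" content="noindex" /> -->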
A
Okay, let me just pause the recording here. I'll stick around a little bit longer for all of the questions that are still pending, and in case anyone has anything else they want to share. But thanks for joining, and thanks for watching so long if you're watching this as a recording, and thanks again for all of the questions that were submitted. Hopefully I'll see you all again in one of the future hangouts. Bye everyone. (Thanks, John.)