From YouTube: Scalability Team Demo 2022-07-16
A
All right, yes, I want to do a quick Redis demo, since we provisioned a new registry cache Redis instance, and I want to just kind of show that working. But then I want to talk through one piece that we didn't really cover in past demos, which is the external DNS setup that we're using for this new Redis setup. So let me go ahead and share my screen.
A
So we're on the pre console host. And, let me check, I might need to pause sharing for a sec while I prepare the password, but we'll do that when I get there. So the first thing is:
A
These three DNS names. The Redis Sentinel client is going to be configured with these three host names. These are currently still publicly resolvable, but we'll quite likely change that to be internal in the near future; it doesn't matter too much, but yeah. So these resolve to IPs that are reachable from within, like from a VM.
A
We can connect to both the Sentinel port and the Redis port. The way that the Ruby client, and I suppose any client that we would use, works is that you give it a Sentinel host and port, or a set of Sentinel hosts with a Sentinel port. And so that's this one right here. The Redis port is the same one without the leading two; we're talking default ports here, so 26379 for Sentinel and 6379 for Redis. And so we can ask Sentinel who is the current primary, and it's going to tell us node 1.
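(For reference, a minimal sketch of this discovery flow in Python with redis-py; the demo uses redis-cli and the Ruby client. The host names and the master set name "registry-cache" are hypothetical stand-ins for the real ones.)

```python
# Sketch: Sentinel-based service discovery as described in the demo.
from redis.sentinel import Sentinel

SENTINEL_HOSTS = [
    ("node-1.redis.example.com", 26379),  # default Sentinel port
    ("node-2.redis.example.com", 26379),
    ("node-3.redis.example.com", 26379),
]

sentinel = Sentinel(SENTINEL_HOSTS, socket_timeout=1.0)

# Ask Sentinel which node is currently the primary for this master set.
host, port = sentinel.discover_master("registry-cache")
print(f"current primary: {host}:{port}")  # e.g. node 1 on port 6379
```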
A
So this is basically the service discovery element. And then give me a sec to quickly unshare while I paste the password.
A
B
A
It's a good point, actually; I think I'll make a note of that, because we are introducing new Redis deployments which don't depend on cross-talking to the existing fleet. So we have more degrees of freedom there. And I guess the main trade-off is around compatibility and how many different configurations we want to support, but I think this one might actually be worth just doing.
A
On the client side, it's not a huge deal to add that in. So: authenticated Sentinel for new deployments.
A
And actually, in that same vein, multi-password support for new deployments is also something worth looking into, so that we can more easily rotate passwords, which is a capability that we kind of lack right now. I mean, we could probably figure it out, but we don't really have a good procedure for that.
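(One possible mechanism for the multi-password idea, sketched in Python: Redis 6 ACLs let a user hold two passwords at once, so clients can be rotated over without downtime. The transcript doesn't say which mechanism the team would use; the host and passwords here are placeholders.)

```python
# Sketch: zero-downtime password rotation with Redis 6 ACLs.
import redis

r = redis.Redis(host="node-1.redis.example.com", port=6379,
                password="old-secret")

# 1. Add a second password alongside the existing one.
r.execute_command("ACL", "SETUSER", "default", ">new-secret")

# 2. Roll all clients over to "new-secret", then drop the old password.
r.execute_command("ACL", "SETUSER", "default", "<old-secret")
```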
B
A
Okay, so behind the scenes I've pasted in the redis-cli auth environment variable, which is what redis-cli uses for authentication, so that'll look something like this. And so we can now connect to...
A
Node 1, which is what we learned is supposedly the primary; and sure enough, it is. And so we can also ask the other two what they think they are, and they are indeed replicas. Well, this one is. Let's,
A
For
the
sake
of
completeness
check
the
other
one
as
well
yes,
so
yeah
I
mean
that's
that's
kind
of
the
the
basic
demo
that
I
just
wanted
to
share
and
then
well
I
guess:
I'll
open
it
up
for
questions
on
this
first.
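(Roughly the same check in Python, for reference: connect to each node and inspect its replication role, mirroring the redis-cli checks in the demo. Host names and the password variable name are hypothetical.)

```python
# Sketch: verify which node is the primary and which are replicas.
import os
import redis

PASSWORD = os.environ["REDIS_PASSWORD"]  # hypothetical variable name

for host in ("node-1.redis.example.com",
             "node-2.redis.example.com",
             "node-3.redis.example.com"):
    r = redis.Redis(host=host, port=6379, password=PASSWORD)
    role = r.info("replication")["role"]
    print(f"{host}: {role}")  # expect one "master" and two replicas
```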
A
Yeah, yeah, sorry, I wanted to kind of show that. So this is still kind of incomplete, I was still working on this, but I just wanted to show how the different pieces work together when it comes to DNS. So we've got our GKE node pool.
A
We've got, you know, one to two nodes per zone, depending on whether we're rotating right now. Actually, no: we determined that on rotation we usually remove a node before we add one, so in practice we should only have one node per zone. And we have one node pool per deployment. So if we have a Redis registry cache, and then maybe we have a Redis rate limiting, they'll be on distinct node pools so that we can guarantee isolation there.
A
So yes, so then we've got one node per zone, and then we've got one pod per node, and this follows a similar architecture to what we have under VMs: Redis with a Sentinel co-located per node in that case, per pod in the Kubernetes case.
A
So this is where it gets a little tricky, because we want to access these, or we want the ability to access these, from outside of Kubernetes, because we still have some stuff in VMs. And so that means we need to expose this somehow. With standard GKE networking, there are two types of IPs that are routable from outside: you can either address the pod IPs directly, or, alternatively, you can create a load balancer and then access it through the load balancer IP. Those are the two options that Cilium, which is what we're using for networking, gives us. We're currently still using the pod IPs, but we're transitioning to using load balancer IPs, and the reason for that is that these pod IPs change whenever we rotate a pod. And so, with clients potentially caching DNS lookups, and with delays in propagating pod IPs to DNS records, that's kind of risky.
A
That adds failure modes and it adds delays. And so if we can instead have a more stable IP, then we pretty much don't need to worry about it unless we're rotating the load balancers, so it makes the whole thing quite a bit safer. And so the new setup, which is coming up, is going to have one service per pod.
A
So we have the services, which also have an index, and then the DNS records point to the services, as opposed to pointing to the pods directly, which is what they're doing right now. And, well, not shown here: there's a component on the side called external-dns, and that is a Kubernetes, I guess, operator; I'm not sure if it fits the definition of an operator, but I'm going to call it an operator. And it scans your resources for annotations.
A
So you can annotate either the pod or the service and say: oh, by the way, this is the DNS record that I would like you to create for me. And so external-dns scans that and then connects to whatever DNS provider you've configured it with, in this case GCP DNS, and it creates those records for you. And so that's basically how we propagate this information out into the DNS zone. And then clients will go to DNS, look up this DNS record, and that will point them at the load balancer.
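(A sketch of that "one service per pod" shape in Python with the official Kubernetes client: a LoadBalancer Service selecting a single indexed pod, annotated so that external-dns creates the record. The names, labels, namespace, and hostname are invented; the external-dns hostname annotation key itself is the real one.)

```python
# Sketch: per-pod LoadBalancer Service with an external-dns annotation.
from kubernetes import client, config

config.load_kube_config()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(
        name="redis-registry-cache-1",
        annotations={
            # external-dns watches this annotation and creates the record
            "external-dns.alpha.kubernetes.io/hostname":
                "node-1.redis.example.com",
        },
    ),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        # The index label pins this Service to exactly one pod.
        selector={"app": "redis-registry-cache", "index": "1"},
        ports=[
            client.V1ServicePort(name="redis", port=6379, target_port=6379),
            client.V1ServicePort(name="sentinel", port=26379,
                                 target_port=26379),
        ],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="redis", body=svc)
```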
B
These are your TCP load balancers, right?
A
B
A single pod, right. And I mention this because we don't incur the penalty of an extra hop, because this particular style of load balancer is effectively software-defined routing. So we should not incur extra latency for accessing Redis through the load balancer IP versus the pod IP.
A
B
A
Yep, yeah, Philippe suggested this, so just giving credit as well.
C
A
Yeah, yeah, that's the idea. So we want to keep things as uniform as possible, and so I think I'd also like to have a DNS record for every Redis, and have that as the standard mode of accessing a Redis, even if they could be using internal access, just for consistency's sake. Well...
C
A
C
So, and that was the registry cache item. So what are your thoughts in terms of, like, when that's actually usable in production and it's all connected up? Like, how far away do you think we are?
A
Yeah, so one of the reasons we decided to switch from rate limiting to registry was that it allows us to cut quite a bit of scope. So yeah, we're working with the registry team on this. They still have some changes that they're making on their end, because we've asked them to partition the Redis clients into separate ones, so that, you know, we can,
A
we can have those partitions early on and don't have to migrate later on. We have this instance running in pre right now, and there's nothing connected to it yet, so we're kind of waiting on the registry team to set that up on their end so we can start testing.
A
We don't really need persistence for this instance, so that means we can kind of defer that whole story. We also don't need hybrid deployments, because we're not migrating anything; this is a new instance, so we're just provisioning it fresh, and that cuts a lot of scope out from where we were previously. So yeah, with those requirements deferred, I think the main blocker at this point is completing the readiness review and a whole lot of testing, which, you know, depends on what we find there.
A
C
So
in
terms
of
getting
through
the
Readiness
review,
is
there
any
help
that
you
need
there.
A
C
Okay,
so
we've
we
found
on
reddit's
Cache
that
there's
a
slice
of
that
that
we'd
like
to
partition
away
and
create
a
new
instance
for
that,
and
we
would
like
to
I
guess
we
would
be
adding
the
complexity
of
a
migration
for
it.
But
we
would
like
to
copy
what
you've
put
in
place
and
sort
of
follow
your
guidance
on
creating
the
new
instances.
In
the
same
way,.
A
Yeah,
so
I
I
had
a
chat
with
Bob
about
this.
Yesterday,
I
I
was
a
little
hesitant
initially
to
adding
more
stuff
onto
our
plates.
Just
in
the
interest
of
keeping
also
us
focused
I
think
it
might
actually
be
feasible
to
do
a
a
redis
cash
partition
in
addition
and
I'm
thinking.
A
also in terms of migration: we will have a migration, but that migration is happening on the application side, not on the infrastructure side. And so Bob was talking about using a feature flag, which would also allow us to do an incremental rollout, potentially even on a per-repo basis, because we're dealing with repo data. That would actually be really cool, because then we can very slowly and gradually shift traffic to the new instance, and if anything goes wrong, we turn the feature flag off and shift the traffic back.
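(A hypothetical sketch of that per-repo rollout in Python; the flag logic and cache handles are invented, the point is just consistent routing per repo plus an instant way to shift traffic back.)

```python
# Sketch: route a growing percentage of repos to the new partitioned cache.
# Hashing the repo id keeps each repo pinned to one cache at a given
# percentage; setting ROLLOUT_PERCENT to 0 shifts all traffic back.
import zlib
import redis

old_cache = redis.Redis(host="redis-cache.example.com")   # existing fleet
new_cache = redis.Redis(host="repo-cache.example.com")    # new partition

ROLLOUT_PERCENT = 5  # the "feature flag", dialed up gradually

def cache_for(repo_id: int) -> redis.Redis:
    bucket = zlib.crc32(str(repo_id).encode()) % 100
    return new_cache if bucket < ROLLOUT_PERCENT else old_cache
```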
B
So, I guess I'm not able to visualize, without more context, how a feature flag would be semantically safe, in the sense that we don't want to oscillate between a fresh versus stale cache entry. So if, for example, X percent of requests are going to one cache and the rest are going to a different cache, then unless both of those caches are maintained in perfect tandem, we could have requests oscillate between an old versus new value for a cache entry.
B
How
would
such
a
feature
flag
work,
or
would
it
instead
be
a
Boolean
rather
than
a
percent?
I
guess
I'm
still
not
seeing,
even
even
if
it's,
even
if
it's
just
just
a
Boolean
rather
than
a
scale
up
model,
the
the
turning
it
back
off
would
implicitly
be
reverting
to
stale
cash
from
from
from
the
fresher
cache
that
doesn't
seem
safe.
A
Yeah, I don't know if we have an answer for that yet. I think the plan, it's still kind of in the early planning stage at this point. There's a few approaches that come to mind that we could explore for that. But yeah, I think you should comment that on the epic; I can dig it up for you.
B
Of
this,
as
as
as
as
the
effectively
the
the
cash
and
validation
class
of
problems,
so.
A
Yeah,
so
yeah
I
mean
off
of
the
cuff
I
would
say
we
could
either
do
double
right
and
then
switch
the
reads,
yep
and
and
kind
of
maybe
make
the
double
right,
full
tolerant,
so
that
if
the
right
to
the
to
the
partitioned
cash
fails,
we
we
kind
of
at
least
during
the
initial
stage
we
kind
of
swallow
those
exceptions
and
and
well
at
least
log,
something
sure
kind
of
assess
the
risk
there,
while
yeah,
while
maintaining
availability.
Yes,.
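(A sketch of that fault-tolerant double-write in Python; the cache handles are the hypothetical ones from the previous sketch. The point is that a failing write to the new cache never breaks the request, it only gets logged.)

```python
# Sketch: double-write during migration. The old cache stays the source of
# truth; writes to the new partitioned cache are best-effort and logged.
import logging
import redis

log = logging.getLogger("cache-migration")

old_cache = redis.Redis(host="redis-cache.example.com")   # existing fleet
new_cache = redis.Redis(host="repo-cache.example.com")    # new partition

def cache_set(key: str, value: bytes) -> None:
    old_cache.set(key, value)        # must succeed, as before
    try:
        new_cache.set(key, value)    # best-effort while we assess the risk
    except redis.RedisError:
        log.exception("double-write to partitioned cache failed: %r", key)
```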
B
A
Like
this
yeah
but
I
think
it's
it's
an
open
question.
So
thanks
thanks
for
bringing
that
up,
yeah.