From YouTube: Webinar: Better walls make better tenants
Description
Multitenant clusters are cheaper and more efficient to run, and ultimately easier to manage, but it can be hard to get going. This webinar gives you a way to think about the needs of your tenants, your organization, and your own sanity, and provides an overview of how to build a robust and safe multitenant solution.
Presenter:
Adrian Ludwin, Senior Engineer @Google
All right, we're going to go ahead and get started. I'd like to thank everyone who joined us today, and welcome to today's CNCF multi-tenancy webinar, "Better walls make better tenants." I'm a CNCF ambassador and solution engineer at Oracle, and I will be moderating today's webinar.
We would like to welcome our presenter today, Adrian Ludwin, senior engineer at Google. A few housekeeping items before we get started: during the webinar, attendees are not able to talk. There is a Q&A box at the bottom of your screen.
Please feel free to drop your questions in there and we will get to as many of them as we can. This is an official webinar of the CNCF, so it is subject to the CNCF code of conduct. Please do not add anything to the chat or the questions that would be in violation of the code of conduct. Basically, please be respectful of all your fellow participants and presenters.
If you were at KubeCon in San Diego last November, for example, some of these slides might look vaguely familiar. A lot of our customers were there, and we have tried to collect a lot of the lessons that we've learned from them. I'm hoping that this will be useful to some of you as well. So here are the topics we're going to cover today.
This will make up the bulk of the presentation. There's a fair bit of material in there, and I'm not expecting everybody, unless you're taking notes, to remember everything that's in it; it's really supposed to give you an idea of the kinds of tools that are available to you if and when you need them. We'll have a note about some advanced topics, and then we'll wrap things up. We will try to leave about 10 to 15 minutes for questions at the end.
By the way, if you have a quick question, like a request for clarification of anything you see on screen, please add it to the Q&A. I might not see it right away, because I can only see my slides right now, but I will see it and can interject. If you have any questions about topics that are not covered, I'm happy to answer them too, but please leave those until the end. So let's talk about multi-tenancy. First, why use it in the first place?
There are many other definitions of tenants and tenancy, and we'll discuss those a little bit at the end, but that's the mental model that we're going to use for this presentation. Now, that works great if you are just a small company with a small number of teams: when a second team needs to spin up, you spin up a second cluster for them. But this pretty quickly starts to run out of steam.
So the first problem you're going to run into is the velocity problem: every team either needs to become an expert in running kubernetes clusters, which they probably don't want to do, or your central platform team is going to have to become an expert at managing large numbers of clusters, each of which can often be a little bit different, especially if they are not created in a uniform way.
B
So
this
is
the
the
velocity
side
of
multi-tenancy
and
then
the
on
the
cost
side.
All
of
those
clusters
might
have
different
usage
patterns.
Some
tenants
might
be
running
batch
jobs.
They
could
be
running
it
like
other
teams
might
have
higher
priority
or
lower
priority
jobs,
or
they
might
be
scaling
for
their
peaks
and
but
because
they're
on
separate
tenants,
there
aren't
separate
clusters.
B
Sorry,
there
is
no
way
for
them
to
share
resources,
and
so
we've
seen
customers
who
have
some
teams
that
have
a
5
node
cluster
because
they
need
it
for
a
certain
amount
of
redundancy
or
peak
usage,
but
on
average,
are
using
only
a
fraction
of
one
node,
and
so
this
is
fine
when
you're
starting
up
and
I'm,
not
I,
don't
want
to
make
it
sound
like
this
is
a
single
tendency
is
a
bad
thing.
It
can
be
very
fast
with
a
good
way
to
start
up.
But you will start to hit the limitations of that, both from the cost and the velocity perspective, relatively quickly once you grow beyond a certain size. So the alternative is that you have multiple tenants in one cluster, and you use the concept of a namespace to divide that cluster up among those different tenants. You'll be hearing a lot more about namespaces throughout the rest of this presentation.
So production readiness tooling can be things like webhooks, operators, things like Prometheus: any kind of low-level pieces of software that need to be running on all of the clusters. This is another factor, of course, in cost. If every team has their own cluster, you're running hundreds of different copies of these tools, but with shared clusters you can amortize the cost of those across all of the different teams.
Now, before I talk about when you shouldn't use multiple clusters, I'm going to talk about when you should, even in the context of using multi-tenancy. I've already alluded to the first one, which is regionalization. Most kubernetes clusters don't work well across the globe; usually there is a zonal or a regional construct, and so if you want to run in multiple regions, then you're going to have to have multiple clusters, even if each one of those is multi-tenant. Another absolutely critical consideration is reducing the blast radius.
Every cluster is a failure domain. If you misconfigure something, a webhook, permissions, one of those common pieces of production tooling, everything on that cluster can go down, and as with all human systems, if it can go down, it most likely will. You need to think about that as you're setting up your clusters. At a bare minimum, you will probably want to separate your environments: you want to run things in dev before you run them in prod, and possibly staging as well.
For example, you may not be able to afford to have even one region go down fully. This is going to depend a lot on your own needs, and this will be a refrain that you hear from me frequently: you do have to think about your own needs, but these are the kinds of things you should be thinking about. Then there's scalability. This is not something that most people are going to run into, especially not at first, but kubernetes does not scale infinitely; the control plane does have limits.
Off the top of my head, I think it's about 5,000 nodes, and perhaps about a hundred thousand pods or so, or perhaps that's containers; I forget the exact numbers. There are those limits, and there are many other limits as well, and there are presentations that you can watch about scalability in kubernetes, which I'll attach to this presentation before we share the slides.
If you hit one of these problems, you might just need multiple clusters, even if for no other reason. And then, finally, there are all of the problems which are just going to drive you crazy for various reasons. Not everything in a kubernetes cluster can be put into a namespace: things like CRDs, webhooks, and sometimes cluster-scoped operators. Or maybe your organization has had an acquisition, and you have a different department that has a different history of doing things, and you just haven't had the time to make those uniform.
Let's go into the scenario where you do want to implement multi-tenancy on a cluster. Multi-tenancy is really about different units of isolation, so let's talk about that, starting not from the top item but from the second one on this list, which is the cluster. The cluster is the strongest level of isolation within kubernetes. Two different clusters will not share nodes, which generally means that none of their workloads can run on the same computer, or at least not on the same virtual machine.
They won't share the API server or any part of the control plane. They won't share secrets, they won't share networking, and so it's a really strong level of isolation, both for your control plane and your data plane, within the cluster. Above that, you have something that is obviously not kubernetes-specific: in Google you will have a GCP project, and other platforms will have similar constructs. Those are useful to gate access to the clusters; take GCP, for example.
If you don't have access to a project, you cannot list the clusters, you cannot get credentials to a cluster, you cannot interact with the cluster in any way at all, unless somebody gives you those credentials through a side channel. So that's the strongest level of isolation, but it's going to be very specific to the platform you're on. Now, within a cluster, you've got a couple of things that start to overlap a little bit. You've got namespaces, which I'll talk about more, and that is a unit of control plane isolation.
It is not a unit of data plane isolation, and we'll talk more about that in a bit. You've got the pod, which, as most of you will know, really doesn't have any isolation at all: it kind of behaves like localhost, as if your pod is running on its own little VM. We'll talk about ways you can add things like data plane isolation to it, but your default mental model of a pod is that it's there for your convenience.
It's not there for your security. And then you've got the nodes themselves. People often think of nodes as though they are some kind of boundary. They are good for resource isolation: for example, if you have multiple pods on the same node, you can have a noisy neighbor effect, where one of them is trying to hog things like RAM or CPU or network, and in many cases there are things you can do to mitigate this.
There's a talk, I believe called "Walls within walls," which talks about how workloads on one node can attack workloads on another node. This is improving all the time, but nodes are not considered a security boundary for now. So with that in mind, let's try to get a precise definition of what multi-tenancy means. It means that we are going to provide isolation for security and also for resources, as well as fair resource sharing, between multiple users and their workloads within a single cluster.
Remember, all of this is occurring within one cluster. Now, how do you apply those principles? The best practices are as follows. The first one is to understand what you need. It's going to be very rare that everybody will need everything at once. You will not need, on day one, to go through this presentation or any other presentation and say, "I need to do everything on that list." We at Google have a best practices guide, which I'll mention again at the end of this meeting; it's quite long.
B
You
do
not
need
to
do
all
of
that.
You
should
really
be
focused
on
what
your
risk
tolerance
is
if
one
tenth
accidentally
goes
crazy
and
starts
causing
problems
for
another
tenant.
What's
the
impact
of
that,
it
could
be
that
you
decide
the
risk
of
that
is
low
enough,
and
the
impact
of
that
is
low
enough
that
you
will
solve
that
problem
in
six
months
or
in
a
year
kind
of.
B
Conversely,
you
might
say
well
I'm
healing
with
very
sensitive
user
data,
and
it
is
absolutely
critical
that
nobody
who
is
not
authorized
to
interact
with
these
workbook
boats
can
interact
with
this
with
with
my
workloads
in
any
way
at
all,
and
in
that
case,
we're
gonna
go
make
a
different
side
of
the
decision.
So
there
is
no
one-size-fits-all
for
multi
tendency.
It
will
depend
on
your
needs
where
you
are
in
your
kubernetes
journey,
which
leads
you
to
point
number
two
of
all
of
these
approaches
that
I'm
going
to
discuss.
B
They
all
the
benefits
and
they
all
have
costs,
and
they
all
have
strengths
and
weaknesses,
and
so
it's
important
that,
while
I'm
going
to
mention
a
lot
of
them
today,
maybe
give
a
brief
overview
of
what
those
are.
You
will
have
to
look
at
them
into
them
a
little
bit
yourself
before
making
a
decision.
And,
finally,
as
you
deploy
your
solutions,
it
is
best
to
deploy
them.
Iteratively.
Make
sure
that
you
check
in
from
time
to
time
to
see
if
any
of
the
guidance
has
changed
and
I
keep
them
up
to
date.
So with that, let's get into the heart of this presentation, which is: how do I actually do this? Pretty much everything I'm going to talk about for the rest of this presentation is based around the concept of namespaces. I'm going to assume that most people have at least a basic idea of what a namespace is in kubernetes, but it's the primary unit of tenancy. Think of it as being like a folder on a file system, but with only a single level of nesting.
You can't have namespaces inside of other namespaces, although we'll come back to that in a second. So it's the primary unit of tenancy in kubernetes. Now, by themselves they don't do much, and the idea of walls in an apartment building is actually a pretty good analogy here, because you can actually make a hole through a wall and connect two apartments if you want to; lots of places do that.
You can also put up dividers within one apartment and get some kind of division like that. So the walls are not really magical or special in any way, but what they are is a solid boundary between yourself and your other tenants, and they work well by default. Pretty much all of the security and isolation features that we're going to discuss support namespaces as a first-class attachment point. In some cases you can attach things at scopes that are smaller or larger than a namespace.
But if you stick with namespaces, things will usually work pretty well for you, and that's why it's so critical to talk about them. They do also provide isolation in naming: in kubernetes, if you have a deployment, for example, in two different namespaces, they can share the same name, whereas within one namespace all the names have to be unique.
Other than that, for the purpose of this presentation, you should be thinking about namespaces as a means of providing isolation. How this is going to work is basically that each team will get one namespace, or sometimes more, and will put all of their workloads into it. If you have multiple clusters, for example dev, staging, and prod, one team will own the same namespace in all the different clusters, which is what we actually recommend.
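As a concrete sketch of the one-namespace-per-team pattern just described (the team name `team-a` is hypothetical), each cluster would carry the same namespace definition:

```yaml
# Applied identically to the dev, staging, and prod clusters,
# so that team-a owns the same namespace everywhere.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    team: team-a   # optional label for later use in policies
```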
Now I'll mention a couple of properties of namespaces, because there are some downsides to them. They do require cluster-level permissions to create. Generally speaking, if you have shared clusters, you want a very small number of people to have any kind of cluster-level permissions at all, and that can reduce the effectiveness of namespaces, because only a small number of people can create these very useful units of tenancy. Also, their policies aren't fully independent, and usually this is good.
Usually you want different namespaces to be able to have different policies, because different tenants can have different needs, but sometimes that's actually a downside: sometimes you want a lot of your tenants to all be governed by the same set of rules. Also, if you want to use namespaces for policy application (for example, network policies can control traffic from one namespace to another), you have to label them manually, and labeling is not considered to be a separate operation from any other kind of update operation.
What that means is that anybody who can edit a namespace in any way can change its labels, and that means that labels are not really ideal as a security mechanism. But namespaces are included natively in kubernetes, no matter which version you have, I think all the way back to well before 1.0, before my time.
Now, I am personally working on a project, as part of the open source community, that tries to address some of the issues that you can have with namespaces; these are called hierarchical namespaces. This is a call-out to some of the work that I'm doing as part of the multi-tenancy working group. Hierarchical namespaces are regular kubernetes namespaces, but with a couple of additional properties: they allow you to assign each namespace an owner, which is another namespace, and you can have any number of roots.
So in this diagram you have three roots: org 1, org 2, and then, down below, the snowflake team that isn't part of any organization. Then you can have a namespace per team, and teams can own multiple namespaces. If you're interested in this project, please go look for the Hierarchical Namespace Controller. This is not a new feature inside core kubernetes.
It is an open source controller that you have to install yourself. This project is just getting going now, and it is useful if you want to, for example, inherit policies from your ancestors. The most useful thing about hierarchical namespaces is that you don't need that cluster-level permission anymore. If you have subnamespace creation permission within a namespace, you can create another namespace underneath it, and that namespace will inherit the policies from its ancestors, so you don't have to worry about this new namespace being a free-for-all. HNC also sets up all of the labels for you and doesn't allow anybody to edit them, so you actually can rely on the labels created by HNC for use in your policies and your security story. As I said, these are provided by an OSS project called HNC, or if you use GKE, we've got a new feature called the Hierarchy Controller, which is based on HNC. That's coming out in the next couple of days; it's not available today, but it will be available, I believe, by the end of this week. It's just an alternative way of installing HNC, plus a couple of other things that we'll be adding in the future.
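For reference, subnamespace creation in HNC is driven by a small anchor object. This is a sketch only: the names are hypothetical, and the API group and version (`hnc.x-k8s.io/v1alpha2` here) have changed across HNC releases, so check the version that matches your install.

```yaml
# Creating this anchor in the parent namespace (no cluster-level
# permission needed) causes HNC to create the child namespace,
# which then inherits policies from its ancestors.
apiVersion: hnc.x-k8s.io/v1alpha2
kind: SubnamespaceAnchor
metadata:
  name: team-a-dev    # name of the child namespace to create
  namespace: team-a   # the parent namespace
```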
So that was an overview of namespaces. Now let's talk about what you can do inside of those namespaces. These things really fall into four categories: access control, resource sharing, runtime isolation, and insights, by which I mean things like logging and monitoring. So let's start with access control.
Tenancy is really about ownership. If anybody can walk into your apartment, you don't really own it in any meaningful way; it makes sense to have a door that you can lock. So job number one is making sure that tenants can't control each other's resources. The primary way that we do this is through authentication and authorization. Authentication is how kubernetes knows who you are, and authorization says what you are allowed to do inside kubernetes.
If you are talking to the control plane, the control plane will have two ways of answering these two questions. First of all, authentication and authorization can be pluggable. For example, GKE integrates with Cloud IAM, and different vendors will, of course, integrate with their own identity systems; those can also be used for authorization as well. In all cases, if an external authentication and authorization provider says you are allowed to use something in kubernetes, the control plane will let you use it.
If it doesn't, you fall back to RBAC, which we're going to talk about in a second. RBAC is the recommended way to do authorization in kubernetes, and it is part of OSS kubernetes. RBAC stands for role-based access control, and it controls access, I'm going to say, to namespaces in kubernetes. You can actually use it in some cases to control things that are not in namespaces, but we'll focus on namespaces for the purpose of multi-tenancy.
So, for example, on Google you can use a GCP service account, and so both humans and other services that are not running in kubernetes can be given access to the kubernetes API. You can also give pods that are running in the cluster access to the kubernetes API using kubernetes service accounts; these are built into kubernetes.
There are basically two key concepts that you need to understand for RBAC: the concept of a role, which is a set of things that one is allowed to do, and the concept of a role binding, which says who is allowed to use that role. One user can have multiple roles, and roles can be bound to multiple users. There are both cluster-level roles and role bindings, and namespace-level roles and role bindings.
For multi-tenancy, of course, we recommend that you base this all on namespaces, and if you're using hierarchical namespaces, any role or role binding that you create in an ancestor namespace will automatically be propagated to all of the descendants. It's important to note that you can differentiate access within a namespace, giving different people access to different types of resources. So within one team, you can have a team admin and a regular team member, and they can have different access within that namespace.
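The role and role binding concepts described above can be sketched like this (the namespace, role name, and user are hypothetical):

```yaml
# A namespace-scoped Role: the set of things a regular team
# member is allowed to do inside team-a.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-member
  namespace: team-a
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "deployments", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
# A RoleBinding: who is allowed to use that role. Because both
# objects live in team-a, the access cannot reach other namespaces.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-members
  namespace: team-a
subjects:
- kind: User
  name: alice@example.com   # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-member
  apiGroup: rbac.authorization.k8s.io
```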
But if you use role bindings for both of those, you'll be guaranteed that neither of those accesses can ever bleed outside of that namespace, or outside of a tree of namespaces if you're using HNC. Now, workload identity is a way to run that whole process in reverse. Everything I said before is about accessing the kubernetes API, but what happens if you have a workload running in kubernetes that needs to access something outside of the cluster? That's where your workload's own identity comes in.
Let's move on now to resource sharing. Namespaces are a control plane construct, but there are lots of things that are not part of the control plane that need to be shared, including the API server itself. So it's important that your different tenants can cooperate with one another and not take up more than their fair share. There are a couple of constructs here, and I'm going to go through them quickly.
The most obvious one is quotas, which basically allow you to say things like: no one namespace can exceed this amount of resource usage; no one namespace can use too many CPUs or too much RAM or too many GPUs. You can also use quotas to control dangerous objects. For example, you can say that these namespaces are not allowed to create any Ingress objects or load-balanced services at all, to prevent them from being open to the outside world.
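Both uses of quotas mentioned here, aggregate resource caps and forbidding dangerous objects, can be expressed in a single ResourceQuota; the namespace and the numbers below are arbitrary examples:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    # Aggregate resource caps for the whole namespace.
    requests.cpu: "40"
    requests.memory: 80Gi
    limits.cpu: "80"
    limits.memory: 160Gi
    # Forbid objects that would expose the namespace externally.
    services.loadbalancers: "0"
    count/ingresses.networking.k8s.io: "0"
```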
Limit ranges are very similar to quotas in a way, but what they say is that no one pod in that namespace can exceed a certain resource usage. So quotas are aggregate, while limit ranges basically prevent tenants from saying, "Well, I want to use a 96-core machine for every one of my pods," which might essentially evict everybody else from those nodes.
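A LimitRange along those lines might look like this (the values are illustrative); note that it also lets you set defaults for containers that don't declare requests or limits at all:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: per-container-limits
  namespace: team-a
spec:
  limits:
  - type: Container
    max:                 # no single container may exceed this
      cpu: "4"
      memory: 8Gi
    defaultRequest:      # applied when a container declares nothing
      cpu: 250m
      memory: 256Mi
    default:             # default limits, likewise
      cpu: 500m
      memory: 512Mi
```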
You can use pod affinity and anti-affinity to keep pods scheduled on the same node or on different nodes, and sometimes within the same zones or different zones. There's a wide variety of things you can do to control the scheduler; affinity and anti-affinity are the most important ones, but there are other things like taints and tolerations.
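As one example of controlling the scheduler this way, a required pod anti-affinity rule spreads replicas of a hypothetical `app: web` workload across nodes:

```yaml
# Pod spec fragment: no two pods labelled app=web may be
# scheduled onto the same node (topologyKey picks the node
# hostname as the spreading domain).
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: web
      topologyKey: kubernetes.io/hostname
```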
If you are interested in those, you can look up the scheduler documentation. And when all else fails, there's priority: if there isn't enough to go around, pod priority will allow higher-priority pods to evict lower-priority pods. This can be useful if you have batch jobs that are not time-critical, or if you have, for example, a lower-priority service, let's say a recommendation engine, that is nice to have but is not critical. If your checkout system is overloaded, it can evict those other, less essential services, which can degrade gracefully.
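The checkout-versus-recommendations example could be sketched with two PriorityClasses (the names and values here are hypothetical); pods then opt in via `spec.priorityClassName`:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: business-critical   # e.g. the checkout system
value: 1000000
description: "May evict lower-priority pods under resource pressure."
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: best-effort         # e.g. the recommendation engine
value: 1000
description: "Nice to have; first to be evicted."
```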
So these are the fundamental, probably the most important, constructs that you can use for resource sharing. Again, there's no need to necessarily set up all of these on day one. We have some customers who have been running multi-tenant clusters for a year who don't use resource quotas, because it has never been an issue for them. We have others who were concerned about that from day one, and so they implemented them right away and then fine-tuned them over time. That's fine too; it will really depend on your own needs.
Let's talk now a little bit about runtime isolation, which is really about keeping things in their containers like they're supposed to be. Vulnerabilities and attacks do happen, and containers are not a security boundary, which means pods are not a security boundary. That means that if you are, for example, running some component that you didn't write yourself, and it has a vulnerability which can be exploited, there's a chance that somebody could break out of that pod and attack other things.
They could attack other things on the same node, or, if they get access to the API server, they could attack other workloads on the same cluster, even ones not on the same nodes. So you should think about adding runtime isolation to try to prevent this from happening: keep everything in its cage. To refer back to that earlier diagram I showed, we're really talking now at the pod level; there is no isolation at the container level, by my definition. The runtime isolation that we're talking about now is really focused around the concept of the pod, not the namespace or the node, although these policies can be applied at the namespace level. Now, what are the constructs for this? The most important one is the pod security context. When you're creating a pod, or a deployment, or anything that can create a pod, you can restrict what that pod's workload does.
For example, you can say that the workload started by this pod must not run as root, or that it only has access to, I believe, certain ports. To be honest, I forget all of the things that the pod security context can do, but there is a large number of them, and you should go look through them and say, "Well, if I don't trust the workload that I'm running, maybe I should restrict it to make sure that if there is a vulnerability, there's a limit to what the damage can be."
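A sketch of the kind of restrictions a pod security context can express (the pod and image names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-example
  namespace: team-a
spec:
  securityContext:
    runAsNonRoot: true      # refuse to start if the image runs as root
  containers:
  - name: app
    image: example.com/app:1.0   # hypothetical image
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]       # drop all Linux capabilities
```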
Pod security policies are a way to enforce that across your entire cluster. What they do is enforce that a pod must declare an appropriate security context. So a pod security policy is for when you don't trust the author of the security context, and the security context is for when you don't trust the author of the workload; that's how they stack. Now, unlike just about everything else in this presentation, pod security policies are actually quite hard to enable incrementally.
B
You
should
actually
consider
whether
you
need
these
from
day
one
for
a
variety
of
reasons
which
I
won't
get
into
now.
If
you
don't
want
to
use
them,
if
you're
thinking
of
doing
something
incremental,
you
should
consider
an
alternative
like
gatekeeper
which
allows
you
to
write
these
similar
kinds
of
rules,
but
but
for
any
object,
not
just
pods
and
also
allows
you
to
increase
them.
Incrementally
ethics
policy
controller,
if
you're
on,
if
you're
in
the
Google
ecosystem,
is.
Gatekeeper itself is open source and is easy to look into. Network policy is a way to forbid pods from talking to each other if they have no good reason to. There's no good reason, for example, for your recommendation engine to be talking to the credit card processing service, and so it's probably a good idea to put something in place to make sure that those can't communicate at any level. Network policy works at the L4 level, which is the TCP level.
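A common starting point is a default-deny ingress policy per tenant namespace, after which you allow only the flows that have a reason to exist. A minimal sketch:

```yaml
# Selects every pod in team-a and, by listing Ingress with no
# allow rules, blocks all incoming traffic at L4. Specific
# flows are then re-enabled with additional allow policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```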
If you are using Istio, you can also implement policies at the L7 level, but network policy is really the bare-bones option. Finally, if you are worried about people breaking out of containers despite all of the other protections you've put in place, you can set the runtime class. By default, the runtime class uses runc, which is, I believe, containerd (I always get those two mixed up), basically Docker. But there are alternatives such as gVisor or Kata Containers.
B
So
G
visor
is
an
open
source
tool
which
basically
runs
your
pod
into
sandbox,
and
even
if
it
breaks
out,
it
is
considered
to
be
a
security
boundary
unlike
regular
container,
and
this
thing
is
true
for
counter
containers.
If
you
were
on
GK.
This
is
very
easy.
You
can
turn
it.
You
can
runtime
class
to
the
gke
sandbox,
which
was
based
on
G
visor,
and
it
allows
you
to
without
any
application
changes,
basically
make
sure
that
you
run
securely
in
a
sandbox,
and
this
is
based
on
technology.
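Opting a pod into a sandboxed runtime is done through a RuntimeClass. This sketch assumes a gVisor handler named `runsc` has been installed on the nodes (on GKE, the built-in class is named `gvisor`):

```yaml
apiVersion: node.k8s.io/v1beta1   # node.k8s.io/v1 on newer clusters
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc      # the gVisor runtime installed on the nodes
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-example
spec:
  runtimeClassName: gvisor   # run this pod inside the sandbox
  containers:
  - name: app
    image: example.com/app:1.0   # hypothetical image
```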
Moving on to insights: it's important for tenants to be able to observe themselves; otherwise that falls to the central platform team, which is going to be overloaded. And it's important for that platform team to be able to understand the resource usage of each tenant, either just to make sure that nobody is using something unfairly, or, in larger organizations, because the tenant's business unit might need to be charged back for the resources they consume.
So usage metering is a feature on GKE where you can look at everything that's in a namespace and break it down by memory, CPU, GPU, etc. You can then export that to BigQuery, join it to the data in GCP billing, and use it to figure out how much each tenant is actually consuming in terms of dollars. Before I go on, I should add that most of this is not actually built into kubernetes, so I'll be using Google-specific examples, but there are many alternatives, both open source and vendor-specific, for these.
More as an illustration of the kinds of things that are available to you, this is an example of what usage metering looks like in the UI. There's also multi-tenant logging. Logs are quite sensitive, because logs can in turn include user data; they can include all kinds of details about the operation of your workloads that you might not necessarily want to share widely between your different tenants, so you need to control access to them very carefully.
The best way to do this on GKE is to create a tenant project for each of your tenants (these are ordinary GCP projects) and give each of them a BigQuery dataset. Then you can use Stackdriver log routing to look at all of the logs that are coming out and, based on their namespace, route them to those different projects. This will ensure that every tenant only has access to their own logs, and nobody else's.
There are actually some improvements coming here that are a little bit less heavyweight than creating entire new projects for every tenant, but in general, having some kind of external construct like a tenant project is not a bad idea. In this specific case, there are some improvements coming over the next couple of weeks which you can look up later. Now, monitoring, which is to say metrics, is generally a lot less sensitive. Metrics usually don't include nearly as much information, and a lot of our customers don't actually differentiate access by tenant.
B
Generally speaking, every tenant can see the metrics coming from every other tenant; it's just not worth it to people to go to the trouble of routing them in different directions. With that said, if you do need to control access, there are ways to do that. For example, you can install Prometheus into your cluster and use the Stackdriver adapter so that all of the metrics that come from a given namespace get routed to the correct tenant project.
A
Yeah, there are a couple of questions about logging. The first question is: can you also tag/label log streams to easily identify them in GCP?
B
Yes, you can, and if my memory is correct, any label that you put on the pod will be available to the log router, which means that if you put a label on, for example, the pod spec for a deployment, you will be able to filter based on that. So namespaces are the default and are always available, but if you want to add additional tags, you can do that through Kubernetes labels.
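As a sketch of this (the `team` label key is hypothetical), a label on a deployment's pod template:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: tenant-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
        team: tenant-a   # carried through to each pod's log entries
    spec:
      containers:
      - name: frontend
        image: gcr.io/example-project/frontend:latest
```

In Cloud Logging you could then filter on the namespace (always available) or on a pod label like `team`; check the current documentation for the exact filter syntax.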
B
Okay, looking at the other questions, we'll come back to those in a few minutes, so I'll just finish up very quickly with a couple of advanced topics. Mostly I've been talking about in-cluster multi-tenancy, but it's usually a good idea to think about storing your source of truth somewhere else, just in case. For starters, you might have multiple clusters and want to keep them all in sync, or you could lose a cluster and need to replace it with another cluster.
B
There are lots of other open source alternatives to this as well. The other recommendation I would give here is: please, PLEASE, test out your changes on a canary cluster first, even in prod. Even if it's gone through dev and staging and everything's worked fine, there's always the chance that when you put it in your prod environment you've got the name of some service account wrong and everything goes down.
B
So it's always a good idea, especially with these very sensitive objects, to test them out on a canary cluster first, so that if something goes wrong you only impact 1% of your traffic or 0.1% of your users. And finally, if you are using GitOps, we recommend that you separate your policies from your workloads, because they really do operate at different levels. A possible exception to this might be DaemonSets, which kind of straddle the line between policies and workloads.
B
If you want to enforce that policies are being applied evenly, I'd recommend using Gatekeeper, or a policy controller, to define and apply custom policies. What you do here is define rules in Rego, which is a Python-like rule language, and you can apply them across your clusters. If there are existing objects that violate them, they won't suddenly stop working; they will just show up as a violation in your audit. They're a good alternative to Pod Security Policies.
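As a minimal sketch of the Rego rules described here (the template name and message are hypothetical), a Gatekeeper ConstraintTemplate that requires certain labels might look like:

```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        # Report a violation for each required label missing from the object
        violation[{"msg": msg}] {
          required := input.parameters.labels[_]
          not input.review.object.metadata.labels[required]
          msg := sprintf("missing required label: %v", [required])
        }
```

A separate `K8sRequiredLabels` constraint object then selects which resources the rule applies to and supplies the `labels` parameter.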
There's a lot of work going on on Gatekeeper right now. I believe Styra is the company that's leading the effort, and I know Microsoft is heavily involved, as well as some of my colleagues at Google. It's a great project and I encourage you to check it out. And finally, I will just briefly touch on two topics that I haven't really spoken about, which are both subcategories of what we call hard multi-tenancy. In the mental model that I've outlined up until now, I've been encouraging you to think of tenants as teams within an organization, which is to say you can assume a certain amount of trust.
B
Trust is on a spectrum, though. If you're at a bank, for example, you might have to assume that even your internal tenants are somewhat hostile. There are a couple of ways that you can go about fixing these problems. One of them is called virtual clusters; this is a project in the multi-tenancy working group. It gives each tenant its own control plane while sharing the data plane.
B
You have to combine it with sandboxing to truly isolate all of your workloads, but that's something you can consider using to truly separate your tenants from one another. SaaS multi-tenancy is a different type of multi-tenancy where, instead of the tenants being teams, they are instead your customers, but they're all usually running the same kind of thing. Usually you're going to require some kind of sandbox between them and some kind of control plane automation.
B
So things such as RBAC might become a little bit less important, because the tenants don't actually access the Kubernetes API, but things such as resource usage and attribution become more important, because you need to charge them. So your needs are going to be very specific to your threat model and your usage model, and it's a very big topic, but what I've shown in this presentation should be a good place to start. So, just in conclusion: make sure you understand your needs, and don't feel like you have to do all of this at once.
A
B
So it depends what kind of isolation you're looking for. As I said, node-level isolation, and node pools in GKE, which are an extension of nodes, can be very useful for resource isolation. For example, if you've got one workload with very different characteristics from everybody else's, say it needs different-sized nodes or has very high network bandwidth requirements, then it can be useful to put it onto its own node pool. But node pools are not a security boundary, because you can still talk to the API server.
B
The API server has a fairly large attack surface and is not generally considered to be a hard multi-tenancy boundary. So node pools are useful, but not for security. As for the other side of that, should you just go ahead and use different clusters? Again, that depends on your needs. Let's say that one team has some workloads that need that level of isolation, but some that don't; it's probably more useful to keep them all in the same cluster.
B
Let's say that you've got a central platform team that is setting up all of those clusters. Again, it might be easier for them to create a node pool than a whole new cluster. If that's not the case, then it might make sense in some cases to create that snowflake cluster for that purpose.
A
B
Resource quotas might be important if you have any kind of autoscaling turned on, but generally speaking you will have your own control plane in front of the Kubernetes control plane, and a lot of the threats that are present in an enterprise context can be addressed by your own control plane in that context. So I hope that answers your question. It is an interesting area, but I would say there's a fair amount of overlap; it's not exactly the same problem.
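A minimal sketch of the per-tenant resource quotas mentioned above (namespace name and limits are hypothetical):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
  namespace: tenant-a
spec:
  hard:
    # Caps on what all pods in tenant-a may request or use in total
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"
```

With autoscaling on, a quota like this keeps one tenant from growing the cluster, and the bill, without bound.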
A
B
Right, so, very briefly, hierarchical namespaces; thanks for that question. There are basically two main use cases that it addresses, one of which is delegation of namespace creation. Namespace creation is usually a very powerful operation, and there are lots of reasons you might want an isolation boundary like a namespace, but you don't want to have cluster-level permissions, or you don't want to give that person or that robot account cluster-level permissions.
B
In the first place, not all policies can be applied hierarchically, but when you do want that kind of concept of ownership, hierarchy is by far the best way to do it. Unlike labels, which, first of all, are not access-controlled themselves, and secondly, can be forgotten or applied by mistake, hierarchy is usually much clearer: either a namespace is owned by another namespace or it's not. So if you have more questions about HNC, I would encourage you to please follow up with the multi-tenancy working group.
A
B
There was apparently a long debate in the early days of Kubernetes about whether namespaces should be nested, or hierarchical, or not, and the answer that they came to at the time is still true today, which is that there wasn't enough evidence that the additional complexity of hierarchical or nested namespaces would be worth it. That is still true today. However, there was enough evidence that it would be useful that we embarked on the concept of HNC.
B
What I would say is that if we see a lot of uptake of HNC over the next year or two, that would certainly raise the possibility of making it a first-class construct in Kubernetes, which would have a number of benefits. For example, all hierarchical namespace names must currently be unique across the cluster, which is annoying, though it's not as bad as you might think. For example, at Google we have tens of thousands of namespaces in a flat space that are visible to everybody; it's not the best in the world.
There is one question that I'm wondering if we've overlooked, from 11:31. Mind if I answer that one next, because they've been waiting a while for their answer? It says: consider the case of two applications handling privacy-sensitive data coming from competing vendors. Would you trust that the barriers you have today are good enough to run those two applications in the same cluster? They might both store their source of truth externally, but would likely still use Kubernetes for secondary secrets.
B
If you are saying that you would like to store that information there, then that becomes a little bit more dicey. Secrets are not available outside of their own namespace, but if you're not careful about your RBAC, a workload from one namespace could somehow gain access to the secrets of another.
A
B
So I think what I would say is that network policies, as a built-in aspect of Kubernetes, are the way that you handle this; it's not really routing, it's more like lack of routing. Ingresses are also a little bit complicated, because ingresses are per-namespace as well, and so if you trust somebody, a tenant I should say, to create their own Kubernetes Ingress, then routing is not really a problem there, and you use network policies to restrict what you do beyond that.
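A minimal sketch of the "lack of routing" idea: a policy that denies ingress to every pod in the namespace except from pods in that same namespace (the namespace name is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: tenant-a
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # only pods in this same namespace may connect
```

An empty `podSelector` under `from`, with no `namespaceSelector`, matches only pods in the policy's own namespace, which is what makes this a per-tenant fence.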
B
If you want to be very careful about what gets communicated between different services and how they're encrypted, you can do both service-to-service authentication and user authentication with Istio. Istio is an amazing project; I'm not an expert and I don't have time to go into it here, but you can check it out.
B
That's kind of funny: I only found out about kiosk a couple of days ago, when someone tweeted about it, and so I went and looked it up. It's similar to some earlier projects that were going on in the multi-tenancy working group, and so I think what I would say is that if it matches your use case perfectly, then it's quite useful. The reason we've gone with this approach of hierarchical namespaces is that namespaces really are the unit of tenancy, and there are a couple of reasons for that.
B
Instead of adding a new concept, we wanted to add new behaviors to an existing concept. So they can address some of the same issues, but we think what we've done has made something that's maybe a little bit more flexible, at the cost of perhaps being less well suited to whatever kiosk's use cases are. So, as with all things, if you're considering both of them, I would encourage you to look them up and see what's best for you. Now, I have been thinking that I should probably reach out to them.
B
I am unfortunately not the expert here. If my memory is correct, Pod Security Policies can be used to enforce SELinux properties, but I'm not the expert there; I believe that is something that people do, but I am not personally familiar with it. I don't know if anyone else on the call wants to chime in, but I believe the answer is yes.
A
Okay, thank you very much, Adrian, that was an amazing presentation; I learned a lot, and I will ping you with my other questions. Thanks everyone for joining us today. The webinar recording and slides will be online later today, and we are looking forward to seeing you at a future CNCF webinar. Have a great day.