Description
Intro and CNCF Update
Talk #1: The Kubernetes Security Specialist Exam is Here! What to Know and How to Get Started
By Michael Foster, StackRox
Talk #2: What's new in Hierarchical Namespaces: now with less hierarchy! By Adrian Ludwin and Ginny Ji, Google
A
So we are live and recording, and I'm super excited about this as usual, because it's that time of the month where we get to come together. So welcome, everybody, to our Kubernetes and Cloud Native meetup for the eastern Canadian cities. We have some exciting news on that front that we will share in a moment, but welcome to our communities in Montreal, Quebec, Toronto, Kitchener, Waterloo, Ottawa, and our newest member, Halifax, which you will hear about in a moment. So welcome.
A
And if there are any French questions that arise, for people who only speak English, we will be there to translate. So, let's get going.
A
Today we have a full agenda on the docket. We're going to be doing a bit of an intro, which is usually done by us, the organizing committee. We'll go through a few things about our team, which is growing, what's happening in our ecosystem, our speakers of course, and how to stay connected with us and submit talks, and then we will have two presentations.
A
The first one is by our dear friend Mike Foster, who's going to be talking about the new CKS exam from the CNCF, and then we will have another talk about hierarchical namespaces (say that fast ten times) by Adrian and Ginny. That's going to be our second talk, and then at the end, as usual, we have some really awesome prizes, and this month they are brought to us by StackRox.
A
Our wonderful hosts today are myself, Julia (hi! I work at a company called CloudOps, based in Montreal), and these are my co-hosts for today. We're all going to have different roles. Of course, most of us know Archie already: he's our fearless leader and CNCF ambassador. Then we have Anthony and Suvro, who are going to be helping with moderating questions, the prize draw, and all that fun stuff.
A
And we want to give a shout-out to our colleagues who are on our committee. Our committee is growing, and we're really excited because we have a lot of representation from the different cities; we now have six cities in our crew, so it's really been nice to have representation across the board. Thank you to all of you who are on our committee, and a special shout-out to Johan, who's joined our committee from the actual east of our eastern Canadian region, in Halifax.
A
We're super excited to have you on board, and it's always such a pleasure working with all of you. Typically, we take turns hosting these meetups, so you'll get to know each of us over the next few months, or however long we're going to be doing this online.
A
And, of course, we have to give a shout-out to our wonderful sponsors; we could not do these without them. Obviously, being online is a bit of a different ball game, but we're very grateful for the sponsors who help us, because the funds they give us allow us to create other in-person events (hopefully one day) and to have prizes and swag to give out to everybody. So, as I mentioned earlier, StackRox has donated a couple of gift cards to us.
A
So thank you to them and, of course, to our other wonderful sponsors. If anyone works at a company, or you're just feeling generous and want to sponsor our meetups, there is a QR code here: please feel free to scan it, and it'll send you to some information about what a sponsorship package looks like.
A
As mentioned, we have some really exciting talks today. We're always very honored to have people from across, I guess, the world, now that we're in this virtual reality, come and support our community. So, as mentioned, our first little intro here will be CNCF updates from us, and then Michael will be chatting about the CKS, and Adrian and Ginny from Google will be talking about hierarchical namespaces.
A
And with that, I will pass the baton over to Archie. Oh, sorry, I think I'm doing a slide. Sorry: we want to hear from all of you, so these are the ways to be in touch with us and to follow what's happening. We have recently created a LinkedIn page, which sometimes gets a little neglected, but I'm definitely trying to be more active there; I enjoy being on LinkedIn.
A
I know, don't hate, but if anyone wants to join us on LinkedIn, there is a page there, Cloud Native Canada, as well as on Twitter: we have a Twitter handle that is posted here, at Cloud Native Canada. And there is this QR code again to join our Kubernetes Canada Slack channel. That's a different Slack channel from the main Kubernetes one, which has many, many people; ours is growing quite steadily as well, and it's a really wonderful community.
A
In there, people are posting articles, asking questions, and sharing information. So we welcome you to these three spaces to interact with us and follow what's going on in our world.
A
And here as well: we are always looking not only to hear from you, but also for talks. We really rely on our community; of course our sponsors help us, but we definitely rely on our community for content. So we have two types of talks. One is a story, a success or failure story of something I tried to do, and how it went.
A
Was it a massive success, or did it fail miserably, and here's what I learned? Either way, we want to hear from you about those stories, and we'd love to have something like that presented at our meetups. There's also the option of a tech talk, similar to what Adrian and Ginny will be doing today: one that is more around "this is the topic, this is how it works," a deep dive into a tool or technology, ideally something that is open and accessible to everybody.
A
So these are the types of talks and content we're looking for. If you have any ideas or suggestions, or want to join, please don't be shy. Our team is really wonderful and supportive in helping guide the CFP process, so if you have any doubts about your proposal, or you're not sure, we're here to help you; please don't be shy. And as we all know, this all started off with just Kubernetes, and we have expanded that, of course, because we've gone through a whole gamut of topics around it.
A
So, anything that's related to the CNCF, or any of the projects or tools involved there. As you can see, Mike is going to be talking about exams that the CNCF is offering. We really welcome you with open arms and would love to hear from you. And with that, without further ado, I will pass it to Archie, and he will take it away with the rest of the updates.
B
All right, Julia, before I take the mic completely: do you want to quickly explain how people can ask questions, and how we will moderate them in the tool?
A
Yeah, actually, I think in this one the questions go directly in the chat. I know on Zoom there's a Q&A section, but I will be monitoring the chat, as I think a few of us will be. So if you have any questions for us when I'm done talking, I will take a look at the chat, and if you have any questions for any of the speakers, please post them in there and we will make sure that they get answered.
A
Oh, I didn't see that: if you are in the chat, at the top there's a General tab, a Q&A tab, and there's DMs, so feel free to add any questions into the Q&A section; there are three little tabs at the top. Thanks for pointing that out; I hadn't seen that, so that's perfect. So if you have questions for our speakers, it's easier for us to queue them up in the Q&A area, so feel free to put those in there, and any other comments, feedback, shout-outs, love, whatever you want to give us (all positive, hopefully, or pleasant, I should say) can go in the general chat.
B
All right, thanks, Julia. So yeah, hi, my name is Archie; I'm a CNCF ambassador from Canada. I'm happy to be here organizing this Valentine's edition of the CNCF meetup. We really love the CNCF, Kubernetes, and the technologies around it, and most importantly we love you, our community, so I'm happy to be here again, and we have some exciting announcements and news around our meetup. I don't know if this picture is familiar to you; maybe you're playing Assassin's Creed Valhalla. We're kind of expanding our territories in the east.
B
So
it's
not
going
to
be
essex
this
time,
but
we're
going
to
nova
scotia
and
we're
having
a
new
member,
a
new
chapter
in
our
meetups,
so
we're
starting
a
new
meetup
at
in
halifax.
So
if
you
have
friends
or
some
camp
companies
who
is
from
halifax,
please
you
know
share
with
them
this
news.
So
we
have
a
local.
B
This event in Halifax will be part of our virtual events, but in the future, when COVID is finished, we will have some local events in Halifax as well. And with that, we have another exciting piece of news: the CNCF reached out to us and said, hey, you guys are doing a great job organizing all the events in Canada.
B
While
you
don't,
you
know,
take
care
of
also
the
cloud
native
canada
kind
of
chapter.
So
we
we
we're
starting
a
new
chapter
which
is
going
to
be
used
mainly
right
now
for
central
event.
So
in
the
future
you'll
see,
the
promotion
will
be
done
for
the
cloud
native
canada
and
we're
going
to
be
using
this
cloud
native
chapter
for
future
virtual
events.
B
One
will
go
post
covet
so
so
we'll
continue
organize
events
in
each
city
as
usual,
but
we
will
have
this
virtual
link
in
case
we
will
be
having
some
virtual
meetups.
So
I
hope
you
like
this
idea.
You
know,
and
you
know
we
know,
that
we
keep
introducing
new
changes
all
the
time,
but
I
I
think
this
will
be
a
good
addition
to
our
meetup
all
right
with
that
I'm
going
to
go
to
the
cncf
announcements.
I
hope
you
were
excited
about.
B
You
know
our
meetup
announcements,
but
we
also
have
few
cncf
announcements
as
well.
Just
as
just
like
a
recap
from
the
last
year,
you
know
it
was
very
difficult
to
keep
up
with
all
the
changes
in
the
cncf
landscape.
B
Last year, 35 projects were added to the sandbox; I'm sure you're using some of them, and we spoke about some of them already. If you want to share your journey with CNCF projects, our meetup welcomes you, and you always have a chance to present. Then, on top of the sandbox projects, we had five graduated projects: Helm, TiKV, etcd, Harbor, and Rook, which we covered pretty extensively within our meetups. A few other new projects have also been added to the CNCF landscape, and this is all possible thanks to these people, the Technical Oversight Committee, who work hard to bring in new projects every year: they do the voting and prepare the plan for new projects to join the CNCF.
B
So
it's
a
technical
oversight
committee
and
it
consists
of
different
people
from
different
companies
and
they've,
been
reelected
every
two
years
and
we're
excited
that
there's
five
new
members
joining
from
spotify
apple
weave
works,
alibaba
and
cern,
and
they
will
be
the
toc
members
for
the
next
couple
of
years.
B
Obviously
you
know,
cncf
is
a
organization
that
you
know
helping
to
promote
cloud
native,
but
it
kind
of
depends
a
lot
on
the
external
investment.
So
a
lot
of
companies
are
gold,
silver,
platinum
sponsors.
Google
recently
also
donated
3
million
for
google
cloud
credits
to
test
kubernetes
in
cncf.
B
We
already
have
some
news
about
2021,
so
very
popular
project
in
our
community.
Open
policy
agent
oppa
has
been
recently
graduated
in
cncf.
You
know
for
the
people
who
who
loves
kubernetes
they're
using
gatekeeper,
probably
today,
and
we
see
more
and
more
projects
starting
around
opa.
I
actually
wear
probably
my
hoodie
with
opa
right
now.
B
So
hopefully
you
know
if
you
have
some
interesting
stories
to
share,
please
let
us
know,
but
obviously
open
policy
agent
become
a
very,
very
popular
tool
to
use
and
if,
if
you're,
trying,
for
example,
use
pot
security
policies,
this
is
today
can
be
done
through
the
opa
and
it's
deprecated
in
kubernetes.
So
this
is
one
of
the
ways
to
use,
but
like
there's
many
many
policies
that
are
available
right
now
for
kubernetes
in
opa.
B
So
you
know
if
you're
trying
to
build
multi-tenancy,
if
you
want
to
implement
some
policies
and
runtime
and
networking
open
policy
agent
is
probably
your
best
friend
today
and
it's
a
huge
community
around
that.
So
please,
you
know,
be
active
in
this
community.
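As a rough illustration of replacing a pod security policy with an OPA Gatekeeper constraint: the sketch below assumes the K8sPSPPrivilegedContainer ConstraintTemplate from the community gatekeeper-library is already installed in the cluster, and the constraint name is made up.

```yaml
# Hypothetical example: deny privileged pods cluster-wide with Gatekeeper.
# Assumes the K8sPSPPrivilegedContainer ConstraintTemplate from the
# open-source gatekeeper-library has been installed beforehand.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: deny-privileged-containers
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```

With a constraint like this in place, the admission webhook rejects any pod whose containers set privileged: true, which is one of the checks a pod security policy used to perform.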
C
Yeah, sure. So buildpacks: they've actually been around for a long time; maybe some of you already know them from back in the day with Heroku, which is a platform as a service, and then Cloud Foundry also started using buildpacks. If you've used Heroku, you remember that you would git push to Heroku and then, magically, your project was built, because it would detect what kind of project it was.
C
If
it's
a
java
or
python
or
not
gs
project,
and
then
you
would
choose
the
right
environment
and
it
would.
It
would
actually
build
it
for
you
right.
So
this
is
what
we
have
built
packs
in
the
day,
but
now
buildbacks
are
more
generic
and
they
have
actually
joined
with
the
cncf
as
a
sandbox
project
in
2018,
and
now
they
are
in
incubation,
meaning
that
they
are
getting
traction
and
they're
getting
some
users
and
commuters
from
the
community.
C
The way it works: it's a spec, of which the pack CLI is the default implementation tool. (So yeah, go ahead, go ahead, Archie.) In a nutshell, you can use it locally on your machine with the pack CLI; you can install it on Mac with brew, and on Linux and Windows as well. Then you just cd into your project and run "pack build my-project", and pack will download the builder. This builder organizes several buildpacks in a given order, and each of those buildpacks runs a detect phase.
C
They
are
going
to
detect
if
they
apply
to
this
project.
So
if,
for
example,
it's
a
java
project,
then
the
java
build
pack
will
probably
try
to
build
the
project
so
once
it's
detected,
then
of
course
the
next
step
is
to
build
the
project
and
then
optionally
they
can
also
well.
The
baxiali
can
also
push
your
image
to
a
registry.
So
this
is
not.
Pack
is
the
default.
C
That's the default user experience you'd encounter using pack, but it's a rich ecosystem with many builders: a lot of vendors provide their own. Of course you have Heroku, because historically they were kind of the first, but you also have Cloud Foundry, which has renamed its effort to the Paketo buildpacks, as well as Google Cloud. So, of course, if you were to deploy to Heroku, you would choose the buildpack from Heroku.
C
If
you
go
to
google
cloud,
you
would
use
the
via
packs
from
google,
but
there
are
not
only
builders.
There
are
also
different
implementations,
so
pax
cli
is
the
default
one,
but
you
can
actually,
you
can
encounter
the
one
many
tools,
such
as
tecton
vmware,
a
kpak
of
waypoint,
but
actually
know
how
to
use
build
packs.
How
to
apply.
Bear
packs
to
your
project.
So
back
is
not
only
is
not
the
is
not
the
the
only
tool
that
you
can
use,
build
packs
with.
C
You
can
use
them
in
a
lot
of
ci,
tooling
already
so
yeah
that's,
but
that
was
it
for
the
quick
focus
on
on
the
on
the
build
pack
technology.
I
believe
that
it's
it's
a
great
technology
actually
and
it's
really
getting
traction
a
lot.
If
you
go
to
pivotal
talks-
or
I
don't
know
you
know
also
in
the
java
community-
you
will
see
that
it's
really
getting
attraction
more
and
more.
So
thanks
for
listening.
B
I'm really excited about this project, because you're kind of skipping one extra step in the pipeline: before, we had to create a Dockerfile and build the image. Now you can completely skip that step: you have your source code in GitHub, and you can create an image from it automatically and deploy it to Kubernetes, for example, or to Knative or some other project.
B
So
it's
kind
of
simplifying
a
lot
of
the
step
and
I
think
the
developers
especially
would
be
the
ones
who
are
really
take
advantage
of
that,
because
they
don't
necessarily
want
to
learn
docker
or
kubernetes.
They
want
to
write
the
code
so
for
them
it's
kind
of
simplifies
a
lot.
The
the
steps
around
definitely.
C
And
it
also
allows
an
organization
to
enforce
the
good
practices
you
know,
because
you
don't
let
the
developers
just
handle
a
random
docker
files
that
we
would
find
anywhere.
You
know
you
just
enforce
good
practices
across
your
organization
using
bill
packs.
So
this
is
what
is
pretty
interesting.
B
Cool
thanks
a
lot
anthony,
I'm
gonna
go
continue
and
usually
you
know
we're
covering
the
latest
kubernetes
release
and
this
release
was
released
actually
in
2020,
but
it
was
released
after
the
meetup
we
organized
our
last
meetup
from
the
kubecon.
So
I
want
to
say
thank
you
to
the
whole
community
community
that
contributed
to
this
release
and
congratulate
the
release
team
with
leading
the
best
release
of
kubernetes
1.20.
B
It's
incredible
that
we
already
lived
together
within
all
of
these
20
releases
and
organized
so
many
meetups
around
kubernetes.
This
release
has
42
enhancements
and
there
was
a
lot
of
work
has
been
done
in
terms
of
moving
cron
jobs
to
beta
and
cri
to
ga,
which
will
happen
shortly
in
the
upcoming
121
release.
B
There
was
a
few
features
that
been
graduated,
like
volume,
snapshots,
runtime
classes,
support
for
container
d
and
on
windows
in
terms
of
new
features.
I
think
graceful
note.
Shutdown
will
be
something
greatly
appreciated
by
the
you
know:
people
who
is
running
kubernetes
in
production,
better
pod
resource
request,
metrics
and
container
resource
bait
based
port,
auto
scaling.
So
if
using
hpa,
it
will
be
able
to
also
take
decision
based
on
specific
container,
not
like
you
know,
so
it's
a
kind
of
easier
way.
B
It's
not
easier
but
like
more
flexible
way
to
do
hpa
and
cover
more
use
cases,
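A sketch of that per-container autoscaling follows. In 1.20 this is an alpha feature behind the HPAContainerMetrics feature gate, and the deployment and container names below are made up.

```yaml
# Hypothetical example: scale on the CPU of one container ("app") only,
# ignoring sidecars. Requires the HPAContainerMetrics feature gate (alpha in 1.20).
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: ContainerResource
      containerResource:
        name: cpu
        container: app          # only this container's usage drives scaling
        target:
          type: Utilization
          averageUtilization: 60
```

The difference from the usual Resource metric is the extra container field, so a busy logging sidecar no longer skews the scaling decision.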
But I think the major change for everybody, the one discussed a lot on Twitter and in the media, was the dockershim deprecation. We got to the point where Docker is deprecated from Kubernetes, and there was a lot of fear; people were scared: should I learn a new tool? Should I stop using Docker?
B
The
answer
is,
is
no,
you
know
you're
still
gonna
be
running
your
docker
containers,
docker
images
on
kubernetes,
but
docker
shim
itself.
That
was
supposed
to
be.
You
know,
kind
of
running
docker
itself.
It
has
a
lot
of
tool
in
terms
of
maintaining
it.
So
it's
been
deprecated
right
now.
So
you
have
time
until
1.23
to
move
to
container
d
or
use
cryo,
but
also
mirantis
the
company
which
you
know
acquired
docker.
They
will
be
adding
docker
support
as
a
container
runtime
interface
and
they
will
be
supporting
it.
B
So
there's
no
much
change
for
you
know.
For
many
people,
it's
more
would
be
work
on
the
cloud
providers
that
are
using
docker
shim
today,
or
maybe
the
the
vendors
that
are
using
docker
shim,
but
in
terms
of
you
know,
change,
there's
no
major
changes
for
end
users
with
this
behavior
and
you
can
still
run
your
docker
containers
with
kubernetes
and
then
the
the
next
release
of
one
that
21
we
expecting
on
april
8th.
So
hopefully,
next
meet
up.
We
can
cover
about
it.
B
Some
of
the
you
know,
important
events
that
are
coming
on
may
4th.
We
are
going
for
a
next
cubecon
eu,
it's
still
going
to
be
online
and
we
have
a
a
code
for
our
community
which
provides
x,
25
discount.
Unfortunately,
the
the
price
went
up
a
few
days
ago,
so
it
was
ten
dollars
per
ticket
now
at
75,
but
you
still
have
this
discount
and
kubecon
will
also
host
a
lot
of
co-located
events
that
will
be
sold
separately.
B
If
you're
interested
to
join
any
of
these
events,
you
can
register
during
the
registration
to
kubecon
itself.
So
I
think
this
is
it
for
for
me
right
now
and
subaru.
You
want
to
present
michael.
D
Hi
everyone,
my
name,
is
supra.
I
hang
out
with
archie
and
the
team
once
in
a
while,
and
thank
you
so
michael
today
would
be
talking
about
the
new
certification
which
came
out
in
last
kubecon
michael,
is
a
cloud
native
advocate
at
stackross
and
feel
fr
enjoy
his
talk,
michael
you're.
There
awesome
thank
you.
F
And
working
at
stackrocks
yeah,
it's
awesome
and
and
julia-
and
I
know
we
have
some
some
ex-cloud
ops
people
in
the
chat
too
wojo's
here
as
well,
as
I
think,
a
couple
others
that
julia
pointed
out
so
yeah
thanks
again
for
having
me,
you
guys
can
see
my
screen:
okay,
right,
everything's,
good
awesome,
so
yeah
like
I
was
just
touched
on:
cks
the
new
certification
from
the
cncf,
just
kind
of
want
to
talk
to
you
about
what
you
need
to
know
how
to
get
started.
F
It
is
somewhat
of
a
unique
test
compared
to
the
ck
and
ckad,
so
I
think
it's
worth
covering
so
again
what
we're
going
to
cover
I'm
just
going
to
intro.
You
know
why
these
certifications
are
useful.
Talk
about
the
exam
structure,
exam
format
break
down
the
individual
sections
without
giving
away
too
much
right.
I
see
you
know
magnus
got
some
good
resources
along
with
waleed
he's
in
the
chat,
so
the
the
cks
people
are
here
which
is
great
and
and
he's
always
worth
pinging.
F
Give
you
resources
not
talk
too
much
and
then
talk
about
some
general
tips
that
I
found
really
useful
for
the
exam
and
I'm
sure
other
people
can
can
chime
in
as
well
feel
free
to
throw
some
q
a
in
the
chat
too,
because
I'll
we
have
some
q
a
time
at
the
end.
So
yeah
a
little
bit
about
me
so
like
it
was
just
touched
on
cloudy
I've
advocated
stack,
rocks,
I've
done
the
ck
cks.
Actually,
I'm
taking
the
ckd.
F
I've
been
like
a
little
slacking
on
it
a
little
bit,
but
I
got
a
very
nice
discount
when
I
was
at
cubecon.
So
again,
I
think
part
of
that
early
sign
up
bonus
for
the
kubecon
eu
is,
I
think
it's
like
ten
dollars
for
full
access
and
they
normally
have
a
lot
of
discounts
on
these
certs,
because
300
for
cert
is
kind
of
a
lot.
F
So
look
for
those
discounts
again,
always
ping
me
twitter,
slack,
linkedin,
the
kubernetes,
canada
slack
channel,
I'm
on
all
those,
so
you
know
feel
free
to
message
me
if
you
have
any
questions
so
why
kubernetes
certs
just
a
blank
slide,
because
if
I
I
mean,
I
think
all
of
us
are
here,
because
we
understand
why
the
cncf
ecosystem
is
growing
and
that's
because
kubernetes
as
an
orchestration
tool
is,
is
taking
off
and
even
if
you're
looking
to
work,
you
know
for
any
of
the
big
providers
and
you're
doing
like
cloud
run
or
lambda
function
or
something
that
you're
using
containers
underneath
the
hood.
F
Recently, security, and supply chain security in particular, has become such a big topic, so the CNCF decided, and rightfully so, that they wanted to come up with a certification that looks at security and really the whole lifecycle, because when we're talking security, we're not just talking Kubernetes. I think they did a great job. Now, on to exam structure.
F
Cks
exam
structure
is
two
hours
long.
You
have
to
get
67
if
you
go
so
this
is
just
pulled
straight
from
the
documentation
which
I
recommend
you
read.
It
says
15
to
20
performance-based
tasks,
but
then
it
kind
of
gives
away
how
many
questions
are
at
the
end.
So
it's
basically
15-16
questions
the
new
version,
so
they
just
updated
the
version
to
1.20
and
it
normally
trails
so
like
whenever
the
release
happens.
I
think
it's
like
six
weeks
or
eight
weeks
after
they'll
update
the
the
exam
version.
F
Just
it's
it's
worth,
knowing
so
that
when
you
use
your
study
tools,
you
are
using
the
correct
version:
cost
300
usd
you
get
a
free
exam
retake,
which
is,
I
think,
awesome
and
really
useful.
The
cert
is
valid
for
two
years.
I,
the
cka
and
the
ckid,
I
believe,
are
both
valid
for
three
years.
Correct
me
if
I'm
wrong
on
that
archie,
but
I
believe
they're
both
valid
for
three
one
of
the
reasons
for
the
two
years
is
because
a
lot
of
there
are
a
lot
of
moving
parts
of
security.
F
Like
you
know,
pod
security
policies
just
became
deprecated
right
and
that's
originally
in
the
cks.
So
there's
I
think
it's
a
quicker
moving
ecosystem
and
they
want
to
make
sure
that
people
are
staying
updated
so
when
it
says
a
12-month
exam
eligibility
they
mean
after
you
sign
up.
So
if
you
sign
up
in
november,
you
have
basically
till
the
next
november
to
take
your
two
tests.
You
get
a
free
retake
as
well.
If
you
fail
and
then
a
valid
cka
assert
is
required
and
I'll
touch
on.
F
Why
that's
required
as
I
go
through
the
sections,
but
it
is
a
good
policy
to
have
it
as
required.
I
think
they
did
a
good
job
with
that,
so
the
format,
if
you've
you
have
to
take
the
secret
to
take
the
cks,
but
I'll
just
you
know
talk
about
it
a
little
bit
to
make
sure
everybody
has
the
same
context.
F
You
basically
get
into
a
pain,
go
through
a
bunch
of
checks
to
make
sure
you're
not
cheating,
obviously,
and
then
you
get
into
your
exam
environment,
the
exam
environment
compromises
basically
a
terminal
window
on
your
right
and
you'll.
Have
your
question
sort
of
navigation
pane
on
your
left?
You're
allowed
two
two
tabs
open,
so
I
recommend
like
two
monitors
but
you'll:
go
through
you'll
have
a
question:
you'll
get
a
cluster
for
each
question
and
there'll
be
some
sort
of
task
that
you
need
to
perform,
whether
it's
on
the
node.
F
Sometimes
it's
within
kubernetes,
sometimes
you're
debugging,
the
master
node
or
the
control
plane,
node,
sorry
and
the
the
worker
node,
but
you'll
get
an
info
box
and
it
will
walk
you
through
exactly
what
you
need
to
do
now.
Every
single
question
means
you
have
to
change
the
context
right
because
you
do
have
16
clusters.
Make
sure
that
you're
aware
of
that
and
know
that,
when
you're
going
to
go
into
each
node
to
debug
or
look
at
some
different
security
tools
that
you
can
access
using
the
node
name,
you
can
use
cubecontrol
when
you
ssh
in.
F
So
you
have
access
to
all
of
these
applications
when
you
ssh
into
control
plate
or
worker
nodes,
which
is
really
useful.
If
you're
debugging,
something
in
kubernetes,
while
you're
working
on
the
node,
you
can
use
keep
control
from
the
node.
But
it's
a
it's
a
very
unsecure
environment
that
you're
testing
in,
but
it's
worthwhile
so
that
you
know
how
to
like
move
through
the
test
in
two
hours.
You
do
have
elevated
privileges
by
default,
which
is
great
because
you
basically
need
to
do
whatever
you
need
on
the
on
the
on
the
node.
F
One
thing
I
will
warn
people
of
is:
don't
delete
files
because
there
are
files
and
there
are
configuration
files
and
things
like
that
that
are
present
always
create
or
copy
new
ones.
Otherwise,
you're
in
for
a
rough
one,
and
I
think
that
should
be
just
a
self-evident
slap
on
the
wrist.
If
you're
deleting
files
on
the
node
yeah,
you
must
return
to
the
base
node.
So
if
you
shin
make
sure
you
come
out,
nested
ssh
is
not
supported
and
yeah.
You
can
use
q
patrol
keep
ctl
to
work
on
any
of
the
clusters.
F
Like
I
said
before,
once
you
get
in,
you
can
use
qctl
once
you're
inside.
I
need
to
update
this
because
it
says
1.1.19,
it's
actually
1.2
now,
so
the
quarterly
update
has
actually
just
been
pushed.
I'm
a
little
late
on
this
and
the
vi
vim
is
the
default
editor.
One
thing
to
note
is:
if
you
like,
using
you,
know:
g
edit
or
emacs
or
nano
they're,
not
there,
you
do
have
privileges,
so
you
can
go
install
them,
but
let's
say
you're
working
through
every
single.
F
You
know
node,
you
don't
want
to
be
installing.
You
know
application
on
every
single
node,
so
it's
worth
it
just
getting
used
to
vi
and
vim,
and
especially
if
you're,
using
the
cube
control
edit
function
to
edit
configuration
to
edit
emails
on
the
fly
vi
via
them
is
definitely
worth
it
all
right.
Exam
sections,
so
just
going
to
break
down
each
of
the
six
sections
talk
about
what
you
need
to
focus
on
and
hopefully
not
give
away
too
much.
F
If
you
have
any
questions,
throw
them
in
the
chat,
so
yeah
section
number
one
cluster
setup:
network
policies
to
restrict
cluster
access
network
policies
are
a
must
know
if
you're
going
to
you
know,
enforce
security
across
your
networks
in
your
kubernetes
clusters.
So
there
are
multiple
different
functionalities,
there's
the
to
and
from
functionality.
That
is
new,
I
think,
is
new
in
like
as
of
117,
but
really
you're
gonna
need
to
know.
You
know
how
to
do
default.
F
Denies
how
to
do
default,
allows
how
to
manage
ingress,
how
to
manage
egress,
how
to
manage
two
pods,
how
to
name
it
and
manage
within
name
spaces.
You
need
to
understand
how
the
the
permissions
and
how
to
configure
them
so
that
the
pods
can
talk
to
each
other
or
not
talk
to
each
other,
so
definitely
worth
knowing
the
cis
kubernetes
benchmarks.
F
So
you
want
to
review
all
the
kubernetes
components,
one
of
the
tough
things
about
working
in
something
like
gke.
Is
you
don't
see
the
masternodes
right,
the
the
exam
uses
cube
adm,
I
believe,
to
set
up
or
cue
spray,
but
either
way
the
kubernetes
components.
They're
not
containerized
they're
on
the
node
you'll
be
able
to
go
and
see
their
configuration
they're,
not
like
an
rke
where
the
components
are
containerized,
so
it's
worthwhile
understanding
those
components
and
how
the
open
source
configuration
sets
it
up.
F
Because
if
you,
if
you've
worked
in
a
managed
cluster
and
you've,
never
touched
the
master
nodes,
you
will
feel
out
of
place
when
you
have
to
go
in
and
debug
and
figure
out
these
components
so
worth
knowing
that
and
also
worth
knowing
all
the
cluster
benchmarks
of
how
to
properly
set
up
and
maintain
your
cluster
by
looking
at
the
configuration
files,
ingress
objects
for
security
controls.
F
So
I
think
archie
touched
on
this,
but
maybe
maybe
it
was
in
like
1,
9
or
1
8,
but
you
can
have
multiple
ingress
controllers,
obviously
in
a
namespace,
so
there's
a
new
ingress
class
as
well
functionality
where
you
can
specify
a
default
ingress
controller
when
you
create
an
application
namespace
just
in
general,
making
sure
that
pods
aren't
connecting
to
the
wrong
english
controller
is
is
very
useful
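A sketch of pinning a workload's Ingress to a specific controller with the ingressClassName field, which arrived with the networking.k8s.io/v1 API; the host, service, and class names here are made up.

```yaml
# Illustrative Ingress tied to one controller via ingressClassName,
# so it cannot be picked up by the wrong ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  ingressClassName: nginx     # must match the name of an IngressClass object
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```

An IngressClass object can also be annotated as the cluster default, so Ingresses that omit the field still land on a known controller.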
And then, protecting metadata endpoints and graphical user interfaces, and catching services that shouldn't be there.
F
That should just be a regular occurrence. So, section two, cluster hardening: restricting Kubernetes API access, like with the Billion Laughs attack. We want to make sure that our API access is vetted and secure, and we want to make sure that we know how we're applying role-based access control; really, RBAC is the core functionality for applying all of our policies, using caution with default accounts. One of the issues with pod security policies was the default accounts associated with them.
F
And
then
I
think
this
is
great.
They
snuck
this
in
here
update
kubernetes,
frequently,
because
I
believe
when
this
came
out
in
kubecon,
most
people
were
still
using
115,
which
we're
at
120
now,
and
I
don't
1.5-
isn't
even
supported
anymore
by
the
kubernetes
community,
and
so
the
cloud
providers
have
done
a
good
job
at
trying
to
push
people
to
adopt.
You
know
the
later
versions,
even
though
I
know
116
had
a
little
bit
of
an
api
api
but
worthwhile.
F
All right, so yeah, definitely a big security issue. So, system hardening: reducing the OS footprint.
F
It's
it's
really
interesting
how
they
they
try
to
incorporate
kind
of
whole
lifecycle
and
outside
of
kubernetes
into
the
exam
as
much
as
they
could
obviously
like
you're,
not
really
creating
your
your
base
image
but
realistically
you'd
be
wanting
a
secure
base
image
that
you
can
vet
with
a
minimal
attack
surface
managing
iam
rolls
that
was
used.
It
throws
some
confusion,
we
don't
necessarily
mean
like
amazon,
iim
roles
or
something
like
that.
We
just
mean
our
back
and
how
would
be
applied
within
kubernetes
minimizing
external
access
to
network.
That
would
be,
you
know.
F
Maybe
something
like
you
can
use
open
source
tools,
maybe
something
like
app
armor
to
control
access
into
different
processes
or
network
policies
as
well
and
again
it
make
it
make
sense
it
mesh.
It
mentions
app,
armor
and
staccom,
so
there
are
other
open
source
tools
and
you're
allowed
to
use
the
documentation.
I'd
recommend
that
you
become
aware
of
those
tools,
as
you
really
are,
not
going
to
be
a
security
expert,
so
to
speak.
F
…unless you understand those tools and their place within the ecosystem. All right, we've got three more left. So: set up appropriate OS-level security domains — so PSPs, OPA, security contexts. I always get a bunch of questions about this, especially because PSPs have become deprecated as of 1.21. PSPs enforce security contexts, right? We are talking about security contexts at the container and pod level, and then we're using PSPs or OPA to enforce those security contexts against our clusters.
F
So the mechanism for enforcing those security contexts might change, but the core component doesn't change: security contexts are here to stay. That being said, you need to know how to apply these different mechanisms, as there is still a conversation about how to best move forward with the application. So, like Archy mentioned, Open Policy Agent is worth knowing — Gatekeeper.
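As a rough illustration of the pod-level and container-level security contexts the speaker is describing — the settings a PSP or an OPA/Gatekeeper policy would enforce — a minimal sketch might look like this (the pod name and image are hypothetical placeholders, not from the talk):

```shell
# Sketch only: a pod that sets securityContext at both the pod and container level.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod
spec:
  securityContext:              # pod-level settings, inherited by containers
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
    securityContext:            # container-level settings
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
EOF
```

A PSP or Gatekeeper constraint would then reject pods that omit or weaken these fields, which is the enforcement split described above.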
F
All right, my favorite section was supply chain security, because really this is kind of the core issue that's cropping up nowadays. So, you know: minimizing base image footprint, securing your supply chain, right? Kubernetes by default pulls from Docker Hub — I don't know how long that's going to be the default, Archy, or you can fill me in on that one — but for now it's still the default that it pulls from Docker Hub.
F
So, you know, setting up your supply chain to make sure that you're basically setting an allow list of the registries you can pull from; static analysis of user workloads — so Trivy, for example, is mentioned, and Falco is mentioned, as some open source tools; enforcing runtime security using those tools on the nodes — for example, Trivy. So, scanning for known vulnerabilities: you need to know how to use an image scanner, how to apply it, how to actually make assessments based off some of the recommendations. And lastly, monitoring, logging, and runtime security.
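As a small, hedged example of the image scanning just mentioned — assuming Trivy is installed, and using an arbitrary public image as the target:

```shell
# Scan an image for known vulnerabilities, showing only the serious ones.
trivy image --severity HIGH,CRITICAL nginx:1.19

# In a CI pipeline, make the scan gate the build:
# exit code 1 if any CRITICAL vulnerability is found.
trivy image --exit-code 1 --severity CRITICAL nginx:1.19
```

The same pattern — scan, then make an assessment based on the findings — is what the exam objective is getting at.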
F
This is the largest slot I have. So: you're going to perform behavioral analytics — like I said, that's what a tool like Falco does, right? You're going to detect threats within physical infrastructure and apps — this is kind of just a general topic for "hey, we're applying all of these to detect threats," and detect all phases of attack, regardless of where they occur; perform deep analytical investigation.
F
Basically, what this is trying to get at is audit logging, container scanning, you know, runtime enforcement — applying it, and understanding that when you apply runtime enforcement with something like Falco, it gives you feedback saying, "hey, you're running a process as root, PID so-and-so." How would you enforce that in Kubernetes? How would you go and figure out which pod that is, and make an adjustment accordingly, right? So a lot of these questions are like…
F
All right, general tips. I think I'm getting close to time, so I'll try to move through this, and if anybody has any questions they can hit me up in the Q&A period. But: read the documentation — number one recommendation. A lot of the stuff I'm talking about is in there, and it gets updated regularly, so I recommend you do that. Take your time, read the question, manage your time — bite off the questions that you understand, that you can read through.
F
My biggest recommendation is to go through and understand it, get hands-on experience with a lot of these tools, and then bookmark what you find the most valuable pages to be. You can use bookmarks — you can only use the documentation that they allow you, but you can use bookmarks in your tab — and I found that really useful, because you don't really have a lot of time to be searching through web pages.
F
Recording task progression: so the exam has a little flag option — you can flag questions to return to. I actually found it more useful to flag questions that I was confident in, and then I returned to the questions that I wasn't; but it's useful to help you keep track of time. And yeah, you actually have autocomplete — there is autocomplete for kubectl.
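The autocomplete mentioned here can be set up in the exam shell with the lines from the kubectl documentation (the `k` alias is the usual convenience, not something the exam requires):

```shell
# Enable kubectl tab completion in the current bash session.
source <(kubectl completion bash)

# Common time-saver: alias kubectl to k, and make completion work for the alias.
alias k=kubectl
complete -o default -F __start_kubectl k
```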
F
Like I said, the CKA is a precursor to the CKS, and one of the reasons for that is because you do things in the CKA like, you know, create YAML files on the fly, use dry-run — basically a lot of these workflow things in Kubernetes. And the CKS sort of gives you some of those, so it might give you template files, or it might give you things, because we know that you know how to work with Kubernetes YAML files and objects.
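The "create YAML files on the fly, use dry-run" workflow referred to here looks like this (resource names and images are just examples):

```shell
# Generate a deployment manifest without creating anything on the cluster.
kubectl create deployment web --image=nginx --dry-run=client -o yaml > web.yaml

# Same trick for a bare pod.
kubectl run tmp --image=busybox --restart=Never --dry-run=client -o yaml > pod.yaml
```

You then edit the generated file and `kubectl apply -f` it — far faster and less typo-prone than writing YAML from scratch.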
F
The point is: do you understand the security aspects of them? So it is a little bit different from the CKA in that part, and you should take advantage of that to cut down on those typos. Be comfortable provisioning in vi/vim, and pay attention to the question context — that `kubectl config use-context` line — and make sure that you're not in the wrong environment…
F
…or looking at the wrong question, because you can delete some things that you might not have wanted to delete. Never write YAML from scratch. Learn how to sort through JSON outputs, right? You do have jq — there's a list of applications that's available to you on every single node. And again, review those systemd basics and review those Kubernetes…
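A couple of sketches of the JSON-sorting skill mentioned here — both jq and kubectl's built-in JSONPath are worth knowing:

```shell
# List just the pod names from the full JSON output, using jq.
kubectl get pods -o json | jq -r '.items[].metadata.name'

# The same idea with kubectl's own JSONPath support, no jq needed:
# print each node's internal IP.
kubectl get nodes \
  -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
```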
F
…config files. Understand what an open-source Kubernetes installation looks like, and yeah, try to avoid practicing only on something like GKE, because you're never really going to get that experience on the node. And resources — just finishing up. So I got a chance to go into killer.sh, actually, when I was just looking after my CKAD; I didn't actually use it for the CKS, but I really liked it as a resource.
F
I created a personal repository just of the links that I used to prepare, and it's got a little Terraform script that creates a GKE cluster just for me to experiment with. Walid has a great resource.
F
I saw Magnus in here too — Magno Logan — he's got a good resource as well. So there's a lot of people that are willing to help you, and I recommend that you, you know, get in touch on Slack. The general Kubernetes Slack as well has a whole area dedicated to the CKA, CKAD, and CKS, where people would love to help you. So again: get in touch with me, Walid, Magno, the CNCF community.
D
Thank you, Michael. We have one question from Bala Sundaram.
D
The question is: is there partial scoring available for a question, as there will be sub-tasks, or do all the tasks in a question have to be performed?
F
I believe that there is some sort of sub-scoring now — but don't quote me on this. One of my biggest gripes — and Archy, don't hit me — is that there is not as much, I would say, visibility into the scoring. So when you do the CKA, a lot of it has to be exact: the name of the pod you create has to be exact, and I'd recommend that even when you're creating manifest files, you are very specific in what you write.
D
Okay, thank you, Michael. So, I have a question myself: a free exam retake — is that a subtle way to say that it's so difficult everyone's going to fail?
F
I think it's because people come in and they think they have a hold on security and Kubernetes, but they might not have an understanding of, let's say, all of the security tools that are available, and how you would use them to then interact back with Kubernetes, right? A lot of people might be used to working in something like GKE or AKS, where, you know, they go to cloud logging and it says, "hey, here's the log" — but do they really go and set up the logs themselves, right?
F
If you were to use something like audit logging, with an audit policy in k8s, do you actually go and work with that config file, and change that config file so that you get different audit logs coming out? So there are a lot of different use cases that, I think, highlight all the security around Kubernetes really well, and that, I think, is probably the most challenging part. And to be fair, the resources haven't necessarily caught up. I know, you know…
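The audit-policy workflow described here is roughly the following — a minimal sketch, where the file path and the choice to log only Secret metadata are illustrative, not prescribed by the exam:

```shell
# Write a minimal audit policy: record metadata for Secret access, ignore the rest.
cat > /etc/kubernetes/audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
- level: None          # catch-all: don't log anything else
EOF

# Then point the API server at it by editing its static pod manifest
# on the control-plane node, adding flags such as:
#   --audit-policy-file=/etc/kubernetes/audit-policy.yaml
#   --audit-log-path=/var/log/kubernetes/audit.log
```

Changing the rules in this file is what changes "the audit logs coming out," as described above.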
B
We'll let him start preparing for his presentation — but yeah, just trying to be mindful of time. We're going to continue to take questions, and Subhro will be helping to triage them, so please continue asking them; it's just that we need to move on to the second segment of the night.
B
Really, really happy to see you, and we'll be interested to hear — you know, now that StackRox has been acquired by Red Hat. So, you know, I was expecting you to be wearing a red hat today, but it seems not yet.
B
All the best, thanks. So yeah — Subhro, do you want to pass it to Anthony? Anthony, you want to present? Yeah, all right. Yes, sure, now is the…
C
…time for our second talk, about hierarchical namespaces. So we are welcoming today Adrian and Ginny, both software developers from Google. Adrian and Ginny, it's…
G
…it's your turn now. Thanks. So, I'm Adrian Ludwin — I have become accustomed to saying the word "hierarchical" a whole lot over the past year — and I'm from Google Waterloo, which of course is in Kitchener. I personally am in Toronto today, and have been most of the time since the pandemic started, and so has Ginny; most Waterloo Googlers are spread out through the GTA and the Kitchener-Waterloo corridor. So yeah, I'm an engineer.
G
I work on GKE, Google Kubernetes Engine, and I'm going to talk a little bit about hierarchical namespaces. Now, some of you may have heard about this before — I presented at KubeCon EU, and I gave a seminar last summer about this as well — so I'm going to kind of skip over some of the older stuff, or maybe go through it quickly, and Ginny will be talking about some of the new features that are part of hierarchical namespaces.
G
So, why do we have this feature? Well, it's all about multi-tenancy. And just for those who don't know, let's talk a little bit about why you would want to use multi-tenancy. So, Kubernetes at a glance — many of you probably know this: you've got a user, you use something like kubectl or the UI to talk to the control plane, and the control plane has a bunch of nodes with kubelets on them.
G
Now, it's usually not just you — usually there's a team, and we'll often call that team a tenant. It's kind of like a generalization of a team: maybe it's a person, maybe it's a group of people, maybe it's multiple teams. Now, the problem starts coming about once you start getting multiple tenants. Like, do you give everybody their own cluster? And if you start giving everybody their own cluster, how does that scale? Who's in charge of, you know, operating them all? You end up getting a lot of stuff…
G
…that's repeated — like a lot of deployments that are repeated — and, you know, good luck trying to upgrade them. And so we generally don't want to have too many clusters in one organization. You want to say, "okay, let's have multiple tenants, and they all talk to the same control plane, and then they have their stuff in different namespaces."
G
So you can apply all of the security policies that we were talking about in the last talk. Now, I don't want to imply that you should only have one cluster or anything like that, because that's absolutely not true — and this is something where I've heard it said, something like, "well, Google only has one cluster." This is not true at all; there is no single mono-cluster. And so, when should you use multiple clusters? Well, regionalization is an obvious one.
G
Most clusters are only available in one availability zone or in one region, and so if you are deploying to multiple regions, you want multiple clusters. Blast radius is probably a big one.
G
If you want to update from your old 1.15 cluster to 1.16, you probably don't want to discover that the API you depend on has been deprecated across all of your prod workloads at the same time. So you're going to want to upgrade Kubernetes one cluster at a time, and there are always other kinds of cluster-level dependencies that you want. And so you are definitely not going to have just a single one — you're going to want, at a minimum, one per environment.
G
You know: development, test, staging, prod. Probably you're going to want at least two in prod — maybe a canary and a regular one. Scalability — this doesn't come up too often, but there are people who run over the limit, which is, I think, 5,000 nodes in open source and 15,000 nodes in some of the distributions. And then there's always some weird snowflake.
G
But if you do want to get onto the same cluster, then there is the multi-tenancy working group, which I am a member of, and which is the owner of HNC — the hierarchical namespace controller project — as well as several other projects like virtual clusters. And so our goal is to make it easy to support multi-tenancy, which is to say multiple tenants in one cluster. There's another group called SIG Multicluster, and they will set you up for managing all of your clusters.
G
Namespaces don't seem like they do much — they might look like just a name. It's kind of like having one level of folders that you can put things into. So on their own they're not massively useful, but they make all of those other policies possible. So, for example, we mentioned secret management as part of the CKS, and — I don't know if we mentioned service accounts — those are both bound to namespaces by default…
G
…and I think that's the only binding that they have. So if there is a service account in a namespace that has access to various resources, any pod that anybody creates in that namespace can use any service account; they can all bind to any secret. So if you want to keep some service accounts and pods and secrets separate — for separate pods that are created by separate people — your only option is to use namespaces. And so that, at a bare minimum…
G
…that alone is a reason you want to use namespaces. By the way, namespaces only isolate the control plane; they do not isolate the data plane. So, for example, if you've got a vulnerability in your container, or in Docker or containerd, and you've got something that can attack other things on the node, namespaces are not going to help you — use gVisor, or Kata Containers, or something else like that.
G
That's out of scope for this talk. Other features do not necessarily require namespaces exactly, but they sure work better with them. RBAC: I think the only way you can scope object creation is at the namespace level — I might be wrong on that; I have not passed the CKS exam myself, maybe I should — but certainly for all of the other RBAC operations, even though you can target them to individual objects with individual names: don't do that.
G
It's just going to be a pain for you, so you're definitely going to want to do that at the namespace level. Resource quotas and limit ranges apply to namespaces. Network policies, which we mentioned in the last talk as well — they can be more finely targeted, but they do use namespace boundaries by default as well. The one caveat there is that they require labels on those namespaces, which are typically not secure — although, while I'm not going to get into it, hierarchical namespaces actually give you a version of labels that are secure.
G
So, what if you want to use policies across namespaces? As I mentioned, namespaces are flat within one cluster — it's as though you only get one level of folders. And so what happens if you want policies that can apply to groups of namespaces? Traditionally, you would need some tool and a source of truth that's outside of Kubernetes — like Flux, Argo, Config Sync, etc.
G
And alternatively, there are some tools that have this kind of idea of an account, or a project — like in OpenShift — or a tenant; and so Kiosk is one example, with its Tenant CRD. But what we thought there was a space for was something that both has the source of truth inside Kubernetes — so, for example, there's no dependency on Git, which all of that first set of tools have, or on some source control — and doesn't involve some entirely new concept like "project" or "tenant."
G
We wanted to do something where, because everything was already based on namespaces, we'd extend namespaces instead of introducing something entirely new. So that's what hierarchical namespaces are. They are provided by an add-on to your cluster called HNC, the hierarchical namespace controller. This is not going to be a core part of Kubernetes — at least at first, the feature is an add-on — and all hierarchical namespaces are also regular namespaces.
G
What this does is it lets you express an ownership relationship between parents and children, and that allows you to propagate policies across them. The default way of doing this is that they simply get copied: so if you put an RBAC policy in org-1, it will get copied to team-a and team-b and all of their services, but not, say, to team-c. And the same goes for secrets and anything else — HNC can propagate any object type. We don't recommend propagating pods.
G
That will be bad for you. By default it's just the RBAC objects, and you can add anything else. So yeah, as I mentioned, it's entirely Kubernetes-native but compatible with GitOps tools: you don't have to use Git, but you can. As I mentioned, it has cascading policies. It also allows you to create subnamespaces, which is just to say a regular namespace, but created underneath an existing namespace.
G
Now, the problem with hierarchies is that they can be very limiting, because you can express exactly one type of ownership, one kind of hierarchy, and one way that things get propagated — and that can often be too limiting. And so that is where our next topic comes in — oh, sorry, I need to continue sharing my screen — which is exceptions. And so I'm going to let Ginny, my co-worker, take over. Ginny, go ahead.
H
Thank you, Adrian. Hi everyone, I'm Ginny. I work at Google on the HNC team, especially on the exceptions feature with Adrian, and today we're going to talk about exceptions. To understand what exceptions are, let's first look at an example org structure. So here we have team…
H
…a, b and c and the playground, all within the eng department, and each box here is also a namespace. Team a, b and c are our internal teams, but the playground can be accessed by anyone outside this department. Next slide, please. So here's an example case where there are some common secrets in the eng department that any prod workload can access. Click, please.
H
First of all, we don't want to restrict team-b with the common quota; and moreover, we don't want to put the replacement quota in team-b, since there are users with admin privileges in that namespace who should not be allowed to set their own quotas. Click, please. Instead, we want to create the team-b resource quota in the eng department, while only propagating that quota to team-b. And how do we do this? Next slide, please.
H
So the solution is exceptions. The exceptions annotations allow you to limit the propagation of an object to descendant namespaces. And to understand how exceptions work, let's look at a demo. I'm just going to start sharing my screen — can everyone see my screen? Okay.
H
So first we will be creating the team structure, as we just talked about in the first scenario. For that — bear with me a little bit — I'm copying and pasting the commands I have in another doc. So here, let's create the structure. Perfect. And now, let's verify the structure.
H
We do this by saying `kubectl hns tree eng`, and it shows the structure of the eng department, where playground, team-a, b and c are all subnamespaces under the eng department — which is exactly what we want. And for those of you who are new to HNC, just note that these subnamespaces are also real namespaces, as we can see when we say `kubectl get namespace` — we can see those subnamespaces right here.
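The demo setup narrated above can be reconstructed roughly as follows — assuming the kubectl-hns plugin is installed, and using the namespace names from the slides:

```shell
# Create the root namespace and its subnamespaces.
kubectl create ns eng
for child in team-a team-b team-c playground; do
  kubectl hns create "$child" -n eng     # subnamespace under eng
done

# Verify the structure.
kubectl hns tree eng

# The subnamespaces are real namespaces, visible to plain kubectl.
kubectl get ns
```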
H
Okay, now let's clear the screen, and we will be creating the secret we talked about in the first scenario.
H
We are doing this by creating the secret within the eng department, and we give it a name, "real-secret" — and note that for demo purposes we're using some fake credential data, but of course in the real scenario you'll be using your own credentials. Great, we have now created the secret, and next we will be asking HNC to propagate the secret for us. We do this by setting the resource type Secret to the mode "Propagate."
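Those two steps look roughly like this — the secret name and fake credential are from the demo, while the exact `hns config` subcommand name has varied between HNC releases, so check your version's docs:

```shell
# Create a secret in the parent namespace (fake data, demo only).
kubectl -n eng create secret generic real-secret \
  --from-literal=password=fake-credential

# Tell HNC to propagate all Secrets down the hierarchy.
kubectl hns config set-resource secrets --mode Propagate

# The secret should now appear in the descendants, e.g. team-a.
kubectl -n team-a get secrets
```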
H
Now we have added the exceptions annotation onto our secret — let's check if it's gone from the playground. We view the secrets once more, and we cannot see the real-secret anymore. But we also want to make sure that this secret is still propagated to the other namespaces — team-a, b and c — and we'll just use team-a as an example.
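The annotation step being described can be sketched like this — the annotation key is taken from the HNC v0.7 exceptions user guide mentioned below, so verify the exact name against your version:

```shell
# Exclude the playground subtree from propagation of this one object.
kubectl -n eng annotate secret real-secret \
  'propagate.hnc.x-k8s.io/treeSelect=!playground'

# The secret disappears from playground...
kubectl -n playground get secrets

# ...but is still propagated to the other children.
kubectl -n team-a get secrets
```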
H
Perfect. So, to learn more about the exceptions feature, you can visit our HNC user guide in our GitHub repo, or you can just Google for "HNC v0.7 exceptions." And now I'll hand back over to Adrian.
G
Thanks, Ginny. We will also try to post a PDF version of these slides so that some of these links will actually be clickable as well. I'm just going to talk very briefly about where HNC is going, because people ask us things such as: "When can I use this in prod?" Because, as Ginny just mentioned, we're at version 0.7 right now — is this something I should be using?
G
So, with the addition of exceptions, we actually believe that we're pretty much feature-complete now for our first major product release. This doesn't mean that we will refuse new features, but we're not planning on adding any more, and so really what we want to do now is things like stability and future-proofing. For example, we have been using some of the older versions of the APIs, like v1beta1 for CRDs — those have been deprecated, I think in 1.19 or so, I don't remember, but deprecated.
G
So we do want to get onto the latest version of all of those. As I said, we want to add things like stability — make sure that none of our webhooks are applying to sensitive namespaces like kube-system, so that you can't take down your cluster. Those will all be coming fairly shortly. And also, if there are usability improvements that we need to make, then those are great too. We're considering moving to our own repo under kubernetes.
G
We actually have permission to do this now, but I don't think it's necessary for 1.0.
G
If you want to get involved, if you want to help: the most important thing that we would love your help with is just using it. Try it out — not necessarily in prod, if it's something very sensitive that you're running right now — but we are at the point where you should be able to install it and use it, and it should work the way you expect it to. So there are a couple of ways that you can do this: you can go to GitHub — the latest release is version 0.7.
G
You can check out all of our documentation and our quick starts, and install HNC in a couple of lines. You can get the plugin that Ginny demonstrated — the kubectl-hns plugin — from Krew. If you're using GKE, then you can install it through Hierarchy Controller, which is our free, kind of enterprise-ready version of HNC.
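Assuming Krew is already set up, fetching the plugin mentioned here is a one-liner:

```shell
# Install the kubectl-hns plugin from the Krew index.
kubectl krew install hns

# Confirm it's available and see its subcommands (tree, create, config, ...).
kubectl hns --help
```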
G
It has a couple of additional features, like native GitOps integration; hierarchical observability, so that you can filter your pods and the like; hierarchical resource quota is coming soon — this is the idea where you can actually share your quota natively across multiple namespaces; and then multi-cluster installation management, which Ginny is hard at work on right now, should be available probably by the end of March — I'm not sure exactly when that will roll out. But either of those methods…
G
…please do try it — check it out, try it out, and give us all the feedback that you can. If you feel like coding, that's great too: we have a couple of milestones, the HNC 0.8 milestone and an HNC backlog milestone, and feel free to pick things up, or just give us a shout on Slack or any of the other ways through the multi-tenancy working group. If you want to learn more, I gave a talk at KubeCon EU called "Multi-tenant Clusters with Hierarchical Namespaces."
G
There was a webinar earlier in the year called "Better Walls Make Better Tenants" — these are both available on YouTube. And also, the TGI Kubernetes folks from VMware gave a demonstration called "Let's Try HNC" in September; that was a long one, so maybe turn it on and watch it at double speed.
G
But it is a fun exploration of just how easy HNC is to use, because I was pretty happy that, even though he hadn't read any of the documentation, he was able to get it up and running right away.
G
Thanks. Okay — thoughts on cell architecture? As I said, these are tools for you to use: namespaces, or namespaces plus multi-cluster. So yeah, by all means, go shard your applications by cluster if it meets one of those needs that I mentioned earlier — regionality, blast radius, scalability. Those are all good reasons.
G
So yes, by all means, use that architecture where it makes sense for you. That's not really much of an answer, but I would go back to those principles and say: if any of those things are true and you need to use multiple clusters, by all means go for it. And I think there are tons of tools, both open source and proprietary, on whatever platform you're on, that can help you with that.
C
G
If the secret is in the parent namespace, you are not allowed to make changes to it in the child, because this is really about policy enforcement — and it's not really enforceable if anybody can change it. Now, with that said, now that we have exceptions, you could ask your friendly cluster administrator to not propagate that secret to your namespace, and then you can replace it with another secret of the same name — and then it actually would be propagated to its own children. But by and large, unless you use exceptions, if you create an object in a parent namespace, it gets propagated to all of the descendants and you are not allowed to change it.
C
Thank you, Adrian. This question is from me: I was wondering if there was a UI available that could display the hierarchical propagation in place — something like the community's dashboard, or an extension for hierarchical namespaces, for…
G
…example. So no, there are no UIs to the best of my knowledge today. There's the kubectl-hns plugin, and that's what we offer. I don't know of anybody, including GKE, who's displaying this stuff natively yet, but it's something that, as adoption goes up, I can see happening.
C
G
No, there is nothing like that. That's an interesting idea — exceptions are so new that I don't think we've had demand for that yet. Any object that has been propagated, you can look at it and it will have an annotation saying where it came from, but there's nothing that goes in the opposite direction. So yeah, if there's demand for that, that's the kind of thing we could look at adding.
C
Thank you. We also had a question — she was asking if you can apply a network policy across different namespaces with labels.
G
So, yes, you can — network policies do support labels, and so you absolutely can. Well, okay, let me rephrase: you cannot share a network policy across namespaces — a network policy lives in exactly one namespace. So if you want to have a group of namespaces that have the same policy, you have to use something like HNC, or something outside of Kubernetes.
G
However, with that said, network policies can apply to pods that come from other namespaces, and you can select those namespaces with labels. So you can say, for example: only allow traffic from a namespace that's marked as prod, but don't allow traffic from a namespace…
G
…that's marked as staging — not that I recommend putting both of those in the same cluster, but, for example, you could do that. So HNC doesn't actually add anything to network policies; what it does do is add those trusted labels that nobody can modify…
G
…that reflect the hierarchy itself. And so you can say, for example, "only allow traffic from namespaces that are owned by this team" — that's the kind of thing it lets you do. I hope that answers your question; you can follow up with me on Slack, or on the multi-tenancy working group mailing list, later if you like. Thanks, Adrian.
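The namespace-label selection being described looks like this — a minimal sketch, where the namespace, policy name, and the `env: prod` label are all hypothetical examples:

```shell
# A NetworkPolicy lives in exactly one namespace (here team-a), but it can
# admit traffic based on labels of *other* namespaces.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-prod
  namespace: team-a
spec:
  podSelector: {}              # applies to every pod in team-a
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          env: prod            # only namespaces labelled env=prod may connect
EOF
```

With HNC, the same `namespaceSelector` can match the controller-managed tree labels instead of a hand-applied label, which is what makes the selection trustworthy.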
C
We have the last question — it's from Joanne, and I think it's kind of the same topic as the previous question. He was wondering if vanilla core namespaces plus network policies would provide all that is needed to achieve a multi-tier architecture.
C
For example, a web tier, an application tier, and a database tier. It's something that comes up often in government use cases.
G
I wouldn't say — certainly HNC is not a cure-all, any more than namespaces themselves are — so I would not say that it provides you with all you need, but it's probably a good starting point. Because, for example, if you've got a multi-tier application and you want to have multiple multi-tier applications in the same cluster, certainly creating a parent name…
G
…space for each of those applications would be useful, because then you can put all the different components into different child namespaces and have a common set of policies applying to all of them, as well as using those labels to restrict them based on RBAC — sorry, to restrict traffic based on the hierarchy that you're in, as well as any other criteria that you might like. So while I wouldn't say it gives you all you need, it's probably a really good starting point for that kind of architecture.
B
Thanks, Anthony. Adrian, I'm personally very excited about this feature — probably one of the most exciting things I've seen in Kubernetes lately. And for some reason I've noticed that the Kubernetes community doesn't spend a lot of attention on multi-tenancy. I don't know why, but I think it's been difficult from the beginning — it's still difficult for many people — and I think HNC is kind of a good starting point for many new things. And, you know, obviously having UIs would be really, really nice.
G
Yeah, it certainly is. So, HNC is patterned after a couple of things. I wouldn't call OpenShift projects a direct influence, but it's certainly something that we knew about. The more direct influence is actually from a product called Anthos Config Management, which comes out of Google — there's now a free version of it called Config Sync as well — and it kind of introduced this idea of hierarchical application of policies. So, OpenShift projects, to the best of my knowledge — basically, it's…
G
…it adds exactly one level of hierarchy, so you can now have a grouping of namespaces, but I don't believe you can have a grouping of projects, to the best of my knowledge. And it does require that you have this additional concept that is different from the Kubernetes built-ins. And so what we're trying to do is generalize these concepts. Config Sync, or Anthos Config Management, is kind of nice, but all of that inheritance has to be expressed in Git, as files and directories.
G
None of it is actually done on the cluster — the intermediate namespaces don't even actually show up on the final cluster. So we took inspiration from a couple of things, and we tried to build something that was as minimal as possible while still extending the sort of built-in concept of namespaces, while giving you this freedom. So, for example, you can have any number of levels of hierarchy with HNC. I don't recommend going too deep — I think that two is going to be enough for many people.
G
If you start going above four, I'm probably going to start asking you whether this is really the best way to structure your cluster. But certainly, knowing that you have the ability to do that is nice, without having to understand a concept that is particular to one distribution.
G
So yeah, multi-tenancy is hard, in part because there's no standard way to even think about tenancy. That's the other reason why, instead of trying to create a new concept, we just tried to extend the existing building block to make this as broadly applicable as possible. Yeah.
B
And it's really core to Kubernetes, right? So you don't have to worry about lock-in, in the sense that this feature will be part of Kubernetes going forward, and anybody can write a KEP and add additional features. If you want to write something, you're welcome to do that.
G
Exactly. It's not part of the core distribution, but it is open source. It's part of the SIGs; it's going to have its own sig repo fairly soon, and very shortly we'll be under the authority of sig-auth. So we'll be just like controller-runtime or any other project that's not actually part of the core Kubernetes distribution but is part of the core community. It is part of the Kubernetes project.
F
If I might jump in here, I think this is awesome, especially on the research side of open source. If researchers want to get into Kubernetes, they don't have the money, let's say, to do something like OpenShift. Just figuring out how to manage a bunch of researchers in a multi-tenant way that makes sense to administrators, I think, is a great way to be spending time. I think there's a huge use case and upside for it, so thanks for doing this. Thank you.
B
Yeah, and I think a lot of people are asking how to do delegation: if you have a DevOps team that creates namespaces, how can you delegate namespace creation to your users? This is a really good answer for that, too.
G
Yeah, you can even delegate subnamespace creation. We distinguish between two kinds: you can take existing namespaces and arrange them into a hierarchy; we call those full namespaces. You can also create a little object called an anchor in an existing namespace, and that will automatically create a child namespace for you.
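The anchor object described here is a small custom resource; a minimal sketch, assuming HNC's v1alpha2 API and hypothetical namespace names, looks like this:

```yaml
# Hedged sketch: creating this anchor in the parent namespace "team-a"
# causes HNC to create a subnamespace called "team-a-dev" underneath it.
apiVersion: hnc.x-k8s.io/v1alpha2
kind: SubnamespaceAnchor
metadata:
  name: team-a-dev   # name of the child namespace to be created
  namespace: team-a  # the parent namespace holding the anchor
```

The `kubectl-hns` plugin wraps this same operation, so users don't need to write the manifest by hand.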
G
We call those subnamespaces. They're locked to the parent, but you can delegate the creation of them, and because you can only create them under other namespaces, they become bound by any of the constraints and any of the policies that apply to the parents. So it really does make it easier for you to delegate administration.
G
You can even give the person full cluster-admin privileges in the child namespace, and, as Ginny showed with the exceptions, they do not have permission to override anything that comes from the parent, even though they otherwise have full permission to create any object in that namespace.
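The delegation pattern above can be expressed with ordinary RBAC scoped to the parent namespace; a hedged sketch follows, with all names (role, group, namespace) hypothetical:

```yaml
# Hedged sketch: lets a team create subnamespaces under "team-a" by managing
# SubnamespaceAnchor objects there, without any cluster-level
# namespace-creation permission.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: subnamespace-creator
  namespace: team-a
rules:
  - apiGroups: ["hnc.x-k8s.io"]
    resources: ["subnamespaceanchors"]
    verbs: ["create", "get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-subnamespace-creators
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-devs   # hypothetical group of delegated users
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: subnamespace-creator
  apiGroup: rbac.authorization.k8s.io
```

Because the resulting subnamespaces inherit the parent's propagated policies, delegating this Role does not let users escape the constraints set above them.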
B
Fantastic. Thanks, Adrian, for the nice presentation, and Ginny for the nice demo, and thanks to the Kubernetes community for contributing this new functionality. Now we're moving to the fun part, where we're going to draw some prizes. The first two prizes are AWS cards from StackRox, so here we go with the first one: drumroll.
B
We have 40 people who submitted the survey. All right, Borko is getting the first one.
B
Congrats, Borko! The second prize is also a card from StackRox: fifty dollars.
B
And now we're drawing for the Kubernetes hats. I have them here with me; I'm not going to open them, but we're going to ship them to your home address.
B
One minute. All right, that's very weird: here it shows Tebow, and then it shows us.
B
Johan! I think with this we're going to conclude our meetup. Thank you very much. We're planning our next event: we actually talked to the Argo CD community, and we're going to have one of the project managers from Intuit, who will be speaking about the Argo CD future roadmap and what the community is doing with Argo CD. It would be very interesting to know what you want to learn about Argo CD, so stay tuned for our next event.
B
We're planning maybe something around March, or it could also be Elastic presenting; we're still in the planning phase. But we want to thank everybody who joined us today. Congrats to the winners, and thank you to Ginny, Adrian, and Michael for presenting at our event, and thanks to the amazing team of organizers who are helping us build such a nice event. We're looking forward to seeing you next time. Thank you very much, bye everyone; giving you back some time for lunch.