From YouTube: Security Self Assessment: Cluster API - Part 1
Description
Part of Kubernetes SIG Security under CNCF
A
So we've started the recording. We are under the same code of conduct as CNCF and Kubernetes. I want to summarize, for people watching the recording, where we are so far.
A
We decided that we can create different point-in-time data flow diagrams based on the lifecycle of Cluster API, and we'll focus on Cluster API for AWS, purely in terms of scoping and the people who are in the room. The idea is to continue once we have a common understanding of this.
A
We will start going through the working doc and see what needs more content, and if there are new GitHub issues that come out of the discussion, we can also link those to this talk so that we know the two are related, which might be useful. It may be useful to have some sort of label to filter on all of the issues related to this, but that's something we can do later.
A
Sorry, I went on mute. Do you think we can keep this diagram aside for now and start with a high-level diagram first, where we give a very ten-thousand or twenty-thousand-feet-level explanation of how Cluster API components interact with each other, and then keep driving deeper and deeper as we go from there?
A
So I'm going to try this. This is from my kube context, so I need to move this somewhere.
A
Basically, the idea is that each box represents a component, and arrows represent the flows from and to it. Then there are trust boundaries, which can sometimes be networks or firewalls, or actual physical boundaries of regions or data centers, which we sometimes have to draw with dotted lines. And then there are ports that we sometimes also use in the data flow diagram, to explain that these ports are open and that this component and this component talk to each other. So I'm thinking of starting from there.
A
We can also create cluster boundaries for the management cluster and the workload cluster, and then sort of continue from there. So I'll start drawing this diagram from scratch. All of you, help me and tell me what to draw; I'm kind of your pen right now, and then we'll see how far we go.
A
Okay, and yes, Robert, while you're here, let us know if we should do something differently or something in addition to what we're doing.
C
No, the only thing I would comment is that it's probably not as important for this exercise to annotate the ports. Maybe if we're starting to do scans or something, that will be useful to document somewhere, but I think it would be more important to indicate, in at least some form, with a little lock icon or different colors, where the encrypted data flows are and where there is any sensitive keying material, stuff like that.
A
So maybe I'll start with two API servers, one for management, one for workload. And feel free to interrupt me if I do something different, or if I should do something different, go ahead.
A
Okay, sounds good. I don't think I can share this link, so we'll probably have to follow the screen share, because this is just a local link for me. But I can download this later, share it with the group, and put it in the doc.
B
If we take the default case for CAPA, then yeah, it's a VPC boundary in the default case. Just because of the way this sort of quickstart works, we're exposing the API server endpoint of the workload cluster on the internet using the elastic load balancer, so the instances are on private subnets.
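Note: a minimal sketch of the AWSCluster shape being described here, assuming a recent CAPA API version; field names should be checked against the provider's reference. The default internet-facing scheme is what exposes the workload API server endpoint on the internet through the ELB, while the instances themselves stay on private subnets.

  apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
  kind: AWSCluster
  metadata:
    name: example
    namespace: default
  spec:
    region: us-east-1
    # Default behavior described above: the control plane ELB is
    # internet-facing; "internal" would keep it off the internet.
    controlPlaneLoadBalancer:
      scheme: internet-facing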
A
Okay,
that's
useful
okay,
so
we
have
another
boundary
internet
boundary.
Maybe.
A
Okay
sounds
good,
so
we
have
two
boundaries
established
now.
Let's
take
a
point
in
time
where
workload
cluster
does
not
exist,
management
cluster
exists
and
the
end
goal
for
this
flow
would
be
workload.
Cluster
exists
and
management
cluster
continues
to
exist
as
well,
and
then
we
can
go
through
the
next
iteration
of
adding
a
node
or
deleting
a
cluster
or
creating
another
cluster
and
how
those
two
workload
clusters
if
they
at
all
they
interact
with
each
other.
A
A
B
B
B
A
A
Got
it
plus
something
else
based
on
what's
plugin,
let's
keep
it
like
this
for
now,
okay,
so
now
the
resources
are
created
here.
What
happens
after
that.
B
So
I
guess
we
bring
in
the
cluster
api
call
controller.
I
guess
that's
a
thing
like
this.
B
B
B
No
just
copy
core,
I
guess.
B
Certainly in the default case. If you want, I mean, I don't want to complicate things right now, but what people do, if you look at our products, like Tanzu Kubernetes Grid or something, is you're going to create a self-managed cluster at the end of this, and quite often you're going to run the CAPI components co-located with the control plane. But I think we can come to that as another snapshot in time, maybe.
A
Yeah, I think that makes sense, so for now we'll assume both are in the same node. That's why we have this node boundary, which means that these two components exist in the same node. Any time we know for sure some component is in a different node, we'll create another boundary similar to this. So just keep this in mind as we continue.
B
I think for this exercise we should assume that it's already running; okay, it's there.
B
What's next? Well, so there's a bunch of control loops in there. So to provision a cluster, the user basically needs to create a set of resources: a Cluster resource, an infrastructure machine template resource, a bootstrap template, and...
A
...a control plane. All of these exist within etcd as CRDs, correct?
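Note: a minimal sketch of the set of resources the user applies to the management cluster, as described above, assuming CAPI v1beta1 and CAPA v1beta2 API versions; names and field values are illustrative only.

  apiVersion: cluster.x-k8s.io/v1beta1
  kind: Cluster
  metadata:
    name: example
  spec:
    infrastructureRef:          # the AWS "network" side of the cluster
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
      kind: AWSCluster
      name: example
    controlPlaneRef:            # reconciled by the control plane controller
      apiVersion: controlplane.cluster.x-k8s.io/v1beta1
      kind: KubeadmControlPlane
      name: example-control-plane
  ---
  apiVersion: controlplane.cluster.x-k8s.io/v1beta1
  kind: KubeadmControlPlane
  metadata:
    name: example-control-plane
  spec:
    replicas: 1
    version: v1.22.2
    machineTemplate:
      infrastructureRef:        # infrastructure machine template
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
        kind: AWSMachineTemplate
        name: example-control-plane
    kubeadmConfigSpec: {}       # bootstrap (kubeadm) configuration goes here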
B
We need to create some network in which the cluster is going to live, so that's an AWSCluster resource. And then a control plane is going to spin up. Well, to create a control plane we need to create some machines, so there's a dual representation in a sense: the core controllers have a Machine resource, and then the infrastructure providers have an infra machine resource. So I guess I can, I think we can probably put on the other controllers now.
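Note: a hedged sketch of the dual representation mentioned here. The core Machine object and the provider's AWSMachine are paired through object references; in the control plane flow these are stamped out by controllers rather than written by hand. Names are illustrative.

  apiVersion: cluster.x-k8s.io/v1beta1
  kind: Machine
  metadata:
    name: example-control-plane-abcde
  spec:
    clusterName: example
    version: v1.22.2
    bootstrap:
      configRef:                # handled by the bootstrap (kubeadm) controller
        apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
        kind: KubeadmConfig
        name: example-control-plane-abcde
    infrastructureRef:          # handled by the AWS infrastructure controller
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
      kind: AWSMachine
      name: example-control-plane-abcde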
B
There are, okay, two more... no, three more controllers I guess we can bring into the mix, one of which is the bootstrap controller.
A
Hey, hey guys, we are just doing data flow diagrams together right now, so yeah, we'll just continue doing that. Let us know if you have any questions while you're following along.
A
Okay, go ahead. So you're saying we'll have another two or three controllers, and one is the bootstrap controller.
D
All right, what are the other two? The control plane controller.
A
Okay, that makes sense. And CAPI core is a separate process from all these three at this point, okay. And these three will also interact directly with the API server, yes, and similar to this, with two-way flows.
A
All right, almost there. Okay, keep going, I'll follow along with you. What would be the next step after this?
D
Okay, so.
B
What's happening right now is that the control plane controller is going to, importantly, create a bunch of key material.
A
Okay, interesting. And where would that be located? You store them on the API server as secrets, right? So via the API server into etcd, yeah, okay. And correct me if I'm wrong in this whole thing: whatever is happening in this node boundary that we have created will be common for all of the control plane nodes, right? It's just one instance of things happening, but the same thing can happen on the second node, third node of the control plane cluster.
A
Okay
sounds
good,
so
this
will
be
stored
as
kubernetes
secrets
question
on
the
secret
encryption.
This
is
still
default,
which
is
no
encryption
for
for
this.
For
these
secrets,
or
we
have
a
different.
B
Yeah, so certainly with kind, if you bootstrap with kind, you're not going to get any additional encryption out of that.
B
It's just encoding, and our statement so far has been: build your own AMI, your machine disk, and in turn encrypt it using AWS KMS.
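Note: as discussed, secrets created this way get only base64 encoding by default. A hedged sketch of how a cluster operator could enable secret encryption at rest on the management cluster's API server, using the standard Kubernetes EncryptionConfiguration (the key below is a placeholder); this is operator configuration, not something Cluster API sets up for you.

  apiVersion: apiserver.config.k8s.io/v1
  kind: EncryptionConfiguration
  resources:
    - resources:
        - secrets
      providers:
        - aescbc:
            keys:
              - name: key1
                secret: <base64-encoded 32-byte key>   # placeholder
        - identity: {}   # fallback so existing unencrypted data stays readable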
B
So once it's created a bunch of key material, the control plane controller is going to create a Machine resource and stamp out a bootstrap config resource to match the machine, and an infra resource, an AWSMachine resource, to match the machine. So you've got Machine resources and a corresponding KubeadmConfig resource.
B
That's what the bootstrap controller is handling. And the AWSMachine? No, there's no additional secret at that point, but this is where the bootstrap controller now comes in.
B
First of all, when the user creates the cluster, the AWS controller has been reaching out to the EC2 API.
B
There is an optional behavior over web sockets, but we haven't even enabled it by default. So no, let's say one way.
B
It's AWS, it's HMAC SigV4.
B
So that EC2 API behind the scenes is all on AWS; it's completely out of scope for us, I think. It's creating the network, yeah. The network is created. We then reach out to another API, the ELB API.
A
Okay, got it, all right. So maybe I'll just add notes: creates ELB for workload API server, and another note: creates network for workload cluster. Okay, sounds good. Let's keep going.
B
Yeah, so it just keeps pinging DNS to see if that ELB is alive, and eventually it's there. Once we've got that, the AWS controller, given all the other coordination that's happened with the cloud-init being generated, is now ready to create. It goes back to the EC2 API, actually, and creates an EC2 instance.
A
Oh sorry, one question: are these two steps in any way sequential, or can they be done in parallel, the instance creation and the LB creation?
B
Yeah, we can't really create a machine until the ELB is ready.
B
Oh yeah, that's true, yes, you're right, I had forgotten about that. Also, this is more representative of the other providers; the default behavior for AWS is actually different. For AWS, before we create the machine, we call out to another API, which is AWS Secrets Manager. So, the Secrets Manager API.
B
Yeah, well, we take it from the API server and we save the cloud-init there, yeah, but it's the AWS controller that's doing it.
B
No, that's fine. I mean, to be fair, this process is not generally well understood, even within the Cluster API community to some extent, so spelling it out like this is really helpful. So we've got, and then we've got another component. I guess it's within, well, it's on the AWS boundary, it's not on the internet, it's the hypervisor boundary, I guess. We've got the instance metadata service.
A
Okay, got it, all right. So probably worth mentioning here: nodes are assumed to be VMs. All right, and then you're saying there is a hypervisor boundary between the physical node and this VM, and there is a component that exists on the physical node that can interact with this VM, which is called the instance metadata service. Correct?
A
Got it. This is going to be interesting; I'll probably put it somewhere in between, near the node boundary, somewhere here. And the instance metadata service and cloud-init are components we use, but they exist and are controlled by AWS, correct?
A
Right, so a question in terms of scoping of ownership: if we eventually have to do a source code review from a security perspective, would cloud-init be in scope for CAPI?
A
Okay, so is this two-way or one-way?
A
Okay, and is there any kind of authentication or encryption for these two to talk to each other?
B
No, it's HTTP, and I'm pretty certain we're not even using IMDSv2, so it's probably IMDSv1, yeah. So this is plain HTTP.
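Note: the point being made is that the instance metadata service is reached over plain, unauthenticated HTTP with IMDSv1. As an assumption to verify, newer CAPA versions expose instance metadata options on the machine template so operators can require IMDSv2 session tokens; the field below is based on that and should be checked against the CAPA version in use.

  apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
  kind: AWSMachineTemplate
  metadata:
    name: example-control-plane
  spec:
    template:
      spec:
        # Assumed field (mirrors the EC2 MetadataOptions API): require
        # IMDSv2 tokens instead of the unauthenticated IMDSv1 flow.
        instanceMetadataOptions:
          httpEndpoint: enabled
          httpTokens: required
          httpPutResponseHopLimit: 1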
A
Got it. Okay, sounds good. I mean, we don't have to figure that out now, but it's good to know which ones are unauthenticated and which ones are not. So essentially, in this case...
A
Okay, maybe it's out of scope in this case. Okay, sounds good. So maybe, I think I missed a step somewhere here: once the machine is created, cloud-init is part of the machine by default, and that's why these things then get triggered, like cloud-init starting to talk to IMDS.
B
Yes, on startup cloud-init is started by systemd.
B
There's basically a script in there, so cloud-init is now going to execute the AWS CLI, which is then going to read from Secrets Manager.
A
Secrets Manager, this one, okay, interesting. So maybe I'll move a few things around. And is the AWS CLI then installed by cloud-init in the node boundary, or does it already exist?
A
Maybe
like
this,
oh,
the
arrows
are,
would
be
other
way
around.
I'm
guessing
yes,
right,
okay,
okay,
I
found
a
way
to
do
double
arrows
good,
but
it's
fine
for
now
so
secrets
manager,
api,
I'm
gonna
change
the
colors
for
this
to
represent
aws
components
with
blue
this.
I
think
this
is
the
one.
B
So
it's
both
reading
data
from
secrets,
manager
and
also
initiating
the
delete
of
the
data
as
soon
as
it's
read
it
well
it
well
or
does
it
it's
going
to
do
it
after
everything's
successful?
I
think
I
can't
remember
now.
No,
no,
it
does
so.
B
TLS certs and CA keys, the four of them that kubeadm uses: the cluster CA, the etcd CA, the front-proxy CA, and the service account key pair.
A
That
seems
or,
like
all,
the
essentially
all
the
cas,
that
kubernetes
control
plane
needs
in
general,
yeah.
Okay,
all
right-
and
this
will
include
ca,
private
keys
and
search,
not
just
search
soft
cs,
yeah.
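Note: a hedged sketch of where this key material ends up on the management cluster. Cluster API stores the generated CAs as Kubernetes Secrets named after the cluster (conventionally <cluster>-ca, <cluster>-etcd, <cluster>-proxy, and <cluster>-sa); exact names, labels, and types should be confirmed against the CAPI docs for the version in use.

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-ca        # cluster CA; -etcd, -proxy, -sa follow the same shape
    namespace: default
    labels:
      cluster.x-k8s.io/cluster-name: example
  data:
    tls.crt: <base64 PEM certificate>     # placeholder
    tls.key: <base64 PEM private key>     # private key included, not just the cert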
A
And I think that makes sense, it would need it for sure, okay. So I think that makes sense to me. So now, after it fetches the user data, which is essentially the CA certs and private keys, it stores it locally and then deletes it in the Secrets Manager API, yeah, okay. So then Secrets Manager doesn't have it anymore.
A
Okay, all right, yeah. So we are still at maybe very early stages of workload cluster creation, but yeah, I think this is important. So when we are at that stage, Nadir and Kita, remind us about this part; this seems interesting for sure, what Lubomir mentioned.
B
Yeah, so I think now's the time to bring it in. So we've got the real user data now, which has all the key material. Cloud-init restarts and executes the real user data YAML, and now at some point it's going to execute kubeadm. It can do a whole bunch of other stuff, but the bit that we care about is kubeadm.
A
Okay, where does kubeadm come from? Is it part of the machine?
A
Okay, makes sense, so that changes my color, all right. So for anyone listening, the reason for asking this question is that kubeadm is sort of the most important thing for Cluster API, and without it, it would be very hard to do what we are able to do. So this is probably the most important thing or component in this whole flow that makes things possible.
A
Yeah, good point. Okay, sounds good. So we have kubeadm now and we have cloud-init, and cloud-init has restarted, you just mentioned. So what triggers kubeadm to get into action?
B
Cloud
in
it,
so
one
of
so
along
with
the
key
material
there's
also
a
cube
adm
configuration
that
was
generated
by
the
bootstrap
controller.
So
we
execute
cube
adm
in
it
with
that
yaml
file
and
all
the
ca
bits
and
pieces
as
and
cube
adm
in
it
is
going
to
drop,
was
gonna
drop.
The
key
material
into
the
correct
locations
on
disk,
create
the
static
pod,
manifest
that
and
start
or
restart
kubelet.
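Note: a hedged sketch of the kind of kubeadm configuration the bootstrap controller renders into the user data and kubeadm init then consumes; values are illustrative. The controlPlaneEndpoint is the ELB DNS name discussed earlier, and certificatesDir is where the CA material fetched from Secrets Manager is dropped before kubeadm runs.

  apiVersion: kubeadm.k8s.io/v1beta3
  kind: InitConfiguration
  nodeRegistration:
    kubeletExtraArgs:
      cloud-provider: aws
  ---
  apiVersion: kubeadm.k8s.io/v1beta3
  kind: ClusterConfiguration
  clusterName: example
  controlPlaneEndpoint: example-apiserver-1234567890.us-east-1.elb.amazonaws.com:6443
  certificatesDir: /etc/kubernetes/pki   # key material is written here before kubeadm init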
B
That would have come from Secrets Manager. So we're not grabbing individual files; what we're doing is grabbing a cloud-init configuration file from Secrets Manager, which has been chunked up into little pieces, that was generated by the bootstrap controller, and then...
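Note: a hedged sketch of where this behavior is controlled in CAPA. The AWSMachine/AWSMachineTemplate spec has a cloudInit section selecting the secure secrets backend used to stage the chunked bootstrap data (AWS Secrets Manager by default, with an SSM Parameter Store option and an insecure opt-out); field names are from memory and should be verified against the CAPA API reference.

  apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
  kind: AWSMachineTemplate
  metadata:
    name: example-control-plane
  spec:
    template:
      spec:
        instanceType: t3.large
        cloudInit:
          # Bootstrap user data is chunked into this backend and deleted
          # again once the instance has read it, as described above.
          secureSecretsBackend: secrets-manager
          insecureSkipSecretsManager: false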
A
All right, sounds good. I think that makes sense to me. So now kubeadm will use this config; and remind me where the kubelet comes into the picture now.
A
Okay, nice. One second, probably, okay, no, never mind, I'll just add a block here for the kubelet.
B
It should start. We also do kind of have a presumption that the OCI images are pre-staged on the machine, so it's not going to have to go out to the internet to pull down those images. It's not a requirement, but in order to have consistent boot times, we consider the core Kubernetes images to be pre-baked.
A
Right,
okay,
so
the
it's
it's
kind
of
like
another
prerequisite
like
the
other
components
which
are
part
of
the
machine
itself:
machine
images.
Okay!
A
A
A
Okay
sounds
good,
so
now
these
images
that
were
pre-baked
are
started
as
static
parts
using
cubelet.
Now
I
think
the
main
question
is
these
two
have
to
interact
with
each
other,
and
this
is
this
on.
G
I mean, it's kind of complicated, but eventually you get a valid client certificate signed by the controller manager, and you don't really need a socket to communicate between the two components. The way you bootstrap is for kubeadm to write these files on disk and trigger restarts of the kubelet, so that basically it eventually bootstraps the node.
G
No need for sockets.
G
I mean, it depends; in the early stages it does not, it just writes files.
G
I mean, it's kind of complicated, but I guess the question is: should we do a security audit of kubeadm nested inside this security audit?
B
Let's assume, then, that we've got our control plane running, okay. So at this point, hold on, let's see, there are a couple of controllers that could be talking to it. We now have at least the CAPI core controller and the control plane controller talking to the API server via the load balancer.
B
It's a remote client. Oh yeah, there is one, but it's only used in EKS. Is that right? Yeah, it's only used in EKS. Okay, fine.
A
Okay, so the control plane controller and CAPI core both talk to this API server.
B
The control plane endpoint that's recorded in the cluster is going to be the load balancer, so it doesn't really have a way to talk to the node directly, because the node is on a private IP address, it's not on the internet. So it's only ever going to be mediated through the elastic load balancer.
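Note: a hedged sketch of the field being referred to. The provider records the load balancer as the cluster's control plane endpoint, and that is the only path the management cluster controllers use to reach the workload API server; the host value is illustrative.

  apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
  kind: AWSCluster
  metadata:
    name: example
  spec:
    controlPlaneEndpoint:
      # DNS name of the elastic load balancer created earlier; the nodes
      # themselves sit on private IPs and are never addressed directly.
      host: example-apiserver-1234567890.us-east-1.elb.amazonaws.com
      port: 6443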
A
I think we're doing okay on time; we have 15 more minutes. Do you think we'll be able to get the workload cluster up and running by then?
B
It's more or less running, I think. Yeah, I mean, it will be running, it'll be a single node, but it will be running. The workflow is marginally different for a worker node, in the sense that we're not shoving key material around, and actually even for a second control plane instance. So it changes a bit then, but we do otherwise have a running cluster.
A
Okay,
so
we
have
a
running
cluster
with
zero
worker
nodes
right
now.
Yeah
okay
sounds
good.
I
think
I
need
this
one
to
go
here
and
then
there
is
another
flow
from
elastic
lp
to
api
server.
Elastic
lb
would
also
be
aws,
won't
yeah,
so
changing
the
color
here.
A
Okay,
all
right
sounds
good
I'll
mention
this
just
to
be
extra,
clear
control,
plane
mode,
all
right,
and
so
we
have
15
minutes.
We
don't
have
to
finish
the
worker
node
if
it's
gonna
take
longer.
A
Instead,
I
want
to
also
open
up
for
questions
if
people
have
any
or
things
we
should
plan
for
next
time.
Definitely
we're
not
meeting
next
week
with
kubecon.
A
Potentially
we
could
give
everyone
a
break
for
a
week
after
as
well.
I
need
some
time
to
kind
of
get
myself
up
to
normal
energy
after
kubecon,
maybe
end
of
october.
We
can
see
if
we
can
meet
again
and
kind
of
continue
from
there
in
the
meantime,
I'll
make
this
diagram
a
bit
better,
potentially
put
it
in
the
document
and
see
if
see,
if
we
need
to
do
anything
else
next
time.
What
what
do
everyone
think
about
this.
A
All
right,
and
for
next
time
we
could
continue
from
here,
create
a
worker
node
and
see
how
things
look
like
there.
What
would
be
the
next
flow
that
would
be
useful
after
that,
I'm
guessing
adding
a
node
will
all
will
end
up
being
included
in
this
flow,
where
the
worker
node
is
added.
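Note: a hedged sketch of what the worker node flow planned for next time would start from: a MachineDeployment plus matching bootstrap config and infrastructure machine templates, with illustrative names and versions. As noted earlier, that flow differs mainly in that no CA key material needs to be shipped to the node.

  apiVersion: cluster.x-k8s.io/v1beta1
  kind: MachineDeployment
  metadata:
    name: example-md-0
  spec:
    clusterName: example
    replicas: 1
    selector:
      matchLabels: {}
    template:
      spec:
        clusterName: example
        version: v1.22.2
        bootstrap:
          configRef:
            apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
            kind: KubeadmConfigTemplate
            name: example-md-0
        infrastructureRef:
          apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
          kind: AWSMachineTemplate
          name: example-md-0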
B
Yeah, maybe. Actually, one thing that came to mind is that there is an interesting point: we've created this machine and we've said there's a bunch of prerequisites on it, but those machines and templates come from somewhere, the image has come from somewhere, and there's also how we told the EC2 API to use that particular image as well. So I don't know how best to represent it.
A
Right, right. So what you're trying to say is that Cluster API, or CAPI core, and the controllers are going to trust the user to provide the images in such a way that they will not be misused or abused. And I see Lubomir has his hand up, so please go ahead.
G
The same applies to custom image repositories. If somebody gives you a corrupted API server image or something like that, they can potentially take all those API calls and do something with them, or use it for mining or whatever. So anything that is custom feels like a potential entry point for a security breach; it's in the hands of the user to make sure it's not...
A
Compromised
yeah
yeah
exactly
I
mean
essentially
the
core
point
of
always
have
your
trusted
users
as
admins
for
control,
plane,
cluster
and
workload
cluster,
because
otherwise
they
can
essentially
do
anything
they
want
so
yeah.
I
agree
I
may
have
something
that
comes
to
mind
when
managing
privileged
user
actions
or
a
malicious
insider
scenario.
A
Do
we
have
any
auditing
enabled
by
default
for
control,
plane,
api
server
control,
plane
clusters,
api
server,
where,
if
somebody
puts
in
something
that's
malicious
or
a
new
image,
is
requested
from
ec2
api?
Do
we
have
audit
logs
that
we
can
go
back
and
check
like
did
that?
Did
this
person
really
do
this,
and
when
did
when
did
this
happen?.
B
You mean on the management cluster? No, not by default. I mean, if they're doing the kind bootstrap workflow, no, not by default, but then they're only acting on their local laptop anyway, so that's not really useful. So it's going to be up to the cluster operator who's operating that management cluster to set up some auditing on there.
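Note: auditing is not enabled by default, as said above. A hedged sketch of the kind of minimal audit policy a cluster operator could apply to the management cluster's API server (wired in through the API server's --audit-policy-file and --audit-log-path flags, or the equivalent kubeadm apiServer extraArgs); the rules here are illustrative only.

  apiVersion: audit.k8s.io/v1
  kind: Policy
  rules:
    # Record writes to CAPI/CAPA resources at RequestResponse level so that
    # "who asked for this machine or image, and when" can be answered later.
    - level: RequestResponse
      verbs: ["create", "update", "patch", "delete"]
      resources:
        - group: "cluster.x-k8s.io"
        - group: "infrastructure.cluster.x-k8s.io"
        - group: "controlplane.cluster.x-k8s.io"
        - group: "bootstrap.cluster.x-k8s.io"
    # Everything else at Metadata level.
    - level: Metadata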
G
Well, for this question, also inside of kubeadm we do not verify, for instance, if you take an official image of the API server from the GCR repository and you just push it to your local repository...
G
We do not perform a check against a specific SHA or against a specific tag to verify that this image is the original one. We don't signal any warnings to the user. We basically guarantee that the only secure images are those that they can pull from GCR; those have the SHA checksums, everything is valid there. But the moment you establish a custom repository, it's in the hands of the user to verify, right?
B
We build node images using Packer, from our laptops, using the upstream image-builder project, and store them in a specific EC2 account. So if you don't specify an image ID, it's just going to go and look for them there. So it's taken on trust from us, okay, yeah. But that account is secured with, I think, something pretty serious too; I can't remember, but yeah, sorry.
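Note: a hedged sketch of the two options described: by default CAPA looks up the project's published node AMIs (taken on trust), while specifying an image ID pins a custom, self-built and vetted image. Field names are from the CAPA machine spec as remembered and should be verified; the AMI ID is a placeholder.

  apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
  kind: AWSMachineTemplate
  metadata:
    name: example-md-0
  spec:
    template:
      spec:
        instanceType: t3.large
        # Omit "ami" to fall back to the default lookup of the
        # community-published node images; set it to use your own image
        # built with the upstream image-builder project.
        ami:
          id: ami-0123456789abcdef0   # placeholder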
A
Okay,
I
see
so
it's
completely
reversed
and
what
I
thought
earlier,
where
container
images
are
something
that
are
maybe
something
that
the
user
has
control
over.
But
node
images
are
the
ones
that
are.
B
A
A
I
think
I've
got
a
lot
from
this
for
sure.
Hopefully,
this
was
useful
for
everyone
as
well
any
fighting
thoughts,
and
we
could.
We
should
probably
plan
for
a
break
next
time
if
we
are
gonna
meet
for
90
minutes
or
more.
I
apologize
for
forgetting
about
that
today,
yeah
any
anything
else
before
we
drop
off
and
I'll
schedule
something
for
future,
but
want
to
give
five
minutes
for
everyone
else
to
join,
and
especially
folks
who
have
been
quiet
throughout
the
call
anything
you
want
to
share
go
for
it.