From YouTube: CNCF SIG-Security Meeting - 2018-05-18
B: I was on mute, sorry. So that did clear everything up; sorry about that, I needed to cycle.
B: Alright, since I haven't had time to bug folks, did everyone already sign up for scribe?
B: So if I can get a couple of scribes, we're going to kick off our use case exploration again, and we have a special guest today. Dr. Roy has joined us and is here to continue some of the discussion that Mark shared with us.
A: Glad to do that. The NIST Big Data Working Group has been cranking along since the summer of 2013. It is not a standards body per se; the output is three technical reports, of which we've produced one. Another is in review at NIST and will probably come out in the next five or six weeks, maybe sooner. We haven't gotten around to publishing our papers, but we're working on that.
A: I co-chaired the security and privacy subgroup of that Big Data Working Group, and in that role Arnab is the primary person covering all the crypto aspects. While we've worked on the models together, handed out the drafts together, and helped adjudicate the content we got from other third parties, he's really the primary contributor to our background on blockchain, the crypto aspects of data at rest, what the role of some of those things might be, and some of the emerging big data technology.
B: I'll give you our elevator pitch, so you know the SAFE Working Group exists. In the cloud native space we are a proposed working group for the CNCF. There are very few of those: there's infrastructure and CI, and a couple of others such as serverless. In the actual CNCF and the overarching cloud native ecosystem, there are very few of these working groups.
B: If you go down into Kubernetes, there is an extensive ecosystem of SIGs, specialist groups, and working groups operating there. What we're focused on in the SAFE Working Group is safety in this cloud ecosystem, where you have the operator and the administrator.
C: We started off this working group with a lot of discussion on what Big Data is. This was back in 2013, and there were very diverse opinions on what constitutes Big Data. I'm sure the answer is not canonicalized even now, but this is the definition we came up with in our document.
C: It may not be perfect, but it was a consensus. It goes: big data consists of extensive datasets, primarily in the characteristics of volume, variety, velocity, and/or variability, that require a scalable architecture for efficient storage, manipulation, and analysis. This definition is in part one of our documents. Of course, security and privacy are important for big data; you all know that, so I don't have to go through the slide. Essentially, it's very important because failures cause damage to company reputation.
C: The subgroups have kind of spread out since 2013, and we may have more deliverables spanning the group. The definitions are a bit nebulous, but the deliverables are what you see on the right, one through seven, and number four is the big data security and privacy document, which I'm going to talk about. We released our version one three years ago; NIST SP 1500-4 is our document, and it's available on the site I give a long link to in this slide. Version two is in draft, as Mark said.
C: Given that background, I'll go into some of the characteristics we identify in the document that seem different for big data compared to what came before. We spent a lot of time understanding what is emergent about the security and privacy of big data, given its principal characteristics. It seemed that there were two aspects. One is due to scaling; this you can attribute to the volume and velocity characteristics of big data, and it has to do with many things that I'll cover in the next slide.
C: The other, more foundational aspect is mixing. This is the notion that one of the very important characteristics of big data is that you get data from diverse endpoints, a huge amount of data, and some of that data may not be completely accurate itself. So you get this mixing characteristic, which can be attributed loosely to the variety and velocity characteristics of big data, and that causes emergent problems for security and privacy.
C: To go into some detail: on the left is what is different due to scaling, and on the right, what is different due to mixing. The scaling problem can be summarized as: how do you retarget your existing systems given the infrastructural shift caused by big data? That shift is due to various things like distributed computing platforms such as Hadoop, non-relational data stores, and so on. This paradigm shift in infrastructural thinking has required, and is still requiring, new solutions in security and privacy.
C: The other, more foundational aspect is mixing, and here the problem is to control the visibility of data while enabling utility. What is this about? The principal question is: how do you balance privacy and utility? You get a lot of data, and to be useful all that data needs to be used; but then you also run into these privacy aspects, where you combine different sorts of data about different individuals and you get a bigger picture that may not be apparent from the individual datasets alone.
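The linkage problem C describes can be shown in a few lines. This is a sketch invented for illustration (the records, names, and fields are all made up, not from the talk): two datasets that each look harmless in isolation re-identify individuals once joined on shared quasi-identifiers.

```python
# "Anonymized" medical records: direct identifiers removed,
# but quasi-identifiers (zip, birth date, sex) remain.
medical = [
    {"zip": "02138", "birth": "1945-07-21", "sex": "F", "diagnosis": "flu"},
    {"zip": "02139", "birth": "1962-03-02", "sex": "M", "diagnosis": "asthma"},
]

# A public roll: no medical data, but the same quasi-identifiers plus names.
voters = [
    {"name": "Alice Smith", "zip": "02138", "birth": "1945-07-21", "sex": "F"},
    {"name": "Bob Jones", "zip": "02139", "birth": "1962-03-02", "sex": "M"},
]

def link(medical, voters):
    """Join the two datasets on the quasi-identifier triple."""
    out = []
    for m in medical:
        for v in voters:
            if (m["zip"], m["birth"], m["sex"]) == (v["zip"], v["birth"], v["sex"]):
                out.append((v["name"], m["diagnosis"]))
    return out

print(link(medical, voters))  # each "anonymous" record gains a name
```

Neither dataset violates privacy on its own; the combination does, which is the mixing problem in miniature.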
C: Different security and privacy aspects arise due to these principal aspects of big data. There are five keywords that were identified: volume, velocity, variety, variability, and volatility. What I give in this slide and the next one are examples of security and privacy concerns that arise due to each of these characteristics.
C: Volume requires that you store data in multi-tiered storage, so there is a lot of back and forth of data between different storages, and all of this communication requires threat models to identify: is the communication secure or not? Is the data being handled properly or not? These are complex and evolving issues. Then the velocity aspect is the retargeting that I talked about: data is coming at a very fast pace, so how do you retarget traditional security mechanisms to support this?
C: And how do you take care of that, given the complex movement of data between nodes, entities, and geographical boundaries? Volatility of data is another big aspect: indefinitely persistent data requires evolving security and privacy considerations, because the ownership may change through mergers and acquisitions, and so on. Who takes ownership and responsibility for keeping the data safe?
C: Section 4 tries to classify security and privacy topics. We have two kinds of classification. One is cross-domain and cross-infrastructure, trying to look at the type of property that each security and privacy requirement is. Some are privacy properties: you want to keep data secret, safe, confidential. Provenance properties: you want to keep the data accurate and identify who owns it, and so on. System health has to do with whether there are security vulnerabilities in the infrastructure itself, and whether somebody can exploit them.
C: How do you keep the health of the system safe? Then some of these have to do with public policy aspects, things like what is right and what is wrong to do with data from a policy point of view. And then there are operational classifications of security and privacy topics, which have to do with the particular infrastructure that we have in place today.
C: The reason is that security and privacy do not compose. What do I mean? Let's say we have two systems, system A and system B, and we have completely analyzed them. We have seen all the endpoints of system A and system B, we know their data inflows and outflows, we have complete accountability for each of them, and we have, let's say, a guarantee that each satisfies some security requirement.
C: But when we put system A and system B together, it may suddenly turn out that the security properties are no longer satisfied, because there may be APIs in system B which leak data from system A. Put together, they may have unknown data flow patterns that were not analyzed when they were in isolation. So the combined system can have unexpected data flows; they can destructively interfere. That was the point of this.
C: So there is a need for architectural thinking, and that's where it becomes important that we refer to this big data reference architecture. Mark might have already talked about this, but it is also described in one of the documents from our working group, I think number six, and it conceptualizes big data systems as these boxes. We have data providers and data consumers.
C: There is an application provider in the middle, which provides different collection and access capabilities. The framework provider is the underlying infrastructure, which gives processing, platforms, and infrastructure, and there is a System Orchestrator at the top orchestrating all this movement. You can see that there is a security and privacy fabric all around the system. So what is that meant to signify?
C: That's what we, at least preliminarily, did in version one of our document. In section 5 of the document you can find some of the security aspects we talked about. For example, the interface between data provider and application provider requires endpoint input validation; on the other end, going from big data application provider to data consumer, there are concerns about privacy-preserving data analytics, and there are concerns about determination in the framework provider.
B: I have a question on the architecture before you move on to the next section. It's a bit of a meta question, my apologies. When you say reference architecture, did the group actually go and build this out, or just lay out the architectural definition of what a typical system looks like?
C: A combination of both. This took a lot of discussion; it consumed a year and a half, I would say. We started with a lot of existing architectures: there was an architecture from IBM, there were architectures from other places. We actually have a document in our working group that points to each of these proprietary or public architectures, and then the group sifted through those architectures.
C: We asked what the principal characteristics were that we were looking for, and this is the architecture that evolved out of all those discussions. It took a lot of time to evolve; it had been evolving even until last year. I don't think we have changed it in the last year, but that's the amount of evolution it went through.
C: Yeah, I understand your point. The Cloud Security Alliance had this huge reference architecture with something like 300 boxes, but one of the reasons we opted for something simpler is that big data systems are so diverse; they're not as homogeneous an entity as a cloud. When we describe big data systems, well, big data systems are everywhere.
C: You have health care, you have fundamental physics, you have aviation, you have transportation; you have so many use cases, and each of those use cases can identify at least something that may not fit readily into this architecture. That is actually one of the reasons why our reference architecture is so succinct, instead of going into 300 little pieces of detail: it has to homogenize an inherently inhomogeneous collection of use cases.
C: Some of these are in limited deployment, but most are in use in research settings, and all of these technologies provide different kinds of features while affording visibility control to the entities. What do I mean by that? The first example is: how do you outsource computation securely? Suppose you want to send all your sensitive data to the cloud: photos, medical records, and so on.
C: You can send everything encrypted, but then the cloud can't tell you much after that; you can't find out, for example, how much you spent on movies last month if everything you sent to the cloud was encrypted just with your own key. Fully homomorphic encryption is the crypto technology which enables you to do just that. You encrypt your data, and then the cloud can do an analogous computation, called homomorphic computation, which is a transformation of the actual computation. The amazing thing about this is that it only operates on ciphertext.
C: It never has to decrypt the data. All ciphertexts, including processed ciphertexts, are random sequences of bits to the cloud. The cloud can send you your processed encrypted data, and only you can decrypt it. This is great because only the elected user can decrypt the processed data; there is end-to-end security, and you get to pick your key, and so on.
C: We can also control visibility, that is, who we give access to, based on encryption technology. This is traditionally done by role-based access control or some other type of access control enforced by systems like the operating system or a virtual machine. These usually restrict access to data, but the data is still in plain text.
C: So now the question we ask is: can we encrypt the data in such a way that we do not have to go through all of this, so that decryption is only possible by entities allowed by the policy? This is cryptographically enforced rather than system enforced. Of course you can hack keys, but this is a much smaller attack surface.
C: Keys are small; a key can be a few kilobytes, and you can have very special protective mechanisms for small keys rather than for gigabytes of data. Encrypted data can be moved around as well as kept at rest; the handling is uniform. Many of you might already know examples of this. The starting point is public key encryption.
C: How public key encryption works is that there is a certificate authority that signs certificates of public keys. Say Alice and Bob are trying to communicate. Bob can show his signed certificate of his public key, then Alice can use that public key to encrypt data, and only Bob can decrypt it.
C: With identity-based encryption, you can just use the identity of a person. There is a master public key, just one master public key, and you use that master public key plus the identity of the person you want to encrypt to, and that's all you need to encrypt your data. Any other person, even using the same master public key, cannot decrypt your data.
C: Taking this to the extreme, we have policy-based encryption, where the policy can be a complex predicate, indicated as pi here. One simple scenario: there is a hospital, and say somebody can see a patient's data only if he or she is a doctor, or a nurse who also works in the ICU. This is a more complex policy predicate than just identification.
C: Finally, we also talk about blockchain. We avoid the financial aspects of blockchain in this document; we don't know how important that is. But there are many technological aspects of blockchain which can be very useful for security in cyberspace, especially things like asset and ownership management, transaction logging for audit and transparency, bidding for auctions, contract management, and so on.
C: The high-level recommendations are as follows. Which technology to use among all these cryptographic technologies involves a lot of risk-benefit analysis: we have to consider the sensitivity of the data, the cost of breaches, and the cost of securing the systems. I give an example with three different cost-benefit analyses. Let's say we want to run software on encrypted data at rest.
C: There are three possibilities. Say we just do what is traditionally done, which is decrypt the data in the cloud and run the software. Your data is encrypted at rest, but you decrypt it and then just run plain software. The pro of that is very fast execution; the problem is that if the server is hacked and the decryption key is leaked, all the data is exposed.
C: Just to conclude, there are four things that I want you to take away. Think of security and privacy at the time of architecting the overall system, not as an afterthought, which is, unfortunately, the way many systems are designed today. Security and privacy do not compose, so you have to reanalyze security and privacy when you add new features or join new systems. And there is a lot of cryptography that is emergent.
B: Great, thank you, Dr. Roy. I'll open the floor to any questions or requests, either to yourself, Dr. Roy, or to Mark. It'd be fantastic if, in the meeting notes, we could link not only to Dr. Roy's presentation but also to the documents. I believe, Mark, you touched on some of these in the issues in our GitHub repo, but for those that are following along, if we can point them to a way to go deeper into this, that would be fantastic.
E: I have a question about combination and the way it changes the access control requirements. We see both directions: by combination you can de-anonymize; in my state I saw data that was previously anonymized, and maybe didn't need strong access control, but suddenly, by combining it, you have the need for stronger access control. And also the reverse, where you have data that gets aggregated, so the access control doesn't have to be as strict, right?
C: It's unclear at this point how you can prevent that. Technologies like differential privacy have a privacy budget, which means that you always leak some amount of information even if you aggregate, and if you do that too many times the privacy budget is exhausted. Over time you get a clearer and clearer, more and more accurate picture of the sensitive data. This is kind of inevitable, so other than completely restricting access to the data, it is not clear how to stop this leak of information.
C: There are technical aspects too. I talked about how you reconcile authentication and anonymity, right? That is a technical question that the research community has been looking at. There are primitives called group signatures, for example. What does a group signature mean? It means that you have a group of people and anybody can sign a message, but you won't know who signed it. So you can still authenticate that person as a group member, but you will not know, beyond the group structure, who that person or entity is.
C: You could say: just give them all the same signing key. But that is not desirable, because later on there might be an arbitration process where you want some amount of non-repudiation; you want to hold that person responsible if a legal case comes up, for example. So that's why this kind of primitive is far more sophisticated than just giving out the same signature key to everybody. The system in fact allocates a trusted arbiter.
C: The arbiter has some more information, so she can look at a signature and identify who signed it, but without going through the arbiter nobody can find out who signed. That is one of the technologies that addresses reconciling authentication and anonymity, and you can think of it in an IoT context: there are different IoT devices, and you don't want to specifically pinpoint which device a message came from, because that may be very personal.
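This is not a real group-signature construction, but a toy invented to illustrate the interface C describes: anyone holding the group key can verify that some member signed, while only an arbiter holding an extra secret can recover which member it was.

```python
import hashlib
import hmac
import os

GROUP_KEY = b"shared-by-all-members"   # lets anyone verify membership
OPENER_KEY = b"held-only-by-arbiter"   # lets only the arbiter trace signers

def _keystream(key, nonce, length):
    # ids must be at most 32 bytes for this toy keystream
    return hashlib.sha256(key + nonce).digest()[:length]

def sign(member_id, message):
    nonce = os.urandom(16)
    # Seal the signer's identity so only the arbiter can read it.
    sealed = bytes(a ^ b for a, b in zip(
        member_id.encode(), _keystream(OPENER_KEY, nonce, len(member_id))))
    tag = hmac.new(GROUP_KEY, nonce + sealed + message, hashlib.sha256).digest()
    return nonce, sealed, tag

def verify(message, sig):
    """Anyone with the group key learns only that *a* member signed."""
    nonce, sealed, tag = sig
    expected = hmac.new(GROUP_KEY, nonce + sealed + message,
                        hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

def open_signature(sig):
    """Only the arbiter can de-anonymize the signer."""
    nonce, sealed, _ = sig
    return bytes(a ^ b for a, b in zip(
        sealed, _keystream(OPENER_KEY, nonce, len(sealed)))).decode()

sig = sign("device-7", b"sensor reading: 21C")
print(verify(b"sensor reading: 21C", sig))  # True: some member signed
print(open_signature(sig))                  # device-7, visible to arbiter only
```

Real group signatures achieve the same split (verification without identity, traceability via an opener) with asymmetric cryptography and without any shared symmetric keys; this sketch only mirrors the roles.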
A: One of the approaches that came up, and as Arnab said we don't really get very prescriptive, is that we talk about trying to treat PII, and what PII is varies depending on the domain; it could be a floating-point number depending on the scenario. But if you have a domain you can consult to understand the meaning of the thing, you might want to tag that data throughout a system, and that includes when you federate the data. Some people call this the persistence of metadata.
A: But really it's just carrying other data along with it in some kind of structured framework, so that you can do traceability and provenance, so that you can understand when it's been violated. That's kind of the fundamental principle in doing PCI compliance or being HIPAA compliant, which is something most of the big companies we're in have to do on a regular basis.
A: The problem is there for everybody, really, because if you think of PII as just an instance of really important data in some domain, then that's an issue we all face at some level. From a security point of view, you want to know that you can expose where that data has been used if you need to, and who's touched it, and authenticate the people who've done the touching, and that includes machines. And that's why, if you were there, Dan, we were trying to get that example booted up.
A: We have a cloud service from Amazon doing the driving of a local IoT device, i.e. Alexa on your home network, probably on a single segment, collecting data for Amazon but going out to these other cloud services to direct traffic out to these devices. If you blow this up into a neighborhood or utility scenario, it's an interesting problem, which is part of the rationale why, in retrospect, we're glad we stayed away from the more expansive cloud-specific model, because this is more realistic.
B: All right, so thank you, Dr. Roy, for sharing that; this has been insightful, and I look forward to capturing these in our notes. And Mark, I added to my rolling agenda a check-in from the NIST Big Data Working Group. If there's nothing to report, please feel free to ignore it, but we would love to have you share, at the beginning of our meetings, any contacts or any information that this group would find relevant.
A: Let me do that, since you invited me, and I'll make it short in light of our time. This was me dominating the last conversation we had in that group: we were trying to understand how to do traceability for ethical requirements that are put out in organizations, and it's a big data problem, because often these things are authored by people outside the organization, or inside it, whom the developers are not connected to. So to some extent it's a traceability challenge.
B: Right, exactly. That's a great perspective on how this ends up playing out, and on the individuals that take the real hit for bigger decisions like that. So, coming up: I've got Jerry, who unfortunately couldn't join us today since she had a sick kid to take care of, joining us for an overview of some of the security infrastructure that she's been working on at CyberArk that overlaps the Kubernetes and Cloud Foundry deployments of cloud native infrastructure.
B: So, looking forward to that. I think next week we'll have a TP lined up, and then June 1st I am canceling the meeting; I'm going to be on the road in Berlin. So in a couple of weeks we'll give you a Friday off to enjoy Friday things. All right, thanks everybody, thanks for joining us, and see you next week.