From YouTube: CNCF TOC Project Presentation Meeting - 2019-07-09
C
Hello everyone, my name is Bartek Płotka and I... can I... hello, hello, can you hear me? So yeah, I work for Improbable and I'm one of the maintainers and initial authors of the Thanos project, and today, together with Frederic, we would love to present it to you. Yes, so first of all, what is Thanos? Thanos is a monitoring system or, in other words, a set of cloud native components that you can install on top of Prometheus, which is a graduated CNCF project.
C
Thanos adds what we call a global view, makes Prometheus highly available, finally allowing zero data loss for rolling restarts of Prometheus or gracefully handling failover scenarios, and finally supports a cheap and easy-to-operate way to store virtually unlimited retention for your metrics, with efficient support for long time range queries thanks to built-in downsampling. Next slide, please.
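The downsampling just mentioned is what makes long-range queries cheap: instead of scanning years of raw samples, a query can read pre-aggregated windows. A toy sketch of the idea (illustrative Python only; Thanos's actual compactor works on TSDB blocks and keeps several aggregates per window, not just the mean):

```python
# Toy sketch of time-series downsampling: collapse raw (timestamp, value)
# samples into fixed 5-minute buckets, keeping an average per bucket.
# This illustrates the concept only; it is not Thanos's compactor, which
# operates on TSDB blocks and stores count/sum/min/max/counter aggregates.

def downsample(samples, window_ms=5 * 60 * 1000):
    """samples: list of (timestamp_ms, value) pairs, sorted by timestamp."""
    buckets = {}
    for ts, value in samples:
        bucket_start = ts - (ts % window_ms)
        total, count = buckets.get(bucket_start, (0.0, 0))
        buckets[bucket_start] = (total + value, count + 1)
    return [(start, total / count)
            for start, (total, count) in sorted(buckets.items())]

raw = [(0, 1.0), (60_000, 2.0), (240_000, 3.0), (300_000, 10.0)]
print(downsample(raw))  # [(0, 2.0), (300000, 10.0)]
```

Three raw samples in the first five-minute window collapse into one averaged point, so a year-long query touches far fewer samples.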
Thanks. The Thanos project is quite unique in making sure you can deploy and experiment without major changes to your existing Prometheus monitoring setup. We can distinguish three main deployment models for Thanos. Without going into much detail: in the very basic one, presented on this slide, you don't need to set up any separate resources, nodes or clusters for Thanos; you just add a sidecar to each of your Prometheus servers, and a fully stateless Querier on top of it. The Querier then is able to execute a PromQL query on the global level, fetching the metrics from the required Prometheus sources.
It's also capable of transparent deduplication, which allows you to run Prometheus in HA pairs. In this form it does not allow long-term metrics retention, but it is quite an obvious option if you want to start with the Thanos project, and in fact some production users we know successfully stay with this option, because it's already matching their requirements. Next slide, please.
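The HA deduplication described above can be sketched roughly like this (a toy model, not the real Querier, which streams series over gRPC and uses a penalty algorithm when replicas disagree; the `replica` label name and the data are assumptions for the example):

```python
# Toy sketch of Thanos-style HA deduplication: two Prometheus replicas
# scrape the same targets and their series differ only in a "replica"
# label. The querier strips that label, treats the series as one logical
# series, and merges samples, letting the first replica with data for a
# given timestamp win.

def dedup(series_list, replica_label="replica"):
    merged = {}
    for series in series_list:
        labels = {k: v for k, v in series["labels"].items() if k != replica_label}
        key = tuple(sorted(labels.items()))
        bucket = merged.setdefault(key, {})
        for ts, value in series["samples"]:
            bucket.setdefault(ts, value)  # first replica with data wins
    return {key: sorted(samples.items()) for key, samples in merged.items()}

replica_a = {"labels": {"job": "node", "replica": "a"}, "samples": [(1, 10.0), (2, 11.0)]}
replica_b = {"labels": {"job": "node", "replica": "b"}, "samples": [(2, 11.5), (3, 12.0)]}
print(dedup([replica_a, replica_b]))  # one logical series with samples from both
```

Even though replica "b" missed timestamp 1 and replica "a" missed timestamp 3, the merged logical series has no gap, which is the point of running HA pairs.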
If you want to add long-term metrics retention, you can easily set up Thanos to upload Prometheus files, in the native TSDB format, to the object storage of your choice. Then you can connect the global Querier to the long-term object storage using exactly the same common StoreAPI (gRPC) API. This allows for relatively cheap storage, without worrying about, you know, high-bandwidth, low-latency sample streaming and a complex replicated ingestion path in some cases.
C
Essentially, this option is a bit more complex to operate, so it's not for everyone, but it adds more options to choose from when running Thanos, especially in the case when you don't have direct access to the Prometheus servers, for example. Next slide, please.
Now, we can say that Thanos is simple and flexible. So, first of all, those three deployment models we mentioned kind of showcase the ability of the Thanos project to be shaped and tailored to your needs.
C
In fact, you can have a mix of those three deployment models under a single centralized system, which many of the production users are doing. Furthermore, it is simple because you can incrementally deploy Thanos as your business needs progress. This also really helps avoid scope creep during the migration process, and helps with initial experimenting as well.
C
We are also focusing on having a very simple and unified API, and the query layer does not need to understand what place the metrics come from. They can be fetched from Prometheus directly, or object storage, or another querier layer, or from a totally different solution; for example, some company built and maintains an OpenTSDB integration. This simplifies things a lot and allows further customizations.
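A minimal sketch of that unified-API idea: the query layer programs against one tiny interface, and any backend can sit behind it (illustrative Python only; in Thanos the real contract is the gRPC StoreAPI, and every class and field name here is made up for the example):

```python
# Toy sketch of a unified "store" interface: the query layer only knows a
# single select() method, and any backend (a Prometheus sidecar, an object
# storage gateway, another querier, an OpenTSDB bridge) can implement it.

class Store:
    def select(self, matchers):
        raise NotImplementedError

class SidecarStore(Store):
    """Stands in for any concrete backend holding some series."""
    def __init__(self, series):
        self.series = series

    def select(self, matchers):
        return [s for s in self.series
                if all(s["labels"].get(k) == v for k, v in matchers.items())]

class Querier:
    """Fans a query out to every registered store and merges the results."""
    def __init__(self, stores):
        self.stores = stores

    def select(self, matchers):
        results = []
        for store in self.stores:
            results.extend(store.select(matchers))
        return results

recent = SidecarStore([{"labels": {"job": "api"}, "samples": [(3, 1.0)]}])
archive = SidecarStore([{"labels": {"job": "api"}, "samples": [(1, 0.5)]}])
print(len(Querier([recent, archive]).select({"job": "api"})))  # 2
```

The querier never knows whether a result came from recent in-memory data or from years-old blocks in object storage, which is exactly the customization point described above.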
C
We also built Thanos while reusing and contributing as much as possible to the Prometheus code; thus Thanos is not meant to reinvent things like the PromQL language. We are here really to make those pieces more distributed, a bit more cloud native and scalable. And finally, it has essentially a single, and optional, dependency, which is object storage. Next slide, please!
C
So how do we look in terms of community? We are fully open source under the Apache 2 license. We established a code of conduct and, to be honest, we are quite surprised with the growth of the community and the wide adoption. We are quite a popular project on GitHub, and we are extremely grateful for a really large amount of external contributions; we hit the 117, I think, unique contributors mark. We have a Thanos website with roughly 100 daily viewers, a Twitter account for announcements, and very active Slack channels with more than 600 users.
C
We are also excited that we were able to build quite a large maintainer base. There are five maintainers and three official team members that help us triage the issues people have. We believe this is quite a big responsibility and hard work, so it shows a big commitment from everyone involved. Also, the unique thing about our team is that almost all of us are from different companies, which is nice.
D
Next slide, please. Thank you. So we have a number of production adopters on our call here: Chun from Alibaba, someone from Monzo, and Ben from GitHub as well. This is just a selection of companies that have been asked, or have asked us, to be on this presentation, and some of those are also on this call, should you have any questions for them. So I think one thing that's very interesting about the production adopters is the kind of model in which they run Thanos.
D
So in comparison to many of the other projects out there in a similar space, Thanos is almost exclusively used to satisfy the metrics needs of the companies themselves, as opposed to, like, a software-as-a-service model, for example. So I think this is kind of unique to the Thanos project, and we think it's the reason why a lot of companies are using it: because Thanos works really well in this kind of case. Next slide, please.
D
So, a little bit about the history. Bartek, who just presented, and Fabian Reinartz were the original creators at Improbable in late 2017, and in early 2018 the project was first publicly announced at the London Prometheus user group. And while it was launched, or announced, first there, I think what really kicked it off for the community was the S3 object storage support, which was added in early March 2018. I think this was kind of a milestone for the project, because Improbable nominally didn't need S3 support; this was entirely driven by the community. So, like two months after the project was first announced, the community started contributing to it in a really meaningful way, and I think that is really awesome. Then in March 2019 the first maintainers outside Improbable were introduced, and now we're here proposing the project to the CNCF Sandbox. Next slide, please.
D
So there are a couple of alternatives, and there are of course a lot more monitoring systems out there; we decided to keep the list to those that are most closely related to Thanos. There is Cortex, which is already a CNCF project. Cortex is quite similar to the third deployment option that Bartek showed us, where essentially Prometheus replicates its database to a remote place and Cortex ingests that. And M3DB, created by Uber, has a distinct integration mechanism with Prometheus, but it is able to handle Prometheus data as well.
D
All of Thanos's communication, internally and externally, is instrumented with OpenTracing, and Jaeger and Google Cloud Trace are the two adapters that are currently available for this. And while not tied to Kubernetes, Thanos was kind of born into the world of Kubernetes, so a lot of the examples are for Kubernetes.
D
There is a direct integration with the Prometheus Operator for deploying Thanos alongside Prometheus, there are Helm charts, Kustomize templates, and a bunch of blog posts describing how to run Thanos on Kubernetes. But even though there is all of this relationship, I just want to point out: Thanos is in no way tied to Kubernetes; it's just thriving within that ecosystem. Next slide, please.
D
So we make heavy use of Prometheus, right? So how does Thanos actually stand in terms of its relationship with the Prometheus project? A number of pieces are actually literally vendored, so in order to be compatible with the Prometheus project we don't actually need to spend much time on that.
D
Next slide, please. So why do we think the CNCF Sandbox is the right thing for Thanos? First and foremost, it's the neutral ground. A number of companies have approached us and have said: we would really like to contribute to the Thanos project, but we want it to be on neutral ground for that to happen.
D
And again, we don't know if that's going to happen, but for that to ever be a possibility we would need to be part of the CNCF and be on neutral ground together. And last but not least, we think Thanos fits really well into the portfolio of the Cloud Native Computing Foundation projects. As I think we've already shown, we make heavy use of these technologies, and I think it's a benefit to the ecosystem to extend this portfolio with us. And next slide... that's it.
G
So here we go. Alright, so hello, I'm Fabian from the KubeVirt project, and that is the second slide; please go one slide backwards... please... yeah, yeah, there we go. Alright, so KubeVirt was started in 2016, so it's already quite old. The main driver back then for us to start with KubeVirt was that we wanted to have a single orchestrator for both compute form factors, right? We knew that containers were coming up, but we also saw that VMs are still around, and hence KubeVirt.
G
Both form factors have different properties, different life cycles and different workflows, and we want to be able to acknowledge both and properly handle both. In general, the pragmatic statement we came up with back then was to say: even if we want to run virtual machines inside of Kubernetes, it is important to us that we maintain the Kubernetes, or cloud native, look and feel when working with those virtual machines, over having all the virtualization features that exist in the virtualization world.
G
One outcome of saying we just want to run workloads in pods was also that we could unify the stack, right? We said we are interested in bare-metal deployments, and having VMs and containers on the same platform, on Kubernetes, allows us to also unify the underlying infrastructure, so storage and network, and the supporting technologies like authentication, logging, metrics and so on and so forth. After a small research period in 2016, we actually open sourced KubeVirt in January of 2017.
G
So what does KubeVirt do? Foremost, KubeVirt provides a comprehensive API to run virtual machines on Kubernetes, and those are virtual machines as you know them, right? So if you're running VirtualBox, or virt-manager on Linux, or VMware Workstation, you know these kinds of VMs are what we want to run. They're also the same VMs you would be running on OpenStack, for example. We have different properties, or rather we're inheriting properties like scale from Kubernetes.
G
The API today supports defining virtual devices, as you can do with other virtual machines. We can do live migration in a Kubernetes-friendly way; so we do not provide live migration for pods, but we do provide live migration for virtual machines. We allow VMs to have multiple NICs, which is backed by Multus. We support booting from raw disk images, and we can boot a range of Linux distributions and actually Microsoft Windows, which was also one of our goals.
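To make the API talk concrete, this is roughly what a minimal VirtualMachineInstance object looks like, written here as a Python dict purely for illustration (normally you would write it as YAML and apply it with kubectl; the field layout follows the kubevirt.io v1alpha3 API that was current around the time of this talk, and the name and demo image are placeholders):

```python
# A minimal KubeVirt VirtualMachineInstance manifest, as a Python dict.
# The VM gets one virtio disk backed by a container image holding a demo
# Cirros disk, and a small memory request; everything else is defaulted.
vmi = {
    "apiVersion": "kubevirt.io/v1alpha3",
    "kind": "VirtualMachineInstance",
    "metadata": {"name": "testvm"},
    "spec": {
        "domain": {
            "devices": {
                # each disk references a volume below by name
                "disks": [{"name": "containerdisk", "disk": {"bus": "virtio"}}],
            },
            "resources": {"requests": {"memory": "64M"}},
        },
        "volumes": [{
            "name": "containerdisk",
            "containerDisk": {"image": "kubevirt/cirros-container-disk-demo"},
        }],
    },
}
print(vmi["kind"])  # VirtualMachineInstance
```

The point is that this is just another Kubernetes object: you manage it with the same tooling and the same declarative workflow as a pod.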
G
KubeVirt is also what we'd say a Kubernetes-native application. What does that mean? KubeVirt itself tries to integrate with the cluster, so we are an add-on to Kubernetes, right? You deploy KubeVirt on top of Kubernetes, and then you can right away reuse the cluster resources like storage, network and services, but also the compute resources, right, so wherever pods can run. In addition, we can leverage node-level features like the CPU manager work, multi-network, huge pages and block storage.
G
All of these have been Kubernetes features which came into Kubernetes over the past years, and we were improving KubeVirt in order to allow us to use those features for VMs as well. Sometimes this requires some small glue in order to make a feature usable for virtual machines, but often it's also just a pass-through, for example huge pages.
G
It's really simple to provide that feature to virtual machines, because it's simple to hook the virtual machines up to the feature provided by Kubernetes. Other features are more difficult to consume, for example the CPU manager, and that is where we actually still have things on our list, or where we were already engaging in the upstream discussion, to see that we solve the problems this technology is addressing for both pods and for virtual machines. And furthermore, it's not only integration with Kubernetes; we also try to see that we integrate into the ecosystem.
So, as Prometheus was also mentioned early on: KubeVirt is also providing metrics, virtual-machine-specific metrics, on Prometheus-compatible endpoints. On the other hand, it also means that the VMs can actually be seen through the regular Kubernetes metric endpoints.
G
If we then look further at how it's being implemented and how we extend Kubernetes: that is by using all the state-of-the-art technologies, like CRDs, custom API servers, and admission webhooks to do validation and mutation. We use client-go and gRPC, which is also a CNCF project; so there's a range of projects we pick up. We actually try to see what we can pick up before we build it ourselves.
G
This approach has actually proven itself, because we see in our community a range of Kubernetes distributions, and for them the approach we took is working. We are seeing very few issues if you try, for example, to run KubeVirt on a cluster which is running on Ubuntu or some other Linux distribution. And over time, we think we were able to maintain the Kubernetes-native look and feel and the behavior you are expecting from pods and other entities in Kubernetes itself. Alright, please move along to the next slide.
G
This is a great architecture diagram; I don't want to go too much into it. What we're seeing here is basically that we have the usual operator pattern, so the core is following this operator pattern, but in the end we use the controller pattern that Kubernetes is using for all of its controllers.
G
Yeah, next slide, please. So, the use cases: what do we want to do? I think the simple one is to run virtual machines to support the change. We saw that people always assume everyone wants to go to containers; containers are the use case, after all, right? But the reality is that of all the VMs which were built over the last 20 years or so, not all of them can be moved to containers.
G
So we need to provide a way to support people in doing the change where it's possible to move to containers, to increase their efficiency and to lower the footprint, but we need to give them an escape hatch for stuff that cannot be moved to containers, right? So very old code, or code that cannot be moved due to licensing issues.
G
It's true that we provide different APIs for pods and virtual machines, but nevertheless all the surrounding look and feel and tooling can be used to control both. So kubectl (or "kube cuddle", depending on who you ask) is the tool of choice to manage both, and we maintain the declarative approach to state which is also used for the other entities in Kubernetes itself.
G
We actually saw a couple of community members who started to use KubeVirt in such a way: they run KubeVirt on a bare-metal cluster and then use it to provide additional Kubernetes clusters to their tenants, with hard isolation, because these layered Kubernetes clusters are then run in virtual machines. This also led to external projects like the KubeVirt cloud provider, which now actually allows a tenant to introspect the underlying KubeVirt cluster; that's shown on the right-hand side.
On the left-hand side of this slide we also see another use case for KubeVirt, which has been under discussion and is partially used by community members in-house: using KubeVirt to run virtual network functions. We know that VNFs have a long history when it comes to OpenStack, for example, and people are under pressure to see how those VNFs are used in a container-native world; how can they be run? We know that CNFs are coming up, right, to move stuff to containers, but they have their own challenges: in containers it's hard to have your own kernel modules, or you're very constrained. So there are pros and cons for VNFs, but also for CNFs, and here KubeVirt can also help, because you can take your existing VNF, move it into a KubeVirt VM, run it on top of Kubernetes, and then integrate with stuff like Network Service Mesh, Multus or, for example, Calico.
G
This is a small comparison to other projects. So if we look at KubeVirt, what characterizes it best? I think two things: first, the API, how you deal with the workload; second, what you intend to do with that API. For KubeVirt we say we have a dedicated API, that is, custom resources; so we have custom resources to work with the workloads, and the purpose is to run VMs. If we look at Kata Containers, or Firecracker and friends, here the purpose is a different one, right?
G
The next project I want to compare to is Virtlet, and the purpose of that project is also to run VMs; they're specifically focusing on cloud workloads. But here the drawback is that they're using the pod API. The issue is that you cannot really use the pod API, for example, to provide virtualization-specific features like connecting to the graphical console, or supporting live migration; that is much less clean to solve if we reuse the pod API.
The last project I want to mention is RancherVM, which today also provides a custom resource to run VMs, but which is smaller in scope. The problem we're seeing with the RancherVM approach is that they are currently not focused on integrating with all of the Kubernetes features, but rather on a more streamlined experience with certain storage backends and, for example, networking plugins. Yeah, that's a short comparison to other projects.
G
Next slide, please. I mentioned the KubeVirt community; a few more words on that. So first, there's a CNCF Sandbox PR up on GitHub; it's quite fresh, but please take a look. We've got about 1,300 GitHub stars, continuing to increase. We have a lot of contributors from Red Hat; that is because we are interested in seeing how we can use KubeVirt in some of our products. But we also have external contributors, around 17 of them.
G
All of these contributors created about 1,500 pull requests and about 270 forks, and we now actually have 19 releases with the July release. The community can usually be found in the community meetings, in the virtualization channel on Slack, and on the kubevirt mailing list, which I forgot to add here. Some of our existing users and contributors are Akamai, Apple, Cloudflare, Cisco, Loodse, SAP and StackPath.
G
The checkmarks indicate from whom we know how they're using it. So we see that we've actually got a lot of users, and quite many of them are also contributing to it. All of them are using it for different use cases. Loodse and SAP, for example, worked more on the hard multi-tenancy model with KubeVirt and did some adjacent work in other projects; that's why their check marks aren't in brackets. Next slide, please.
G
We see that we already have a wide community for KubeVirt, and we would like to have a neutral ground for that community to continue to work on KubeVirt, and to make sure that there's not the impression that Red Hat is steering it to its own likes, but rather have a more neutral ground. We also want to give KubeVirt more exposure, and hope that, if it becomes a CNCF Sandbox project, the visibility will be higher. And ultimately, KubeVirt itself can also be seen as a building block.
G
So it can be used to run traditional VMs; we are actually developing a UI for our use case, to be able to run classical VMs on top of Kubernetes using that UI and the CLI approaches. But with the hard multi-tenancy use case we also see that KubeVirt can be a building block for other use cases, and we hope that by making KubeVirt a Sandbox project we can highlight that it can be a building block, for example for the VNF use case. Next slide, please. So that's it from my side.
G
H
Thanks, folks. I had one question here: presumably these virtual machines that KubeVirt runs require a lot of the surrounding infrastructure that pods have, for example, you know, replication controllers, ReplicaSets, DaemonSets, Jobs, all of the sort of lifecycle management stuff that pods have. Do you sort of implement a parallel set of different controllers, or do you reuse the existing Kubernetes ones? How does that work?
G
Yeah, we have a few ideas for how that can be implemented, but currently we don't support the direct use of the high-level workload controllers. Again, there are some workarounds for how you can enable that, and we have some preliminary work to enable the native controllers; we actually have a VM replica set, which replicates what the ReplicaSet is doing for pods. Again, I think this story is not concluded, but mainly because it's currently not in the focus of one of the community members. Sorry.
H
G
By the way, we would like to see that sorted out. So first, the VMs are running in pods, so that's not the problem. The problem we technically have is that the entry points to define VMs are custom resources, and if you look at the Deployments and other high-level workload controllers in Kubernetes, the issue is that you can only provide templates for pods; so that means the entry point is always a pod template.
G
The thing we discussed back then was to say: couldn't we allow the high-level workload controllers to template other entities as well, for example virtual machines? That did not fly. I would actually look at whether we can raise it again these days, because we've seen, for example with the garbage collection or the eviction API, that there are now examples where such contracts exist, for example the eviction API or the garbage collection, which also work with custom resources.
H
G
So, I mean, from our side, right: I didn't advertise it too much in the community yet, so I did not invite them, and they were not aware that this is taking place right now. They were aware that we are going to sandbox it, but I did not advertise this specific date and time.
B
G
Yeah, so that's obviously, I mean, that's really unfortunate, that I didn't share that. So we obviously asked them, and we definitely got support, for example from Loodse, and there are a few others, but I don't want to say names now, because I don't have the emails in front of me. I can bring up the names which are supporting bringing KubeVirt to the CNCF Sandbox, but there's definitely support there; otherwise we would not have taken the step to propose it here.
B
I
...the application for in-toto. For those who are not familiar, in-toto is the first framework to secure software supply chains as a whole. It works in and outside of the cloud, but given that the cloud is probably the environment that stress-tests this the most (various environments with multi-tenant hosts and different components working in loose connections), in-toto allows you to create a policy that can give you security assurance and compliance checks, and an audit trail
I
that's cryptographically verifiable. I'll explain a little bit how this works, because I think it's very important for the context of why it's important and how it can be a powerful tool in the cloud native environment when creating artifacts. Next slide, please.
In-toto basically is formed by three components. One of them is what's called a layout; you can think of it as similar to a Jenkinsfile. It specifies all of the steps that need to be in place, and it also specifies the interrelationship of the artifacts as they flow through the supply chain. For example, in this case we have a version control system that creates some sources and then sends them to a CI/CD system, and also to a build farm that is blessed to create the final binary that's eventually going to be packaged into a Debian package.
I
The layout also says who is allowed to perform all of the operations in the supply chain. For example, in this case we have the public keys of the developer assigned to the VCS step, and the public keys of Carol and Aaron for the building and packaging respectively. Next slide, please.
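As a toy illustration of what a layout plus signed step metadata look like (schematic Python only; a real in-toto layout is a signed JSON document handled by the in-toto tools, real attestations use public-key signatures, and the HMAC here is just a self-contained stand-in for a signature):

```python
import hashlib
import hmac

# Schematic sketch of in-toto-style verification: a "layout" names the
# supply-chain steps and which functionary key may sign each step's link
# metadata; verification walks the steps and checks every attestation.

FUNCTIONARY_KEYS = {"carol": b"carol-secret", "aaron": b"aaron-secret"}

LAYOUT = {
    "steps": [
        {"name": "build", "allowed_signer": "carol"},
        {"name": "package", "allowed_signer": "aaron"},
    ]
}

def sign_link(step_name, product, signer):
    """Create a toy attestation ("link") over a step's product."""
    digest = hashlib.sha256(product).hexdigest()
    payload = f"{step_name}:{digest}".encode()
    sig = hmac.new(FUNCTIONARY_KEYS[signer], payload, hashlib.sha256).hexdigest()
    return {"step": step_name, "product_digest": digest, "signer": signer, "sig": sig}

def verify(layout, links):
    """Walk the layout's steps, checking signer authorization and signatures."""
    for step in layout["steps"]:
        link = links[step["name"]]
        if link["signer"] != step["allowed_signer"]:
            return False
        payload = f"{link['step']}:{link['product_digest']}".encode()
        expected = hmac.new(FUNCTIONARY_KEYS[link["signer"]], payload,
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, link["sig"]):
            return False
    return True

links = {
    "build": sign_link("build", b"binary-bytes", "carol"),
    "package": sign_link("package", b"package-bytes", "aaron"),
}
print(verify(LAYOUT, links))  # True
```

If the package step were signed by anyone other than Aaron, or any digest changed, verification would fail, which is the "walking down the paper trail" described next.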
Now, once this layout is created, attestations of evidence of each step that takes place are also created, by the functionaries; that's how we call them. This is actually used when verifying: the end user takes the layout, the final product (the final artifact) and the series of rubber-stamped attestations, and walks down the paper trail to see that all of the operations that were meant to happen actually happened. Next slide, please.
Now I think it's easier to understand how in-toto works with some integration case studies. The first one that I wanted to talk about is the Reproducible Builds project, which is part of the Debian and Arch ecosystems.
I
We are using in-toto to verify that all packages created in the Debian packaging infrastructure are reproducible, and not only that: it's possible to rebuild these packages in different environments and verify that upon installation. So in this case we have, for example, the project owner, the one that creates the layout: the Debian developers. They sign a layout saying, hey, these packages must be reproducible and there needs to be a threshold of N signatures; right now it's small, but we're trying to bump it to a higher number as we get more rebuilder organizations. And then there are the rebuilders, the functionaries, the ones that will be rebuilding everything and creating rubber-stamped attestations with their private keys, saying: hey, I rebuilt this package, and this is the link metadata for it. And finally, the client in this case is the apt-get install wrapper.
I
So in this case, for example, a release manager within an organization can create the in-toto layout, sign it with their private key, and then send it to the in-toto admission controller. Then the Jenkins plugin will start asking all of its agents to create in-toto metadata and then relay it to the admission controller.
I
The third integration case study that I want to talk about (next slide, please) is Datadog. I also wanted to have Trishank, who is the one that made this happen, talk a little bit more about it from the Datadog side. He's a security solutions engineer at Datadog, and I think you're on the call, right? For sure.
F
Okay, great. Okay, let's do this. So this is the blog post; I just posted it. We also have a paper about it that's been accepted at USENIX, so read that for all the gritty details, but let me quickly walk everyone through how this roughly works. It's going to be a great eight minutes, okay. So basically we use in-toto along with another project called TUF, and we'll talk about how they both relate to each other, to build
F
what we think is the industry's first untrusted CI/CD system, meaning, you know, we don't use any special trusted hardware. We can use any generic CI/CD system in a cloud, and we don't care if any part of it is compromised between our developers and end users, right? We get this with a security feature we call compromise resilience, and I'll talk about what, how, and what we did this for. So, Datadog is a monitoring company.
F
We
collect
your
metrics,
your
app
performance,
your
logs
and
so
on,
and
you
use
the
agent
install
on
your
hosts
and
containers
through
this
right
and
so
integrations.
That
just
add
on
some
plugins
that
that
you
install
and
give
the
agent
super
power
so
that
you
can
monitor
now
Kafka,
for
example,
or
you
can
monitor
nginx
and
so
on.
F
Right. So we've got hundreds of these things that come out of the box. A challenge was that we bundled all of these hundreds of integrations with the agent every six weeks, and we wanted to decouple them, so that people can install new integrations (these new add-ons or plugins) out of cycle, right, out of the agent cycle, so that they can test new features. But the problem is that in the state of the art, basically everyone uses what we call online keys: basically something like TLS, where you've got robots,
F
What
I
call
robot
signing
your
your
code
for
you
automatically
building
and
signing
your
code
and
distributing
it
the
users
instantly
right.
This
is
great.
Your
developers
don't
have
to
worry
about
reproducibility
handling,
code,
signing
keys
and
so
on,
but
downside
is
that
what
can
go
wrong?
Well,
someone
compromises,
you
developer
keys
or
your.
They
compromise
your
github
repository,
let's
say,
or
they
compromise
in
to
your
CI
CDE,
your
container
image
registry.
F
You can think of a metaphor that Santiago used, which I quite like: if you think about a bottle of drugs, in-toto is the thing that tells you, oh, this person made this ingredient, this person made this ingredient, and so on and so on, composing it; and TUF is the plastic seal around it, making sure that, you know, things are not tampered with.
F
Yes, theoretically people can cause malicious software to be built, but we use YubiKeys, which are these hardware tokens of trust that basically store our developers' GPG keys, so that, you know, even if there's malware compromising the user, we have a secret PIN and we have touch authentication to make sure that it's not instant; you would know something funny is going on, and you can revoke the signing keys. Furthermore, you can use threshold schemes, like requiring more than one signature.
F
So you can say, you know, three developers need to sign off on source code before it is trusted by any users, right? But the interesting thing is that now we don't care how the GitHub repository is compromised, because the signatures wouldn't match whatever the developers signed, right? And since the keys are stored on hardware tokens, the YubiKeys, we don't care if our CI/CD is compromised, because we would know: the signatures wouldn't match. Same thing for the container image registry, file servers and key servers; you get the idea. Let me show you a quick demo.
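The threshold idea just described can be sketched in a few lines (a toy model with HMAC standing in for the real public-key signatures made on YubiKeys; the developer names and the threshold value are illustrative):

```python
import hashlib
import hmac

# Toy sketch of a k-of-n signature threshold: source code is trusted only
# if at least `threshold` distinct authorized developers produced a valid
# signature over it. A compromised repository or CI/CD cannot forge the
# signatures, so tampered code simply fails verification.

DEV_KEYS = {"alice": b"alice-key", "bob": b"bob-key", "carol": b"carol-key"}

def sign(source: bytes, dev: str) -> str:
    return hmac.new(DEV_KEYS[dev], source, hashlib.sha256).hexdigest()

def trusted(source: bytes, signatures: dict, threshold: int = 2) -> bool:
    valid = {dev for dev, sig in signatures.items()
             if dev in DEV_KEYS and hmac.compare_digest(sign(source, dev), sig)}
    return len(valid) >= threshold

code = b"def handler(): ..."
sigs = {"alice": sign(code, "alice"), "bob": sign(code, "bob")}
print(trusted(code, sigs))         # True: two valid signatures meet the threshold
print(trusted(b"tampered", sigs))  # False: signatures no longer match
```

Raising the threshold is exactly the "requiring more than one" knob mentioned above: a single stolen key is then not enough to get malicious code trusted.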
I
Sorry, that was only three minutes, so let's skip the demo. Can we go to the community and other information slide? Yes. So, we have sponsorship, and we want to be in incubation, because we've seen enough production use to know that this is not an experiment. We're looking for visibility, because we come from academia; compared to other projects we still don't have as much exposure in industry, but we can go into that in a little bit more detail in our sessions. We also have this URL for the website.
I
In the last two or three days we've had two more contributors come in, but all in all I think we're having a very vibrant community that we're building out from different places across academia and industry, and open source communities like Debian, Arch Linux and openSUSE; we also have committers from Reproducible Builds, so on and so forth. I wanted to just, on the next slide please, give a quick snapshot of all of the places where you can contribute, if you would like to. And I also wanted to give a quick plug on the next slide, please: we're the first project that had the SIG Security security assessment. It's been a year-long process; we went through many iterations, and it was actually a very amazing experience, through which we were able to tighten and verify and review all of our security practices. I think that's very important for a security project.
I
This is the slide that they created that basically says what they think about the state of the project. They think the design is straightforward, and that was paramount for us since the beginning, and I think they liked our security analysis.
B
E
I think we need to do a little bit more digging. I think, you know, the space and the project look like they're on the right track. I think sandbox versus incubation, for me, hinges on some of... I'd like to understand more about the governance, and sort of the depth of the bench: is it all hanging on a couple of folks, or is it something that will, you know, be sustainable? I think those are some of the things that we look at.