From YouTube: CNCF Runtime 2021-04-15
B
Hi, how are you? I'm good, how are you? I'm good, I'm good! Thank you for joining, yeah. Let's wait a little bit until more people show up.
B
Okay, so it's three minutes past the hour. Thank you for joining, thanks everyone for joining. So yeah, today we have Yulin, who'll be talking about Quark, another runtime written in Rust. So yeah, take it away.
A
Thank you. Here I will introduce Quark. It is a high-performance, secure container runtime. Let me present: it has three major dimensions. First is security, another is performance, and, since we are working on a Linux-compatible container runtime, it also needs to provide Linux compatibility. For security, Quark uses KVM-based virtual machine isolation, and we are also using a secure programming language, Rust. For performance, the design is dedicated to containerized workloads: it is optimized for multi-core 64-bit CPUs and written in the high-performance language Rust. And our goal is to provide Linux compatibility.
A
So far we have already implemented more than two hundred ten system calls. In the future we will add more system calls and provide more compatibility. Actually, for secure container runtimes, there are two other secure container runtimes in the market so far: first is Kata, another is gVisor. Kata is based on a virtual machine, and it is a straightforward implementation: it runs the container inside a virtual machine.
A
In this virtual machine mode we have a Linux kernel; that's the common design and the common approach. We have a Linux kernel, we have QEMU, and we can also run different OSes inside this virtual machine: it could be a Linux kernel or a Windows kernel.
A
Actually,
we
can
based
on
this
architecture.
Actually
we
can.
We
can
get
some
promise
opportunity,
for
example,
for
the
qmo
kumo.
It
is
a
it
is
a
general,
its
general
purpose.
As
a
water
machine
monitor,
it
is
a
it
supports,
not
only
linux
it,
it
can
support
other
os.
This
is
but
for
our
linux
container
runtime.
We
only
need
to
support
linux
workload,
so
this
opportunity
to
optimize
the
tumor.
We
got
a
better
vmm
to
get
a
better
performance.
A
The guest is running a Linux kernel, so we get the benefit of Linux kernel support, and the Linux kernel supports all kinds of hardware and devices. But our container runtime only runs on server hardware: multi-core, high-performance x86-64 CPUs (maybe in the future we can support ARM servers). It doesn't need to support video and audio, and it doesn't need to support other devices.
A
For example, the software disk drivers that are supported in the Linux kernel: we only need to support limited hardware, so we can benefit from that. And the Linux kernel supports all kinds of workloads, but for the secure container runtime, the target workload is mostly cloud-native. Those workloads mean TCP protocol, and they may also have disk I/O or cloud-based I/O, for example S3.
A
So we designed a dedicated kernel that just targets the container runtime, just this container workload. Compared with the common Linux virtual machine solution, Quark develops its own QKernel: it works as a Linux kernel and provides Linux-kernel-compatible system calls to the Linux container application. And on the VMM side we provide the QVisor.
A
Yeah, this is our high-level design. Here, the QKernel is running inside the guest kernel space, and the QVisor is running inside the host, just like a common Linux application. And between the QKernel and the QVisor, we use a special mechanism, called QCall, to let the QKernel and QVisor communicate with each other.
A
QCall is a shared-memory-based communication channel between the QKernel and the QVisor, so when the QKernel sends requests to the other side, we don't need to make a hypercall every time. In the KVM architecture, the cost of a hypercall is very high.
A
Based
on
this
share
memory,
the
channel
the
we
don't
need,
the
the
thread
running
inside
the
queue
kernel,
don't
need
to
execute
the
the
guest
space
because
he
just
used
that
memory
queue
to
communicate
with
q
riser
so
that
we
can
get
better
performance,
get
better
throughput
and
better,
better
latency
and
inside
this
q
q
kernel,
we
have
multiple
virtual
cp
water,
cpu
and
inside
this
purevisor
we
have
one.
That's
the
pure
car
thread.
This
qr
code,
slider
qca
thread,
use
the
shared
memory
cue
to
talk
with
the
key
kernel.
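The shared-memory queue idea described above can be sketched as a single-producer, single-consumer ring buffer. This is a minimal illustrative sketch in safe Rust, not Quark's actual QCall implementation: the producer stands in for the QKernel side pushing request ids, the consumer for the QVisor side draining them, with no VM exit on the hot path.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

const CAP: usize = 8; // fixed capacity; a power of two in real designs

// A tiny SPSC ring: one side pushes request ids, the other pops them.
pub struct Ring {
    buf: [AtomicUsize; CAP], // payload slots (request ids here)
    head: AtomicUsize,       // next slot to pop (consumer-owned)
    tail: AtomicUsize,       // next slot to push (producer-owned)
}

impl Ring {
    pub fn new() -> Self {
        Ring {
            buf: Default::default(),
            head: AtomicUsize::new(0),
            tail: AtomicUsize::new(0),
        }
    }

    // Returns false when the ring is full.
    pub fn push(&self, v: usize) -> bool {
        let tail = self.tail.load(Ordering::Relaxed);
        let head = self.head.load(Ordering::Acquire);
        if tail.wrapping_sub(head) == CAP {
            return false;
        }
        self.buf[tail % CAP].store(v, Ordering::Relaxed);
        // Publish the slot to the consumer.
        self.tail.store(tail.wrapping_add(1), Ordering::Release);
        true
    }

    // Returns None when the ring is empty.
    pub fn pop(&self) -> Option<usize> {
        let head = self.head.load(Ordering::Relaxed);
        let tail = self.tail.load(Ordering::Acquire);
        if head == tail {
            return None;
        }
        let v = self.buf[head % CAP].load(Ordering::Relaxed);
        // Return the slot to the producer.
        self.head.store(head.wrapping_add(1), Ordering::Release);
        Some(v)
    }
}

fn main() {
    let r = Ring::new();
    assert!(r.push(42));
    assert_eq!(r.pop(), Some(42));
    assert_eq!(r.pop(), None);
}
```

When such a ring lives in memory mapped into both the guest and the host process, both sides can exchange requests by plain loads and stores, which is the property that makes the hypercall avoidable.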
A
We found that even when we use the QCall mechanism, the latency and throughput for some system calls are still not good enough, so we also use io_uring, which Linux uses, to accelerate processing. With io_uring, the QKernel and the Linux kernel directly share memory, and the QKernel can send the I/O requests to the Linux kernel directly.
A
For example, for read, write, and sendmsg we use io_uring. For the other, metadata-related operations, like open and socket, this kind of system call, we still use the QCall, so that we can have another level of checking in the QVisor to make sure the request is not compromised. Yeah, this is the high-level design.
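The request split just described (data plane via io_uring, metadata via QCall) can be sketched as a routing function. The request and path names below are illustrative assumptions, not Quark's actual types.

```rust
// Which transport a guest request takes to reach the host.
#[derive(Debug, PartialEq)]
enum Path {
    IoUring, // shared submission queue, no per-request host validation
    QCall,   // slower path where the QVisor can re-check arguments
}

// A hypothetical subset of guest requests.
enum Request<'a> {
    Read { fd: i32, len: usize },
    Write { fd: i32, len: usize },
    SendMsg { fd: i32 },
    Open { path: &'a str },
    Socket { domain: i32 },
}

fn route(req: &Request) -> Path {
    match req {
        // High-frequency data-plane operations go straight to io_uring.
        Request::Read { .. } | Request::Write { .. } | Request::SendMsg { .. } => Path::IoUring,
        // Metadata operations create kernel objects, so they take QCall,
        // where the host side can sanity-check them first.
        Request::Open { .. } | Request::Socket { .. } => Path::QCall,
    }
}

fn main() {
    assert_eq!(route(&Request::Read { fd: 3, len: 4096 }), Path::IoUring);
    assert_eq!(route(&Request::Open { path: "/etc/hosts" }), Path::QCall);
}
```

The design choice is the usual fast-path/slow-path split: the operations that dominate throughput bypass validation they don't need, while the rarer object-creating operations keep the extra check.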
B
Question: so how did you handle the virtual CPUs, the resources? So yeah, in the QKernel you may see, you know, four virtual CPUs, but what does that mean at the host level? Is there any sort...
A
Oh yes. Actually, in the KVM architecture, a virtual CPU is a host thread, and the host kernel just schedules the virtual CPU as a thread inside the host. And inside the QKernel we have multiple kernel threads, just like a Linux kernel, and for these guest kernel threads we use the host thread, the vCPU, to run them. It's kind of like the user-space threads in Solaris, the operating system: we have that kind of relationship between the host vCPU threads and the guest kernel threads.
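The relationship described above, a few host threads acting as vCPUs that run many guest kernel tasks, is classic M:N scheduling. Here is a toy sketch of the idea with plain OS threads and a shared run queue; it is purely illustrative and not how Quark's scheduler is actually written.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;

// `num_vcpus` host threads drain a shared run queue of guest "tasks"
// (here just numbers to sum, standing in for runnable kernel threads).
fn run_on_vcpus(num_vcpus: usize, tasks: Vec<u64>) -> u64 {
    let queue = Arc::new(Mutex::new(VecDeque::from(tasks)));
    let total = Arc::new(Mutex::new(0u64));
    let mut handles = Vec::new();
    for _ in 0..num_vcpus {
        let queue = Arc::clone(&queue);
        let total = Arc::clone(&total);
        handles.push(thread::spawn(move || loop {
            // Each vCPU thread picks the next runnable guest task.
            let task = queue.lock().unwrap().pop_front();
            match task {
                Some(t) => *total.lock().unwrap() += t, // "execute" it
                None => break, // run queue empty: this vCPU goes idle
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let sum = *total.lock().unwrap();
    sum
}

fn main() {
    // Two vCPU threads executing one hundred guest tasks.
    assert_eq!(run_on_vcpus(2, (1..=100).collect()), 5050);
}
```

The point of the M:N arrangement is that the guest can have far more runnable kernel threads than the host grants it vCPU threads, with the guest-side scheduler multiplexing them.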
A
We have learned much from gVisor, but we have a different design, and we also do some optimization; our goal is to get better performance. The first optimization is that we use Rust instead of the Go language. I think this is a major performance improvement area. We know that Go is a very good language, but it is not designed for this kind of OS-kernel-level development, for example.
A
It doesn't support system-level memory management, it has GC, and its performance is slower than Rust. Let me introduce the differences. First, for memory management: gVisor uses Go's own heap, so it cannot fully control memory allocation. Maybe three years ago I did some gVisor performance tuning, and I found that gVisor consumed maybe a few hundred megabytes of memory; when I did the tuning I could not find where it went, and likely it was reserved by the GC. But now, with Rust, we can plug in our own heap management.
A
So
far
in
the
quest,
we
are
using
body
body,
algorithm
plus
slab
as
a
manual
management
to
manage
the
hip
memory.
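The two ideas mentioned here, buddy allocation (power-of-two block sizes, where a block's "buddy" is found by flipping one address bit) and slabs for small objects, can be sketched with their core arithmetic. This only shows the math behind a buddy allocator, not a real allocator and not Quark's heap.

```rust
// Round a request up to the next power-of-two block size, the size
// classes a buddy allocator hands out. We assume a 16-byte minimum.
fn block_size(req: usize) -> usize {
    req.next_power_of_two().max(16)
}

// Offset of the buddy of the block at `offset` with size `size`:
// the two halves of a larger block differ in exactly the bit that
// corresponds to their own size, so XOR flips between them.
fn buddy_of(offset: usize, size: usize) -> usize {
    offset ^ size
}

fn main() {
    assert_eq!(block_size(24), 32);
    assert_eq!(block_size(100), 128);
    // The two 64-byte halves of a 128-byte block are each other's buddy,
    // which is what lets freed neighbors be merged back together.
    assert_eq!(buddy_of(0, 64), 64);
    assert_eq!(buddy_of(64, 64), 0);
}
```

On free, the allocator checks whether the buddy computed this way is also free; if so the two merge into one block of twice the size, which keeps fragmentation bounded. Slabs then sit on top for objects much smaller than a page.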
Another is scheduling. Just now we mentioned that in the KVM-based virtual machine, one host thread maps to one virtual CPU, and then we have multiple kernel threads running, so there is scheduling between these virtual CPUs and scheduling in the guest kernel: double scheduling in the KVM implementation.
A
Another thing is that the QCall, the VMM call, is a user-level call over the shared-memory queue. When we did the tuning for gVisor, we found that it can't handle high-QPS calls well. For example, a workload like Redis: its calls just do a small amount of work in-memory and then do the I/O.
D
Hi Yulin, hi, my name is Caesar. I have a question, please. So, in order to do the performance optimizations that you're doing for the communication between the guest machine and the underlying VMM, for example, you are sharing the memory: do you think that has any impact as far as reducing isolation? Because rather than going through a hypercall, which is a very sort of narrow interface, right?
D
I wonder if, you know, going through the shared memory, yes, it's faster, but then it ends up reducing the isolation of the VM, in the sense that it makes it easier, let's say, for an attacker that compromises the QKernel to then get into the QVisor.
A
Yeah, actually, for this we need to consider the balance of security and performance; this is a balance decision. In theory, yes: we use shared-memory-based communication between the guest kernel and the host kernel here, and that makes the attack surface somewhat larger; that's possible. But actually, I think we need a better evaluation of the concrete threat model.
A
We only support the data-plane operations, like read and write, over that path, and for the metadata operations, like creating a socket or creating a file descriptor, this kind of thing, we can also do more protection in the QCall, in the QVisor layer. And also we limited io_uring: io_uring is also based on file descriptors, and we limit the permissions for those FDs. But so far we haven't built a full threat model to verify this.
D
So would you say that, let's say compared to a traditional VM, it's slightly less isolation at the benefit of more performance, but even though it's less isolation, you still haven't found a security threat to break out of it? Would you say that?
A
Yes. Maybe there isn't much chance to compromise the whole kernel, because on our side we developed the system with Rust, so at least at that level the attack surface is decreased. So this is a kind of balance between the security and the performance. From my personal understanding, a security hole is a special kind of bug: we cannot fix all the bugs, and likewise we cannot fix all the security holes, so we still balance between security and performance. Sure.
D
And one more question, if I may, about the QKernel itself. Given that you're running the containers on top of that, correct, doesn't that mean that the QKernel has the same concepts of namespaces and, let's say, maybe cgroups, you know, the primitives that the Linux kernel has in order to create the containers? Do those also exist in the QKernel or not?
A
Yeah, actually, our goal is to implement all the objects inside the Linux kernel. So far we don't support cgroups, and we partially support namespaces inside the QKernel. But for the Quark container as a whole, the Quark container runs inside a Linux container; it's running inside one.
D
Got you, I got you, yeah, because you're running a single, a single sort of container per VM, so therefore the kernel doesn't need to necessarily support multiple namespaces itself, right? It's almost like the whole thing is part of a single namespace, and the underlying Linux kernel is the one that is separating the difference. Yeah, okay.
B
Have you considered a fallback mechanism, like if some users don't want to share the memory at the expense of performance?
A
Oh yes, actually we implemented three kinds of calls: first is the hypercall, then the QCall, and then io_uring. Actually, the hypercall was implemented first, so all the QCall requests can fall back to a hypercall; it's just a switch inside our code.

B
Got it, got it.
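The fallback arrangement described here, where the hypercall path came first and remains the universal fallback behind a switch, can be sketched as a small routing function. The names and the configuration flag are illustrative assumptions, not Quark's actual API.

```rust
// The three transports the talk lists, in the order they were built.
#[derive(Debug, PartialEq)]
enum Mechanism {
    Hypercall, // slow, narrow interface, always available
    QCall,     // shared-memory queue for metadata operations
    IoUring,   // shared-memory submission queue for data-plane I/O
}

// A hypothetical per-sandbox switch for users who opt out of shared
// memory, trading throughput for a narrower guest/host interface.
fn choose(fast_paths_enabled: bool, is_data_plane: bool) -> Mechanism {
    if !fast_paths_enabled {
        // Shared memory disabled: every request becomes a hypercall.
        Mechanism::Hypercall
    } else if is_data_plane {
        Mechanism::IoUring
    } else {
        Mechanism::QCall
    }
}

fn main() {
    assert_eq!(choose(false, true), Mechanism::Hypercall);
    assert_eq!(choose(true, true), Mechanism::IoUring);
    assert_eq!(choose(true, false), Mechanism::QCall);
}
```

Because every fast-path request has a hypercall equivalent, flipping the switch degrades performance but never functionality.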
A
Yeah, okay. I can give more information about our current performance. We have tests that compare our performance with Kata and gVisor. Okay, here are some test results.
A
There are some metrics. First is the startup time. This is our test result; we used the time command to measure it. "runc" here just means running the container with native runc on the bare-metal machine, and its start time is about 600 milliseconds, and Quark is just about the same, with only a small difference. For gVisor it is still good, but Kata takes a much longer time.
A
I think that's also reasonable: Kata starts a full Linux kernel, and they have already done a lot of optimization, so that it's almost two seconds. I think that's pretty good, but still, it is a full Linux kernel, so the startup time is quite slow.
A
Also, another test is the memory overhead. I used ps to get the memory consumption. For Quark it takes about 12 megabytes of overhead, gVisor is 28, and Kata's is the largest, because Kata starts a full Linux kernel. And actually, on the memory side, we have more benefit with Quark: for Kata, when its Linux kernel consumes memory, it cannot release it back to the host kernel, so its memory consumption will keep increasing. But for Quark, when the application frees the memory, the QKernel can release it back to host memory, and this freed memory can be used by another container. For gVisor it's a similar thing: it is based on the Go runtime, which has garbage collection, so freeing memory back becomes unpredictable, yeah.
We also do some performance throughput comparisons with some industry benchmarks. The first one is etcd. These numbers are QPS: for runc, for the put operation, the QPS is about 3700, and Quark is within about 8 percent of that. gVisor is still good, but for Kata, in this benchmark, it is the slowest.
A
A similar thing for the other etcd operations: Kata is slow, gVisor is better, and sometimes it's even better than Quark, but in the majority of cases Quark is faster. For the cases where gVisor is sometimes better than Quark, we have had no time to do deeper performance tuning, but we think that any optimization gVisor can do, Quark can do as well, so eventually we can close this gap. Yeah, and for Redis, this is something where gVisor is not good but Kata is good, yeah.
A
We can see that this is the Redis test result. runc is the fastest, because Quark runs inside a sandbox instead of a plain container, so runc obviously has an advantage, yeah. Sometimes we can even run better than runc; I think that's maybe a test problem. And in the future, because now we are using io_uring, we are targeting more advanced technology, for example RDMA, to improve the I/O performance.
A
Yeah, yeah, so all these tests are based on throughput; so far I haven't tested the latency yet, yeah, okay, yeah. In this test we can see that Quark is much slower than runc.
A
Yeah, all these times are in seconds: how much time it takes to start MariaDB and MySQL.
D
I have a question also: have you done any performance benchmarks measuring how many, let's say, containers I could put on a host with just the bare runc versus, you know, with Quark and with the other things? In other words, you know, it seems like these tests are just for a single sort of container, but I wonder if you start running, let's say, you know, if you get a big host and you start running maybe 100 or 150 containers, right, and then you do the same with Quark.
D
Sometimes you may be surprised; it may not scale linearly as you think it may, you know; it may or may not scale linearly, one never knows, right. But people are often looking at, okay, how much stuff can I run on my server, you know, right, and how much capacity, how much can I get out of it as far as workloads.
A
Yeah, yeah, that makes sense. I mean, in the future we will add this kind of benchmark, yeah, yeah. The last part is, I can give some demos, yeah, actually.
A
Here we have different commands to run, for instance quark debug and run; that's similar to what gVisor and Kata provide, you know, in their runtimes. Here I'll just demo an execution of a bash shell: running bash with Ubuntu, using Quark.
A
Yeah, this is using quark to start the bundle, and we can see we have just a common shell; we can go to /etc and cat files, for example, here.
A
Yes, for example, now for this test, we use the Quark release version, and its process is about 160 megabytes, yeah, on my machine. And if we use, not runc, but gVisor, it's much slower, it is much slower, and Kata is better than gVisor, yeah.
A
Oh, why is this time much slower? Maybe when I just did the test I had some regression; from this regression it's only one thousand, that's weird, yeah, but yeah, maybe my recent change introduced some regression, yeah. This is another benchmark.
A
This is a similar comparison, this kind of thing, and we can also, oh, this log. Oh, I forgot, you need to enable the log, but I can add more logs, like the debug log.
A
From the log we can see we got so many of these system calls from the application. This is a sys_read, and we have this kind of system call; this is the log, and the last call.
D
I have a question also; so I have two questions. One of them is: if I wanted to run, let's say, Quark on the cloud, right? Let's say I go to, you know, AWS or GCP, and I get an EC2 instance, let's say on AWS, right, which is itself a VM, and I want to run Quark on it. You know, what do I need? Is that possible, or do I need nested virtualization enabled? What are the requirements?
A
Well, firstly, I haven't tried that, and then, it's possible, but we need to make sure that, for example for Amazon, I heard that the EC2 virtual machine is based on KVM, and we need to make sure that the virtualization feature, nested virtualization, is enabled, so that the recursive virtualization can work.
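The two checks implied here, before running a KVM-based runtime on a cloud VM, are that the CPU exposes hardware virtualization inside the guest (the vmx flag on Intel, svm on AMD; only visible when the cloud enables nested virtualization) and that /dev/kvm exists. A rough sketch of such a probe, as an assumption about how one might check rather than anything Quark ships:

```rust
use std::fs;
use std::path::Path;

// True if any CPU "flags" line in /proc/cpuinfo-style text advertises
// hardware virtualization (Intel VT-x as "vmx", AMD-V as "svm").
fn cpu_supports_virt(cpuinfo: &str) -> bool {
    cpuinfo
        .lines()
        .filter(|l| l.starts_with("flags"))
        .any(|l| l.split_whitespace().any(|f| f == "vmx" || f == "svm"))
}

fn main() {
    let cpuinfo = fs::read_to_string("/proc/cpuinfo").unwrap_or_default();
    let virt = cpu_supports_virt(&cpuinfo);
    // KVM also has to be exposed as a device node the runtime can open.
    let kvm = Path::new("/dev/kvm").exists();
    println!("hardware virtualization: {}, /dev/kvm present: {}", virt, kvm);
}
```

On a cloud instance without nested virtualization, the flag simply does not appear in the guest's /proc/cpuinfo, so the first check fails before KVM is ever touched.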
D
Nested virtualization, they call it. Yes, yeah. I don't... I think Amazon doesn't support it, but I think Google's GCP does have an option for it, yeah. Do you know, Ricardo?
D
Cool, cool, got it. And would that be enough, Yulin? So let's say, if you have a machine, as long as the machine supports nested virtualization, would that be enough, or are there any other requirements that you need? Yeah.
A
Yes, yeah, there's another very important part: the Linux compatibility. So far we support more than 200 system calls, but there are still some calls we don't support yet. We have test cases, like testing some common services like MySQL and nginx, this kind of thing, but maybe there are some applications that use a special system call that we don't support at all. But now we are working hard to add the missing parts and improve the Linux compatibility.
D
Got it, got it. So you would say a lot of workloads run in it, but I'm sure that there are certain workloads that either use system calls that you don't implement yet, or they do access things like /proc, that probably are not implemented yet, that right now don't work; maybe in the future they will work, right? Is that correct?

A
Yes, you're right, yeah.

D
Okay. And one final question, if you were to launch a pod, you know, in a Kubernetes pod.
D
Does
this
work
with
pots,
because
you
know
in
pots
you
have
multiple
containers.
If
they're
sharing
some
of
the
namespaces
they're
not
sharing
some
of
the
other
names,
for
example
the
network
namespace
they
share
right,
but
the
other
namespaces
they
don't.
Typically.
A
Yeah, we haven't done the full testing. Firstly, we support that: actually, one QKernel is what we name a sandbox, and multiple containers can run inside a sandbox; you just use a runc-style start to run the container inside the sandbox, so that's possible. So far we have implemented some namespaces inside the QKernel, but we didn't fully test that, so we haven't done that for pods.
A
Yeah, this is our target: it must be able to run inside Kubernetes, yeah. And we are also collaborating with one company, a cloud provider company, to work on this, and in their pipeline this kind of requirement is a key requirement, a critical path to support Kubernetes, and we are collaborating with them to try to add this support.
B
Do you have any users now, or not yet?
A
We are working with two cloud providers. One of them is trying to put some production workload on it, but we are doing that together, step by step.
A
Yeah, actually, all of them are inside the Quark GitHub, in the folder with the performance PDF. And another thing is how to test it, how to run it, and how to reproduce these test results, for example how to run the startup-time test and how to run the different benchmarks, like etcd; it's all put there, and the test results are put in the performance PDF.
A
So far, actually, we just open-sourced this project last month, and we are in the initial stage now, but yeah, so far we have no solid sponsor now.
C
So
how
many
contributors
do
you
have
today?
I
I'm
just
curious
what
the
like
the
is,
this
just
a
start
off
as
a
personal
project
or
a
work
project
or
what.
C
So, okay, I was just... yeah, just: is Quark Containers like your company's product that you've been working on?
A
Oh, it's already open source, yeah, and it's open source under the Apache license, yeah, and the main contributors are from our company now.
B
Yeah, so I think his question is more about, like, how did you get started with the project? How did, I guess, the idea come to fruition? Because of, you know, some of the limitations from gVisor, and then you started that?
B
Any other questions? So yeah, this is great. I think, I mean, it's exciting to see this; it's an alternative to something like gVisor and Kata, and yeah. And then you already applied for sandbox, right, for the CNCF?

A
Yes.
B
What
are
some
of
the
things
that
you
want
to
get
out
of
being
in
the
cncf?
What
do
you
expect
to
get
more
contributors
more
community
support,
so.
A
Okay, yeah. The motivation is more contributors, and another thing is to align with the CNCF ecosystem. We hope we can land inside this CNCF ecosystem and be part of it.
B
Well,
I
think
that's
about
it
so
yeah.
Thank
you,
everyone,
thank
you
for
presenting
and
we'll
keep
in
touch
yeah.
Thank
you.