From YouTube: CNCF SIG Runtime 2020-04-02
A (Ricardo): I think we can start with a basic stand-up. I'm Ricardo, one of the co-chairs, and I don't have any updates, so I'll put up the agenda. We have the roadmap discussion for the SIG, and then the other items: Volcano and Kata Containers. We'll just go around the list, from what I see on the attendance. So, Philip, or Philip.
F: No update on my own, really. I was just interested in listening to the roadmap, and also seeing where I can help, maybe doing some of those analyses on some of the existing projects.

A: Okay.
A: I think it's kind of hard to find volunteers, but if you know anybody who's interested... What I mean is, somebody has to run the meeting, but at the same time somebody needs to take notes. I've actually been running the meetings, but if I run the meetings, I basically can't take notes at the same time. You all have access to the notes right now, and everyone can write in them. So if you want to say, "I want to write something about the meeting," feel free to do it. Oh, thanks for the motorbike, yeah. That's the background: because there were so many virtual events, people are now doing these virtual backgrounds, so I just decided to do one of them.
A: Okay, cool. So, yeah, Quinton and I got together about three weeks ago and we talked about the roadmap for this SIG. If you know other people who are interested and want to participate, feel free to pass the word; we want to get more participation. Basically, any projects related to runtime, any technologies people want to discuss, are in the scope of the SIG. So, one of the things, and there's Quinton.
A: So yeah, we looked at the health of the projects, the annual health checks that we have to do. We currently have all these projects at different stages in the CNCF. Of course Kubernetes and containerd; there's a couple incubating, CRI-O and Harbor; and there's also sandbox, where we have Kata, KubeEdge, the Virtual Kubelet. I'm just going to read through these; if you have anything to say about them, or any comments...
A: Just let me know. Then we have Dragonfly, which I think was just voted into incubation right now, and there's Buildpacks. The SIG also wants to identify any gaps: these projects don't cover all of the CNCF cloud-native landscape, and so we want to bring in some insights, and possibly projects that want to be donated into the foundation. So: types of workloads, like AI and data pipelines.
A: There's also maybe some bare-metal type of tooling, like deployment of bare-metal machines, or different types of Linux kernels for running workloads. And then things related to multi-tenancy: when people want to run different tenants on the same machines, for example, you want to have some sort of isolation, so there are no security compromises and one tenant can't see the other, those types of things. So, a third type of gap: maybe some gaps that people haven't thought about that much.
A: So that's also within the scope, and we want to continue educating the community, so they know about all these different new, cutting-edge technologies, and we're looking for people to present. One example is Kata Containers, which is giving an overview today. Then some of the other projects; I'm thinking also about OCI, the Open Container Initiative, and some folks are going to come in, I actually checked.
A: Hopefully somebody will come in and present and give some updates. And obviously we want to do some due diligence on some of these projects; there are some currently in that process. Harbor is being reviewed by several SIGs, and we have a document that will compile all the feedback from all the SIGs and provide that as a recommendation to the TOC, so they can decide whether they want to graduate it. So Harbor is looking at graduating, and there's another project trying to go for incubation.
A: There's also interaction with other SIGs and other groups. We have the Kubernetes community, SIG Node; there's SIG App Delivery, and some of these projects have operators, which sometimes fall within the scope of SIG App Delivery. For example, there are several application specifications, and those also fall within SIG App Delivery. There's also conversation about maybe creating another SIG, but that hasn't happened yet. And yeah, as mentioned at the beginning of the meeting, we want to identify folks who can participate in terms of writing notes for these meetings, a scribe; and then we're looking for a couple of tech leads, there are two spots for tech leads. So, for people interested in the technologies, in learning more about some of these projects, and in helping with due diligence: we need help in that aspect.
L (Quinton): There's one other item that I should probably add to the list here, which is that I realized Brian Grant, who was one of our TOC liaisons, is no longer on the TOC. So we need a new TOC liaison. I actually took the liberty of speaking to one of the new TOC members, who has provisionally expressed interest in being that new liaison. She just wants to think about it some more, but I would imagine we'll have an answer fairly soon.

A: Okay, I think this will be...
L: And sorry, I should have actually spoken to the SIG before I did that; I just happened to speak to Alena about other things first, and in making sure, she expressed interest. So apologies for that. If the SIG doesn't want her as a TOC liaison, I'm sure we could tell her that, and I'm hoping that's not the case. Yeah.
L: Exactly, okay, cool. Yeah, this is one of the biggest SIGs, and Brendan, our other liaison, is one of our busier TOC members, so for both reasons I think we could have two. And Alena seems to be very interested in getting pretty hands-on involved; this overlaps with a lot of the stuff she does at work, so I think we can expect to get a fair amount of time and availability.

A: Great.
D: I'll make a comment. I think, for starters, there are some people from the outside looking into some of these projects and contributing, and it'll be good to have tracking as to whether these projects actually work across architectures or are only tied to a particular one; and if they are, whether there is a plan or some thinking around how they can be deployed on different platforms.
D: I think that's fair, and I think part of the effort here is to at least identify that as you start. If we can pick it up for some of the projects, we will, and if others are interested, they can contribute to it. I think it's good to think about it, and when it will happen will depend on who wants to use it or not.
L: Yeah, we could possibly do things like ask the projects questions in their health checks. We could ask them whether they run on other architectures; if not, whether they're planning to; and if they haven't got any concrete plans to do so, whether they have any estimates of the amount of effort required. I would imagine the amount of effort required to make these things...
L: Just one final suggestion on that. What I would propose is that we set a timeline: we leave this open for comments and finalization for the next two weeks, and at the meeting in two weeks' time we prioritize these things and try to put names to some of these items, so that we can get going on them. So if anyone on the call is able to find people in their companies that are willing to do some of this work... I think two weeks.
A: Yeah, I think this fills a gap in Kubernetes now, because, you know, Quinton and I were talking about how having just the Jobs API from Kubernetes is pretty raw, right. So Volcano fits in and fills that gap where you want more complex types of batch workloads, for example for data pipelines, big-data types of operations. Yeah.
L: That's a very profound question, and maybe more profound than you even realized in asking it. We actually need to be doing both: we need to be proactively identifying gaps in the CNCF portfolio, projects that we think need to be filled, and we need to be actively identifying projects to fill those gaps. In addition to that, there will inevitably be, and this has been the vast majority of the more recent projects, projects that come to the CNCF and want to be part of it.
A: Yeah, I think if you see any technology that may not be part of the CNCF yet... for example, I've reached out to some of the WebAssembly people; that's one area where I see a gap, right. So if you see any other gap that's similar or related to runtime, feel free to talk to some other communities and see if there's some project that could be part of the CNCF.
A: Yeah, so there may be some other things related to how you run, maybe, frameworks for machine learning or deep learning, those types of things. I don't think we have that type of thing, but then we also have to see that they don't overlap with things in the Linux Foundation, because the Linux Foundation also has this other group called LF AI, or the AI Foundation, or something like that.
L: The work around that happens there, and there are various data groups as well, standardizing data interchange formats and all this kind of stuff. I think we should not venture into those spaces, because they have their own foundations. And in general, this is not just because the Linux Foundation has such a thing; I think it simply doesn't fit with the CNCF. The CNCF is more about the actual infrastructure to enable those workloads.
L: Volcano is a great example. Volcano is not actually, you know, the framework for building AI things; it's really there to facilitate those kinds of workloads on Kubernetes, and those, I think, are the kinds of projects that we will want to be looking at. There are many others in that space; I'm sure we could go and figure it out, but some of them are not as obvious as one might think.
L: Yes, so I think that would be a great example of a good project. I think there's a history; I'm not intimately familiar with it, but Kubeflow is not, obviously, part of the CNCF at the moment, and it would be good to get a clear answer as to why that is the case. The SIG hasn't been around the whole time, but I know there is some history there, and I agreed.
L: Yeah, they have very, very different properties, in some cases, than traditional batch workloads. For example, many of these things run for many weeks on end; they're typically very sensitive to node failures, unless you have very elaborate schemes to prevent that. So if one node fails during that four-week run, then the whole run basically gets corrupted and you have to rerun the whole thing, which is bad; and they run on expensive hardware, so it's even doubly bad.
L: Diana, would you want to spearhead a little working group to go and dive into that area, with Klaus and whoever else is interested, and perhaps think about either a white paper or some other form of education, where we can teach the world how ML stuff runs on Kubernetes, where the challenges are, and what we're doing to fill them? I think that would be super useful, because there are a lot of questions in that space.
I: Hello, my name is Tobin, and I work for a Chinese internet payment company. I have been working on Kata Containers since the very beginning, and I'm one of the main maintainers and main contributors to it. I'll give some introduction to the SIG, to save some time; so I'm here, and it's good to know everyone.
I: To begin: traditional containers, as deployed by users today and over the past several years, are processes isolated by namespaces and cgroups, and hence they share the same Linux kernel as the host. That's how it was before Kata Containers, and we saw something to do here.
I: We introduced a virtual machine as a middle layer between the containers and the hardware, and that is Kata. With this we gain better resource isolation and better security on the host. So basically, if you run Kata Containers, you can give untrusted users their own VMs to run their workloads in; the workloads run inside the virtual machine, and you do not need to care about...
I: ...them seeing into other processes, each other, or your host. It's the same idea: essentially, the virtual machine interface is a well-proven interface that has been used in the IaaS world for many years. So we just inherit that, and with this we combine the best of the two worlds: we have the speed of containers and also the security of virtual machines.
I: When you start a virtual machine, you need a hypervisor, a guest kernel, and a guest operating system. For Kata, we use a very lightweight way to create the virtual machine, and we customized the guest kernel, and also the guest operating system.
I: Everything is reduced down to the very minimum, to just support running a container. For example, if you boot a normal virtual machine on a server, it takes several minutes; but with Kata Containers, you can have a running container inside a virtual machine in one or two seconds.
I: And this is the architecture we currently have. There are two, actually: the top one is the architecture we used to have until last year. For every container inside the sandbox we had a kata-shim, and if we ran under containerd there was a containerd-shim too, so there could be many shims in the system and many interaction layers. After working with the containerd community, we introduced the containerd shim v2 API, so the container engine can just call the API on the Kata shim v2 directly. With that we removed all the interaction layers and collapsed all these components into just one component per sandbox; it's not per container, for every sandbox we just have one shim now.
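As a rough illustration of what the speaker describes (not shown in the talk, and the exact plugin section names vary across containerd versions), registering Kata as a shim v2 runtime is done in containerd's TOML configuration; `runtime_type = "io.containerd.kata.v2"` is what makes containerd launch the single `containerd-shim-kata-v2` binary per sandbox:

```toml
# /etc/containerd/config.toml (illustrative sketch; section paths differ by containerd release)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
  # tells containerd to launch containerd-shim-kata-v2 over the shim v2 API
  runtime_type = "io.containerd.kata.v2"
```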
So that's a very good simplification of the Kata Containers architecture, and it matters for many users; the many shims were the main annoyance before, when people tried Kata Containers.
I: And currently, besides the basic function of running containers, we support many architectures and several different hypervisors: Firecracker, Cloud Hypervisor, and ACRN, added, I think, last year. QEMU is still the default one, because it has the most features; but if you want to run some special workloads and want some different optimizations, you can just use a different hypervisor.
I: Also, we support different distributions. Although the guest operating system is minimal, it can still be built from different distributions, so end users can do very easy customization with it.
I: For ecosystem integration, we support CRI-O, containerd, Docker, and Podman; so if your system runs any of these, you can install Kata Containers very easily. Also, since Kata came new into the Kubernetes world, we were the main drive behind two important Kubernetes features. The first one is RuntimeClass: with a RuntimeClass you can specify which container runtime you want to run your pod with.
I: So you can just specify a RuntimeClass for Kata, say in your pod spec that you want this pod to run with Kata, and submit it to Kubernetes; it will be automatically scheduled to nodes whose runtimes report that they can support Kata Containers.
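As a hedged sketch of what that looks like (the handler name `kata` and the `v1beta1` API version are illustrative and depend on the cluster setup and Kubernetes release), a RuntimeClass and a pod that selects it might be written as:

```yaml
# Illustrative sketch; handler name and API version depend on the installation.
apiVersion: node.k8s.io/v1beta1   # node.k8s.io/v1 on newer clusters
kind: RuntimeClass
metadata:
  name: kata
handler: kata              # must match a runtime configured in the CRI implementation
---
apiVersion: v1
kind: Pod
metadata:
  name: kata-demo
spec:
  runtimeClassName: kata   # run this pod's containers with the Kata runtime
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
```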
The runtimes report whether they can support Kata Containers, of course. And there's also another feature; it's called pod overhead.
I: The hypervisor does add some overhead for every pod and every container, but previously that was not accounted for. So when Kubernetes tried to make scheduling decisions, there were situations that could go very bad, because resources that Kubernetes thinks are free are actually being used by unaccounted components. So now every pod can define its own resource overhead, especially the CPU and memory overhead.
I: So with pod overhead, Kubernetes can have a proper resource overview of the entire cluster and do better scheduling. These are the main features we have pushed to the Kubernetes mainstream.
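The overhead the speaker describes is declared on the RuntimeClass itself, and the scheduler adds it to the pod's resource requests. A minimal sketch, where the values are made-up placeholders rather than measured Kata figures:

```yaml
# Illustrative sketch; overhead values are placeholders, not measured numbers.
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
overhead:
  podFixed:              # added to the pod's resource requests during scheduling
    memory: "160Mi"
    cpu: "250m"
```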
I: Pod overhead came in 1.15, maybe 1.16; RuntimeClass was there before that, I think in 1.13 or 1.14. So if you install Kubernetes, these features are enabled automatically. And this year we are looking to release Kata Containers 2.0, and we are planning some important features. The first one: we have been minimizing our overhead, the resource consumption for every container and every pod. I should have mentioned that right now...
I: ...the project is mostly written in the Go language, but we have identified that Go's memory use is too heavy for some of the components, so we have been rewriting some of the components in Rust. Right now, alongside the Go agent, we have a Rust agent that has been actively tested, and we plan to push it to be...
I: ...the default agent inside the guest in 2.0. Also, the communication channel right now is using gRPC, but the HTTP/2 layer is not actually necessary for us, so we think it is too heavy. So with that, there is a Rust ttRPC implementation to replace the Go gRPC component. And there's another thing that we want to do in the agent...
I: ...which is image pulling inside the guest. The CRI daemons are node-wide daemons; they cannot actually run in a user's namespace, especially the user's networking namespace. But for Kata, the important use case is for the cloud vendors: they want to allow different users to run their containers and pods on the same host, but different users have different networks, so we have...
I: ...to pull the image inside the user's own network namespace. That's why we want to do the image pulling inside the sandbox. And with it, we can also do some tricks with the image format so that we can accelerate the image pulling process: for example, instead of pulling the entire image, we can just pull a small metadata layer and construct the rootfs view for the container, while no data is actually pulled yet.
I: We also want to improve Kata Containers' observability, and we are defining Kata's own event API so that we can integrate it with projects such as Prometheus. We are also defining some Kata-specific debug APIs, so that users do not have to actually walk into the container to debug their applications. Another feature we are looking at is improving Kata Containers' I/O streams: right now, every stdio...
I: ...stream is handled from containerd to the Kata shim to the agent inside Kata, and we think there are too many layers. We want to simplify this path and make it easier for the console path to handle the streams itself. And another main change we are actively working on...
I: ...is the code repository consolidation. Right now we have different repositories for the runtime, for different shims, for the agent; we want to consolidate these into just a single repository, so that it should be easier for new developers to just git clone and test their local changes.
I: So that's all for the talk, and these are our community channels: we have the Kata Containers site as our main page, we have a GitHub organization, kata-containers, and we have an IRC channel, Slack, Twitter, and mailing lists. If you are interested in Kata, feel free to contact us through any of them.
L: So could you sort of summarize... you know, if I took a KVM/QEMU VM, the basic process is that I load the guest kernel, it boots whatever's in the root filesystem, and then I have a running virtual machine, right? And that takes... I haven't done it for a while, but I'm guessing something in the order of a minute; is that about accurate?

I: Yes, yes.
L
Yes,
yes,
okay
and
now
it
sounds
like
you
do
essentially
the
same
thing,
but
you
have
presumably
a
small
kernel
and
a
smaller
root,
filesystem,
and
so
so
building
a
kernel
and
stripping
out
all
the
stuff.
You
don't
need
is
a
reasonably
straightforward
process
and
then
similarly
stripping
down
a
root
filesystem
to
only
be
the
stuff
that
you
need
is
fairly
straightforward.
So
what
is
the
difference
between
that
and
cotta
containers
and
I
mean
how
do
you?
How
do
you
get
this?
I: I think there are two differences between them. The first one is: in order to cut down the boot time and the resource consumption, we have very minimal hardware support in QEMU, so we customize QEMU as well. The memory footprint of it is very small, and the device...
I: ...emulation is very small. When we started the Kata project, we actually defaulted to qemu-lite, which was developed by Intel, and as we were developing Kata, the Intel team was sending most of the qemu-lite features to upstream QEMU. That's why we have switched to upstream QEMU: before that, the overhead of upstream QEMU was way above qemu-lite's, but right now the QEMU overhead...
I: ...isn't, because most of the qemu-lite features have landed upstream. And there's an important feature the Intel team introduced upstream: Kconfig support for QEMU, so every component of QEMU can be configured independently, and with that we can make the virtual machine monitor very small. That's the first difference. The second difference between Kata and a plain virtual machine is in the lifecycle management.
I: With a virtual machine, you create the virtual machine, you put in your user data, you SSH into the virtual machine; that is the infrastructure-as-a-service world. But with Kata Containers, the lifecycle is that of a container, or of pods and sandboxes. All your workload is integrated into the container world: instead of managing a virtual machine, you are managing a container.
I: So instead of, say, creating a virtual machine, installing everything there, and running your application, you build a container image, you write a YAML for it, you submit it to Kubernetes, and Kubernetes does the scheduling and manages all the lifecycle of it.
I: If you're really interested in speed, I have some data we just recently measured. To start a Kata container, you basically spend 100 to 200 milliseconds for containerd to create the VM environment: to create the QEMU instance, and to start and boot the guest kernel. Next is to start the...
I: ...agent; we implemented a kata-agent inside the guest that runs as the init process, and with that, the start ends at about 200 milliseconds. So in total, if you run just a hello-world image, for example, you can go from nothing to the point where you see "hello world" printed out in below 300 milliseconds.